/srv/irclogs.ubuntu.com/2013/11/29/#juju.txt

Luca__Anybody there with experience in deploying openstack using charms?04:22
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
Luca__Hi there. Anyone with some experience in deploying openstack with charms?06:52
=== freeflying is now known as freeflying_away
=== CyberJacob|Away is now known as CyberJacob
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
jamespageyolanda, what's the stack trace?11:28
yolandajamespage, finished with the heat charm, now i'm testing a full deployment with heat again, then could you review it?12:00
jamespageyolanda, happy to take a look - have you pushed your changes?12:03
yolandajamespage, yes, latest changes are pushed12:03
jamespageyolanda, some test failures: http://paste.ubuntu.com/6493740/12:05
jamespageyou need to stub out os12:05
yolandait ran without tests for me12:05
yolandawithout test failures12:05
yolandamm, i see12:05
yolandamaybe i had write permissions there12:06
jamespageyolanda, something is also calling apt-get update12:07
jamespagemaybe in install hook?12:07
yolandalooking now12:08
yolandaok, pushed new changes12:14
yolandai think i was running the tests with root user, so it worked because of that12:14
yolandajamespage ^12:18
jamespageyolanda, you still need to stub out the creation of /etc/heat better13:09
jamespageit does not exist on my system, so the tests fail13:09
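(What "stub out os" means here, as a minimal sketch: the install code creates /etc/heat via the os module, so the unit tests only pass when they can actually write there, e.g. when run as root. Patching os in the test keeps it off the real filesystem. The heat_hooks module name and hook layout below are illustrative, not the charm's actual structure.)

    from unittest import TestCase
    from mock import patch

    import heat_hooks  # hypothetical module containing the install hook


    class InstallHookTest(TestCase):

        @patch('heat_hooks.os')
        def test_install_creates_config_dir(self, mock_os):
            # Pretend /etc/heat is missing so the hook tries to create it.
            mock_os.path.isdir.return_value = False

            heat_hooks.install()

            # The directory is requested through the mock, so the test never
            # needs write access to the real /etc (which is why running the
            # suite as root appeared to work).
            mock_os.mkdir.assert_called_with('/etc/heat')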
jamespageyolanda, README needs a tidy to remove ceph stuff, and metadata.yaml contains some trailing whitespace and odd indentations for categories13:12
jamespageyolanda, re templates - you can drop the etc_heat from the api-paste.ini13:13
jamespageit's not required13:13
jamespageand we should probably set auth_encryption_key to something sensible13:14
jamespagemaybe provide via configuration and set to something random if not supplied - but that has to persist (i.e. it needs to be stored on disk)13:14
jamespageyolanda, re the [keystone_authtoken] in heat.conf13:14
jamespagethat looks like it needs completing - but I'm not 100% sure how that interacts with api-paste.ini13:15
jamespageyolanda, in the install hook you need to deal with openstack-origin: distro on 12.0413:16
jamespageI think the cinder charm does this best - basically it changes distro -> cloud:precise-XXX if on precise13:16
jamespagetake a look13:16
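(A rough sketch of the cinder-style handling jamespage describes: when openstack-origin is "distro" on a precise/12.04 host, promote it to a cloud archive pocket before adding package sources, since heat is not in the stock precise archive. The specific pocket name and the charmhelpers helper usage are assumptions here; check what the cinder charm actually does before copying.)

    from charmhelpers.core.hookenv import config
    from charmhelpers.core.host import lsb_release
    from charmhelpers.fetch import add_source, apt_update


    def effective_origin():
        origin = config('openstack-origin')
        # 'distro' on 12.04 would only offer essex-era packages, so map it
        # to a cloud archive pocket (release name assumed, e.g. havana).
        if origin == 'distro' and lsb_release()['DISTRIB_CODENAME'] == 'precise':
            origin = 'cloud:precise-havana'
        return origin


    def configure_sources():
        add_source(effective_origin())
        apt_update(fatal=True)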
jamespageyolanda, actually re the use of os.path and os.mkdir - charmhelpers has an mkdir function somewhere that encapsulates all of that13:17
jamespageso you could just stub that out instead of dealing with os directly13:17
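(The charmhelpers function jamespage has in mind is, as far as I know, charmhelpers.core.host.mkdir, which wraps the existence check, directory creation and ownership/permissions in one call, so the tests can patch a single name instead of the os module. Sketch only; the hook function name is illustrative.)

    from charmhelpers.core.host import mkdir


    def ensure_config_dir():
        # Replaces the hand-rolled os.path.isdir()/os.mkdir() sequence.
        mkdir('/etc/heat', owner='root', group='root', perms=0o750)

    # In the unit tests, patch the wrapper rather than os itself, e.g.
    #   @patch('heat_hooks.mkdir')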
yolandajamespage, ok, i'll work on those fixes13:18
jamespageyolanda, other than that looks pretty good - nice work!13:18
jamespagenot tried it yet - will do later today once you have updated13:19
yolandathe most complicated part was making it work, i needed a 16G compute machine to run it :)13:19
jamespageyolanda, oh - the icon is a ceph one btw13:19
jamespagedo ceilometer and heat have official icons just yet?13:19
jamespageI don't think so - yolanda - just drop the icon for the time being13:21
jamespagewe can add that later13:21
yolandaok13:21
jamespageditto ceilometer13:21
jamespagegonna work on that next13:21
yolandai bet ceilometer has changed a lot since i wrote it13:23
=== rogpeppe1 is now known as rogpeppe
dweaverCan't seem to find an equivalent to "jitsu watch UNIT --state=started" using juju-core. Is there any equivalent, or should I be parsing juju status output myself instead?13:43
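(Parsing juju status is the usual answer here. A minimal sketch of a jitsu-watch replacement, assuming PyYAML is available; the function name, timeout and polling interval are arbitrary.)

    import subprocess
    import time

    import yaml


    def wait_for_unit(unit, state='started', timeout=900, interval=10):
        """Poll `juju status` until the unit's agent-state matches `state`."""
        service = unit.split('/')[0]
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(['juju', 'status', '--format', 'yaml'])
            status = yaml.safe_load(out)
            unit_info = status.get('services', {}).get(service, {}) \
                              .get('units', {}).get(unit, {})
            if unit_info.get('agent-state') == state:
                return unit_info
            time.sleep(interval)
        raise RuntimeError('%s did not reach %s in %ss' % (unit, state, timeout))

    # e.g. wait_for_unit('mysql/0')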
X-warrior`How could I get out of a 'failed state' when juju resolved doesn't work?13:47
X-warrior`:S13:47
X-warrior`marcoceppi: are u around?14:26
=== _jmp__ is now known as _jmp_
=== freeflying_away is now known as freeflying
dweaverX-warrior`, try juju resolved multiple times.15:28
X-warrior`I already tried it15:28
X-warrior`A LOT15:28
dweaverjuju destroy-environment and re-deploy will work, of course.  juju destroy-unit should remove the unit, but if that doesn't work then you need juju destroy-unit --force, which is in the development version.15:31
X-warrior`destroy-environment will work, but I don't want to destroy everything15:32
X-warrior`destroy-unit depends on machine state15:33
X-warrior`uhmm15:35
yolandajamespage, what do you think would be a good place to store the heat auth key15:36
yolanda?15:36
jamespageyolanda, I've been trying to standardize on /var/lib/charm/{service_name} for that type of stuff15:36
yolandaok15:37
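(Pulling the two suggestions together - use the config value if supplied, generate a random key otherwise, and persist it under /var/lib/charm/<service> so later hook runs reuse it - a sketch might look like the following. The config option name, file name and key length are assumptions, not anything the heat charm defines.)

    import base64
    import os

    from charmhelpers.core.hookenv import config, service_name
    from charmhelpers.core.host import mkdir


    def get_auth_encryption_key():
        configured = config('auth-encryption-key')
        if configured:
            return configured
        state_dir = os.path.join('/var/lib/charm', service_name())
        key_file = os.path.join(state_dir, 'auth-encryption-key')
        if os.path.exists(key_file):
            with open(key_file) as f:
                return f.read().strip()
        # First run without a configured key: generate one and persist it.
        mkdir(state_dir, perms=0o700)
        key = base64.b64encode(os.urandom(32))
        with open(key_file, 'w') as f:
            f.write(key)
        os.chmod(key_file, 0o600)
        return key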
X-warrior`dweaver: 1.16.3 does not have this --force option15:57
dweaverI know, it is in development. see bug: https://bugs.launchpad.net/juju-core/+bug/108929115:58
_mup_Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <theme-oil> <juju-core:Fix Committed by fwereade> <juju-core 1.16:Fix Committed by fwereade> <https://launchpad.net/bugs/1089291>15:58
dweaverSo, it is in 1.16.415:59
dweaverThere is also a workaround in the bug report, at the bottom.16:03
X-warrior`hazmat: are u there?16:09
hazmatX-warrior`, i am16:25
* hazmat reads backlog16:26
X-warrior`hazmat: sorry to bother you, but I saw your script that does a direct db deletion and I think I need it...16:26
hazmatX-warrior`, so this is juju-core.. could you pastebin the relevant portion of status16:27
hazmatX-warrior`, failed state is rather ambiguous, you mean a unit or machine?.. a status pastebin would help understand what the issue is16:27
X-warrior`just a sec16:27
X-warrior`http://pastebin.com/YHM3bx2E16:30
X-warrior`the problem is that juju resolved never resolves the problem, I already tried executing it a lot...16:30
hazmatX-warrior`, juju resolved on elasticsearch/0 or logstash-indexer/0  ?16:31
X-warrior`I tried on both of them16:31
hazmatX-warrior`, in one terminal window can you start juju debug-log .. and then in another do the resolve on either elasticsearch/0 or logstash-indexer/0..  and then pastebin the log output.16:33
X-warrior`hazmat: http://pastebin.com/5pJ3KjpW16:36
hazmatX-warrior`, are you doing resolved --retry or just resolved ?16:40
X-warrior`just resolved16:40
X-warrior`hazmat: http://pastebin.com/494HpENN looks better16:41
X-warrior`I see a " cannot allocate memory", I guess because I'm using a micro instance...16:41
hazmatX-warrior`, eek, yeah java on micros is rather ambitious16:42
hazmatX-warrior`, i'd try to ssh into that machine and manually shutdown elasticsearch to free up some memory16:42
hazmatX-warrior`, micros aren't really useful virtual machines in my experience, they are severely constrained and penalized at the hypervisor level. for useful work m1.small is about as small as i go.. t1.micros are good for almost-static or net-io-primary workloads16:44
hazmater.. s/good/ok16:44
X-warrior`I'm just starting on it, so testing stuff and checking how it works...16:44
X-warrior`but thanks for the tip ;)16:45
X-warrior`and for the help16:45
X-warrior`let me try to ssh into it16:45
hazmatre t1.micro and cpu penalization.. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html16:52
hazmatX-warrior`, wrt future stuff that makes this nicer.. in the next dev release (1.17.0) we'll have destroy-machine --force machine_id, which will forcibly kill units on the machine16:56
X-warrior`hazmat: yeap I heard about this. dweaver showed me the 'bug' on launchpad and then I found your name and script16:56
hazmatX-warrior`, that script isn't appropriate for your case, it was specifically around a stuck machine with no units, the  --force feature on trunk will handle both cases.16:58
X-warrior`got it16:59
* X-warrior` running status to check if it worked16:59
hazmatX-warrior`, any luck shutting down elasticsearch? .. you'll probably need to rerun the resolved hook on it17:00
X-warrior`Oh man, why on earth did I use these micro instances :(17:02
X-warrior`and try to put more than one service on the same machine17:02
* X-warrior` feels dumb17:02
X-warrior`elasticsearch is not marked as failed anymore... but logstash-indexer still shows the failure status...17:03
=== exekias_ is now known as exekias
hazmatX-warrior`, updated status pastebin pls17:08
hazmatX-warrior`, elasticsearch/0 should be gone i would think17:08
X-warrior`http://pastebin.com/7YN7rrwE17:09
hazmatX-warrior`, looks good, that's progress.. so back to logstash-indexer17:10
X-warrior`yeap17:10
X-warrior`same process as before? one terminal with debug-log another with resolved?17:11
hazmatX-warrior`, yes pls17:12
X-warrior`"ERROR cannot set resolved mode for unit "logstash-indexer/0": already resolved"17:14
X-warrior`but it is not resolved17:15
X-warrior`Logstash agent-state is marked as down. Maybe it cannot do the resolve with it down17:17
hazmathmm17:21
hazmatX-warrior`, can you log into that machine by ip, and manually restart the juju agent there17:21
hazmatX-warrior`, ls /etc/init/juju* should tell you which upstart job it is.. then sudo service <name of file minus .conf> restart17:22
X-warrior`it worked17:22
X-warrior`I rebooted the machine17:22
X-warrior`and then logstash and elasticsearch are gone17:22
hazmatX-warrior`, cool17:23
X-warrior`thanks17:24
X-warrior`for all the help17:24
X-warrior`:D17:24
X-warrior`hazmat: thanks for all the help, I'm leaving now17:28
X-warrior`dweaver: Thanks! :D17:28
hazmatcheers17:28
aquariusI'm trying to deploy marcoceppi's Discourse charm on Azure. juju status says "agent-state-info: 'hook failed: "install"'" for the "discourse" service. How do I work out what failed and start working out why?20:41
hazmataquarius, log into the machine and inspect the log at /var/log/juju ... alternatively juju resolved --retry while doing juju debug-log...20:52
aquariushazmat, that sounds useful. How do I log into the machine? (I am completely new to juju, as you may be able to tell. :))20:53
hazmatfinal option is interactive debug-hooks environment, https://juju.ubuntu.com/docs/authors-hook-debug.html20:53
hazmataquarius, juju ssh name-of-unit20:53
aquariusha! that's too easy :)20:53
hazmatie. juju ssh discourse/020:54
aquariusand I am logged in. Best command ever :)20:54
aquariusaha, it got a timeout while connecting to github. That sounds transient20:55
aquariusso "juju resolved --retry" will retry the failed deploy?20:55
aquariusor will it redeploy everything?20:55
aquariusah, I need to specify the unit20:56
aquariusok, this is just too simple. :)20:56
aquariushm. Maybe I was wrong. "juju resolved --retry discourse/0" returned (and if I run it again, it says that unit is already resolved), but juju status still shows that unit as being in agent-state: error. Shouldn't it be pending or something?20:58
aquariusor do I have to force a redeploy from scratch, rather than just --retry?20:58
hazmataquarius, its async20:59
aquariussure thing20:59
aquariusI wasn't expecting it to hang until finished21:00
hazmataquarius, it will retry, but there21:00
aquariusbut I would expect juju status to *indicate* that it's retrying :)21:00
hazmatis no guarantee as well that subsequent times will fix21:00
hazmatyeah.. there's an outstanding issue to allow for some observation of hook execution21:00
hazmatie. when are things really done/steady state21:00
hazmatas opposed to just workflow states21:00
aquariusoh, cool, so it will work the way I expect eventually but doesn't right now. that's ok21:00
aquariusI don't mind if what I expect to happen agrees with the plan but code hasn't quite caught up yet :)21:01
hazmatwell depends on what you expect, but yeah that should be partially addressed21:01
hazmataquarius, the async is pretty fast though (a few seconds).. so if its not resolved it likely ran into the same issue again21:01
hazmati'd suggest logging into the instance and verifying github connectivity21:01
aquariusand indeed it did fail again21:01
aquariusalthough in a different place.21:02
hazmatprogress then21:02
aquariusgithub connectivity seems to be *intermittent*, which is way worse than "not working at all"21:02
hazmataquarius, what's the error this time21:02
* aquarius gives azure a fishy look :)21:02
hazmatah azure.. tis a special child, with an iaas veneer over paas concepts21:03
aquariusnew question: does the juju gui run on port 80? That is: can I juju deploy the gui to a machine which already has a web thing on it?21:03
hazmataquarius, it runs on port 443, but also listens on 80 to do auto redirect to 44321:03
aquariusright, so if I have a web thing on that machine I should not deploy the gui to it21:04
aquariusI could give the gui its own machine, of course, but I'm currently using the azure free trial and I don't want it to eat the free trial cash amount too quickly ;)21:04
hazmataquarius, yeah.. i'd recommend not; i haven't tried it, but i imagine the proxy backend it's running will barf on the port already being bound for 80.. would be nice if that were configurable21:05
* aquarius nods21:05
hazmatsince its not functionally used.. ie web app port 80 gui on 44321:05
* aquarius looks irritably at azure. It keeps failing github connections, but not all of them.21:06
* hazmat files a bug against gui 21:06
hazmatazure services dashboard is greenlit .. http://www.windowsazure.com/en-us/support/service-dashboard/21:08
hazmatnot clear what the issue is21:08
hazmatgithub status also green https://status.github.com/21:08
aquariusyeah, something weird going on. It's dying on doing github stuff, but not in the same places, and previously failed commands work sometimes21:08
aquariusI love intermittent failures :)21:08
hazmatthats a special kind ;-)21:09
aquariusmaybe it's a github problem21:09
aquariusrather than an azure problem21:09
aquariuscould be blaming azure unfairly here :)21:09
hazmataquarius, also a possibility21:09
hazmatanecdotally seems to be okay for me in casual testing21:10
hazmatcouple of pulls21:10
aquariusyeah, it's succeeding most of the time21:10
aquariusbut discourse pulls about 500 things21:10
aquarius:)21:10
aquariusthings that would be nice: something which gets kicked off when I do a deploy, sits in the background, and somehow tells me if the deploy goes wrong, rather than having to juju status to see what's happened21:11
hazmatthere's an api for that but yeah, it would be good to integrate that into the cli21:11
hazmatwith desktop notification21:11
aquariusalso, resuming a build from the last place we got to would be nice, but that's not juju's fault, that's the discourse charm's job21:11
aquariushm, it is possible that it actually *is* resuming from the last place it got to :)21:12
aquariusif that's the case then I will make slow and irritating progress, over time :)21:13
hazmataquarius, depends on the charm implementation, juju is just re-executing the  install hook21:14
* hazmat pokes at the discourse charm21:15
hazmatmarcoceppi, you around?21:15
aquariusI pinged him a while back, so I don't think he is. Hopefully he will be over the weekend and I can try again :)21:16
hazmataquarius, so the apt installs are intelligent about no-op'ing when things are already present.21:16
aquariusruby bundler, probably less so21:16
aquariuseveryone thinks they can write a package manager ;)21:16
hazmataquarius, and it looks like the git pieces will just refetch/update instead of full pull21:16
hazmataquarius, yup.. there's at least one for every language21:17
aquariushm! we may have got past the git bits21:18
aquariusprogress!21:18
aquarius2013-11-29 21:19:03 INFO juju.worker.uniter context.go:255 HOOK Gem::InstallError: rake-compiler requires RubyGems version >= 1.8.25. Try 'gem update --system' to update RubyGems itself.21:19
aquariusthat has to be a dependency bug in the charm.21:20
marcoceppi0/21:22
marcoceppiaquarius: there's a version of the charm not pushed yet, a wip branch, that uses rvm + ruby2.021:23
marcoceppiare you using the charm store version or the one on github?21:23
dpb1marcoceppi: where is a good starter example of amulet?21:24
marcoceppidpb1: let me dig you up an example21:25
dpb1k21:25
aquariusmarcoceppi, heya!21:28
aquariusmarcoceppi, I'm using the one in the charm store21:28
aquariusaquarius@faith:~ $ juju deploy cs:~marcoceppi/discourse21:28
aquariusshould I have done something else?21:29
marcoceppiaquarius: that one lags behind by a bit, as you can see discourse travels at a fast pace :)21:33
marcoceppiaquarius: let me find the latest stable and push it up to the charm store21:33
aquariuscool21:33
marcoceppiaquarius: it fetches a ton of deps via gem though, that's an upstream thing and nothing I can really do about that :)21:34
marcoceppiI've got some code to streamline the process from them, not yet implemented though21:34
aquariusmarcoceppi, yeah :)21:35
aquariusmarcoceppi, so, once you've pushed a new version to the charm store, can I deploy the new version over the top of the old one?21:35
aquariusor do I have to kill all the existing vms and deploy from scratch?21:36
marcoceppiaquarius: not really, I don't make any guarantees with upgrades, one of the reasons why it's still in a personal branch and not in the store. Once they get a 1.0 out I'll probably stabilize the charm and push it to cs:precise/discourse21:37
hazmataquarius, generically you can switch charm origins.. juju upgrade-charm -h, see --switch, and point it to a local branch checkout21:37
aquariusso from what marcoceppi is saying I'm best to kill this setup stone dead and start fresh with the new charm once it's in the charm store, yes?21:37
marcoceppiaquarius hazmat: that's true, but there's a big difference in the charm between what you have and the current version (building ruby from rvm, etc) and there's no upgrade-charm hook21:38
hazmatmarcoceppi, in this case it never made it through the install hook, is that reasonable safe?21:38
aquariushazmat, I'm happy to destroy the existing deployment21:38
hazmataquarius, that's the safest way21:38
marcoceppihazmat aquarius if you're willing to try, sure!21:38
aquariushow do I do it? :)21:38
hazmatin terms of not creating new errors21:38
hazmataquarius, so..21:38
marcoceppihazmat: however, won't it not work because the install hook is in an error state?21:38
hazmatmarcoceppi, not with --switch and --force21:39
marcoceppihazmat: ah, the good 'ol --force flag21:39
aquariusam I best to juju destroy-environment which kills the whole thing, and then juju bootstrap again from the start?21:39
aquariusI don't think I understand what an environment is :)21:39
hazmatmarcoceppi, its an intended workflow for just this case.. ie. switching origins to fix/work/customize  a charm.21:39
hazmattis a good flag indeed21:40
hazmataquarius, so azure takes forever21:40
hazmataquarius, there's another option21:40
hazmataquarius, which is destroy-service and terminate-machine.. it's a bit annoying with juju-core in that you have to resolve errors first. There's a juju-deployer plugin on pypi that automates this into juju-deployer -TW ... and it does watch what's happening and tells you what's going on (re your earlier suggestion)21:41
hazmatits also a nice declarative way to build a topology of services.21:41
aquariusone step at a time. I'd rather do things the slow stupid way at first and then get clever later :)21:42
hazmatttyl :-)21:42
aquariusso I understand this: juju bootstrap creates an environment? So if I juju destroy-environment, then the whole thing goes away, and then I can just juju bootstrap again?21:42
hazmatgive me a ping when its done, i'll be around.21:42
aquariusI'm stopping shortly and going to the pub anyway. It is Friday night ;)21:43
hazmataquarius, yes, azure is slightly different in that destroy involves synchronous destruction of various internal azure resources.21:43
hazmatit's a bit time consuming, and O(n) in the size of the environment. It is pretty reliable (imo)21:43
hazmataquarius, so yeah.. slow but safe, works21:44
aquariusthat's the plan here :)21:44
* aquarius destroys the environment21:44
aquariusif it's not done in ten minutes or so, I'll resume tomorrow :)21:44
hazmatcheers21:44
aquariusmarcoceppi, are you planning on pushing to the charm store soon? If not, can I use the github version of the charm directly? I'm happy to do whichever's convenient for you :)21:45
marcoceppiaquarius: let me test to make sure what I'm giving you works21:45
marcoceppiaquarius: spinning up on hp cloud atm21:45
marcoceppiaquarius: I'm getting a segfault in the charm now during install, poking now22:01
aquariusmarcoceppi, this sounds like good progress, though! nice one :22:01
aquarius:)22:02
aquariusI have to go out, but I'll leave irc open if you want to drop comments in here, and thank you again!22:16
marcoceppiaquarius: np, still trying to figure out why rvm is segfaulting22:18
marcoceppiaquarius: okay, figured it out. Moving on to the rest of the charm for testing22:23
