Luca__ | Anybody here with experience deploying openstack using charms? | 04:22 |
---|---|---|
=== freeflying is now known as freeflying_away | ||
=== freeflying_away is now known as freeflying | ||
Luca__ | Hi there. Anyone with some experience in deploying openstack with charms? | 06:52 |
=== freeflying is now known as freeflying_away | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== freeflying_away is now known as freeflying | ||
=== freeflying is now known as freeflying_away | ||
jamespage | yolanda, what's the stack trace? | 11:28 |
yolanda | jamespage, finished with the heat charm, now i'm testing a full deployment with heat again; could you review it after that? | 12:00 |
jamespage | yolanda, happy to take a look - have you pushed your changes? | 12:03 |
yolanda | jamespage, yes, latest changes are pushed | 12:03 |
jamespage | yolanda, some test failures: http://paste.ubuntu.com/6493740/ | 12:05 |
jamespage | you need to stub out os | 12:05 |
yolanda | it ran without tests for me | 12:05 |
yolanda | without test failures | 12:05 |
yolanda | mm, i see | 12:05 |
yolanda | maybe i had write permissions there | 12:06 |
jamespage | yolanda, something is also calling apt-get update | 12:07 |
jamespage | maybe in install hook? | 12:07 |
yolanda | looking now | 12:08 |
yolanda | ok, pushed new changes | 12:14 |
yolanda | i think i was running the tests with root user, so it worked because of that | 12:14 |
yolanda | jamespage ^ | 12:18 |
jamespage | yolanda, you still need to stub out the creation of /etc/heat better | 13:09 |
jamespage | it does not exist on my system, so the tests fail | 13:09 |
jamespage | yolanda, README needs a tidy to remove ceph stuff, and metadata.yaml contains some trailing whitespace and odd indentations for categories | 13:12 |
jamespage | yolanda, re templates - you can drop the etc_heat from the api-paste.ini | 13:13 |
jamespage | it's not required | 13:13 |
jamespage | and we should probably set auth_encryption_key to something sensible | 13:14 |
jamespage | maybe provide via configuration and set to something random if not supplied - but that has to persist (i.e. it needs to be stored on disk) | 13:14 |
jamespage | yolanda, re the [keystone_authtoken] in heat.conf | 13:14 |
jamespage | that looks like it needs completing - but I'm not 100% sure how that interacts with api-paste.ini | 13:15 |
jamespage | yolanda, in the install hook you need to deal with openstack-origin: distro on 12.04 | 13:16 |
jamespage | I think the cinder charm does this best - basically it changes distro -> cloud:precise-XXX if on precise | 13:16 |
jamespage | take a look | 13:16 |
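For reference, a minimal sketch of the distro-to-cloud-archive mapping jamespage describes; the function name and the "havana" default are illustrative assumptions, not the cinder charm's actual code.

```python
# Illustrative sketch only: map openstack-origin=distro to a cloud archive
# pocket on 12.04, as the cinder charm reportedly does.  The helper name and
# the "havana" default are assumptions, not copied from any charm.
import subprocess


def effective_openstack_origin(configured_origin, target_release="havana"):
    codename = subprocess.check_output(["lsb_release", "-cs"]).decode().strip()
    if configured_origin == "distro" and codename == "precise":
        # precise's own archive only carries essex, so point at the cloud archive
        return "cloud:precise-%s" % target_release
    return configured_origin
```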
jamespage | yolanda, actually re the use of os.path and os.mkdir - charmhelpers has an mkdir function somewhere that encapsulates all of that | 13:17 |
jamespage | so you could just stub that out instead of dealing with os directly | 13:17 |
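A minimal, self-contained sketch of what stubbing that out could look like in a unit test, assuming the hook uses charmhelpers' mkdir; the install() body below is a stand-in for the real hook, not the heat charm's code (requires python-mock and charmhelpers).

```python
# Hypothetical example: stub out directory creation so the tests never touch
# /etc/heat on the machine running them.  install() stands in for the hook.
import unittest
from mock import patch

from charmhelpers.core.host import mkdir


def install():
    # stand-in for the part of the install hook that needs /etc/heat
    mkdir("/etc/heat", perms=0o755)


class InstallHookTest(unittest.TestCase):

    @patch(__name__ + ".mkdir")
    def test_install_requests_etc_heat(self, mock_mkdir):
        install()
        # only the *request* for /etc/heat is checked; nothing is written to disk
        mock_mkdir.assert_called_once_with("/etc/heat", perms=0o755)


if __name__ == "__main__":
    unittest.main()
```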
yolanda | jamespage, ok, i'll work on those fixes | 13:18 |
jamespage | yolanda, other than that looks pretty good - nice work! | 13:18 |
jamespage | not tried it yet - will do later today once you have updated | 13:19 |
yolanda | the most complicated part was making it work, i needed a 16G compute machine to run it :) | 13:19 |
jamespage | yolanda, oh - the icon is a ceph one btw | 13:19 |
jamespage | do ceilometer and heat have official icons just yet? | 13:19 |
jamespage | I don't think so - yolanda - just drop the icon for the time being | 13:21 |
jamespage | we can add that later | 13:21 |
yolanda | ok | 13:21 |
jamespage | ditto ceilometer | 13:21 |
jamespage | gonna work on that next | 13:21 |
yolanda | i bet ceilometer has changed a lot since i wrote it | 13:23 |
=== rogpeppe1 is now known as rogpeppe | ||
dweaver | Can't seem to find an equivalent to jitsu watch UNIT --state=started using juju-core. Is there any equivalent, or should I be parsing juju status output myself instead? | 13:43 |
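A rough sketch of the status-parsing workaround dweaver mentions: poll juju status and read the unit's agent-state from the YAML. The unit name, timeout, poll interval, and PyYAML dependency are assumptions for illustration.

```python
# Rough stand-in for "jitsu watch UNIT --state=started": poll juju status and
# parse the YAML until the unit reports the wanted agent-state.
import subprocess
import time

import yaml  # PyYAML


def wait_for_unit(unit, state="started", timeout=900, poll=10):
    service = unit.split("/")[0]
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(["juju", "status", service])
        units = yaml.safe_load(out)["services"][service].get("units", {})
        if units.get(unit, {}).get("agent-state") == state:
            return
        time.sleep(poll)
    raise RuntimeError("%s never reached agent-state %s" % (unit, state))


if __name__ == "__main__":
    wait_for_unit("mysql/0")  # example unit name only
```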
X-warrior` | How could I get out of a 'failed state' when juju resolved doesn't work? | 13:47 |
X-warrior` | :S | 13:47 |
X-warrior` | marcoceppi: are u around? | 14:26 |
=== _jmp__ is now known as _jmp_ | ||
=== freeflying_away is now known as freeflying | ||
dweaver | X-warrior`, try juju resolved multiple times. | 15:28 |
X-warrior` | I already tried it | 15:28 |
X-warrior` | A LOT | 15:28 |
dweaver | juju destroy-environment and re-deploy will work, of course. juju destroy-unit should remove the unit, but if that doesn't work then you need juju destroy-unit --force, which is in the development version. | 15:31 |
X-warrior` | destroy-environment will work, but I don't want to destroy everything | 15:32 |
X-warrior` | destroy-unit depends on machine state | 15:33 |
X-warrior` | uhmm | 15:35 |
yolanda | jamespage, what do you think would be a good place to store the heat auth key | 15:36 |
yolanda | ? | 15:36 |
jamespage | yolanda, I've been trying to standardize on /var/lib/charm/{service_name} for that type of stuff | 15:36 |
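A sketch combining this convention with the earlier auth_encryption_key suggestion: use the configured value if supplied, otherwise generate one and persist it under /var/lib/charm/&lt;service&gt;. The "auth-encryption-key" option name and the key file name are made up for illustration, not taken from the heat charm.

```python
# Illustrative only: "use the configured key, else generate once and persist it"
# under /var/lib/charm/<service_name>.
import binascii
import os

from charmhelpers.core.hookenv import config, service_name
from charmhelpers.core.host import mkdir


def get_auth_encryption_key():
    supplied = config("auth-encryption-key")  # hypothetical config option
    if supplied:
        return supplied
    state_dir = os.path.join("/var/lib/charm", service_name())
    key_file = os.path.join(state_dir, "auth_encryption_key")
    if os.path.exists(key_file):
        with open(key_file) as f:
            return f.read().strip()
    if not os.path.isdir(state_dir):
        mkdir(state_dir, perms=0o700)
    key = binascii.hexlify(os.urandom(16)).decode()  # 32 hex characters
    with open(key_file, "w") as f:
        f.write(key)
    return key
```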
yolanda | ok | 15:37 |
X-warrior` | dweaver: 1.16.3 does not have this --force option | 15:57 |
dweaver | I know, it is in development. see bug: https://bugs.launchpad.net/juju-core/+bug/1089291 | 15:58 |
_mup_ | Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <theme-oil> <juju-core:Fix Committed by fwereade> <juju-core 1.16:Fix Committed by fwereade> <https://launchpad.net/bugs/1089291> | 15:58 |
dweaver | So, it is in 1.16.4 | 15:59 |
dweaver | There is also a workaround in the bug report, at the bottom. | 16:03 |
X-warrior` | hazmat: are u there? | 16:09 |
hazmat | X-warrior`, i am | 16:25 |
* hazmat reads backlog | 16:26 | |
X-warrior` | hazmat: sorry to bother you, but I saw your script that does a direct db deletion and I think I need it... | 16:26 |
hazmat | X-warrior`, so this is juju-core.. could you pastebin the relevant portion of status | 16:27 |
hazmat | X-warrior`, failed state is rather ambiguous, do you mean a unit or a machine?.. a status pastebin would help understand what the issue is | 16:27 |
X-warrior` | just a sec | 16:27 |
X-warrior` | http://pastebin.com/YHM3bx2E | 16:30 |
X-warrior` | the problem is that juju resolved never resolves the problem, I've already tried executing it a lot... | 16:30 |
hazmat | X-warrior`, juju resolved on elasticsearch/0 or logstash-indexer/0 ? | 16:31 |
X-warrior` | I tried on both of them | 16:31 |
hazmat | X-warrior`, in one terminal window can you start juju debug-log .. and then in another do the resolve on either elasticsearch/0 or logstash-indexer/0.. and then pastebin the log output. | 16:33 |
X-warrior` | hazmat: http://pastebin.com/5pJ3KjpW | 16:36 |
hazmat | X-warrior`, are you doing resolved --retry or just resolved ? | 16:40 |
X-warrior` | just resolved | 16:40 |
X-warrior` | hazmat: http://pastebin.com/494HpENN looks better | 16:41 |
X-warrior` | I see a " cannot allocate memory", I guess because I'm using a micro instance... | 16:41 |
hazmat | X-warrior`, eek, yeah java on micros is rather ambitious | 16:42 |
hazmat | X-warrior`, i'd try to ssh into that machine and manually shutdown elasticsearch to free up some memory | 16:42 |
hazmat | X-warrior`, micros aren't really useful virtual machines in my experience, they are severely constrained and penalized at the hypervisor level. for useful work m1.small is about as small as i go.. t1.micros are good for mostly-static or network-io-bound workloads | 16:44 |
hazmat | er.. s/good/ok | 16:44 |
X-warrior` | I'm just starting on it, so testing stuff and checking how it works... | 16:44 |
X-warrior` | but thanks for the tip ;) | 16:45 |
X-warrior` | and for the help | 16:45 |
X-warrior` | let me try to ssh into it | 16:45 |
hazmat | re t1.micro and cpu penalization.. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html | 16:52 |
hazmat | X-warrior`, wrt future stuff that makes this nicer.. in the next dev release (1.17.0) we'll have destroy-machine --force machine_id, which will forcibly kill units on the machine | 16:56 |
X-warrior` | hazmat: yeap I heard about this. dweaver showed me the 'bug' on launchpad and then I found your name and script | 16:56 |
hazmat | X-warrior`, that script isn't appropriate for your case, it was specifically around a stuck machine with no units, the --force feature on trunk will handle both cases. | 16:58 |
X-warrior` | got it | 16:59 |
* X-warrior` running status to check if it worked | 16:59 | |
hazmat | X-warrior`, any luck shutting down elasticsearch? .. you'll probably need to rerun the resolved hook on it | 17:00 |
X-warrior` | Oh man, why on earth did I use these micro instances :( | 17:02 |
X-warrior` | and tried to put more than one service on the same machine | 17:02 |
* X-warrior` feels dumb | 17:02 | |
X-warrior` | elasticsearch is not marked as failed anymore... but logstash-indexer still shows the failure status... | 17:03 |
=== exekias_ is now known as exekias | ||
hazmat | X-warrior`, updated status pastebin pls | 17:08 |
hazmat | X-warrior`, elasticsearch/0 should be gone i would think | 17:08 |
X-warrior` | http://pastebin.com/7YN7rrwE | 17:09 |
hazmat | X-warrior`, looks good, that's progress.. so back to logstash-indexer | 17:10 |
X-warrior` | yeap | 17:10 |
X-warrior` | same process as before? one terminal with debug-log another with resolved? | 17:11 |
hazmat | X-warrior`, yes pls | 17:12 |
X-warrior` | "ERROR cannot set resolved mode for unit "logstash-indexer/0": already resolved" | 17:14 |
X-warrior` | but it is not resolved | 17:15 |
X-warrior` | Logstash agent-state is marked as down. Maybe it cannot do the resolve with it down | 17:17 |
hazmat | hmm | 17:21 |
hazmat | X-warrior`, can you log into that machine by ip, and manually restart the juju agent there | 17:21 |
hazmat | X-warrior`, ls /etc/init/juju* should tell you which upstart job it is.. then sudo service <name of file minus .conf> restart | 17:22 |
X-warrior` | it worked | 17:22 |
X-warrior` | I rebooted the machine | 17:22 |
X-warrior` | and then logstash and elasticsearch are gone | 17:22 |
hazmat | X-warrior`, cool | 17:23 |
X-warrior` | thanks | 17:24 |
X-warrior` | for all the help | 17:24 |
X-warrior` | :D | 17:24 |
X-warrior` | hazmat: thanks for all the help, I'm leaving now | 17:28 |
X-warrior` | dweaver: Thanks! :D | 17:28 |
hazmat | cheers | 17:28 |
aquarius | I'm trying to deploy marcoceppi's Discourse charm on Azure. juju status says "agent-state-info: 'hook failed: "install"'" for the "discourse" service. How do I work out what failed and start working out why? | 20:41 |
hazmat | aquarius, log into the machine and inspect the log at /var/log/juju ... alternatively, run juju resolved --retry while doing juju debug-log... | 20:52 |
aquarius | hazmat, that sounds useful. How do I log into the machine? (I am completely new to juju, as you may be able to tell. :)) | 20:53 |
hazmat | final option is interactive debug-hooks environment, https://juju.ubuntu.com/docs/authors-hook-debug.html | 20:53 |
hazmat | aquarius, juju ssh name-of-unit | 20:53 |
aquarius | ha! that's too easy :) | 20:53 |
hazmat | ie. juju ssh discourse/0 | 20:54 |
aquarius | and I am logged in. Best command ever :) | 20:54 |
aquarius | aha, it got a timeout while connecting to github. That sounds transient | 20:55 |
aquarius | so "juju resolved --retry" will retry the failed deploy? | 20:55 |
aquarius | or will it redeploy everything? | 20:55 |
aquarius | ah, I need to specify the unit | 20:56 |
aquarius | ok, this is just too simple. :) | 20:56 |
aquarius | hm. Maybe I was wrong. "juju resolved --retry discourse/0" returned (and if I run it again, it says that unit is already resolved), but juju status still shows that unit as being in agent-state: error. Shouldn't it be pending or something? | 20:58 |
aquarius | or do I have to force a redeploy from scratch, rather than just --retry? | 20:58 |
hazmat | aquarius, it's async | 20:59 |
aquarius | sure thing | 20:59 |
aquarius | I wasn't expecting it to hang until finished | 21:00 |
hazmat | aquarius, it will retry, but there | 21:00 |
aquarius | but I would expect juju status to *indicate* that it's retrying :) | 21:00 |
hazmat | is no guarantee as well that subsequent times will fix | 21:00 |
hazmat | yeah.. there's an outstanding issue to allow for some observation of hook execution | 21:00 |
hazmat | ie. when are things really done/steady state | 21:00 |
hazmat | as opposed to just workflow states | 21:00 |
aquarius | oh, cool, so it will work the way I expect eventually but doesn't right now. that's ok | 21:00 |
aquarius | I don't mind if what I expect to happen agrees with the plan but code hasn't quite caught up yet :) | 21:01 |
hazmat | well depends on what you expect, but yeah that should be partially addressed | 21:01 |
hazmat | aquarius, the async is pretty fast though (a few seconds).. so if it's not resolved it likely ran into the same issue again | 21:01 |
hazmat | i'd suggest logging into the instance and verifying github connectivity | 21:01 |
aquarius | and indeed it did fail again | 21:01 |
aquarius | although in a different place. | 21:02 |
hazmat | progress then | 21:02 |
aquarius | github connectivity seems to be *intermittent*, which is way worse than "not working at all" | 21:02 |
hazmat | aquarius, what's the error this time | 21:02 |
* aquarius gives azure a fishy look :) | 21:02 | |
hazmat | ah azure.. tis a special child, with an iaas veneer over paas concepts | 21:03 |
aquarius | new question: does the juju gui run on port 80? That is: can I juju deploy the gui to a machine which already has a web thing on it? | 21:03 |
hazmat | aquarius, it runs on port 443, but also listens on 80 to do auto redirect to 443 | 21:03 |
aquarius | right, so if I have a web thing on that machine I should not deploy the gui to it | 21:04 |
aquarius | I could give the gui its own machine, of course, but I'm currently using the azure free trial and I don't want it to eat the free trial cash amount too quickly ;) | 21:04 |
hazmat | aquarius, yeah.. i'd recommend not, i haven't tried it, but i imagine the proxy backend that it's running will barf on the port already bound for 80.. would be nice if that were configurable | 21:05 |
* aquarius nods | 21:05 | |
hazmat | since it's not functionally used.. i.e. web app on port 80, gui on 443 | 21:05 |
* aquarius looks irritably at azure. It keeps failing github connections, but not all of them. | 21:06 | |
* hazmat files a bug against gui | 21:06 | |
hazmat | azure services dashboard is all green .. http://www.windowsazure.com/en-us/support/service-dashboard/ | 21:08 |
hazmat | not clear what the issue is | 21:08 |
hazmat | github status also green https://status.github.com/ | 21:08 |
aquarius | yeah, something weird going on. It's dying on doing github stuff, but not in the same places, and previously failed commands work sometimes | 21:08 |
aquarius | I love intermittent failures :) | 21:08 |
hazmat | thats a special kind ;-) | 21:09 |
aquarius | maybe it's a github problem | 21:09 |
aquarius | rather than an azure problem | 21:09 |
aquarius | could be blaming azure unfairly here :) | 21:09 |
hazmat | aquarius, also a possibility | 21:09 |
hazmat | anecdotally it seems to be okay for me in casual testing | 21:10 |
hazmat | couple of pulls | 21:10 |
aquarius | yeah, it's succeeding most of the time | 21:10 |
aquarius | but discourse pulls about 500 things | 21:10 |
aquarius | :) | 21:10 |
aquarius | things that would be nice: something which gets kicked off when I do a deploy, sits in the background, and somehow tells me if the deploy goes wrong, rather than having to juju status to see what's happened | 21:11 |
hazmat | there's an api for that but yeah, it would be good to integrate that into the cli | 21:11 |
hazmat | with desktop notification | 21:11 |
aquarius | also, resuming a build from the last place we got to would be nice, but that's not juju's fault, that's the discourse charm's job | 21:11 |
aquarius | hm, it is possible that it actually *is* resuming from the last place it got to :) | 21:12 |
aquarius | if that's the case then I will make slow and irritating progress, over time :) | 21:13 |
hazmat | aquarius, depends on the charm implementation, juju is just re-executing the install hook | 21:14 |
* hazmat pokes at the discourse charm | 21:15 | |
hazmat | marcoceppi, you around? | 21:15 |
aquarius | I pung him a while back, so I don't think he is. Hopefully he will be over the weekend and I can try again :) | 21:16 |
hazmat | aquarius, so the apt installs are intelligent about no-op'ing when things are already present. | 21:16 |
aquarius | ruby bundler, probably less so | 21:16 |
aquarius | everyone thinks they can write a package manager ;) | 21:16 |
hazmat | aquarius, and it looks like the git pieces will just refetch/update instead of doing a full clone | 21:16 |
hazmat | aquarius, yup.. there's at least one for every language | 21:17 |
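A sketch of the re-runnable install pattern being described here: apt already no-ops on installed packages, and the git step updates an existing checkout instead of re-cloning, so resolved --retry effectively resumes where the hook left off. The paths, repo URL, and package list are illustrative, not the discourse charm's actual hook.

```python
# Illustrative re-runnable install steps (not the discourse charm's real hook):
# apt-get no-ops for packages that are already installed, and the git step
# fetches/updates an existing checkout rather than cloning again.
import os
import subprocess


def apt_install(packages):
    subprocess.check_call(["apt-get", "install", "-y"] + packages)


def git_checkout(repo, dest, branch="master"):
    if os.path.isdir(os.path.join(dest, ".git")):
        # existing checkout: just fetch and reset, don't re-clone
        subprocess.check_call(["git", "fetch", "origin"], cwd=dest)
        subprocess.check_call(["git", "reset", "--hard", "origin/" + branch], cwd=dest)
    else:
        subprocess.check_call(["git", "clone", "-b", branch, repo, dest])


if __name__ == "__main__":
    apt_install(["git", "build-essential"])
    git_checkout("https://github.com/discourse/discourse.git", "/srv/discourse")
```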
aquarius | hm! we may have got past the git bits | 21:18 |
aquarius | progress! | 21:18 |
aquarius | 2013-11-29 21:19:03 INFO juju.worker.uniter context.go:255 HOOK Gem::InstallError: rake-compiler requires RubyGems version >= 1.8.25. Try 'gem update --system' to update RubyGems itself. | 21:19 |
aquarius | that has to be a dependency bug in the charm. | 21:20 |
marcoceppi | 0/ | 21:22 |
marcoceppi | aquarius: there's a version of the charm not pushed yet, a wip branch that uses rvm + ruby 2.0 | 21:23 |
marcoceppi | are you using the charm store version or the one on github? | 21:23 |
dpb1 | marcoceppi: where is a good starter example of amulet? | 21:24 |
marcoceppi | dpb1: let me dig you up an example | 21:25 |
dpb1 | k | 21:25 |
aquarius | marcoceppi, heya! | 21:28 |
aquarius | marcoceppi, I'm using the one in the charm store | 21:28 |
aquarius | aquarius@faith:~ $ juju deploy cs:~marcoceppi/discourse | 21:28 |
aquarius | should I have done something else? | 21:29 |
marcoceppi | aquarius: that one lags behind by a bit, as you can see discourse travels at a fast pace :) | 21:33 |
marcoceppi | aquarius: let me find the latest stable and push it up to the charm store | 21:33 |
aquarius | cool | 21:33 |
marcoceppi | aquarius: it fetches a ton of deps via gem though, that's an upstream thing and nothing I can really do about that :) | 21:34 |
marcoceppi | I've got some code to streamline the process from them, not yet implemented though | 21:34 |
aquarius | marcoceppi, yeah :) | 21:35 |
aquarius | marcoceppi, so, once you've pushed a new version to the charm store, can I deploy the new version over the top of the old one? | 21:35 |
aquarius | or do I have to kill all the existing vms and deploy from scratch? | 21:36 |
marcoceppi | aquarius: not really, I don't make any guarantees with upgrades, one of the reasons why it's still in a personal branch and not in the store. Once they get a 1.0 out I'll probably stabilize the charm and push it to cs:precise/discourse | 21:37 |
hazmat | aquarius, generically you can switch charm origins.. see --switch in juju upgrade-charm -h, and point it to a local branch checkout | 21:37 |
aquarius | so from what marcoceppi is saying I'm best to kill this setup stone dead and start fresh with the new charm once it's in the charm store, yes? | 21:37 |
marcoceppi | aquarius hazmat: that's true, but there's a big difference in the charm between what you have and the current version (building ruby from rvm, etc) and there's no upgrade-charm hook | 21:38 |
hazmat | marcoceppi, in this case it never made it through the install hook, is that reasonably safe? | 21:38 |
aquarius | hazmat, I'm happy to destroy the existing deployment | 21:38 |
hazmat | aquarius, that's the safest way | 21:38 |
marcoceppi | hazmat aquarius if you're willing to try, sure! | 21:38 |
aquarius | how do I do it? :) | 21:38 |
hazmat | in terms of not creating new errors | 21:38 |
hazmat | aquarius, so.. | 21:38 |
marcoceppi | hazmat: however, won't it not work because the install hook is in error? | 21:38 |
hazmat | marcoceppi, not with --switch and --force | 21:39 |
marcoceppi | hazmat: ah, the good 'ol --force flag | 21:39 |
aquarius | am I best to juju destroy-environment which kills the whole thing, and then juju bootstrap again from the start? | 21:39 |
aquarius | I don't think I understand what an environment is :) | 21:39 |
hazmat | marcoceppi, it's an intended workflow for just this case.. i.e. switching origins to fix/work on/customize a charm. | 21:39 |
hazmat | tis a good flag indeed | 21:40 |
hazmat | aquarius, so azure takes forever | 21:40 |
hazmat | aquarius, there's another option | 21:40 |
hazmat | aquarius, which is destroy-service and terminate-machine.. it's a bit annoying with juju-core in that you have to resolve errors first. There's a juju deployer plugin on pypi that automates this into juju-deployer -TW ... and it does watch what's happening and tell you what's going on (re your earlier suggestion) | 21:41 |
hazmat | it's also a nice declarative way to build a topology of services. | 21:41 |
aquarius | one step at a time. I'd rather do things the slow stupid way at first and then get clever later :) | 21:42 |
hazmat | ttyl :-) | 21:42 |
aquarius | so I understand this: juju bootstrap creates an environment? So if I juju destroy-environment, then the whole thing goes away, and then I can just juju bootstrap again? | 21:42 |
hazmat | give me a ping when its done, i'll be around. | 21:42 |
aquarius | I'm stopping shortly and going to the pub anyway. It is Friday night ;) | 21:43 |
hazmat | aquarius, yes, azure is slightly different in that destroy involves synchronous destruction of various internal azure resources. | 21:43 |
hazmat | it's a bit time consuming, and O(n) in the size of the environment. It is pretty reliable (imo) | 21:43 |
hazmat | aquarius, so yeah.. slow but safe, works | 21:44 |
aquarius | that's the plan here :) | 21:44 |
* aquarius destroys the environment | 21:44 | |
aquarius | if it's not done in ten minutes or so, I'll resume tomorrow :) | 21:44 |
hazmat | cheers | 21:44 |
aquarius | marcoceppi, are you planning on pushing to the charm store soon? If not, can I use the github version of the charm directly? I'm happy to do whichever's convenient for you :) | 21:45 |
marcoceppi | aquarius: let me test to make sure what I'm giving you works | 21:45 |
marcoceppi | aquarius: spinning up on hp cloud atm | 21:45 |
marcoceppi | aquarius: I'm getting a segfault in the charm now during install, poking now | 22:01 |
aquarius | marcoceppi, this sounds like good progress, though! nice one : | 22:01 |
aquarius | :) | 22:02 |
aquarius | I have to go out, but I'll leave irc open if you want to drop comments in here, and thank you again! | 22:16 |
marcoceppi | aquarius: np, still trying to figure out why rvm is segfaulting | 22:18 |
marcoceppi | aquarius: okay, figured it out. Moving on to the rest of the charm for testing | 22:23 |