fuzzy | Hi there, I've been having a problem getting mongodb to deploy. I believe that the mongodb charm needs an 'apt-get install python-yaml' before calling hooks.py in this file: https://github.com/charms/mongodb/blob/trusty/hooks/install based on this bug https://bugs.launchpad.net/charms/+source/haproxy/+bug/1302642 | 00:20 |
mup | Bug #1302642: haproxy missing install dependency <landscape> <haproxy (Juju Charms Collection):Confirmed for mbruzek> <https://launchpad.net/bugs/1302642> | 00:20 |
fuzzy | I've tried now from linode with manual provisioning. I ended up adding it to my node create script there. Today I tried to follow the Digital Ocean Juju steps and deploying to 3 nodes fails all in the same spot asking for python-yaml | 00:21 |
fuzzy | http://hastebin.com/xukaxitalu.sm | 00:22 |
fuzzy | https://github.com/kapilt/juju-digitalocean | 00:22 |
lazyPower | fuzzy: you're going to need to patch the charm, or submit a PR so it can be patched upstream in the charm store | 00:22 |
fuzzy | I don't have the first clue on how to do that properly | 00:23 |
lazyPower | take a look at hooks/install | 00:23 |
fuzzy | Yes I see that | 00:24 |
fuzzy | is it just a parsed bash script? | 00:24 |
fuzzy | That's what I assume from that Launchpad haproxy bug | 00:24 |
lazyPower | nothing parsed about it; it is, by definition, a bash script. | 00:24 |
fuzzy | well it's missing the bang at the top so I didn't know | 00:24 |
lazyPower | are you looking at precise or trusty? | 00:25 |
lazyPower | i've got a copy of precise, and it has the shebang | 00:25 |
fuzzy | I have Linode with precise and DO (Digital Ocean) with trusty | 00:25 |
fuzzy | I'm experiencing the same problem in both places | 00:25 |
fuzzy | I haven't looked at the precise github yet | 00:25 |
fuzzy | https://github.com/charms/mongodb/blob/precise/hooks/install | 00:26 |
fuzzy | it's missing there too | 00:26 |
lazyPower | looks like it's a symlink, and the precursor bash script was removed | 00:26 |
lazyPower | interesting | 00:26 |
lazyPower | LOL and I did that | 00:27 |
lazyPower | nice | 00:27 |
fuzzy | ...... | 00:27 |
fuzzy | Well at least I know who to talk to about it | 00:27 |
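What the fix amounts to is restoring a small bash shim in front of the Python dispatcher. A minimal sketch, assuming hooks.py accepts the hook name as an argument (some charms dispatch on the symlink name instead, so the actual patch may differ):

```bash
#!/bin/bash
# hooks/install: pull in the Python dependency before handing off to hooks.py
set -e
apt-get update
apt-get install -y python-yaml   # hooks.py imports yaml, which isn't preinstalled everywhere
exec "$(dirname "$0")/hooks.py" install
```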
lazyPower | file a bug; I can't push a fix for that tonight | 00:27 |
* lazyPower cannot push his own changes | 00:27 | |
fuzzy | Where, lp or github? | 00:27 |
lazyPower | LP | 00:27 |
fuzzy | do you know that link by heart yet? | 00:27 |
sarnold | heh, fuzzy, do you? :) | 00:28 |
lazyPower | http://bugs.launchpad.net/charms/+source/mongodb | 00:28 |
fuzzy | sarnold: ye | 00:28 |
fuzzy | so lazyPower what is juju-mongodb then? | 00:28 |
lazyPower | a juju fork of mongodb for systems that don't have the 'mongodb' package installed already. | 00:29 |
fuzzy | ah | 00:29 |
sarnold | (the juju-mongodb has been neutered to not do javascript, us buzzkills from the security team don't want to support N different javascript implementations.) | 00:30 |
fuzzy | https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1369792 | 00:31 |
mup | Bug #1369792: Python yaml missing from mongodb install on precise and trusty <mongodb (Ubuntu):New> <https://launchpad.net/bugs/1369792> | 00:32 |
sarnold | hmm, that looks like the mongodb source package, not the mongodb charm | 00:35 |
lazyPower | https://bugs.launchpad.net/charms/+source/mongodb | 00:35 |
lazyPower | fuzzy: make sure you open that bug against the proper project. You *did* open that against the ubuntu mongo package | 00:36 |
fuzzy | wonderful | 00:36 |
lazyPower | I moved it for you | 00:36 |
sarnold | hooray, that was easier than I feared | 00:37 |
lazyPower | sarnold: if you run across any more bugs like that, you change the project from Ubuntu to Juju Charms Collection | 00:37 |
sarnold | lazyPower: thanks | 00:37 |
lazyPower | and if you want confirmation on any of the moved bugs, feel free to ping me with the URL | 00:37 |
fuzzy | well that made that easy | 00:38 |
fuzzy | I didn't see a delete button anywhere | 00:38 |
lazyPower | we don't typically delete bugs; we mark them as invalid | 00:39 |
lazyPower | or incomplete | 00:39 |
fuzzy | So when should I try again? | 00:40 |
lazyPower | subscribe to bug mail and you'll be notified when your bug is resolved. | 00:41 |
lazyPower | if you want a brute force fix for now, just install python-yaml on the hosts as they error, then re-run the hook | 00:41 |
lazyPower | juju resolved -r service/# | 00:41 |
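Spelled out per failed unit (the unit name here is illustrative):

```bash
juju ssh mongodb/0 'sudo apt-get install -y python-yaml'   # add the missing dependency by hand
juju resolved -r mongodb/0                                 # mark resolved and re-run the failed hook
```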
fuzzy | I'm actually going to go try now on linode again now that I understand how that charm works better and hope for the best | 00:42 |
fuzzy | lazyPower: you are actually back now or just sudo back? | 00:42 |
lazyPower | watching TV, enjoying my evening, chatting on my laptop and triaging bugs | 00:43 |
lazyPower | so take that under considering | 00:43 |
lazyPower | *consideration | 00:43 |
fuzzy | Where is the data for charms stored on a node? | 00:43 |
lazyPower | /var/lib/juju/unit-service-#/charm | 00:44 |
lazyPower | I may have missed a directory in there, but that's the roundabout path | 00:44 |
fuzzy | so basically all juju stuff on a drone would be in /var/lib/juju ? | 00:44 |
fuzzy | The reason I ask, is I would like to move it to zfs so mongodb used zfs in the backend | 00:45 |
lazyPower | you're speaking greek to me | 00:49 |
lazyPower | we have no concept of zfs enablement on our charms to date. | 00:49 |
lazyPower | and just because the charm's payload is /var/lib/juju/* doesn't mean that's where mongodb is going to place its data | 00:49 |
lazyPower | you'll need to evaluate a deployed mongodb host to glean that; I think it stores data in /var/lib/mongo but don't quote me on that. | 00:50 |
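A quick way to check both guesses on a live unit; the paths below are assumptions to verify, not gospel:

```bash
# Charm payload: juju 1.x keeps it under an 'agents' directory
juju ssh mongodb/0 'ls /var/lib/juju/agents/unit-mongodb-0/charm'
# Actual data directory: read dbpath from the daemon's own config
juju ssh mongodb/0 'grep ^dbpath /etc/mongodb.conf'
```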
fuzzy | Oh ok, so they aren't in some kind of docker container on the node as well. They run right native on the thing | 00:52 |
fuzzy | Awesome | 00:52 |
lazyPower | nope, it's all native on the node unless you specify | 01:00 |
lazyPower | --to lxc:# when you deploy | 01:00 |
lazyPower | eg: juju deploy mongodb --to lxc:1 (which will place mongodb in an lxc container on node 1) | 01:01 |
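Extending that pattern to a three-unit replica set might look like this (machine numbers are illustrative, and the machines must already exist):

```bash
juju deploy mongodb --to lxc:1     # first unit, in a container on machine 1
juju add-unit mongodb --to lxc:2   # second replica on machine 2
juju add-unit mongodb --to lxc:3   # third replica on machine 3
```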
fuzzy | lazyPower: Alright so I've managed to get a 3 replica node mongodb going and expose it @ linode with little effort | 01:15 |
fuzzy | http://hastebin.com/iladimaxah.sm | 01:17 |
lazyPower | fuzzy: whats the tl;dr? | 01:28 |
fuzzy | http://hastebin.com/uboqezexob.sm | 01:28 |
fuzzy | I got precise / trusty mixed nodes with 3 in a working replica doing mongodb and 3 doing meteor.js | 01:29 |
lazyPower | there ya go | 01:29 |
lazyPower | looks like you have found the secret sauce | 01:29 |
sarnold | ooo | 01:29 |
fuzzy | almost | 01:30 |
fuzzy | hmmmm | 01:30 |
fuzzy | Do I have to expose mongodb for it to work correctly? | 01:33 |
lazyPower | nope, it should be linked/communicating over the private network | 01:44 |
jose | whit: hey, mind a quick PM? | 02:36 |
jose | whit: hey, mind a quick PM? | 02:40 |
lazyPower | jose: he's out for the day and probably for the rest of the week | 02:51 |
jose | lazyPower: oh, ok :( | 02:52 |
whit | jose, I'm about to sign off; wanna catch up in the morning? | 02:54 |
jose | whit: sure thing, not a prob :) have a good night! | 02:54 |
whit | jose: you too! | 02:54 |
jose | marcoceppi: I believe that unit testing is fine unless it *requires* something to function. Like, in the case of wordpress, it would be integration testing with mysql as a bare minimum; but in subway, which doesn't require anything, unit testing would be fine for me | 02:55 |
=== CyberJacob|Away is now known as CyberJacob | ||
=== uru_ is now known as urulama | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== MasterPiece_ is now known as MasterPiece | ||
=== fabrice is now known as fabrice|lunch | ||
=== fabrice|lunch is now known as fabrice | ||
=== beuno_ is now known as beuno | ||
mbruzek | good morning Jose | 13:59 |
jose | hello, mbruzek :) | 14:01 |
rick_h_ | mbruzek: hey, question for you | 14:01 |
mbruzek | rick_h_, go ahead | 14:02 |
rick_h_ | mbruzek: if I have an orange box, and I've installed openstack on it, is there a note/doc on what I need to get from that openstack install to configure it as a provider in juju? | 14:02 |
mbruzek | rick_h_, I only know of Kirkland's orange box set up document that talks about how to get maas working with Juju. | 14:03 |
mbruzek | rick_h_, if you have openstack already running I would start with https://juju.ubuntu.com/docs/config-openstack.html | 14:04 |
rick_h_ | mbruzek: yea, that doesn't seem to have the info I'm looking for there. Figured I'd bug you about it in case you guys did anything with it in your sprint | 14:04 |
mbruzek | rick_h_, we did some things with it, but not setting up openstack. | 14:04 |
rick_h_ | mbruzek: cool thanks | 14:04 |
mbruzek | rick_h_, if you have specific questions post them here and perhaps we can learn together. | 14:05 |
jose | rick_h_: I guess it's just a matter of deploying openstack and then setting the credentials in node 0's environments.yaml | 14:05 |
rick_h_ | jose: yea, I assume somewhere in all these openstack services are the secret/info I need for the juju provider | 14:06 |
rick_h_ | jose: I was hoping to cheat by skipping a bunch of research and making mbruzek tell me :P | 14:06 |
jose | rick_h_: hehe, I guess the docs are the best option then :P | 14:06 |
mbruzek | sorry I don't have that information for you rick_h_ | 14:07 |
rick_h_ | mbruzek: all good, will carry on | 14:07 |
marcoceppi | jose: I think you're missing the point of unit testing | 14:17 |
marcoceppi | rather, the definition of unit testing | 14:17 |
marcoceppi | what you've described with wp and mysql is integration testing; you can unit test even if you need an additional service by simply mocking what mysql /would/ be providing | 14:18 |
jrwren_ | rick_h_: if horizon is running, go get the credentials from the openrc file? | 14:18 |
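jrwren_'s suggestion maps onto the provider stanza from the config-openstack page mbruzek linked; a sketch only, with placeholder values standing in for what a typical openrc exports:

```bash
# Append an openstack environment to ~/.juju/environments.yaml
# (assumes an 'environments:' block already exists at the top of the file;
#  every value below is a placeholder taken from the matching openrc variable)
cat >> ~/.juju/environments.yaml <<'EOF'
    my-openstack:
        type: openstack
        auth-url: http://keystone.example.com:5000/v2.0/   # OS_AUTH_URL
        region: RegionOne                                  # OS_REGION_NAME
        tenant-name: admin                                 # OS_TENANT_NAME
        auth-mode: userpass
        username: admin                                    # OS_USERNAME
        password: secret                                   # OS_PASSWORD
EOF
```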
kirkland | rick_h_: hi | 14:20 |
rick_h_ | kirkland: howdy | 14:20 |
kirkland | rick_h_: so the OpenStack provider on the orange box... the only way I've done it so far is through the Landscape OpenStack Installer | 14:21 |
rick_h_ | kirkland: ok, cool | 14:21 |
kirkland | rick_h_: from a bare openstack, you could do it too, but you'll need to take a few more steps | 14:21 |
kirkland | rick_h_: namely, creating networks and routers | 14:21 |
kirkland | rick_h_: otherwise, your instances won't be reachable | 14:21 |
rick_h_ | kirkland: ok, no problem then. I was just curious as I had things this far I was tempted to try to see if I could take it the next step | 14:22 |
kirkland | rick_h_: we don't have docs yet on how to do that | 14:22 |
rick_h_ | thanks | 14:22 |
kirkland | rick_h_: but, if you were to compile that howto, you'd be a hero ;-) | 14:22 |
rick_h_ | that'd save me some time searching through the various dashboards/interfaces for the bits required | 14:22 |
rick_h_ | heh, well I've only got it for today and we're still QA'ing the GUI stuff, so not sure I'll get that done. Maybe some sprint, though, it'd be cool to have one of these on site and work that through. | 14:22 |
jose | marcoceppi: unit testing is fine for me, as I said | 14:25 |
=== scuttlemonkey is now known as scuttle|afk | ||
marcoceppi | jrwren_: you deployed with juju? | 14:25 |
jrwren_ | marcoceppi: no. I think I misunderstood rick_h_'s question. | 14:25 |
marcoceppi | oh, there was more above the fold | 14:26 |
rm1 | Hi there. I would like to set up an image to use a port other than 22 for ssh, as 22 is blocked by security. | 14:48 |
rm1 | the target would be AWS | 14:49 |
rm1 | or would it be possible to define a custom VPC? That would be very useful | 14:56 |
marcoceppi | rm1: what do you mean port 22 is blocked? | 15:22 |
=== med_` is now known as medberry | ||
=== medberry is now known as med_ | ||
=== med_ is now known as med | ||
=== med is now known as med_ | ||
=== roadmr is now known as roadmr_afk | ||
rcj | For charm testing with amulet, is there a way to fire hooks as I would from juju-run? | 17:49 |
mbruzek | rcj, Other than adding relations and the normal hook firing? | 17:50 |
rcj | mbruzek, yes. I'd like to run something on a specific unit just as I would with juju-run. I have code that runs periodically in the charm and is triggered in that fashion. | 17:51 |
mbruzek | rcj there is a UnitSentry.run(command) method. Can you call it that way? | 17:53 |
mbruzek | d.sentry.unit['ubuntu/0'].run('whoami') | 17:53 |
rcj | mbruzek, yeah. I think that will work, thanks. | 17:54 |
mbruzek | rcj glad to help | 17:54 |
rcj | Is there a way to know what environment the charm test is running in so that I can provide different configs for each (different config options and constraints)? | 18:13 |
mbruzek | rcj, JUJU_ENV maybe? https://juju.ubuntu.com/docs/charms-environments.html | 18:34 |
mbruzek | rcj, the problem is someone could name hp-cloud as hp-mbruzek so the name is not really a deterministic way to do it. | 18:34 |
mbruzek | rcj, We *just* wrote this documentation at our sprint: https://juju.ubuntu.com/docs/reference-environment-variables.html | 18:35 |
rcj | mbruzek, exactly. I would like to know how the automated tests are run for promulgated charms so that I can provide environment specific constraints | 18:35 |
rcj | mbruzek, so JUJU_ENV won't be sufficient. | 18:36 |
rcj | or I'll just have to nerf the testing | 18:36 |
rcj | but I can do more if I know where it's running at runtime | 18:37 |
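A sketch of the branching rcj is after, with mbruzek's caveat baked in: environment names are user-chosen, so matching on JUJU_ENV is a heuristic, not a guarantee. The charm name and constraint values are illustrative:

```bash
#!/bin/bash
# Hypothetical test wrapper: pick constraints from the (user-chosen) environment name.
case "${JUJU_ENV:-unknown}" in
    *amazon*|*ec2*) CONSTRAINTS="mem=4G" ;;    # bigger instances on EC2
    *local*)        CONSTRAINTS="" ;;          # no constraints for the local provider
    *)              CONSTRAINTS="mem=2G" ;;    # conservative default elsewhere
esac

if [ -n "$CONSTRAINTS" ]; then
    juju deploy --constraints "$CONSTRAINTS" mycharm   # 'mycharm' stands in for the charm under test
else
    juju deploy mycharm
fi
```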
mbruzek | rcj I have been given guidance to make the tests run on all environments. | 18:37 |
mbruzek | tvansteenburgh (who should be back tomorrow) should be able to answer that question better than I can. | 18:38 |
rcj | mbruzek, it will, but there is a config option to set up ephemeral storage. The number of volumes available will depend on instance type and device name will depend on the cloud. | 18:38 |
mbruzek | rcj he is working on the automated testing. | 18:38 |
rcj | mbruzek, thanks. | 18:38 |
=== roadmr_afk is now known as roadmr | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== bloodearnest_ is now known as bloodearnest | ||
bloodearnest | rick_h_: yo dude, do bundles still need to use cs charms? Any option yet for local ones? (for demo purposes) | 19:48 |
rick_h_ | bloodearnest: yes, afaik they do still. | 19:50 |
rick_h_ | bloodearnest: next cycle we'll hopefully have fixes for that into place across everything. | 19:50 |
* rick_h_ apologizes for non-awesome answer | 19:50 | |
bloodearnest | rick_h_: ok, thanks. No worries, difficult problem. Will try this personal namespace in the charmstore thing that lazyPower blogged about | 19:51 |
rick_h_ | bloodearnest: definitely +1 there | 19:51 |
bloodearnest | rick_h_: am doing a django stack demo (python conference), is hadoop still the best general purpose demo? | 19:53 |
rick_h_ | bloodearnest: I think jcastro and marcoceppi have put a lot of time into the elasticsearch demo | 19:53 |
rick_h_ | bloodearnest: might be useful if you can do some fulltext search on there | 19:53 |
bloodearnest | nice, will look at that | 19:53 |
rick_h_ | bloodearnest: there were some scripts to preload data and such | 19:53 |
natefinch | why the heck do you have to resolve a hook failure before you can destroy a service? | 19:55 |
rick_h_ | natefinch: you tell me :P | 19:55 |
natefinch | rick_h_: no idea, but it's dumb as rocks. | 19:56 |
rick_h_ | natefinch: +1 would love to see that go | 19:56 |
jrwren_ | natefinch: so that the departed hooks can run. | 20:07 |
jrwren_ | it would be nice if there was a --force | 20:07 |
natefinch | jrwren_: I guess I assume that if the hooks are broken, they're broken, and making me resolve them is not going to make anything better. Especially true if it's the install hook that has failed. | 20:11 |
natefinch | jrwren_: plus, the really bad part is that destroy-service doesn't actually fail. It just silently doesn't do anything except set the life of the service to dying, but it'll sit there forever in dying unless you resolve it. | 20:12 |
jrwren_ | natefinch: definitely true when doing charmdev. Less true when install fails for reasons such as the network being temporarily unavailable, where --retry will succeed. | 20:12 |
natefinch | jrwren_: but destroy service should destroy the service. | 20:12 |
natefinch | jrwren_: but yes, a --force option would probably fix most of this | 20:13 |
jrwren_ | natefinch: remember destroy is just an alias for remove. If I'm in a state such that install did succeed, then I'd expect the departed hooks to run on remove. | 20:13 |
jrwren_ | natefinch: and maybe if it never installed, --force automatically? | 20:14 |
natefinch | jrwren_: yeah, definitely if there's a situation where we know it's "safe" to do --force automatically, just do it. That's one thing that bugs me about a lot of the juju command line commands... they rarely "just do the right thing" when the right thing is obvious. | 20:16 |
jrwren_ | jrwren_: Same here. It makes for a steeper learning curve for a new juju user or charm author too. | 20:19 |
natefinch | jrwren_: we're working on it. My team is actively trying to make life easier for charm authors in particular right now. There's some cool stuff coming down the pipeline. | 20:19 |
jrwren_ | jrwren_: YAY! It's a great time for juju IMO :) | 20:20 |
jrwren_ | and autocompleting the wrong name means I've had a long enough day :) | 20:20 |
=== Guest77540 is now known as wallyworld | ||
natefinch | marcoceppi: wow, holy crap, it finally works: http://54.160.155.32/ | 20:38 |
fuzzy | congrats | 20:38 |
natefinch | man that was a pain in the butt.... totally not anything to do with charming, just.... random programming crap | 20:39 |
fuzzy | The other side of the cutting edge | 20:40 |
fuzzy | You gotta watch it, it's sharp and loves the taste of blood, sweat, and most of all, frustration | 20:40 |
natefinch | and it only takes 20 minutes to install.... geez. | 20:43 |
fuzzy | I made two linode scripts that make manual provisioning and deployment a lot easier | 20:44 |
fuzzy | I've got bootstrap with drone nodes down to less than 10 minutes | 20:50 |
natefinch | what's funny is that this is a docker image I'm deploying, I would expect it to be super fast, but it looks like they're doing a boatload of stuff when they start the docker instance, so.... yeah. | 20:51 |
aisrael | It looks like I'm running into an issue with nfs and the local provider. Any known issues with that? | 21:07 |
mbruzek | aisrael, Yes | 21:11 |
mbruzek | aisrael, lazyPower has indicated that nfs on LXC does not work but I googled this problem and found several things that claimed to be workarounds. | 21:12 |
aisrael | I found a related bug that lazyPower commented on, but that workaround doesn't seem to work (https://bugs.launchpad.net/charms/+source/nfs/+bug/1251619) | 21:13 |
mup | Bug #1251619: nfs charm fails in install hook <nfs (Juju Charms Collection):New> <https://launchpad.net/bugs/1251619> | 21:13 |
mbruzek | aisrael, I tried what was described here: http://technuts.tru.my/2013/08/28/how-to-mount-nfs-in-lxc-container/ | 21:14 |
mbruzek | But I was not able to get it working, so I moved on. Theoretically, if something needs to be done on the client side, the charm could run those commands. | 21:15 |
aisrael | Ok. I'm going to try to get it working, and then add a caveat to the nfs README so it's documented. | 21:16 |
mbruzek | The web page suggests doing something on the server side too. | 21:16 |
aisrael | Fixing apparmor on the lxc host is a fix I've seen a couple of times, but it hasn't worked for me (yet) | 21:16 |
mbruzek | aisrael, if anyone can get it working you would be the man to do it! | 21:17 |
* mbruzek is interested in a solution too. | 21:17 | |
aisrael | heh, thanks for the vote of confidence :D | 21:17 |
mbruzek | aisrael, if you get it working please let me know. That has been a thorn in our sides for quite some time, and is preventing us from writing some charm tests. | 21:18 |
aisrael | mbruzek: ack. I'll find a way to make this work. | 21:18 |
mbruzek | aisrael, are you running both the host and server in LXC or just one? | 21:18 |
mbruzek | s/host/client | 21:19 |
aisrael | host is running in a vagrant vm, hosts in lxc. | 21:19 |
aisrael | s/hosts/clients | 21:21 |
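For reference, the apparmor workaround aisrael mentions is usually written up along these lines; details vary by release, and as the thread above shows, it is not reliable:

```bash
# On the LXC host: allow nfs mounts from inside containers by adding these
# rules INSIDE the profile block of /etc/apparmor.d/lxc/lxc-default
# (file location is an assumption; check your release):
#     mount fstype=nfs,
#     mount fstype=nfs4,
#     mount fstype=rpc_pipefs,
sudo service apparmor reload        # reload the edited profiles
sudo lxc-stop -n my-container       # then restart affected containers
sudo lxc-start -n my-container -d   # 'my-container' is a placeholder name
```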
=== kentb is now known as kentb-out | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== elarson_ is now known as elarson | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== CyberJacob|Away is now known as CyberJacob |