=== CyberJacob is now known as CyberJacob|Away | ||
=== freeflying_away is now known as freeflying | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== Kyle is now known as Guest51501 | ||
=== mthaddon` is now known as mthaddon | ||
ghartmann | good morning | 09:26 |
lazypower | 'morning | 09:39 |
ghartmann | I wonder if anyone is using juju to deploy python / django apps. | 10:42 |
ghartmann | I started to look at the charm "python-django" but I didn't find very good documentation for it. | 10:43 |
lazypower | ghartmann: Without having looked at it, what are you expecting to be documented that isn't? | 10:49 |
ghartmann | mainly I don't get how I am supposed to set the requirements and the repository | 11:01 |
lazypower | ghartmann: are you using the GUI or strictly CLI? | 11:04 |
ghartmann | I am using the GUI primarily | 11:04 |
ghartmann | but it's not a problem to use it on the console | 11:04 |
lazypower | http://i.imgur.com/dgAcL0k.png | 11:05 |
lazypower | there's the configuration field for the repository. | 11:05 |
lazypower | I'm not super familiar with Django, so i'm reading up on the requirements settings | 11:05 |
lazypower | i'm assuming that you're checking out from a Github repository? | 11:06 |
ghartmann | well, at the moment yes | 11:07 |
lazypower | ok, yeah plug the http clone url in that field | 11:07 |
ghartmann | but I intend to move to an internal git repository | 11:07 |
ghartmann | and the requirements.txt should be inside of the repo | 11:07 |
lazypower | shouldn't be a problem so long as you've got a deployment key on your repo. | 11:07 |
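(For reference, an untested sketch of the CLI route for the same setup. The config option name `repos` below is an assumption read off the GUI screenshot, not confirmed from the charm itself; check the charm's actual options with `juju get python-django` before relying on it.)

```
# Hedged sketch -- option name "repos" is an assumption.
juju deploy python-django
juju set python-django repos="https://github.com/example/myapp.git"
# requirements.txt is expected to live inside the repository itself.
```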
lazypower | Could you document this as you go through it for later examination? There are more than likely a few people in here who are more familiar with Django app deployment with juju and can offer more input, and possibly modify the django charm based on your feedback. | 11:08 |
ghartmann | ok great | 11:10 |
ghartmann | where should I put this info ? | 11:11 |
lazypower | Either open a bug report against the charm, or a blog post would work. | 11:11 |
lazypower | if you link me i'll run it around the circle of developers and see where feedback like this should go in the future. | 11:12 |
ghartmann | great, thanks | 11:27 |
ghartmann | * I am currently on the juju gui trying to re deploy the server * | 11:28 |
lazypower | ghartmann: Actually the more I think about it, the mailing list would be a prime location for that to go. | 12:02 |
ghartmann | the juju main mail list ? | 12:04 |
lazypower | https://lists.ubuntu.com/mailman/listinfo/juju | 12:05 |
ghartmann | thanks, I will subscribe to it | 12:13 |
=== gary_poster|away is now known as gary_poster | ||
rick_h_ | marcoceppi_: ping, is there any help to charm devs for s3-like techniques for charms? We're using azure atm, but would prefer to do this in a cloud agnostic way. | 13:34 |
marcoceppi_ | rick_h_: could you elaborate? | 13:54 |
rick_h_ | marcoceppi_: I'm looking at backing up our jenkins config and such from our gui deploy on azure | 13:56 |
rick_h_ | marcoceppi_: ideally, this would be in a cloud-agnostic way to backup to blob on azure, s3 on amz | 13:56 |
rick_h_ | marcoceppi_: I can't find any way to do this but want to make sure I'm not missing anything | 13:56 |
marcoceppi_ | rick_h_: there have been multiple talks of a generic backup charm, but nothing has materialized yet. Either something that would live as a subordinate charm, or possibly in charm-helpers. As of right now, there's nothing really /in/ that space | 13:57 |
rick_h_ | marcoceppi_: yea, each cloud has their own sdk/tooling for working with storage and we'd need something like cross-cloud s3cmd to do this well I think | 13:58 |
rick_h_ | marcoceppi_: ok, thanks for verifying what I'm seeing | 13:58 |
marcoceppi_ | rick_h_: I think just having all the tools installed, then using configuration to drive where to sync would be a good start. This still incurs the problem that you need to provide your account credentials an additional time to the charm, since the charm can't access environment creds from bootstrap | 13:59 |
marcoceppi_ | that's always been the sort of non-starter for the charm | 13:59 |
rick_h_ | marcoceppi_: right, doesn't juju itself store some blob storage? | 14:00 |
rick_h_ | marcoceppi_: or does it actually store all the stuff in mongo itself? the charm zip files and such? | 14:00 |
marcoceppi_ | rick_h_: there's some notion of storage for charm stuff, but nothing really exposed to the charms/services | 14:00 |
rick_h_ | marcoceppi_: right, cool | 14:00 |
marcoceppi_ | rick_h_: last I checked, it's still using provider based storage for charms | 14:01 |
rick_h_ | marcoceppi_: right, so there's some idea of using the storage for each cloud in juju itself. That would be a cool way to expose it. juju store xxxx.tar.gz :) | 14:01 |
marcoceppi_ | rick_h_: that's probably doable using a plugin. the issue is how you get the storage information to the charm so it knows where to pull from | 14:02 |
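(A rough, untested sketch of the "install all the tools, let configuration drive the sync" idea marcoceppi_ describes. `config-get` and `juju-log` are standard juju hook tools; the option names `backup-target`, `aws-access-key`, and `aws-secret-key` are hypothetical, and this only runs inside a juju hook environment.)

```
#!/bin/sh
# Hypothetical backup hook: the destination comes from charm config,
# so the same charm stays cloud-agnostic. Option names are assumptions.
target=$(config-get backup-target)   # e.g. s3://bucket/path
case "$target" in
    s3://*)
        # Credentials have to be supplied to the charm a second time,
        # since hooks can't read the bootstrap environment's creds.
        s3cmd --access_key="$(config-get aws-access-key)" \
              --secret_key="$(config-get aws-secret-key)" \
              sync /var/lib/jenkins/ "$target" ;;
    *)
        juju-log "no backup tool wired up for $target" ;;
esac
```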
=== sidnei` is now known as sidnei | ||
fcorrea | what's the best way to compute the unit name without charmhelpers? Since this one is a shell-based charm, I suppose I should just extract it from the path or something? | 14:46 |
mgz | fcorrea: can't you just get the JUJU_UNIT_NAME envvar? | 14:56 |
fcorrea | mgz, oh, didn't know about that one. Thanks | 14:58 |
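(For reference: `JUJU_UNIT_NAME` is set in every hook's environment, so a shell-only charm can split it with plain parameter expansion and no charmhelpers. The example value "myservice/3" is illustrative.)

```shell
#!/bin/sh
# Inside a hook, juju exports JUJU_UNIT_NAME, e.g. "myservice/3".
unit_name="$JUJU_UNIT_NAME"
service_name="${unit_name%/*}"   # strip the /N suffix  -> "myservice"
unit_number="${unit_name#*/}"    # strip the name prefix -> "3"
echo "$service_name $unit_number"
```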
marcoceppi_ | hatch: good news, I'm just about done reviewing your charm | 15:04 |
hatch | saweeeeet | 15:04 |
* hatch hopes it was ok | 15:05 | |
marcoceppi_ | hatch: there's bad news, but I don't want to harsh your excitement | 15:05 |
hatch | lol, well there are no tests | 15:06 |
marcoceppi_ | well, that's one thing | 15:06 |
marcoceppi_ | that I forgot to mention | 15:06 |
hatch | but they aren't required yet :P | 15:06 |
marcoceppi_ | exactly | 15:06 |
hatch | it's ok, at least I will know what work to do | 15:07 |
hatch | I work with open source software, it's pretty hard to hurt my feelings with a review haha | 15:08 |
mgz | your code sucks and I hate you! | 15:09 |
hatch | then don't use it :P | 15:09 |
hatch | or | 15:09 |
hatch | write your own :P | 15:09 |
Makyo | In python. | 15:09 |
hatch | or even better | 15:09 |
hatch | pr's accepted | 15:09 |
Makyo | (or go?) | 15:09 |
marcoceppi_ | y u no pascal? | 15:09 |
hatch | VB? | 15:10 |
mgz | hatch: if you've not read it, http://mumak.net/stuff/your-code-sucks.html | 15:10 |
Makyo | Haskelljuju. | 15:10 |
hatch | mgz I haven't, I'll have to check it out | 15:10 |
marcoceppi_ | I could have saved a lot of writing in my review if I'd just written "ur code suks and I </3 u" | 15:11 |
hatch | haha | 15:11 |
hatch | well a lot of it was written by jcsackett so I'll blame the crappy parts on him | 15:11 |
hatch | lol | 15:11 |
marcoceppi_ | bugger, I can't edit comments, grr | 15:12 |
jcsackett | hatch: yeah, but all the architecture decisions were things you stuck me with. :-P | 15:12 |
hatch | lol | 15:13 |
hatch | ok that's the truth | 15:13 |
jcsackett | hatch: did you ever get a chance to do the apache frontend stuff for the charm? i still haven't had much time to throw at it. | 15:16 |
hatch | jcsackett nope I started on it then stopped and started on nginx instead, then stopped on that | 15:17 |
hatch | heh | 15:17 |
hatch | I think I'm going to do nginx whenever I do get around to it | 15:17 |
marcoceppi_ | hatch: nginx in main possibly for 14.04, I am so excite | 15:18 |
Makyo | \o/ | 15:18 |
mgz | marcoceppi_: it is meme day today? :) | 15:19 |
hatch | marcoceppi_ yeah that's kind of why I decided to just do nginx and forget about apache | 15:19 |
marcoceppi_ | mgz: everyday is meme day for me | 15:20 |
mgz | marcoceppi_: :D | 15:20 |
hatch | marcoceppi_ thanks a lot for the review, after todays sprint I'll take a look at it | 15:23 |
hatch | marcoceppi_ hmm there must be some new updates to proof from when I submitted because I didn't get any messages before | 15:31 |
marcoceppi_ | hatch: there have been several | 15:33 |
marcoceppi_ | though the "does not provide anything" warning has been there for a while | 15:33 |
hatch | oh haha ok | 15:33 |
hatch | oh yeah the provides one I knew about | 15:33 |
hatch | ok np I'll have to make sure to update my proof tool | 15:33 |
hatch | marcoceppi_ just read through the review - great comments thanks. Typically do you like people to reply to the reviews? | 15:36 |
marcoceppi_ | hatch: it's whatever the author wants. I typically like people to update the charm ;) but for questions I usually say something like "Reply here, or find us in #juju, or Ask Ubuntu, or the mailing list" | 15:37 |
hatch | ok cool np, I'll definitely be updating the charm to address the concerns, just sometimes reviewers like to have a dialogue :) | 15:38 |
marcoceppi_ | hatch: when you're ready for a review, if you want to address each bullet point, go for it. Otherwise it'll just go through a full review again, as hopefully I won't be the one reviewing it again | 15:39 |
hatch | :) | 15:39 |
marcoceppi_ | and by hopefully it's not that this charm is le suck, but rather hopefully there will be more than me doing charm reviews \o/ | 15:40 |
hatch | lol | 15:41 |
Makyo | (but maybe also that) | 15:41 |
marcoceppi_ | (oh, definitely that) | 15:41 |
marcoceppi_ | hatch: actually, you might be interested in this: https://code.launchpad.net/~dstroppa/charms/precise/node-app/refactor-hooks/+merge/200330 | 15:42 |
marcoceppi_ | there's a charm.js now, that mocks a lot of the events in an async way | 15:42 |
hatch | yeah I would have rejected it | 15:42 |
hatch | :P | 15:42 |
hatch | jk | 15:43 |
hatch | but really it would fail jshint | 15:43 |
hatch | the improvements look really good | 15:44 |
hatch | though | 15:44 |
bcsaller | kirkland: are you still looking for me? | 16:23 |
kirkland | bcsaller: howdy! yeah, I'm in a meeting right now, but want to catch up later | 16:29 |
bcsaller | kirkland: sounds good, I'll be around | 16:30 |
=== sarnold_ is now known as sarnold | ||
maxcan | good morning | 16:40 |
maxcan | in the mongodb charm, when i add units my service's database-relation-changed hook gets called. is there any way to have this triggered only by the primary unit of the replica set? | 16:41 |
marcoceppi_ | maxcan: no, this is by design in juju | 16:42 |
maxcan | hm | 16:42 |
maxcan | do you know if there's any way to determine the address of the primary in the relation-changed hook? | 16:43 |
marcoceppi_ | maxcan: shouldn't all of them be able to accept connections? | 16:43 |
maxcan | they can | 16:44 |
maxcan | the problem is the mongo driver for haskell is kind of out of date and doesn't send writes to the primary | 16:44 |
marcoceppi_ | maxcan: there's no peer election or anything like that in juju yet; relations are created at the service level, but executed on a per-unit basis | 16:44 |
maxcan | i think | 16:44 |
maxcan | may be fixable on the haskell side | 16:49 |
maxcan | thanks | 16:49 |
marcoceppi_ | maxcan: np! | 16:49 |
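(Since juju has no peer election yet, one workaround, offered here as an untested sketch rather than anything the charm actually does, is to have the hook ask the local mongod whether it is currently the replica-set primary and bail out otherwise. `db.isMaster().ismaster` is the standard MongoDB check; `juju-log` is a standard hook tool. Note the primary can change between hook runs, so this is a best-effort guard only.)

```
#!/bin/sh
# database-relation-changed sketch: act only on the current primary.
# Assumes the mongo shell is installed alongside mongod on the unit.
is_master=$(mongo --quiet --eval 'db.isMaster().ismaster')
if [ "$is_master" != "true" ]; then
    juju-log "not the replica-set primary; skipping"
    exit 0
fi
# ...primary-only relation work goes here...
```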
=== Guest51501 is now known as Kyle | ||
fwereade | cmars, bcsaller, marcoceppi_: hey, I've ducked out of the sprint | 17:04 |
fwereade | didn't see the reschedule | 17:04 |
fwereade | but it's not long after I land back in Malta, so I'm not sure I can make that one either | 17:04 |
fwereade | since I'm here, do you want to do it now? | 17:04 |
fwereade | mramm, cmars, bcsaller, marcoceppi_, jcastro: ^^ | 17:05 |
=== marcoceppi_ is now known as marcoceppi | ||
marcoceppi | fwereade: if everyone else is available, I can make myself available as well | 17:05 |
cmars | fwereade, works for me | 17:07 |
mramm | sure | 17:07 |
mramm | we can do it now | 17:07 |
jcastro | I can go now | 17:08 |
fwereade | cmars, marcoceppi, mramm, jcastro, bcsaller: https://plus.google.com/hangouts/_/7acpiks0pmu1dt6ouvjavqr1k0?hl=en | 17:09 |
mgz | jcastro: semi-off topic, do you know what the around-fosdem plans are? I'm sorting out travel and stuff soon, and wonder if you still want me for anything. | 17:10 |
jcastro | mgz, yeah, we'd love to have you at cfgmngmnt camp | 17:10 |
jcastro | http://cfgmgmtcamp.eu/ | 17:11 |
jcastro | mgz, as for fosdem itself, rbasak was going to submit a talk, I think you should too! | 17:11 |
rbasak | mgz: I've booked travel/hotel already. I would've asked you had I known you were coming! | 17:12 |
mgz | that's the hardest damn url to type | 17:12 |
jcastro | yeah, tell me about it | 17:12 |
mgz | rbasak: I was doing accommodation separately anyway I think, but are you Eurostar-ing over? could aim at the same train again | 17:13 |
rbasak | mgz: yes, I'm Eurostar-ing. I'll forward you my itinerary | 17:13 |
mgz | rbasak: ta! | 17:13 |
rbasak | You have mail. | 17:14 |
bjorne | 2013-12-22 11:20:04,813 - url_helper.py[DEBUG]: Attempting to open 'http://172.16.1.21/MAAS/metadata//2012-03-01/user-data' with 1 attempts (0 retries, timeout=None) to be performed | 17:55 |
bjorne | 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: Failed reading from http://172.16.1.21/MAAS/metadata//2012-03-01/user-data after 1 attempts | 17:55 |
bjorne | 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: 1 errors occured, re-raising the last one | 17:55 |
bjorne | does anyone know anything about this? | 17:55 |
marcoceppi | bjorne: what version of juju? 0.7? | 17:56 |
bjorne | the latest. | 17:56 |
marcoceppi | bjorne: that's a loaded answer, can you run juju version? | 17:56 |
bjorne | w8 | 17:56 |
bjorne | 1.16.5.saucy :) | 17:57 |
mgz | rogpeppe: can I have a rubber stamp on the cr to fix bug 1266518 please? | 17:57 |
marcoceppi | bjorne: I've never seen that error then, it's a new one to me | 17:58 |
rogpeppe | mgz: looking | 17:58 |
bjorne | marcoceppi https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1264717 | 17:58 |
rogpeppe | mgz: LGTM | 17:59 |
mgz | rogpeppe: thanks, landing | 18:00 |
mgz | sinzui: ^landing that fix now | 18:02 |
sinzui | thank you mgz | 18:02 |
=== allenap_ is now known as allenap | ||
=== BradCrittenden is now known as bac | ||
=== jaywink_ is now known as jaywink | ||
maxcan | hi, i'm following the instructions at https://jujucharms.com/fullscreen/search/precise/mongodb-20/?text=mongodb#bws-readme to set up mongos | 19:52 |
maxcan | my shard replica sets have this error: agent-state-info: 'hook failed: "replica-set-relation-joined"' | 19:53 |
marcoceppi | maxcan: can you get the logs from the failed unit under /var/log/juju/unit-*.log ? | 19:54 |
maxcan | yeah | 19:54 |
maxcan | this is the error: | 19:54 |
maxcan | 2014-01-06 19:44:19 INFO worker.uniter.jujuc server.go:108 running hook tool "juju-log" ["port_check: Unable to connect to ec2-54-245-13-124.us-west-2.compute.amazonaws.com:27017/TCP."] | 19:54 |
maxcan | that host is the host the machine is running on | 19:55 |
maxcan | s/machine/agent | 19:55 |
maxcan | it looks like the ports are open on the security group | 19:56 |
marcoceppi | maxcan: that's odd, can you paste more of the log to http://paste.ubuntu.com ? | 19:56 |
maxcan | http://paste.ubuntu.com/6705155/ | 20:00 |
maxcan | here is my mongos log: http://paste.ubuntu.com/6705170/ | 20:04 |
maxcan | and my deployment commands: http://paste.ubuntu.com/6705194/ | 20:07 |
=== wendar_ is now known as wendar | ||
maxcan | so i played around a bit, and have managed to get a python error: http://paste.ubuntu.com/6705348/ | 20:43 |
maxcan | this is in the juju logs for mongo shard001 | 20:43 |
marcoceppi | maxcan: are you sharding via juju add-unit? | 20:44 |
maxcan | deploy -n 3 | 20:44 |
maxcan | as in the readme. should I try to deploy to one server then add-unit? | 20:45 |
marcoceppi | maxcan: I wonder if there's a race condition. Where are you deploying? I'm going to try in HP cloud with deploy -n 3 as well and see if I can replicate | 20:45 |
maxcan | amazon | 20:45 |
maxcan | i'll try to add-unit | 20:46 |
marcoceppi | I usually demo with "Deploy mongodb, now we want to scale" and add-unit. Let me see if I can replicate this | 20:46 |
maxcan | in general, juju is theoretically parallel right? | 20:47 |
maxcan | as in I can fire off a bunch of deploy commands in parallel | 20:48 |
maxcan | or better to run everything serially | 20:48 |
sarnold | like juju deploy foo & juju deploy bar & juju relate ... & ? | 20:49 |
maxcan | yeah | 20:49 |
maxcan | well, more like: juju deploy mongo & juju deploy myapp & juju deploy haproxy ; juju add-relation mongo myapp | 20:50 |
marcoceppi | maxcan: yes, you can, totally, and you don't even need &; it can be juju deploy mongo; juju deploy myapp; juju deploy haproxy ; juju add-relation mongo myapp | 20:51 |
marcoceppi | maxcan: that doesn't mean that the charm isn't doing something bad though ;) | 20:51 |
lazypower | When using PPAs in my charm, it's customary to document that, and why it's there, correct? | 21:13 |
lazypower | i ask because upstream broke Ruby 1.9 compat and i need Ruby 2.x. presently i can either rbenv-install it or use a PPA for Ruby 2.x on 12.04; i opted for the PPA instead of having a ton of rbenv-based deployments out there where things can go wonky. | 21:15 |
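(An untested install-hook fragment for that approach. The specific PPA named below, Brightbox's ruby-ng, is a commonly used Ruby 2.x PPA for 12.04 and is offered as an assumption, not as the one lazypower actually used; it needs root and a live apt environment.)

```
#!/bin/sh
# install hook fragment: get Ruby 2.x from a PPA on precise (12.04).
# Document the PPA choice in the charm: upstream broke Ruby 1.9 compat.
apt-get install -y python-software-properties  # provides add-apt-repository on precise
add-apt-repository -y ppa:brightbox/ruby-ng    # assumed PPA choice
apt-get update
apt-get install -y ruby2.0
```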
=== CyberJacob is now known as CyberJacob|Away | ||
=== CyberJacob|Away is now known as CyberJacob | ||
maxcan | marcoceppi: progress, thanks | 21:38 |
maxcan | but, still having problems | 21:38 |
maxcan | if you look at the bottom of the log for the mongos agent http://paste.ubuntu.com/6705606/ , you see "running hook tool "juju-log" ["mongos_relation_change: undefined rel_type: None"]" | 21:39 |
maxcan | and something similar at the bottom of the shard001 log: http://paste.ubuntu.com/6705649/ | 21:40 |
maxcan | my add relation commands are: | 21:40 |
maxcan | juju add-relation mongos:mongos-cfg shard001:configsvr | 21:40 |
maxcan | juju add-relation mongos:mongos shard001:database | 21:40 |
maxcan | so, mongo wasn't running on the first shard001 machine and when I tried to start it via upstart, it immediately dies | 21:48 |
maxcan | it seems that when the other two members of the replica set come online, they kill the primary, and I can't get the primary to start at all | 21:53 |
maxcan | service mongodb start on the host runs, but the process immediately dies without any logs | 21:53 |
arosales | marcoceppi, were you going to add the pastebin examples to https://github.com/marcoceppi/amulet ? | 21:55 |
marcoceppi | arosales: yes, still am going to | 21:56 |
arosales | marcoceppi, ok thanks | 21:57 |
lazypower | arosales: o/ | 21:57 |
lazypower | maxcan: that's typically due to a lock being present in the MongoDB data directory | 21:57 |
lazypower | however you should have gotten some log output. | 21:58 |
maxcan | checked the lock file | 21:58 |
lazypower | what locations did you check for logs? both /var/log/upstart and /var/log/mongodb right? | 21:58 |
maxcan | yes | 21:58 |
lazypower | Hmm :? | 21:58 |
arosales | lazypower, hello | 22:00 |
lazypower | maxcan: can you pastebin the logs? | 22:01 |
maxcan | sorry, just killed my environment.. will PB them in a few minutes | 22:02 |
=== gary_poster is now known as gary_poster|away | ||
lazypower | No problem. When you've got them I'd like to look them over. I had some funkiness when I moved a client's MongoDB setup to juju - the second pass went flawlessly and I couldn't find the bug I ran into. | 22:11 |
maxcan | #facepalm | 22:33 |
maxcan | sorry guys, one of my commands was add-relation mongos:mongos shard001:configsvr instead of configsvr:configsvr | 22:33 |
lazypower | Glad its sorted :) | 22:36 |
maxcan | well, mongos:mongos-cfg | 22:37 |
maxcan | lazypower: i wish it was... | 22:37 |
maxcan | now, the mongos agent is running a plain, vanilla mongod daemon, not mongos | 22:37 |
maxcan | seems to not be picking up the config servers | 22:41 |
lazypower | hmm | 22:44 |
lazypower | maxcan: will you be working on this through the night? I'd be happy to join a hangout session and talk through this with you. | 22:45 |
maxcan | seems that you need 3 config servers | 22:45 |
maxcan | i was only running 1 | 22:45 |
lazypower | it's common to run config servers on each host that's going to house a mongod | 22:45 |
maxcan | ahh | 22:45 |
lazypower | if your config server tanks, they vote and promote just like mongod | 22:46 |
lazypower | also, they act as witnesses if configured properly. | 22:46 |
lazypower | or, what's the name they use? arbiter, I think? | 22:46 |
maxcan | something like that | 22:46 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!