=== CyberJacob is now known as CyberJacob|Away
=== freeflying_away is now known as freeflying
=== CyberJacob|Away is now known as CyberJacob
=== Kyle is now known as Guest51501
=== mthaddon` is now known as mthaddon
[09:26] good morning
[09:39] 'morning
[10:42] I wonder if anyone is using juju to deploy python / django apps.
[10:43] I started to look at the charm "python-django" but I didn't find very good documentation for it.
[10:49] ghartmann: Without having looked at it, what are you expecting to be documented that isn't?
[11:01] mainly I don't get how I am supposed to set the requirements and the repository
[11:04] ghartmann: are you using the GUI or strictly CLI?
[11:04] I am using the GUI primarily
[11:04] but it's not a problem to use it on the console
[11:05] http://i.imgur.com/dgAcL0k.png
[11:05] there's the configuration field for the repository.
[11:05] I'm not super familiar with Django, so i'm reading up on the requirements settings
[11:06] i'm assuming that you're checking out from a GitHub repository?
[11:07] well, at the moment yes
[11:07] ok, yeah, plug the http clone url into that field
[11:07] but I intend to move to an internal git repository
[11:07] and the requirements.txt should be inside the repo
[11:07] shouldn't be a problem so long as you've got a deployment key on your repo.
[11:08] Could you document this as you go through it for later examination? There are more than likely a few people in here that are more familiar with Django app deployment with juju that can offer more input, and possibly modify the django charm based on your feedback.
[11:10] ok great
[11:11] where should I put this info?
[11:11] Either open a bug report against the charm, or a blog post would work.
[11:12] if you link me i'll run it around the circle of developers and see where feedback like this should go in the future.
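The repository setting discussed above can also be supplied from the CLI via a deploy-time config file. This is a hypothetical sketch only: the option names `vcs` and `repos` are guesses based on the GUI field shown in the screenshot, so check the charm's `config.yaml` (or `juju get python-django`) for the real names before relying on them.

```shell
# Hypothetical sketch: "vcs" and "repos" are assumed option names for the
# python-django charm; the repository URL is an example placeholder.
cat > django-config.yaml <<'EOF'
python-django:
  vcs: git
  repos: https://github.com/example/myapp.git
EOF

# Then, in a live environment (not run here):
#   juju deploy --config django-config.yaml python-django
cat django-config.yaml
```

With the requirements.txt inside the repo, as noted above, the charm would pick it up on checkout.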
[11:27] great, thanks
[11:28] * I am currently on the juju gui trying to redeploy the server *
[12:02] ghartmann: Actually the more I think about it, the mailing list would be a prime location for that to go.
[12:04] the juju main mailing list?
[12:05] https://lists.ubuntu.com/mailman/listinfo/juju
[12:13] thanks, I will subscribe to it
=== gary_poster|away is now known as gary_poster
[13:34] marcoceppi_: ping, is there any help for charm devs for s3-like techniques for charms? We're using azure atm, but would prefer to do this in a cloud-agnostic way.
[13:54] rick_h_: could you elaborate?
[13:56] marcoceppi_: I'm looking at backing up our jenkins config and such from our gui deploy on azure
[13:56] marcoceppi_: ideally, this would be in a cloud-agnostic way to back up to blob on azure, s3 on amz
[13:56] marcoceppi_: I can't find any way to do this but want to make sure I'm not missing anything
[13:57] rick_h_: there have been multiple talks of a generic backup charm but nothing has materialized yet. Either something that would live as a subordinate charm or possibly in charm-helpers. As of right now, there's nothing really /in/ that space
[13:58] marcoceppi_: yea, each cloud has their own sdk/tooling for working with storage and we'd need something like a cross-cloud s3cmd to do this well I think
[13:58] marcoceppi_: ok, thanks for verifying what I'm seeing
[13:59] rick_h_: I think just having all the tools installed, then using configuration to drive where to sync, would be a good start. This still incurs the problem that you need to provide your account credentials an additional time to the charm, since the charm can't access environment creds from bootstrap
[13:59] that's always been the sort of non-starter for the charm
[14:00] marcoceppi_: right, doesn't juju itself store some blob storage?
[14:00] marcoceppi_: or does it actually store all the stuff in mongo itself? the charm zip files and such?
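The "install all the tools, let configuration pick the sync target" idea above could look roughly like the sketch below. Everything here is an illustrative assumption, not an existing charm: the `BACKUP_TARGET` value, the URL schemes, and the tool invocations (in a real charm the target would come from `config-get` and, as noted in the discussion, cloud credentials would have to be supplied to the charm separately). The commands are only echoed, not run.

```shell
# Hedged sketch: dispatch on a configured backup target. BACKUP_TARGET and
# the azure:// scheme are hypothetical; s3cmd/azure-cli stand in for
# whatever per-cloud tools the charm would preinstall.
BACKUP_TARGET="s3://example-bucket/jenkins"
SRC="/var/lib/jenkins"

case "$BACKUP_TARGET" in
  s3://*)
    cmd="s3cmd sync $SRC/ $BACKUP_TARGET/"   # amazon path
    ;;
  azure://*)
    cmd="azure storage blob upload $SRC ..."  # azure path (shape assumed)
    ;;
  *)
    echo "unknown backup target: $BACKUP_TARGET" >&2
    exit 1
    ;;
esac

echo "would run: $cmd"
```

This keeps the charm cloud-agnostic at the config level while still deferring to per-cloud tooling underneath.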
[14:00] rick_h_: there's some notion of storage for charm stuff, but nothing really exposed to the charms/services
[14:00] marcoceppi_: right, cool
[14:01] rick_h_: last I checked, it's still using provider-based storage for charms
[14:01] marcoceppi_: right, so there's some idea of using the storage for each cloud in juju itself. That would be a cool way to expose it. juju store xxxx.tar.gz :)
[14:02] rick_h_: that's probably doable using a plugin. the issue is how do you get the storage information to the charm so it knows where to pull
=== sidnei` is now known as sidnei
[14:46] what's the best way to compute the unit name without charmhelpers? Since this one is a shell-based charm, I suppose I should just extract it from the path or something?
[14:56] fcorrea: can't you just get the JUJU_UNIT_NAME envvar?
[14:58] mgz, oh, didn't know about that one. Thanks
[15:04] hatch: good news, I'm just about done reviewing your charm
[15:04] saweeeeet
[15:05] * hatch hopes it was ok
[15:05] hatch: there's bad news, but I don't want to harsh your excitement
[15:06] lol, well there are no tests
[15:06] well, that's one thing
[15:06] that I forgot to mention
[15:06] but they aren't required yet :P
[15:06] exactly
[15:07] it's ok, at least I will know what work to do
[15:08] I work with open source software, it's pretty hard to hurt my feelings with a review haha
[15:09] your code sucks and I hate you!
[15:09] then don't use it :P
[15:09] or
[15:09] write your own :P
[15:09] In python.
[15:09] or even better
[15:09] pr's accepted
[15:09] (or go?)
[15:09] y u no pascal?
[15:10] VB?
[15:10] hatch: if you've not read it, http://mumak.net/stuff/your-code-sucks.html
[15:10] Haskelljuju.
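For the shell-based charm question above, a minimal sketch of using JUJU_UNIT_NAME. Inside a real hook juju exports this variable itself (e.g. `mycharm/0`); it is only given a default here so the snippet runs standalone.

```shell
# JUJU_UNIT_NAME is set by juju in the hook environment; the default below
# exists only so this sketch runs outside of a hook.
: "${JUJU_UNIT_NAME:=mycharm/0}"

service_name=${JUJU_UNIT_NAME%/*}   # everything before the slash: "mycharm"
unit_number=${JUJU_UNIT_NAME#*/}    # everything after the slash: "0"

echo "service=$service_name unit=$unit_number"
```

Plain POSIX parameter expansion is enough here, so no charmhelpers dependency is needed.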
[15:10] mgz I haven't, I'll have to check it out
[15:11] I could have saved a lot of writing in my review if I'd just written "ur code suks" haha
[15:11] well a lot of it was written by jcsackett so I'll blame the crappy parts on him
[15:11] lol
[15:12] bugger, I can't edit comments, grr
[15:12] hatch: yeah, but all the architecture decisions were things you stuck me with. :-P
[15:13] lol
[15:13] ok that's the truth
[15:16] hatch: did you ever get a chance to do the apache frontend stuff for the charm? i still haven't had much time to throw at it.
[15:17] jcsackett nope I started on it then stopped and started on nginx instead, then stopped on that
[15:17] heh
[15:17] I think I'm going to do nginx whenever I do get around to it
[15:18] hatch: nginx in main possibly for 14.04, I am so excite
[15:18] \o/
[15:19] marcoceppi_: is it meme day today? :)
[15:19] marcoceppi_ yeah that's kind of why I decided to just do nginx and forget about apache
[15:20] mgz: every day is meme day for me
[15:20] marcoceppi_: :D
[15:23] marcoceppi_ thanks a lot for the review, after today's sprint I'll take a look at it
[15:31] marcoceppi_ hmm there must be some new updates to proof from when I submitted because I didn't get any messages before
[15:33] hatch: there have been several
[15:33] though the "does not provide anything" one has been there for a while
[15:33] oh haha ok
[15:33] oh yeah the provides one I knew about
[15:33] ok np I'll have to make sure to update my proof tool
[15:36] marcoceppi_ just read through the review - great comments, thanks. Typically do you like people to reply to the reviews?
[15:37] hatch: it's whatever the author wants.
I typically like people to update the charm ;) but for questions I usually say something like "Reply here, or find us in #juju, or ask ubuntu, or the mailing list"
[15:38] ok cool np, I'll definitely be updating the charm to address the concerns, just sometimes reviewers like to have a dialogue :)
[15:39] hatch: if, when you're ready for a review, you want to address each bullet point, go for it. Otherwise it'll just go through a full review again, as I might (hopefully) not be the one reviewing it again
[15:39] :)
[15:40] and by hopefully it's not that this charm is le suck, but rather hopefully there will be more than me doing charm reviews \o/
[15:41] lol
[15:41] (but maybe also that)
[15:41] (oh, definitely that)
[15:42] hatch: actually, you might be interested in this: https://code.launchpad.net/~dstroppa/charms/precise/node-app/refactor-hooks/+merge/200330
[15:42] there's a charm.js now, that mocks a lot of the events in an async way
[15:42] yeah I would have rejected it
[15:42] :P
[15:43] jk
[15:43] but really it would fail jshint
[15:44] the improvements look really good
[15:44] though
[16:23] kirkland: are you still looking for me?
[16:29] bcsaller: howdy! yeah, I'm in a meeting right now, but want to catch up later
[16:30] kirkland: sounds good, I'll be around
=== sarnold_ is now known as sarnold
[16:40] good morning
[16:41] in the mongodb charm, when i add units my service's database-relation-changed hook gets called. is there any way to only have this triggered by the primary unit of the replica set?
[16:42] maxcan: no, this is by design of juju
[16:42] hm
[16:43] do you know if there's any way to determine the address of the primary in the relation-changed hook?
[16:43] maxcan: shouldn't all of them be able to accept connections?
[16:44] they can
[16:44] the problem is the mongo driver for haskell is kind of out of date and doesn't send writes to the primary
[16:44] maxcan: there's no peer election or anything like that in juju yet, relations are created at the service level, but executed on a per-unit basis
[16:44] i think
[16:49] may be fixable on the haskell side
[16:49] thanks
[16:49] maxcan: np!
=== Guest51501 is now known as Kyle
[17:04] cmars, bcsaller, marcoceppi_: hey, I've ducked out of the sprint
[17:04] didn't see the reschedule
[17:04] but it's not long after I land back in malta so I'm not sure I can make that one either
[17:04] since I'm here, do you want to do it now?
[17:05] mramm, cmars, bcsaller, marcoceppi_, jcastro: ^^
=== marcoceppi_ is now known as marcoceppi
[17:05] fwereade: if everyone else is available, I can make myself available as well
[17:07] fwereade, works for me
[17:07] sure
[17:07] we can do it now
[17:08] I can go now
[17:09] cmars, marcoceppi, mramm, jcastro, bcsaller: https://plus.google.com/hangouts/_/7acpiks0pmu1dt6ouvjavqr1k0?hl=en
[17:10] jcastro: semi-off topic, do you know what the around-fosdem plans are? I'm sorting out travel and stuff soon, and wonder if you still want me for anything.
[17:10] mgz, yeah, we'd love to have you at cfgmgmt camp
[17:11] http://cfgmgmtcamp.eu/
[17:11] mgz, as for fosdem itself, rbasak was going to submit a talk, I think you should too!
[17:12] mgz: I've booked travel/hotel already. I would've asked you had I known you were coming!
[17:12] that's the hardest damn url to type
[17:12] yeah, tell me about it
[17:13] rbasak: I was doing accommodation separately anyway I think, but are you eurostarring over? could aim at the same train again
[17:13] mgz: yes, I'm eurostarring. I'll forward you my itinerary
[17:13] rbasak: ta!
[17:14] You have mail.
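On the earlier question of finding the replica set primary from a relation-changed hook: any replica set member will report the primary's address via `rs.isMaster()`. The mongo output is canned below so the sketch runs without a live replica set; in a hook you would obtain it with something like `mongo --quiet --eval 'printjson(rs.isMaster())'` (command shape assumed), and the hostname here is taken from later in this log purely as an example.

```shell
# Canned rs.isMaster() output; a live hook would query a mongod member
# instead of hard-coding this JSON.
ismaster_json='{ "ismaster" : false, "primary" : "ec2-54-245-13-124.us-west-2.compute.amazonaws.com:27017" }'

# Crude extraction of the "primary" field, good enough for a sketch
# (a real hook might prefer a proper JSON parser).
primary=$(printf '%s' "$ismaster_json" | sed -n 's/.*"primary" : "\([^"]*\)".*/\1/p')
echo "primary is $primary"
```

This sidesteps the lack of peer election in juju noted above: every unit runs the hook, but each one can ask mongod who the primary actually is.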
[17:55] 2013-12-22 11:20:04,813 - url_helper.py[DEBUG]: Attempting to open 'http://172.16.1.21/MAAS/metadata//2012-03-01/user-data' with 1 attempts (0 retries, timeout=None) to be performed
[17:55] 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: Failed reading from http://172.16.1.21/MAAS/metadata//2012-03-01/user-data after 1 attempts
[17:55] 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: 1 errors occured, re-raising the last one
[17:55] does anyone know anything about this?
[17:56] bjorne: what version of juju? 0.7?
[17:56] the latest.
[17:56] bjorne: that's a loaded answer, can you run juju version?
[17:56] w8
[17:57] 1.16.5, saucy :)
[17:57] rogpeppe: can I have a rubber stamp on the cr to fix bug 1266518 please?
[17:58] bjorne: I've never seen that error then, it's a new one to me
[17:58] mgz: looking
[17:58] marcoceppi https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1264717
[17:59] mgz: LGTM
[18:00] rogpeppe: thanks, landing
[18:02] sinzui: ^landing that fix now
[18:02] thank you mgz
=== allenap_ is now known as allenap
=== BradCrittenden is now known as bac
=== jaywink_ is now known as jaywink
[19:52] hi, i'm following the instructions at https://jujucharms.com/fullscreen/search/precise/mongodb-20/?text=mongodb#bws-readme to set up mongos
[19:53] my shard replica sets have this error: agent-state-info: 'hook failed: "replica-set-relation-joined"'
[19:54] maxcan: can you get the logs from the failed unit under /var/log/juju/unit-*.log ?
[19:54] yeah
[19:54] this is the error:
[19:54] 2014-01-06 19:44:19 INFO worker.uniter.jujuc server.go:108 running hook tool "juju-log" ["port_check: Unable to connect to ec2-54-245-13-124.us-west-2.compute.amazonaws.com:27017/TCP."]
[19:55] that host is the host the machine is running on
[19:55] s/machine/agent
[19:56] it looks like the ports are open on the security group
[19:56] maxcan: that's odd, can you paste more of the log to http://paste.ubuntu.com ?
[20:00] http://paste.ubuntu.com/6705155/
[20:04] here is my mongos log: http://paste.ubuntu.com/6705170/
[20:07] and my deployment commands: http://paste.ubuntu.com/6705194/
=== wendar_ is now known as wendar
[20:43] so i played around a bit, and have managed to get a python error: http://paste.ubuntu.com/6705348/
[20:43] this is in the juju logs for mongo shard001
[20:44] maxcan: are you sharding via juju add-unit?
[20:44] deploy -n 3
[20:45] as in the readme. should I try to deploy to one server then add-unit?
[20:45] maxcan: I wonder if there's a race condition. Where are you deploying? I'm going to try in HP cloud with deploy -n 3 as well, see if I can replicate
[20:45] amazon
[20:46] i'll try add-unit
[20:46] I usually demo with "Deploy mongodb, now we want to scale" and add-unit. Let me see if I can replicate this
[20:47] in general, juju is theoretically parallel right?
[20:48] as in I can fire off a bunch of deploy commands in parallel
[20:48] or better to run everything serially
[20:49] like juju deploy foo & juju deploy bar & juju relate ... & ?
[20:49] yeah
[20:50] well, more like: juju deploy mongo & juju deploy myapp & juju deploy haproxy ; juju add-relation mongo myapp
[20:51] maxcan: yes, you totally can, and you don't even need &, it can be juju deploy mongo; juju deploy myapp; juju deploy haproxy; juju add-relation mongo myapp
[20:51] maxcan: that doesn't mean that the charm isn't doing something bad though ;)
[21:13] When using PPAs in my charm, it's customary to document that, and why it's there, correct?
[21:15] i ask because upstream broke ruby 1.9 compat, i have need of ruby 2.x and presently i can either rbenv install that or use a PPA for ruby 2.x on 12.04 - i opted for the PPA instead of having a ton of rbenv based deployments out there where things can go wonky.
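A sketch of the documented-PPA approach described above, written as a charm install hook. The specific PPA (`ppa:brightbox/ruby-ng`, a commonly used source for newer ruby on older Ubuntu) and the package name are illustrative assumptions, not what the speaker actually used; the hook file is written out here but not executed, since it needs root and a precise (12.04) unit.

```shell
# Writes a hypothetical hooks/install; PPA and package names are examples,
# verify them before using. The comment block inside the hook is the
# "document why the PPA is there" convention being asked about.
mkdir -p hooks
cat > hooks/install <<'EOF'
#!/bin/sh
set -e
# Why a PPA: upstream dropped ruby 1.9 compatibility and precise only
# ships 1.9, so we pull ruby 2.x from a PPA rather than managing rbenv
# builds on every deployed unit.
add-apt-repository -y ppa:brightbox/ruby-ng
apt-get update
apt-get install -y ruby2.1
EOF
chmod +x hooks/install
```

Keeping the rationale as a comment in the hook itself (plus a note in the README) makes the dependency visible to reviewers and future maintainers.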
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[21:38] marcoceppi: progress, thanks
[21:38] but, still having problems
[21:39] if you look at the bottom of the log for the mongos agent http://paste.ubuntu.com/6705606/ , you see "running hook tool "juju-log" ["mongos_relation_change: undefined rel_type: None"]"
[21:40] and something similar at the bottom of the shard001 log: http://paste.ubuntu.com/6705649/
[21:40] my add-relation commands are:
[21:40] juju add-relation mongos:mongos-cfg shard001:configsvr
[21:40] juju add-relation mongos:mongos shard001:database
[21:48] so, mongo wasn't running on the first shard001 machine and when I tried to start it via upstart, it immediately died
[21:53] it seems that when the other two members of the replica set come online, they kill the primary. and I can't get the primary to start at all
[21:53] service mongodb start on the host runs but the process immediately dies w/out any logs
[21:55] marcoceppi, were you going to add the pastebin examples to https://github.com/marcoceppi/amulet ?
[21:56] arosales: yes, still am going to
[21:57] marcoceppi, ok thanks
[21:57] arosales: o/
[21:57] maxcan: that's typically due to a lock being present in the MongoDB data directory
[21:58] however you should have gotten some log output.
[21:58] checked the lock file
[21:58] what locations did you check for logs? both /var/log/upstart and /var/log/mongodb right?
[21:58] yes
[21:58] Hmm :?
[22:00] lazypower, hello
[22:01] maxcan: can you pastebin the logs?
[22:02] sorry, just killed my environment.. will PB them in a few minutes
=== gary_poster is now known as gary_poster|away
[22:11] No problem. When you've got them I'd like to look them over. I had some funkiness when I moved a client's MongoDB setup to juju - the second pass went flawlessly and I couldn't find the bug i ran into.
[22:33] #facepalm
[22:33] sorry guys, one of my commands was add-relation mongos:mongos shard001:configsvr instead of configsvr:configsvr
[22:36] Glad it's sorted :)
[22:37] well, mongos:mongos-cfg
[22:37] lazypower: i wish it was...
[22:37] now, the mongos agent is running a plain, vanilla mongod daemon, not mongos
[22:41] seems to not be picking up the config servers
[22:44] hmm
[22:45] maxcan: will you be working on this through the night? I'd be happy to join a hangout session and talk through this with you.
[22:45] seems that you need 3 config servers
[22:45] i was only running 1
[22:45] it's common to run config servers on each host that's going to house a mongod
[22:45] ahh
[22:46] if your config server tanks, they vote and promote just like mongod
[22:46] also, they act as witnesses if configured properly.
[22:46] or, what's the name they use? arbiter i think?
[22:46] something like that
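For reference, the corrected relation set from the thread, collected into a script. The relation endpoints are exactly the ones worked out above (the typo was pointing `mongos:mongos-cfg` at shard001 instead of the configsvr service); the script is written out but not executed here, since it needs a live juju environment, and per the end of the thread you would also want three configsvr units rather than one.

```shell
# Written out, not run: a live juju environment is required.
cat > fix-mongos-relations.sh <<'EOF'
#!/bin/sh
# mongos-cfg must relate to the configsvr service, not a shard:
juju add-relation mongos:mongos-cfg configsvr:configsvr
juju add-relation mongos:mongos shard001:database
EOF
chmod +x fix-mongos-relations.sh
```

With the configsvr relation fixed, the mongos agent should come up as mongos rather than a plain mongod.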