/srv/irclogs.ubuntu.com/2014/01/06/#juju.txt

=== CyberJacob is now known as CyberJacob|Away
=== freeflying_away is now known as freeflying
=== CyberJacob|Away is now known as CyberJacob
=== Kyle is now known as Guest51501
=== mthaddon` is now known as mthaddon
[09:26] <ghartmann> good morning
[09:39] <lazypower> 'morning
[10:42] <ghartmann> I wonder if anyone is using juju to deploy python / django apps.
[10:43] <ghartmann> I started to look at the "python-django" charm, but I didn't find very good documentation for it.
[10:49] <lazypower> ghartmann: Without having looked at it, what are you expecting to be documented that isn't?
[11:01] <ghartmann> mainly I don't get how I am supposed to set the requirements and the repository
[11:04] <lazypower> ghartmann: are you using the GUI or strictly the CLI?
[11:04] <ghartmann> I am using the GUI primarily
[11:04] <ghartmann> but it's not a problem to use it on the console
[11:05] <lazypower> http://i.imgur.com/dgAcL0k.png
[11:05] <lazypower> there's the configuration field for the repository.
[11:05] <lazypower> I'm not super familiar with Django, so I'm reading up on the requirements settings
[11:06] <lazypower> I'm assuming that you're checking out from a GitHub repository?
[11:07] <ghartmann> well, at the moment yes
[11:07] <lazypower> ok, yeah, plug the HTTP clone URL into that field
[11:07] <ghartmann> but I intend to move to an internal git repository
[11:07] <ghartmann> and the requirements.txt should be inside the repo
[11:07] <lazypower> shouldn't be a problem as long as you've got a deployment key on your repo.
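
    A minimal sketch of the CLI equivalent of the GUI field mentioned above; the config
    option name here is illustrative only -- check the python-django charm's config.yaml
    for the real key:

        juju deploy python-django
        # point the charm at the repository that contains the app and its requirements.txt
        juju set python-django repo_url="https://github.com/example/myapp.git"
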
[11:08] <lazypower> Could you document this as you go through it, for later examination? There are more than likely a few people in here who are more familiar with Django app deployment with juju, who can offer more input and possibly modify the django charm based on your feedback.
[11:10] <ghartmann> ok great
[11:11] <ghartmann> where should I put this info?
[11:11] <lazypower> Either open a bug report against the charm, or a blog post would work.
[11:12] <lazypower> if you link me, I'll run it around the circle of developers and see where feedback like this should go in the future.
[11:27] <ghartmann> great, thanks
[11:28] <ghartmann> * I am currently on the juju GUI trying to redeploy the server *
[12:02] <lazypower> ghartmann: Actually, the more I think about it, the mailing list would be a prime location for that to go.
[12:04] <ghartmann> the main juju mailing list?
[12:05] <lazypower> https://lists.ubuntu.com/mailman/listinfo/juju
[12:13] <ghartmann> thanks, I will subscribe to it
=== gary_poster|away is now known as gary_poster
[13:34] <rick_h_> marcoceppi_: ping, is there any help for charm devs for s3-like techniques in charms? We're using azure atm, but would prefer to do this in a cloud-agnostic way.
[13:54] <marcoceppi_> rick_h_: could you elaborate?
[13:56] <rick_h_> marcoceppi_: I'm looking at backing up our jenkins config and such from our gui deploy on azure
[13:56] <rick_h_> marcoceppi_: ideally, this would be done in a cloud-agnostic way: back up to blob storage on azure, s3 on amz
[13:56] <rick_h_> marcoceppi_: I can't find any way to do this, but want to make sure I'm not missing anything
[13:57] <marcoceppi_> rick_h_: there have been multiple talks of a generic backup charm, but nothing has materialized yet. Either something that would live as a subordinate charm, or possibly in charm-helpers. As of right now, there's nothing really /in/ that space
[13:58] <rick_h_> marcoceppi_: yeah, each cloud has its own sdk/tooling for working with storage, and we'd need something like a cross-cloud s3cmd to do this well, I think
[13:58] <rick_h_> marcoceppi_: ok, thanks for verifying what I'm seeing
[13:59] <marcoceppi_> rick_h_: I think just having all the tools installed, then using configuration to drive where to sync, would be a good start. This still incurs the problem that you need to provide your account credentials to the charm an additional time, since the charm can't access the environment creds from bootstrap
[13:59] <marcoceppi_> that's always been the sort of non-starter for the charm
[14:00] <rick_h_> marcoceppi_: right, doesn't juju itself have some blob storage?
[14:00] <rick_h_> marcoceppi_: or does it actually store all the stuff in mongo itself? the charm zip files and such?
[14:00] <marcoceppi_> rick_h_: there's some notion of storage for charm stuff, but nothing really exposed to the charms/services
[14:00] <rick_h_> marcoceppi_: right, cool
[14:01] <marcoceppi_> rick_h_: last I checked, it's still using provider-based storage for charms
[14:01] <rick_h_> marcoceppi_: right, so there's some idea of using the storage for each cloud in juju itself. That would be a cool way to expose it. juju store xxxx.tar.gz :)
[14:02] <marcoceppi_> rick_h_: that's probably doable with a plugin. the issue is how you get the storage information to the charm so it knows where to pull from
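
    A rough sketch of the "juju store" plugin idea floated above: juju runs any
    executable named juju-<command> found on the PATH as a subcommand, so a
    cloud-agnostic upload could be wrapped like this (the plugin, the env var, and
    the s3cmd target are all hypothetical, and it sidesteps the credential problem
    marcoceppi_ raises):

        #!/bin/sh
        # juju-store -- hypothetical plugin: `juju store backup.tar.gz`
        # Uploads a file with whatever provider tooling is installed locally (s3cmd here).
        set -e
        FILE="$1"
        BUCKET="${JUJU_STORE_BUCKET:-s3://my-backups}"   # assumed env var, not a real juju setting
        s3cmd put "$FILE" "$BUCKET/$(basename "$FILE")"
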
=== sidnei` is now known as sidnei
[14:46] <fcorrea> what's the best way to compute the unit name without charmhelpers? Since this one is a shell-based charm, I suppose I should just extract it from the path or something?
[14:56] <mgz> fcorrea: can't you just get the JUJU_UNIT_NAME envvar?
[14:58] <fcorrea> mgz, oh, didn't know about that one. Thanks
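
    For a shell-based charm that looks roughly like this inside any hook
    (JUJU_UNIT_NAME is set by juju for every hook run):

        #!/bin/sh
        # config-changed (sketch): derive the unit and service names from the environment
        unit="$JUJU_UNIT_NAME"     # e.g. "myservice/0"
        service="${unit%%/*}"      # strip the unit number -> "myservice"
        juju-log "hook running on $unit (service: $service)"
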
[15:04] <marcoceppi_> hatch: good news, I'm just about done reviewing your charm
[15:04] <hatch> saweeeeet
[15:05] * hatch hopes it was ok
[15:05] <marcoceppi_> hatch: there's bad news, but I don't want to harsh your excitement
[15:06] <hatch> lol, well there are no tests
[15:06] <marcoceppi_> well, that's one thing
[15:06] <marcoceppi_> that I forgot to mention
[15:06] <hatch> but they aren't required yet :P
[15:06] <marcoceppi_> exactly
[15:07] <hatch> it's ok, at least I will know what work to do
[15:08] <hatch> I work with open source software; it's pretty hard to hurt my feelings with a review haha
[15:09] <mgz> your code sucks and I hate you!
[15:09] <hatch> then don't use it :P
[15:09] <hatch> or
[15:09] <hatch> write your own :P
[15:09] <Makyo> In python.
[15:09] <hatch> or even better
[15:09] <hatch> PRs accepted
[15:09] <Makyo> (or go?)
[15:09] <marcoceppi_> y u no pascal?
[15:10] <hatch> VB?
[15:10] <mgz> hatch: if you've not read it, http://mumak.net/stuff/your-code-sucks.html
[15:10] <Makyo> Haskelljuju.
[15:10] <hatch> mgz I haven't, I'll have to check it out
[15:11] <marcoceppi_> I could have saved a lot of writing in my review if I'd just written "ur code suks and I </3 u"
[15:11] <hatch> haha
[15:11] <hatch> well, a lot of it was written by jcsackett so I'll blame the crappy parts on him
[15:11] <hatch> lol
[15:12] <marcoceppi_> bugger, I can't edit comments, grr
[15:12] <jcsackett> hatch: yeah, but all the architecture decisions were things you stuck me with. :-P
[15:13] <hatch> lol
[15:13] <hatch> ok, that's the truth
[15:16] <jcsackett> hatch: did you ever get a chance to do the apache frontend stuff for the charm? i still haven't had much time to throw at it.
[15:17] <hatch> jcsackett nope, I started on it, then stopped and started on nginx instead, then stopped on that
[15:17] <hatch> heh
[15:17] <hatch> I think I'm going to do nginx whenever I do get around to it
[15:18] <marcoceppi_> hatch: nginx in main possibly for 14.04, I am so excite
[15:18] <Makyo> \o/
[15:19] <mgz> marcoceppi_: is it meme day today? :)
[15:19] <hatch> marcoceppi_ yeah, that's kind of why I decided to just do nginx and forget about apache
[15:20] <marcoceppi_> mgz: every day is meme day for me
[15:20] <mgz> marcoceppi_: :D
[15:23] <hatch> marcoceppi_ thanks a lot for the review, after today's sprint I'll take a look at it
[15:31] <hatch> marcoceppi_ hmm, there must be some new updates to proof from when I submitted, because I didn't get any messages before
[15:33] <marcoceppi_> hatch: there have been several
[15:33] <marcoceppi_> though the "does not provide anything" one has been there for a while
[15:33] <hatch> oh haha ok
[15:33] <hatch> oh yeah, the provides one I knew about
[15:33] <hatch> ok np, I'll have to make sure to update my proof tool
[15:36] <hatch> marcoceppi_ just read through the review - great comments, thanks. Typically, do you like people to reply to the reviews?
[15:37] <marcoceppi_> hatch: it's whatever the author wants. I typically like people to update the charm ;) but for questions I usually say something like "Reply here, or find us in #juju, or ask ubuntu, or the mailing list"
[15:38] <hatch> ok cool, np, I'll definitely be updating the charm to address the concerns; it's just that sometimes reviewers like to have a dialogue :)
[15:39] <marcoceppi_> hatch: when you're ready for a review, if you want to address each bullet point, go for it. Otherwise it'll just go through a full review again, as I might (hopefully) not be the one reviewing it next time
[15:39] <hatch> :)
[15:40] <marcoceppi_> and by hopefully, it's not that this charm is le suck, but rather that hopefully there will be more people than just me doing charm reviews \o/
[15:41] <hatch> lol
[15:41] <Makyo> (but maybe also that)
[15:41] <marcoceppi_> (oh, definitely that)
[15:42] <marcoceppi_> hatch: actually, you might be interested in this: https://code.launchpad.net/~dstroppa/charms/precise/node-app/refactor-hooks/+merge/200330
[15:42] <marcoceppi_> there's a charm.js now that mocks a lot of the events in an async way
[15:42] <hatch> yeah, I would have rejected it
[15:42] <hatch> :P
[15:43] <hatch> jk
[15:43] <hatch> but really, it would fail jshint
[15:44] <hatch> the improvements look really good
[15:44] <hatch> though
[16:23] <bcsaller> kirkland: are you still looking for me?
[16:29] <kirkland> bcsaller: howdy! yeah, I'm in a meeting right now, but want to catch up later
[16:30] <bcsaller> kirkland: sounds good, I'll be around
=== sarnold_ is now known as sarnold
[16:40] <maxcan> good morning
[16:41] <maxcan> in the mongodb charm, when i add units, my service's database-relation-changed hook gets called. is there any way to have this triggered only by the primary unit of the replica set?
[16:42] <marcoceppi_> maxcan: no, this is by design of juju
[16:42] <maxcan> hm
[16:43] <maxcan> do you know if there's any way to determine the address of the primary in the relation-changed hook?
[16:43] <marcoceppi_> maxcan: shouldn't all of them be able to accept connections?
[16:44] <maxcan> they can
[16:44] <maxcan> the problem is the mongo driver for haskell is kind of out of date and doesn't send writes to the primary
[16:44] <marcoceppi_> maxcan: there's no peer election or anything like that in juju yet; relations are created at the service level, but executed on a per-unit basis
[16:44] <maxcan> i think
[16:49] <maxcan> may be fixable on the haskell side
[16:49] <maxcan> thanks
[16:49] <marcoceppi_> maxcan: np!
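
    One way to approximate what maxcan is after, pending any real leader election in
    juju: ask mongod itself which replica-set member is primary from inside the hook.
    A sketch, assuming the mongo shell is installed on the unit and the default port;
    the host string mongod reports may not match `hostname -f` exactly:

        #!/bin/sh
        # database-relation-changed (sketch): only act when this unit is the replica-set primary
        primary=$(mongo --quiet --eval 'rs.isMaster().primary' 2>/dev/null || true)
        if [ "$primary" = "$(hostname -f):27017" ]; then
            juju-log "unit is primary ($primary); running primary-only steps"
            # ... primary-only work goes here ...
        else
            juju-log "unit is not primary (primary is ${primary:-unknown}); skipping"
        fi
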
=== Guest51501 is now known as Kyle
[17:04] <fwereade> cmars, bcsaller, marcoceppi_: hey, I've ducked out of the sprint
[17:04] <fwereade> didn't see the reschedule
[17:04] <fwereade> but it's not long after I land back in malta, so I'm not sure I can make that one either
[17:04] <fwereade> since I'm here, do you want to do it now?
[17:05] <fwereade> mramm, cmars, bcsaller, marcoceppi_, jcastro: ^^
=== marcoceppi_ is now known as marcoceppi
[17:05] <marcoceppi> fwereade: if everyone else is available, I can make myself available as well
[17:07] <cmars> fwereade, works for me
[17:07] <mramm> sure
[17:07] <mramm> we can do it now
[17:08] <jcastro> I can go now
[17:09] <fwereade> cmars, marcoceppi, mramm, jcastro, bcsaller: https://plus.google.com/hangouts/_/7acpiks0pmu1dt6ouvjavqr1k0?hl=en
[17:10] <mgz> jcastro: semi-off-topic, do you know what the around-fosdem plans are? I'm sorting out travel and stuff soon, and wonder if you still want me for anything.
[17:10] <jcastro> mgz, yeah, we'd love to have you at cfgmgmt camp
[17:11] <jcastro> http://cfgmgmtcamp.eu/
[17:11] <jcastro> mgz, as for fosdem itself, rbasak was going to submit a talk; I think you should too!
[17:12] <rbasak> mgz: I've booked travel/hotel already. I would've asked you had I known you were coming!
[17:12] <mgz> that's the hardest damn url to type
[17:12] <jcastro> yeah, tell me about it
[17:13] <mgz> rbasak: I was doing accommodation separately anyway, I think, but are you eurostarring over? could aim at the same train again
[17:13] <rbasak> mgz: yes, I'm eurostarring. I'll forward you my itinerary
[17:13] <mgz> rbasak: ta!
[17:14] <rbasak> You have mail.
[17:55] <bjorne> 2013-12-22 11:20:04,813 - url_helper.py[DEBUG]: Attempting to open 'http://172.16.1.21/MAAS/metadata//2012-03-01/user-data' with 1 attempts (0 retries, timeout=None) to be performed
[17:55] <bjorne> 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: Failed reading from http://172.16.1.21/MAAS/metadata//2012-03-01/user-data after 1 attempts
[17:55] <bjorne> 2013-12-22 11:20:04,856 - url_helper.py[DEBUG]: 1 errors occured, re-raising the last one
[17:55] <bjorne> does anyone know anything about this?
[17:56] <marcoceppi> bjorne: what version of juju? 0.7?
[17:56] <bjorne> the latest.
[17:56] <marcoceppi> bjorne: that's a loaded answer, can you run juju version?
[17:56] <bjorne> w8
[17:57] <bjorne> 1.16.5, saucy :)
[17:57] <mgz> rogpeppe: can I have a rubber stamp on the CR to fix bug 1266518 please?
[17:58] <marcoceppi> bjorne: I've never seen that error then, it's a new one to me
[17:58] <rogpeppe> mgz: looking
[17:58] <bjorne> marcoceppi https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1264717
[17:59] <rogpeppe> mgz: LGTM
[18:00] <mgz> rogpeppe: thanks, landing
[18:02] <mgz> sinzui: ^landing that fix now
[18:02] <sinzui> thank you mgz
=== allenap_ is now known as allenap
=== BradCrittenden is now known as bac
=== jaywink_ is now known as jaywink
[19:52] <maxcan> hi, i'm following the instructions at https://jujucharms.com/fullscreen/search/precise/mongodb-20/?text=mongodb#bws-readme to set up mongos
[19:53] <maxcan> my shard replica sets have this error: agent-state-info: 'hook failed: "replica-set-relation-joined"'
[19:54] <marcoceppi> maxcan: can you get the logs from the failed unit under /var/log/juju/unit-*.log?
[19:54] <maxcan> yeah
[19:54] <maxcan> this is the error:
[19:54] <maxcan> 2014-01-06 19:44:19 INFO worker.uniter.jujuc server.go:108 running hook tool "juju-log" ["port_check: Unable to connect to ec2-54-245-13-124.us-west-2.compute.amazonaws.com:27017/TCP."]
[19:55] <maxcan> that host is the host the machine is running on
[19:55] <maxcan> s/machine/agent
[19:56] <maxcan> it looks like the ports are open on the security group
[19:56] <marcoceppi> maxcan: that's odd, can you paste more of the log to http://paste.ubuntu.com ?
[20:00] <maxcan> http://paste.ubuntu.com/6705155/
[20:04] <maxcan> here is my mongos log: http://paste.ubuntu.com/6705170/
[20:07] <maxcan> and my deployment commands: http://paste.ubuntu.com/6705194/
=== wendar_ is now known as wendar
[20:43] <maxcan> so i played around a bit, and have managed to get a python error: http://paste.ubuntu.com/6705348/
[20:43] <maxcan> this is in the juju logs for mongo shard001
[20:44] <marcoceppi> maxcan: are you sharding via juju add-unit?
[20:44] <maxcan> deploy -n 3
[20:45] <maxcan> as in the readme. should I try to deploy to one server then add-unit?
[20:45] <marcoceppi> maxcan: I wonder if there's a race condition. Where are you deploying? I'm going to try in HP Cloud with deploy -n 3 as well and see if I can replicate
[20:45] <maxcan> amazon
[20:46] <maxcan> i'll try to add-unit
[20:46] <marcoceppi> I usually demo with "deploy mongodb, now we want to scale" and add-unit. Let me see if I can replicate this
[20:47] <maxcan> in general, juju is theoretically parallel, right?
[20:48] <maxcan> as in, I can fire off a bunch of deploy commands in parallel
[20:48] <maxcan> or is it better to run everything serially
[20:49] <sarnold> like juju deploy foo & juju deploy bar & juju relate ... & ?
[20:49] <maxcan> yeah
[20:50] <maxcan> well, more like: juju deploy mongo & juju deploy myapp & juju deploy haproxy ; juju add-relation mongo myapp
[20:51] <marcoceppi> maxcan: yes, you totally can, and you don't even need &; it can be: juju deploy mongo; juju deploy myapp; juju deploy haproxy; juju add-relation mongo myapp
[20:51] <marcoceppi> maxcan: that doesn't mean that the charm isn't doing something bad though ;)
[21:13] <lazypower> When using PPAs in my charm, it's customary to document that, and why it's there, correct?
[21:15] <lazypower> i ask because upstream broke ruby 1.9 compat. i need ruby 2.x, and presently i can either rbenv-install that or use a PPA for ruby 2.x on 12.04 - i opted for the PPA instead of having a ton of rbenv-based deployments out there where things can go wonky.
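
    A minimal sketch of the install-hook side of that choice; the PPA and package
    names below are placeholders (the real ones -- and the reasoning above -- belong
    in the charm's README):

        #!/bin/bash
        # install hook (sketch): Ruby 2.x from a PPA on precise instead of per-unit rbenv builds
        set -e
        apt-get install -y python-software-properties   # provides add-apt-repository on 12.04
        add-apt-repository -y ppa:example/ruby2.x       # placeholder PPA -- substitute the real one
        apt-get update
        apt-get install -y ruby2.0                      # package name depends on the PPA
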
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[21:38] <maxcan> marcoceppi: progress, thanks
[21:38] <maxcan> but, still having problems
[21:39] <maxcan> if you look at the bottom of the log for the mongos agent, http://paste.ubuntu.com/6705606/ , you see "running hook tool "juju-log" ["mongos_relation_change: undefined rel_type: None"]"
[21:40] <maxcan> and something similar at the bottom of the shard001 log: http://paste.ubuntu.com/6705649/
[21:40] <maxcan> my add-relation commands are:
[21:40] <maxcan> juju add-relation mongos:mongos-cfg shard001:configsvr
[21:40] <maxcan> juju add-relation mongos:mongos shard001:database
[21:48] <maxcan> so, mongo wasn't running on the first shard001 machine, and when I tried to start it via upstart, it immediately died
[21:53] <maxcan> it seems that when the other two members of the replica set come online, they kill the primary. and I can't get the primary to start at all
[21:53] <maxcan> service mongodb start on the host runs, but the process immediately dies without any logs
[21:55] <arosales> marcoceppi, were you going to add the pastebin examples to https://github.com/marcoceppi/amulet ?
[21:56] <marcoceppi> arosales: yes, still am going to
[21:57] <arosales> marcoceppi, ok thanks
[21:57] <lazypower> arosales: o/
[21:57] <lazypower> maxcan: that's typically due to a lock being present in the MongoDB data directory
[21:58] <lazypower> however, you should have gotten some log output.
[21:58] <maxcan> checked the lock file
[21:58] <lazypower> what locations did you check for logs? both /var/log/upstart and /var/log/mongodb, right?
[21:58] <maxcan> yes
[21:58] <lazypower> Hmm :?
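
    For reference, the usual recovery when an older mongod dies immediately because of a
    stale lock file -- a sketch assuming the stock Ubuntu package paths, run on the
    affected unit:

        sudo service mongodb stop
        sudo rm /var/lib/mongodb/mongod.lock                       # only if no mongod process is actually running
        sudo -u mongodb mongod --dbpath /var/lib/mongodb --repair
        sudo service mongodb start
        tail -n 50 /var/log/mongodb/mongodb.log                    # confirm it stayed up this time
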
arosaleslazypower, hello22:00
lazypowermaxcan: can you pastebin the logs?22:01
maxcansorry, just killed my environment.. will PB them in a few minutes22:02
=== gary_poster is now known as gary_poster|away
[22:11] <lazypower> No problem. When you've got them, I'd like to look them over. I had some funkiness when I moved a client's MongoDB setup to juju - the second pass went flawlessly and I couldn't find the bug i ran into.
[22:33] <maxcan> #facepalm
[22:33] <maxcan> sorry guys, one of my commands was add-relation mongos:mongos shard001:configsvr instead of configsvr:configsvr
[22:36] <lazypower> Glad it's sorted :)
[22:37] <maxcan> well, mongos:mongos-cfg
[22:37] <maxcan> lazypower: i wish it was...
[22:37] <maxcan> now, the mongos agent is running a plain, vanilla mongod daemon, not mongos
[22:41] <maxcan> it seems to not be picking up the config servers
[22:44] <lazypower> hmm
[22:45] <lazypower> maxcan: will you be working on this through the night? I'd be happy to join a hangout session and talk through this with you.
[22:45] <maxcan> seems that you need 3 config servers
[22:45] <maxcan> i was only running 1
[22:45] <lazypower> it's common to run config servers on each host that's going to house a mongod
[22:45] <maxcan> ahh
[22:46] <lazypower> if your config server tanks, they vote and promote just like mongod
[22:46] <lazypower> also, they act as witnesses if configured properly.
[22:46] <lazypower> or, what's the name they use? arbiter, i think?
[22:46] <maxcan> something like that
