=== CyberJacob is now known as CyberJacob|Away
=== gary_poster is now known as gary_poster|away
=== freeflying is now known as freeflying_away
=== _thumper_ is now known as thumper
[03:51] aquarius: subordinate
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== CyberJacob|Away is now known as CyberJacob
[08:51] marcoceppi, the thing I don't understand is: I'm meant to write a whole charm? and put my ssh keys in it?
[09:10] I am working on a local charm and have updated the charm contents to fix a bug, but when I rerun the charm it still picks up the buggy version
[09:10] how do I force new code in?
[09:38] mysql charm problem: http://paste.ubuntu.com/6555177/
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[10:47] how do I force new code of a local charm to get pushed to new deploys?
[11:14] noodles775: hei
[11:16] Hi InformatiQ.
[11:16] I am kinda stuck developing a charm; it seems it is caching an old code version, so it keeps failing to deploy that local charm
[11:17] noodles775: how do I force new code into the failed deploy?
[11:17] or to a new deploy even
[11:18] InformatiQ: That will happen if you remove the 'revision' file which was created. juju deploy increments it each time you deploy. If you remove and recreate it, you'll effectively tell juju to deploy a revision for which it already has a copy. Does that make sense? Although, that shouldn't affect a new deploy (i.e. a new environment).
[11:19] noodles775: I have destroyed many envs but still the same issue
[11:19] trying the revision thing
[11:21] InformatiQ: if you have a freshly bootstrapped environment, I cannot see how it could even know about old versions of your charm. But maybe others will know.
[11:32] noodles775: ok, removing the revision file didn't help either
[11:32] but the new deploy did not create that file and it started at 12
[11:32] something is fishy
[11:37] InformatiQ: I wasn't suggesting removing the revision file - rather that removing the revision file can cause the behaviour you saw (with an existing environment). So given the above info, I don't understand why you'd be getting a cached version of your charm in a freshly bootstrapped environment. We'll need to wait for someone else to find out more, I think.
[11:39] InformatiQ: you can do "juju deploy --upgrade local:foo"
[11:47] thanks jam, that seems to have remedied the issue
[11:49] jam: for my own learning, why/how is there a cached version of the charm if the environment is freshly bootstrapped?
[11:51] noodles775: if you destroy-environment, I would expect it to destroy the charm cache, but I don't know for sure. The above conversation sounded more like InformatiQ was destroying the *service*, which doesn't delete the cached charm
[11:51] and doing "juju destroy-service foo; juju deploy local:foo" will reuse the charm that was uploaded unless you do "juju deploy --upgrade local:foo"
[11:51] Right - I'd understood that the environment had been re-created (rebootstrapped). Thanks.
[11:52] jam: I was doing destroy-environment
[11:52] FYI that is a local environment (lxc)
[11:52] InformatiQ: I'm not personally 100% sure about where the files are going and what may or may not be cleaned up.
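To recap the redeploy workflow that resolved InformatiQ's issue above - a minimal sketch, assuming a local repository at ~/charms with a precise charm named foo (both names are placeholders, not taken from the log):

    # deploy a local charm from a repository laid out as ~/charms/precise/foo
    juju deploy --repository ~/charms local:foo

    # after editing the charm, force juju to pick up the new code instead of
    # the copy it already cached in the environment
    juju deploy --upgrade --repository ~/charms local:foo

    # for a service that is already deployed, upgrade-charm refreshes it in place
    juju upgrade-charm --repository ~/charms foo

The --upgrade flag bumps the local charm's revision before deploying, which is why it side-steps the cached copy that a plain destroy-service/deploy cycle reuses.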
It *sounds* like a bug if destroy-environment isn't destroying the charm cache
=== CyberJacob is now known as CyberJacob|Away
=== freeflying is now known as freeflying_away
=== gary_poster|away is now known as gary_poster
[13:13] aquarius: the subordinate charm - they can be super simple, it's just a way to bolt additional stuff onto existing charms without having to fork a charm
[13:14] marcoceppi, I don't think I understand how that ought to work. I mean... charms live in the charm store, right?
[13:14] aquarius: no
[13:14] aquarius: charms live either locally, or in a personal branch in the charm store, or in the official charm store
[13:15] marcoceppi, so... I should write a whole new charm, and hardcode my ssh key into it?
[13:15] and then the charm deploys that key and sets up the rsync backup?
[13:15] I was hoping that someone had already done something like that...
[13:16] so you can mkdir -p ~/charms/precise; juju charm create aquarius-psql-backups ~/charms/precise; hack on that code a little; juju deploy --repository ~/charms local:aquarius-psql-backup
[13:17] or, bzr push lp:~aquarius/charms/precise/aquarius-psql-backup/trunk; then juju deploy cs:~aquarius/aquarius-psql-backup
[13:17] of course, naming the charm whatever you want - you don't have to put your name in the charm's name
[13:17] I was just trying to make a personalized example
[13:18] then, during the install hook, have it copy the SSH key you want, then drop a file in cron.d
[13:18] which I think is the extent of your backup stuff
[13:18] no need to make configuration options, no need to go crazy and make it useful for anyone else if you don't want to
[13:18] that's basically my plan, indeed :)
[13:19] aquarius: we had plans to write a backup charm, but the permutations are so plentiful it's hard to do it and do it right
[13:19] yeah
[13:19] aquarius: the last thing you'll want to do is edit the metadata.yaml file and make sure "subordinate" is true
[13:19] I'd have to generate an ssh key and then teach the destination machine about it
[13:20] maybe it'd just be easier to have the destination machine pull the stuff, rather than the source machine push it :(
[13:20] I am wary of sshing anywhere as root.
[13:20] man, I hate being a sysadmin.
=== freeflying_away is now known as freeflying
=== sidnei` is now known as sidnei
[15:01] Hi, could anyone answer a couple of questions regarding Juju manual provisioning?
[15:21] FourDollars_: it's best to just ask, instead of asking to ask
[15:21] oops, sorry FourDollars_; wrong ping
[15:46] marcoceppi, I seem to still have 1.2.0 of charm-tools
[15:48] jcastro: what's `dpkg -l charm-tools` show?
[15:49] ii charm-tools 1.2.0-0ubunt all Tools for maintaining Juju charms
[15:50] nm, I found the problem
[15:51] marcoceppi, I can confirm the right readme template this time. \o/
[15:51] nice work
[15:52] jcastro: try this: go to a charm, run `juju charm add tests`
[15:59] marcoceppi, that worked
[16:00] jcastro: now go write a bunch of tests
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[17:44] hey, can someone help me with an agent-state-info error I'm getting
[17:44] ?
[17:44] "agent-state-info: '(error: container "oatman-local-machine-1" is already created)'"
[17:56] oatman: this was brought up last night
[17:57] marcoceppi, yeah?
[17:57] it's been working all day, I don't understand what broke
[17:57] oatman: yeah, seems like a new bug that came up recently
[17:57] oatman: do you have anything in /var/lib/cache/lxc?
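For reference while following the exchange below, these are the directories being checked - a rough sketch assuming the 1.16-era local (LXC) provider layout, with the container name taken from the error message above:

    # cloud images the local provider has downloaded and cached
    ls /var/cache/lxc

    # containers created for local environments; a stale entry here is what
    # produces the "container ... is already created" error on re-bootstrap
    sudo ls /var/lib/lxc

    # if a container directory survives destroy-environment, it can be removed
    # by hand (destructive - only do this after the environment is destroyed)
    sudo rm -rf /var/lib/lxc/oatman-local-machine-1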
[17:57] I think that's the path
[17:57] one min
[17:58] no folder called cache in /var/lib
[17:58] webbrandon: juju init and juju generate-config are the same; they're aliases of each other
[17:59] marcoceppi, I have /var/cache/lxc
[18:00] I have
[18:00] └── cloud-precise
[18:00]     └── ubuntu-12.04-server-cloudimg-amd64-root.tar.gz
[18:00] oatman: that's fine
[18:01] oatman: I think /var/lib/lxc is what I wanted. Could you pastebin the tree of that?
[18:01] sure could
[18:03] it is 29k lines, is that ok?
[18:04] I could tree -d it
[18:07] https://pastebin.canonical.com/101822/
[18:07] had to -d it marcoceppi
[18:07] 30k lines crashed my browser last time ;-)
[18:08] oatman: okay, yeah, I got what I wanted
[18:08] oatman: juju destroy-environment
[18:08] then make sure /var/lib/lxc/oatman-local-machine-1 is removed as well
[18:08] ok
[18:08] bootstrap and try again
[18:08] oatman: also, when you've got a second, what does juju version say?
[18:09] so I still have this in the local machine folder:
[18:09] config fstab rootfs rootfs.hold
[18:09] 1.16.5-raring-amd64
[18:10] interesting - when you destroy-environment, everything in /var/lib/lxc that's related to a local-machine should be removed
[18:11] should I blow it away manually?
[18:12] I've blown it away anyway, I'm so close to finishing what I was doing!
[18:12] I'll let you know if it works
[18:12] oatman: yeah, blow it away
[18:13] marcoceppi, fixed!
[18:13] thanks so much
[18:14] oatman: awesome, glad that worked out for you
[18:14] oatman: if after you destroy this it still lingers, open a bug against juju-core, as that's a regression
[18:15] will do
[18:15] I think I see the old bug report
[18:16] it's interesting that I've done about 50 bootstraps today, and it's only the last one that caused issues
[18:24] So, before I get crazy and really manage to foul things up - can I mix-mode my juju deployments? As in manually provision a machine under the amazon environment and consume it?
[18:30] lazypower: you *can*
[18:30] lazypower: you can turn on an amazon machine, then juju add-machine ; so long as that machine has your ssh key on it
[18:31] then you can juju deploy --to ; after the machine is enlisted in juju status
[18:31] lazypower: it's still betaware though, use at your own risk
[18:31] the next release of juju, 1.17, should have better support for this
[18:41] marcoceppi: so, if I get really fun and add a machine from a different provider, juju is basically blind to this and doesn't care
[18:42] lazypower: as long as the machine can talk to the bootstrap node, yes, juju is blind to it
[18:42] lazypower: so you may need to open ports in the firewall to have access to the bootstrap node exposed
[18:42] Ah yeah, I didn't consider that juju uses the internal IP structure for cross-communication
[18:43] marcoceppi, I spoke too soon, now my instance-state is "missing"
[18:44] it did work once
[18:45] marcoceppi, oh gods, I was just impatient
[18:45] marcoceppi, it's come up now :-)
[18:46] :)
[19:10] /window 8
[19:18] marcoceppi, woot \o/ Charm Tools 1.2.3 released :-)
[19:19] aquarius: heh, yeah, just building the windows tools now
[19:19] arosales: *^
[19:20] marcoceppi, cool
[19:20] marcoceppi, remember to get an RT for signing
[19:21] arosales: yeah
[19:21] marcoceppi, thanks
=== CyberJacob|Away is now known as CyberJacob
[19:29] Cool, powershell isn't working :(
[19:30] marcoceppi: which version of windows, and/or powershell?
[19:30] And do you need a third party to test it? I've got a fleet of VMs here we can toy with.
[19:30] Windows Server 2012, and I can't get it to give me a prompt
[19:34] shoot me the instructions and I'll fire up a test for you. I've got a meeting in 30 minutes, but I can help until then.
[20:22] marcoceppi, reading your release announcement for 1.2.3 charm-tools
[20:22] "`juju charm add tests` will create an example Amulet test file based on metadata.yaml information for the charm provided (or cwd, if cwd is a charm)"
[20:22] sorry for being dense here, but what is "cwd"
[20:22] in this context
[20:22] current working directory
[20:23] basically you either do `juju charm add tests /tmp/precise/charm` or `cd /tmp/precise/charm; juju charm add tests`
[20:23] marcoceppi, so the default, if not provided, is the cwd
[20:23] sorry - if no charm path is given it uses the cwd, and hopefully that is a charm path
[20:23] arosales: right, it'll check if the current directory is a charm, if you didn't give it a path
[20:24] marcoceppi, ok I follow, thanks
[20:24] marcoceppi, thanks
[20:24] arosales: a lot of the commands that take a path do this, I don't know why I felt the need to highlight it
[20:24] marcoceppi, no, it's good info, I just read it wrong
[20:56] Hey guys, working on a node deploy (https://juju.ubuntu.com/docs/howto-node.html) and can't seem to get the node app to respond. I've related the charms and exposed haproxy, but when I go to haproxy's public URL I can't get a response. Any thoughts or ideas to point me in the right direction?
[20:59] youngnico, 1) you don't have to expose haproxy, 2) can you show us the result of juju status
[20:59] freeflying: you do have to expose haproxy if you want to get to it from outside the network
[21:00] http://pastebin.com/QQ8NXCrM
[21:00] youngnico: your haproxy unit is in an error state
[21:01] Ahhh..
[21:01] youngnico: I just patched haproxy, as there was a problem with it earlier
[21:01] config-changed?
[21:01] marcoceppi, you're right, youngnico's case is a public cloud, so he has to :)
[21:01] youngnico: yes, so for this problem in particular, just run `juju resolved haproxy/0`
[21:02] youngnico: that will push past the error, and on the next go-around it should receive the relation information and work
[21:03] youngnico: finally, if you're just testing the node app, you don't necessarily need haproxy at all :). If it still gives you grief after a few mins, run juju expose node-app; then go to the node-app's URL
[21:03] After resolving the error, should I re-relate the charms? Or run a restart of some sort?
[21:03] haproxy, and all proxies, are there if you have more than one unit. For testing and playing, it's just a machine you don't need
[21:03] youngnico: no, it'll continue on its way. There are a bunch of events queued to run, but they can't because it's in an error state
[21:03] marcoceppi, ok, found another template problem
[21:04] jcastro: sweet, I want to squeeze a few more things into the 1.2 tree, so open a bug/merge req
[21:04] charm proof gives me "E: README.md Includes boilerplate README.ex line 11" for, like, all the headings, etc.
[21:04] ok
[21:04] jcastro: ah, yeah, that makes sense
[21:04] I'll drop the boilerplate checking
[21:05] https://bugs.launchpad.net/charm-tools/+bug/1260073
[21:05] <_mup_> Bug #1260073: charm proof is too strict
[21:05] FYI
[21:05] jcastro: ack, thanks
[21:05] glad I didn't get too deep into the Windows stuff
[21:06] marcoceppi: / freeflying - thanks guys!
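Pulled together, the recovery steps marcoceppi suggests above look roughly like this - a sketch assuming the service and unit names from youngnico's pastebin (node-app, haproxy, mongodb):

    # see which unit is stuck in an error state
    juju status

    # clear the failed config-changed hook on the haproxy unit; the queued
    # relation events then run on the next pass
    juju resolved haproxy/0

    # optional: skip haproxy while testing and hit the node app directly
    juju expose node-app
    juju status          # note node-app/0's public address, then browse to it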
So I pushed past the errors, and the server is returning a 502 (Bad Gateway) - http://pastebin.com/apUAJTcP
[21:06] Is this just a problem with the node-app example?
[21:06] youngnico: I just noticed that the mongodb service is still in an installed state, i.e. not started yet
[21:06] Oh, never mind, it was just starting up!
[21:07] Takes a bit, I need to be more patient!
[21:07] youngnico: yeah, it can take a few mins to come up :)
[21:07] youngnico: I'll have the fix for haproxy land soon so you won't run into that next time
[21:07] it'd be nice to have splash pages there instead of nginx errors
[21:07] jcastro: file a bug ;)
[21:07] "yo, I will error out until you connect a database to me"
[21:07] marcoceppi, is that per charm?
[21:08] marcoceppi, I think david c. had identified some issues with haproxy too
[21:08] * arosales grabs bug #
[21:08] arosales: yeah, that's what I patched last night
[21:08] I caught him online when he found the problem, which youngnico ran into
[21:08] https://bugs.launchpad.net/charms/+source/haproxy/+bug/1257062
[21:08] <_mup_> Bug #1257062: config-changed fails
[21:08] ah ok, I think I saw the merge now, doing a search
[21:09] Oh yay, dave ack'd it
[21:09] let me merge it now
[21:09] https://code.launchpad.net/~marcoceppi/charms/precise/haproxy/cfg-changed-fix/+merge/198491
[21:09] Thanks a bunch guys, you have no idea how much help this is!
[21:10] marcoceppi, does merge 198491 resolve bug 1257062?
[21:10] arosales: yes
[21:10] arosales: they're attached
[21:10] or are there still issues with version 19
[21:10] youngnico: you're welcome! Let us know if you have any other questions
[21:11] marcoceppi, cool, could you drop a note in the bug so the bug reporter, darryl, sees the current status.
[21:11] Will do!
[21:11] I think he may still be under the impression the workaround needs to be deployed (i.e. deploy version 18)
[21:12] arosales: yeah, I'm writing him now that he should be able to use haproxy-22 and do an upgrade-charm
[21:13] marcoceppi, cool, another bug fixed
[21:13] squish'em squash'em
[21:14] another bites the dust
[21:14] another one bites the dust, that is.
[21:37] So I'm trying to deploy my own node application now, using the node-app charm. I made a configuration file with the location of my app on GitHub, and ran juju deploy --config ./config.yaml node-app (config.yaml has the git repo pointer), but it throws: error: no settings found for "node-app".
[21:37] Am I passing the configuration yaml file incorrectly?
[21:40] I'm trying to pass my configuration like: https://juju.ubuntu.com/docs/charms-config.html
[21:41] youngnico: what does your config.yaml file look like?
[21:43] I just copied the example at: https://github.com/charms/node-app/blob/master/config.yaml
[21:43] And changed the repository to my own. Does the repo need to be public?
[21:43] Or can it forward my keys to pull to the remote?
[21:44] Actually, even if I just copy the config exactly, it still throws that error
[21:45] youngnico: yeah, you need to format it like so: https://juju.ubuntu.com/docs/charms-config.html#config-deployment
[21:45] the config.yaml and the configuration file you're writing are two different things
[21:46] I'd recommend naming it deployment.yaml, then you want to change mediawiki to node-app, and the key: values to the ones you wish to change
[21:47] marcoceppi: Sorry, didn't see your response. I understand they are, but why? I know there is some concept behind it, I am just trying to understand.
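Returning to the configuration question: a sketch of the deployment-config layout marcoceppi is describing, assuming a service named node-app; the option name repository is only a placeholder here - use whichever keys the charm's own config.yaml actually defines:

    # the top-level key is the service name; the keys under it are options
    # from the charm's config.yaml
    cat > deployment.yaml <<'EOF'
    node-app:
      repository: https://github.com/example/my-node-app.git
    EOF

    juju deploy --config ./deployment.yaml node-app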
[21:47] Ahhh, okay, the top-level key needs to be the charm name.
[21:48] youngnico: right, you can actually put multiple services' configurations in one file
[21:48] The way I think is there should only be one command for an event, why two in this case?
[21:48] webbrandon: it was first called generate-config, but that's so long, we decided init was a shorter name
[21:49] we kept the old one for backwards compat
[21:49] Has anyone recently set up MMS Monitoring with the MongoDB charm? I'm not finding the MMS Token it's looking for; it appears they have updated the MMS Service to use an Auth Key and Secret Key for configuration now.
[21:49] ohhh, makes sense now. So it will eventually get deprecated
[21:49] Handy! So the config just sets values; the config.yaml when authoring a charm is more for defining the structure of those values... I'm getting closer, thanks!
[21:49] youngnico: exactly!
[21:50] webbrandon: maybe/probably. There's some clashing, as some people don't like the name init, others don't like the length of generate-config
[21:51] lazypower: I have no idea. negronjl wrote the charm, maybe he has some insights?
[21:51] repeating the message, with ping, so it's easier to read
[21:51] I'm thinking about opening a bug on launchpad for this - I'm not done digging though.
[21:52] negronjl: lazypower asks: "Has anyone recently set up MMS Monitoring with the MongoDB charm? I'm not finding the MMS Token it's looking for; it appears they have updated the MMS Service to use an Auth Key and Secret Key for configuration now."
[21:52] lazypower: +1 on opening a bug, even now; if you find the answer then you can just document it there, if not it's at least there
[21:55] marcoceppi, lazypower: a bug would be the thing to do here. As I remember it, I didn't set up the mongodb charm for monitoring :/
[21:56] marcoceppi, lazypower: I can work on that soon(ish) :)
[21:57] negronjl: in your infinite free time of course ;)
[21:57] marcoceppi, ROFL
[21:57] negronjl: Actually - you're fine the way you are. Looking over the charm, this was for the configuration flags
[21:58] lazypower, you see ... I'm awesome ... I fixed it even before I knew I needed it :P
[21:58] this has been deprecated but not removed from the default config - https://jira.mongodb.org/browse/SERVER-8055
[21:58] j/k :P
[21:58] I'll open a bug report against this and issue a merge request later if you don't mind
[21:59] marcoceppi: one last question, and I think I'll have this working: when I set everything up on my own app, the node-app status is errored with "'hook failed: "install"'".
[21:59] lazypower, not at all ...
[21:59] How would I go about debugging that? Is there a "redeploy" command or something with a verbose flag so I can see what's causing the error?
[22:00] I'm sure I need to customize a hook or something to start my app differently than the example, but I'm not quite sure where/how to debug. If there's some documentation that I'm missing, please point me in the right direction!
[22:00] youngnico: you can do a few things; you can `juju ssh node-app/0` then look at the charm logs in `/var/log/juju/unit-node-app-0.log`
[22:01] youngnico: alternatively, you can run `juju debug-hooks node-app/0`, then in another terminal run `juju resolved --retry node-app/0` and you can interactively debug hooks
[22:01] youngnico: the latter is a little more complex, but super powerful; there's a charm school we did on it, let me find you the video
[22:02] Great!
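The two debugging routes just described, as a quick sketch (the unit name is the one from this conversation):

    # option 1: read the unit's charm log directly
    juju ssh node-app/0
    less /var/log/juju/unit-node-app-0.log

    # option 2: attach an interactive debug session to the unit...
    juju debug-hooks node-app/0
    # ...then, from a second terminal, re-run the failed hook inside it
    juju resolved --retry node-app/0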
That sounds like enough to keep me busy with debugging ;) Is there documentation around that somewhere, or just man pages? I hate being a pest in IRC with questions, I just don't know where to look.
=== gary_poster is now known as gary_poster|away
[22:03] youngnico: https://juju.ubuntu.com/docs/authors-hook-debug.html
[22:03] bac: MP:198633 Approved and merged
[22:03] youngnico: that page is a little dry, sadly
[22:04] No worries, I'll take a look. Trying to figure out how to set SSH keys now... the one Juju is trying to use is no good =/
[22:07] negronjl: Thank you for the quick response. I've opened the bug report and should have a PR for you later this evening.
[22:08] lazypower, np
=== _mup__ is now known as _mup_
[22:11] Another question that may be off topic: is it typical to gate pull requests through the original charm maintainer, or is it better practice to just assign the charmers group and go through the documented channels?
[22:14] lazypower: always assign to ~charmers
[22:15] Aye aye, captain.
[22:34] * arosales waves at lazypower
[22:38] * lazypower waves back at arosales
[22:38] Greetings Program
=== CyberJacob is now known as CyberJacob|Away
[22:53] marcoceppi: can you help me get my charm into launchpad
[22:53] there is already a bug
[22:53] and I am subscribed to it
[22:53] how do I merge my code there
[22:55] marcoceppi: the thing is, I did not start from that bug with a new repo
[22:55] Informat1Q: happy to help, going to need some links for context though
[22:56] marcoceppi: trac bug https://bugs.launchpad.net/charms/+bug/795480
[22:56] <_mup_> Bug #795480: Charm needed: Trac
[22:56] my code https://code.launchpad.net/~rhanna/charms/precise/trac/trunk
[22:56] Informat1Q: cool, so you want to merge ~rhanna/charms/precise/trac/trunk into lp:charms/trac?
[22:57] marcoceppi: I think this is the correct path, or is it not?
[22:57] Informat1Q: sounds good so far
[22:57] Informat1Q: OHHH
[22:57] Informat1Q: This is a new charm
[22:58] marcoceppi: yup, that is the problem
[22:59] Informat1Q: Okay, no problem. What you want to do is assign the bug to you. Then click on "Link a related branch" and link to your lp:~rhanna/charms/precise/trac/trunk branch; then move the bug status to "Fix Committed". That'll put it in the review queue for a charmer to review and provide feedback, then eventually put it in the store for you
[23:00] marcoceppi: thanks
[23:00] marcoceppi: you're the man
[23:00] Informat1Q: after you do that, in about 10 mins you'll see it listed in the review queue: manage.jujucharms.com/tools/review-queue
[23:00] https://manage.jujucharms.com/tools/review-queue
[23:03] done
[23:03] waiting for my first charm review
[23:08] Informat1Q: Awesome! Given the size of the queue we might not get to it this week
[23:09] Informat1Q: check the page again in about 5 mins, you should see it right below the haproxy entry
[23:10] cool, I'll get some sleep now
[23:10] good night all
[23:11] Informat1Q: thanks for the submission! o/
[23:15] Informat1Q: confirmed, it's in the queue!
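For reference, the submission flow marcoceppi walked Informat1Q through, as a hedged sketch - the branch, bug, and charm names are the ones from this conversation, and the local path is only an example:

    # push the charm to a personal branch on Launchpad
    cd ~/charms/precise/trac
    bzr push lp:~rhanna/charms/precise/trac/trunk

    # then, in the Launchpad web UI for bug #795480:
    #   1. assign the bug to yourself
    #   2. use "Link a related branch" to attach lp:~rhanna/charms/precise/trac/trunk
    #   3. set the bug status to "Fix Committed"
    # the charm then appears at https://manage.jujucharms.com/tools/review-queue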