[03:51] <marcoceppi> aquarius: subordinate
[08:51] <aquarius> marcoceppi, the thing I don't understand is: I'm meant to write a whole charm? and put my ssh keys in it?
[09:10] <InformatiQ> I am working on a local charm and have updated the charm contents to fix a bug, now when i rerun the charm it still picks the buggy version
[09:10] <InformatiQ> how do i force new code in?
[09:38] <ashipika> mysql charm problem: http://paste.ubuntu.com/6555177/
[10:47] <InformatiQ> how do i force new code of local charm to get pushed to new deploys?
[11:14] <InformatiQ> noodles775: hei
[11:16] <noodles775> Hi InformatiQ.
[11:16] <InformatiQ> I am kinda stuck developing a charm, it seems to be caching an old code version so it keeps failing to deploy that local charm
[11:17] <InformatiQ> noodles775: how do i force new code into the failed deploy?
[11:17] <InformatiQ> or to a new deploy even
[11:18] <noodles775> InformatiQ: That will happen if you remove the 'revision' file which was created. juju deploy increments it each time you deploy. If you remove and recreate it, you'll effectively tell juju to deploy a revision for which it already has a copy. Does that make sense? Although, that shouldn't affect a new deploy (ie. a new environment).
[11:19] <InformatiQ> noodles775: i have destroyed many envs but still same issue
[11:19] <InformatiQ> trying the revision thing
[11:21] <noodles775> InformatiQ: if you have a freshly bootstrapped environment, I cannot see how it could even know about old versions of your charm. But maybe others will know.
[11:32] <InformatiQ> noodles775: ok removing the revision file didn't help either
[11:32] <InformatiQ> but new deploy did not create that file and it started at 12
[11:32] <InformatiQ> something is fishy
[11:37] <noodles775> InformatiQ: I wasn't suggesting removing the revision file - rather that removing the revision file can cause the behaviour you saw (with an existing environment). So given the above info, I don't understand why you'd be getting a cached version of your charm in a freshly bootstrapped environment. We'll need to wait for someone else to find out more I think.
[11:39] <jam> InformatiQ: you can do "juju deploy --upgrade local:foo"
[11:47] <InformatiQ> thanks jam that seems to have remedied the issue
[11:49] <noodles775> jam: for my own learning, why/how is there a cached version of the charm if the environment is freshly bootstrapped?
[11:51] <jam> noodles775: if you destroy-environment, I would expect it to destroy the charm cache, but I don't quite know for sure. The above conversation sounded more like InformatiQ was destroying the *service* which doesn't delete the cached charm
[11:51] <jam> and doing "juju destroy-service foo; juju deploy local:foo" will reuse the charm that was uploaded unless you do "juju deploy --upgrade local:foo"
[11:51] <noodles775> Right - I'd understood that the environment had been re-created (rebootstrapped). Thanks.
[11:52] <InformatiQ> jam: I was doing destroy-environment
[11:52] <InformatiQ> FYI that is local environment (lxc)
[11:52] <jam> InformatiQ: I'm not personally 100% sure about where the files are going and what may or may not be cleaned up. It *sounds* like a bug if destroy-environment isn't destroying the charm cache
[13:13] <marcoceppi> aquarius: the subordinate charm, they can be super simple, it's just a way to bolt on additional stuff to existing charms without having to fork a charm
[13:14] <aquarius> marcoceppi, I don't think I understand how that ought to work. I mean... charms live in the charm store, right?
[13:14] <marcoceppi> aquarius: no
[13:14] <marcoceppi> aquarius: charms live either locally, or in a personal branch in the charmstore, or in the official charm store
[13:15] <aquarius> marcoceppi, so... I should write a whole new charm, and hardcode my ssh key into it?
[13:15] <aquarius> and then the charm deploys that key and sets up the rsync backup?
[13:15] <aquarius> I was hoping that someone had already done something like that...
[13:16] <marcoceppi> so you can mkdir -p ~/charms/precise; juju charm create aquarius-psql-backup ~/charms/precise; hack on that code a little; juju deploy --repository ~/charms local:aquarius-psql-backup
[13:17] <marcoceppi> or, bzr push lp:~aquarius/charms/precise/aquarius-psql-backup/trunk; then juju deploy cs:~aquarius/aquarius-psql-backup
[13:17] <marcoceppi> of course, naming the charm whatever you want, you don't have to put your name in the charm's name
[13:17] <marcoceppi> I was just trying to make a personalized example
[13:18] <marcoceppi> then, during the install hook have it copy the SSH key you want, then drop a file in cron.d
[13:18] <marcoceppi> which I think is the extent of your backup stuff
[13:18] <marcoceppi> no need to make configuration options, no need to go crazy and make it useful for anyone else if you don't want to
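The install hook marcoceppi sketches out above could look roughly like this. This is a hypothetical illustration, not an existing charm: the key filename, backup host, and paths are all made up, and everything is staged under `./stage` here so it can run anywhere (a real hook would write to `/root/.ssh` and `/etc/cron.d` directly).

```shell
#!/bin/sh
# Hypothetical install hook: copy an SSH key shipped with the charm,
# then drop a cron.d file that runs the nightly rsync backup.
# Staged under ./stage for illustration; a real hook writes to / directly.
set -e
root=./stage

mkdir -p "$root/root/.ssh" "$root/etc/cron.d"
chmod 700 "$root/root/.ssh"

# The charm would ship the private key as e.g. files/backup_key.
printf '%s\n' '-----BEGIN FAKE KEY-----' > "$root/root/.ssh/backup_key"
chmod 600 "$root/root/.ssh/backup_key"

# Nightly rsync of the dumps to a (hypothetical) backup host.
cat > "$root/etc/cron.d/psql-backup" <<'EOF'
0 2 * * * root rsync -az -e 'ssh -i /root/.ssh/backup_key' /var/backups/psql/ backup@backup.example.com:/srv/backups/
EOF

echo "staged: $(ls "$root/etc/cron.d")"
```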
[13:18] <aquarius> that's basically my plan, indeed :)
[13:19] <marcoceppi> aquarius: we had plans to write a backup charm, but the permutations are so plentiful it's hard to do it and do it right
[13:19] <aquarius> yeah
[13:19] <marcoceppi> aquarius: the last thing you'll want to do is edit the metadata.yaml file and make sure "subordinate" is true
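For context, a metadata.yaml for a subordinate charm like the one being discussed might look like this. The charm name, summary, and relation name are illustrative only; what matters is the `subordinate: true` flag plus at least one container-scoped relation, which is what lets the charm deploy alongside an existing service's unit.

```shell
# Hypothetical metadata.yaml for the subordinate backup charm discussed above.
cat > metadata.yaml <<'EOF'
name: psql-backup
summary: rsync backups for postgresql
description: Copies an SSH key and installs a cron job that rsyncs dumps.
subordinate: true
requires:
  db:
    interface: pgsql
    scope: container
EOF
grep 'subordinate' metadata.yaml
```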
[13:19] <aquarius> I'd have to generate an ssh key and then teach the destination machine about it
[13:20] <aquarius> maybe it'd just be easier to have the destination machine pull the stuff, rather than the source machine to push it :(
[13:20] <aquarius> I am wary of sshing anywhere as root.
[13:20] <aquarius> man, I hate being a sysadmin.
[15:01] <Felipe_C> HI, Anyone could answer a couple of questions regarding JUJU - Manual provisioning?
[15:21] <marcoceppi> FourDollars_: it's best to just ask, instead of asking to ask
[15:21] <marcoceppi> oops, sorry FourDollars_; wrong ping
[15:46] <jcastro> marcoceppi, I seem to still have 1.2.0 of charm tools
[15:48] <marcoceppi> jcastro: what's `dpkg -l charm-tools` show?
[15:49] <jcastro> ii  charm-tools    1.2.0-0ubunt all          Tools for maintaining Juju charms
[15:50] <jcastro> nm, I found the problem
[15:51] <jcastro> marcoceppi, I can confirm the right readme template this time. \o/
[15:51] <jcastro> nice work
[15:52] <marcoceppi> jcastro: try this, go to a charm, run `juju charm add tests`
[15:59] <jcastro> marcoceppi, that worked
[16:00] <marcoceppi> jcastro: now go write a bunch of tests
[17:44] <oatman> hey, can someone help me with an agent-state-info error I'm getting
[17:44] <oatman> ?
[17:44] <oatman> "agent-state-info: '(error: container "oatman-local-machine-1" is already created)'"
[17:56] <marcoceppi> oatman: this was brought up last night
[17:57] <oatman> marcoceppi, yeah?
[17:57] <oatman> it's been working all day, I don't understand what broke
[17:57] <marcoceppi> oatman: yeah, seems like a new bug that came up recently
[17:57] <marcoceppi> oatman: do you have anything in /var/lib/cache/lxc?
[17:57] <marcoceppi> I think that's the path
[17:57] <marcoceppi> one min
[17:58] <oatman> no folder called cache in /var/lib
[17:58] <marcoceppi> webbrandon: juju init and juju generate-config are the same. they're aliases of each other
[17:59] <oatman> marcoceppi, I have /var/cache/lxc
[18:00] <oatman> I have
[18:00] <oatman> └── cloud-precise
[18:00] <oatman>     └── ubuntu-12.04-server-cloudimg-amd64-root.tar.gz
[18:00] <marcoceppi> oatman: that's fine
[18:01] <marcoceppi> oatman: I think /var/lib/lxc is what I wanted. Could you pastebin the tree of that?
[18:01] <oatman> sure could
[18:03] <oatman> it is 29k lines, that ok?
[18:04] <oatman> I could tree -d it
[18:07] <oatman> https://pastebin.canonical.com/101822/
[18:07] <oatman> had to -d it marcoceppi
[18:07] <oatman> 30k lines crashed my browser last time ;-)
[18:08] <marcoceppi> oatman: okay, yeah, I got what I wanted
[18:08] <marcoceppi> oatman: juju destroy-environment
[18:08] <marcoceppi> then make sure /var/lib/lxc/oatman-local-machine-1 is removed as well
[18:08] <oatman> ok
[18:08] <marcoceppi> bootstrap and try again
[18:08] <marcoceppi> oatman: also, when you get a second, what's juju version say?
[18:09] <oatman> so I still have this in the local machine folder:
[18:09] <oatman> config  fstab  rootfs  rootfs.hold
[18:09] <oatman> 1.16.5-raring-amd64
[18:10] <marcoceppi> interesting, when you destroy-environment everything in /var/lib/lxc that's related to a local-machine should be removed
[18:11] <oatman> should I blow it away manually?
[18:12] <oatman> I've blown it away anyway, I'm so close to finishing what I was doing!
[18:12] <oatman> I'll let you know if it works
[18:12] <marcoceppi> oatman: yeah, blow it away
[18:13] <oatman> marcoceppi, fixed!
[18:13] <oatman> thanks so much
[18:14] <marcoceppi> oatman: awesome, glad that worked out for you
[18:14] <marcoceppi> oatman: if after you destroy this, it still lingers, open a bug against juju-core as that's a regression
[18:15] <oatman> will do
[18:15] <oatman> I think I see the old bug report
[18:16] <oatman> it's interesting that I've done about 50 bootstraps today, and it's only the last one that caused issues
[18:24] <lazypower> So, before I get crazy and really manage to foul things up - can I mix mode my juju deployments? As in manually provision a machine under the amazon environment and consume it?
[18:30] <marcoceppi> lazypower: you *can*
[18:30] <marcoceppi> lazypower: you can turn on an amazon machine, then juju add-machine <ip>; so long as that machine has your ssh key on it
[18:31] <marcoceppi> then you can juju deploy --to <machine-num>; after the machine is enlisted in juju status
[18:31] <marcoceppi> lazypower: it's still betaware though, use at your own risk
[18:31] <marcoceppi> the next release of juju, 1.17, should have better support for this
[18:41] <lazypower> marcoceppi: so, if I get really fun and add a machine from a different provider, juju is basically blind to this and doesn't care
[18:42] <marcoceppi> lazypower: as long as the machine can talk to the bootstrap node, yes, juju is blind to it
[18:42] <marcoceppi> lazypower: so you may need to open ports in the firewall to expose access to the bootstrap node
[18:42] <lazypower> Ah Yeah, I didn't consider that juju uses the internal IP structure for cross communication
[18:43] <oatman> marcoceppi, I spoke too soon, now my instance-state is "missing"
[18:44] <oatman> it did work once
[18:45] <oatman> marcoceppi, oh gods, I was just impatient
[18:45] <oatman> marcoceppi, it's come up now :-)
[18:46] <marcoceppi> :)
[19:18] <arosales> marcoceppi, woot \o/ Charm Tools 1.2.3 released :-)
[19:19] <marcoceppi> aquarius: heh, yeah, just building the windows tools now
[19:19] <marcoceppi> arosales: *^
[19:20] <arosales> marcoceppi, cool
[19:20] <arosales> marcoceppi, remember to get an rt for signing
[19:21] <marcoceppi> arosales: yeah
[19:21] <arosales> marcoceppi, thanks
[19:29] <marcoceppi> Cool, powershell isn't working :(
[19:30] <lazypower> marcoceppi: which version of windows, and/or powershell?
[19:30] <lazypower> And do you need a third party to test it? i've got a fleet of VM's here we can toy with.
[19:30] <marcoceppi> Windows Server 2012, and I can't get it to give me a prompt
[19:34] <lazypower> shoot me the instructions and i'll fire up a test for you. I've got a meeting in 30 minutes, but i can help until then.
[20:22] <arosales> marcoceppi, reading your release announce for 1.2.3 charm tools
[20:22] <arosales> "`juju charm add tests` will create an example Amulet test file based on metadata.yaml information for the charm provided (or cwd, if cwd is a charm)"
[20:22] <arosales> sorry for being dense here but what is "cwd"
[20:22] <arosales> in this context
[20:22] <marcoceppi> current working directory
[20:23] <marcoceppi> basically you either do `juju charm add tests /tmp/precise/charm` or `cd /tmp/precise/charm; juju charm add tests`
[20:23] <arosales> marcoceppi, so the default if not provided is the cwd
[20:23] <arosales> sorry, if no charm path is given it uses the cwd, and hopefully that is a charm path
[20:23] <marcoceppi> arosales: right, it'll check if the current directory is a charm, if you didn't give it a path
[20:24] <arosales> marcoceppi, ok I follow, thanks
[20:24] <arosales> marcoceppi, thanks
[20:24] <marcoceppi> arosales: a lot of the commands that take a path do this, I don't know why I felt the need to highlight it
[20:24] <arosales> marcoceppi, no, it's good info, I just read it wrong
[20:56] <youngnico> Hey guys, working on a node deploy (https://juju.ubuntu.com/docs/howto-node.html) and can't seem to get the node app to respond. I've related the charms and exposed haproxy, but when I go to haproxy's public URL I can't get a response. Any thoughts or ideas to point me in the right direction?
[20:59] <freeflying> youngnico, 1) you don't have to expose haproxy, 2) can you show us the result of juju status
[20:59] <marcoceppi> freeflying: you do have to expose haproxy if you want to get to it outside of the network
[21:00] <youngnico> http://pastebin.com/QQ8NXCrM
[21:00] <marcoceppi> youngnico: your haproxy unit is in an error state
[21:01] <youngnico> Ahhh..
[21:01] <marcoceppi> youngnico: I just patched the haproxy, as there was a problem with it earlier
[21:01] <youngnico> config-changed?
[21:01] <freeflying> marcoceppi, you're right, youngnico's case is public cloud, then he has to :)
[21:01] <marcoceppi> youngnico: yes, so for this problem in particular, just run `juju resolved haproxy/0`
[21:02] <marcoceppi> youngnico: that will push past the error, and on the next go-around it should receive the relation information and work
[21:03] <marcoceppi> youngnico: finally, if you're just testing the node app, you don't necessarily need haproxy at all :). If it still gives you grief after a few mins, run juju expose node-app; then go to the node-app's URL
[21:03] <youngnico> After resolving the error, should I re-relate the charms? Or run a restart of some sort?
[21:03] <marcoceppi> haproxy, and all proxies, are there if you have more than one unit. For testing and playing, it's just a machine you don't need
[21:03] <marcoceppi> youngnico: no, it'll continue on its way. There are a bunch of events queued to run, but can't because it's in an error state
[21:03] <jcastro> marcoceppi, ok found another template problem
[21:04] <marcoceppi> jcastro: sweet, I want to squeeze a few more things in to the 1.2 tree, so open a bug/merge req
[21:04] <jcastro> charm proof gives me E: README.md Includes boilerplate README.ex line 11 for like, all the headings, etc.
[21:04] <jcastro> ok
[21:04] <marcoceppi> jcastro: ah, yeah, that makes sense
[21:04] <marcoceppi> I'll drop the boiler plate checking
[21:05] <jcastro> https://bugs.launchpad.net/charm-tools/+bug/1260073
[21:05] <_mup_> Bug #1260073: charm proof is too strict <Juju Charm Tools:New> <https://launchpad.net/bugs/1260073>
[21:05] <jcastro> FYI
[21:05] <marcoceppi> jcastro: ack, thanks
[21:05] <marcoceppi> glad I didn't get too deep in to the Windows stuff
[21:06] <youngnico> marcoceppi: / freeflying - thanks guys! So I pushed past the errors, and the server is returning a 502 (Bad Gateway) - http://pastebin.com/apUAJTcP
[21:06] <youngnico> Is this just a problem with the node-app example?
[21:06] <marcoceppi> youngnico: I just noticed that the mongodb service is still in an installed state, i.e., not started yet
[21:06] <youngnico> Oh, nevermind it was just starting up!
[21:07] <youngnico> Takes a bit, I need to be more patient!
[21:07] <marcoceppi> youngnico: yeah, it can take a few mins to come up :)
[21:07] <marcoceppi> youngnico: I'll have the fix for haproxy land soon so you won't run in to that the next time
[21:07] <jcastro> it'd be nice to have splash pages there instead of nginx errors
[21:07] <marcoceppi> jcastro: file a bug ;)
[21:07] <jcastro> "yo, I will error out until you connect a database to me"
[21:07] <jcastro> marcoceppi, is that per charm?
[21:08] <arosales> marcoceppi, i think david c. had identified some issues with haproxy too
[21:08]  * arosales grabs bug #
[21:08] <marcoceppi> arosales: yeah, that's what I patched last night
[21:08] <marcoceppi> I caught him online when he found the problem, which youngnico ran in to
[21:08] <arosales> https://bugs.launchpad.net/charms/+source/haproxy/+bug/1257062
[21:08] <_mup_> Bug #1257062: config-changed fails <haproxy (Juju Charms Collection):In Progress> <https://launchpad.net/bugs/1257062>
[21:08] <arosales> ah ok I think I saw the merge now doing a search
[21:09] <marcoceppi> Oh yay, dave ack'd it
[21:09] <marcoceppi> let me merge it now
[21:09] <arosales> https://code.launchpad.net/~marcoceppi/charms/precise/haproxy/cfg-changed-fix/+merge/198491
[21:09] <youngnico> Thanks a bunch guys, you have no idea how much help this is!
[21:10] <arosales> marcoceppi, does merge 198491 resolve bug 1257062?
[21:10] <marcoceppi> arosales: yes
[21:10] <marcoceppi> arosales: they're attached
[21:10] <arosales> or are there still issues with version 19
[21:10] <marcoceppi> youngnico: you're welcome! Let us know if you have any other questions
[21:11] <arosales> marcoceppi, cool could you drop a note in the bug so the bug reporter, darryl sees the current status.
[21:11] <youngnico> Will do!
[21:11] <arosales> I think he may still be under the impression the workaround needs to be deployed (ie deploy version 18)
[21:12] <marcoceppi> arosales: yeah, I'm writing him now that he should be able to use haproxy-22 and do an upgrade-charm
[21:13] <arosales> marcoceppi, cool another bug fixed
[21:13] <marcoceppi> squish'em squash'em
[21:14] <arosales> another bites the dust
[21:14] <arosales> another one bites the dust, that is.
[21:37] <youngnico> So I'm trying to deploy my own node application now, using the node-app charm. I made a configuration file with the location of my app on Github, and ran juju deploy --config ./config.yaml node-app (config.yaml has the git repo pointer), but it throws: error: no settings found for "node-app".
[21:37] <youngnico> Am I passing the configuration yaml file incorrectly?
[21:40] <youngnico> I'm trying to pass my configuration like: https://juju.ubuntu.com/docs/charms-config.html
[21:41] <marcoceppi> youngnico: what's your config.yaml file look like?
[21:43] <youngnico> I just copied the example at: https://github.com/charms/node-app/blob/master/config.yaml
[21:43] <youngnico> And changed the repository to my own. Does the repo need to be public?
[21:43] <youngnico> Or can it forward my keys to pull to the remote?
[21:44] <youngnico> Actually, even if I just copy the config exactly, it still throws that error
[21:45] <marcoceppi> youngnico: yeah, you need to format it like so: https://juju.ubuntu.com/docs/charms-config.html#config-deployment
[21:45] <marcoceppi> the config.yaml and the configuration file you're writing are two different things
[21:46] <marcoceppi> I'd recommend naming it deployment.yaml, then you want to change mediawiki to node-app, and the key: values to the ones you wish to change
[21:47] <webbrandon> marcoceppi: Sorry didn't see your response.  I understand they are but why?  I know there is some concept behind it, I am just trying to understand.
[21:47] <youngnico> Ahhh, okay the top level key needs to be the charm name.
[21:48] <marcoceppi> youngnico: right, you can actually put multiple services configurations in one file
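The deployment config file being described might look like the following. This is a sketch: the repo URL and option names are hypothetical (check the charm's own config.yaml for the real option names). The top-level keys are service names, and, as marcoceppi notes, several services can share one file.

```shell
# Hypothetical deployment config: top-level key = service name.
cat > deployment.yaml <<'EOF'
node-app:
  repo: https://github.com/example/my-node-app.git
mediawiki:
  name: Example Wiki
EOF
# Then you would deploy with:
#   juju deploy --config deployment.yaml node-app
head -2 deployment.yaml
```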
[21:48] <webbrandon> The way I think is there should only be one command for an event, why two in this case?
[21:48] <marcoceppi> webbrandon: it was first called generate-config, but that's so long, we decided init was a shorter name
[21:49] <marcoceppi> we kept the old one for backwards compat
[21:49] <lazypower> Has anyone recently setup MMS Monitoring with the MongoDB charm? I'm not finding the MMS Token it's looking for, it appears they have updated the MMS Service to use an Auth Key and Secret Key for configuration now.
[21:49] <webbrandon> ohhh, makes sense now. So it will eventually get deprecated
[21:49] <youngnico> Handy! So the deployment config just sets values; the config.yaml when authoring a charm is more for defining the structure of those values... I'm getting closer, thanks!
[21:49] <marcoceppi> youngnico: exactly!
[21:50] <marcoceppi> webbrandon: maybe/probably. There's some clashing, as some people don't like the name init, others don't like the length of generate-config
[21:51] <marcoceppi> lazypower: I have no idea. negronjl wrote the charm, maybe he has some insights?
[21:51] <marcoceppi> repeating the message, with ping, so it's easier to read
[21:51] <lazypower> I'm thinking about opening a bug on launchpad for this - i'm not done digging though.
[21:52] <marcoceppi> negronjl: lazypower asks: "Has anyone recently setup MMS Monitoring with the MongoDB charm? I'm not finding the MMS Token it's looking for, it appears they have updated the MMS Service to use an Auth Key and Secret Key for configuration now."
[21:52] <marcoceppi> lazypower: +1 on opening a bug, even now; if you find the answer then you can just document it there, if not it's at least there
[21:55] <negronjl> marcoceppi, lazypower: a bug would be the thing to do here.  As I remember it, I didn't setup the mongodb charm for monitoring :/
[21:56] <negronjl> marcoceppi, lazypower: I can work on that soon(ish) :)
[21:57] <marcoceppi> negronjl: in your infinite free time of course ;)
[21:57] <negronjl> marcoceppi, ROFL
[21:57] <lazypower> negronjl: Actually - you're fine the way you are. Looking over the charm this was for the configuration flags
[21:58] <negronjl> lazypower, you see ... I'm awesome ... I fixed it even before I knew I needed it :P
[21:58] <lazypower> this has been deprecated but not removed from the default config - https://jira.mongodb.org/browse/SERVER-8055
[21:58] <negronjl> j/k :P
[21:58] <lazypower> I'll open a bug report against this and issue a merge request later if you dont mind
[21:59] <youngnico> marcoceppi: one last question, and I think I'll have this working: When I set everything up on my own app, the node-app status is errored with "'hook failed: "install"'".
[21:59] <negronjl> lazypower, not at all ...
[21:59] <youngnico> How would I go about debugging that? Is there a "redeploy" command or something with a verbose flag I can see what's causing the error?
[22:00] <youngnico> I'm sure I need to customize a hook or something to start my app differently than the example, but I'm not quite sure where/how to debug. If there's some documentation that I'm missing please point me in the right direction!
[22:00] <marcoceppi> youngnico: you can do a few things, you can `juju ssh node-app/0` then look at the charm logs in `/var/log/juju/unit-node-app-0.log`
[22:01] <marcoceppi> youngnico: alternatively, you can run `juju debug-hooks node-app/0`, then in another terminal run `juju resolved --retry node-app/0` and you can interactively debug hooks
[22:01] <marcoceppi> youngnico: the latter is a little more complex, but super powerful, there's a charm school we did on it, let me find you the video
[22:02] <youngnico> Great! That sounds like enough to keep me busy with debugging ;) is there documentation around that somewhere, or just man pages? I hate being a pest in IRC with questions, just don't know where to look.
[22:03] <marcoceppi> youngnico: https://juju.ubuntu.com/docs/authors-hook-debug.html
[22:03] <negronjl> bac: MP:198633 Approved and merged
[22:03] <marcoceppi> youngnico: that page is a little dry, sadly
[22:04] <youngnico> No worries, I'll take a look. Trying to figure out how to set SSH keys now... the one Juju is trying to use is no good =/
[22:07] <lazypower> negronjl: Thank you for the quick response. I've opened the bug report and should have a PR for you later this evening.
[22:08] <negronjl> lazypower, np
[22:11] <lazypower> Another question that may be off topic. Is it typical to gate pull requests through the original charm maintainer or is it better practice to just assign the charmers group and go through the documented channels?
[22:14] <marcoceppi> lazypower: always assign to ~charmers
[22:15] <lazypower> Aye aye captain.
[22:34]  * arosales waves at lazypower
[22:38]  * lazypower waves back at arosales 
[22:38] <lazypower> Greetings Program
[22:53] <Informat1Q> marcoceppi: can you help me get my charm into launchpad
[22:53] <Informat1Q> there is already a bug
[22:53] <Informat1Q> and I am subscribed to it
[22:53] <Informat1Q> how do i merge my code there
[22:55] <Informat1Q> marcoceppi: the thing is i did not start from that bug from a new repo
[22:55] <marcoceppi> Informat1Q: happy to help, going to need some links for context though
[22:56] <Informat1Q> marcoceppi: trac bug https://bugs.launchpad.net/charms/+bug/795480
[22:56] <_mup_> Bug #795480: Charm needed: Trac <bitesize> <Juju Charms Collection:In Progress by ahmedelgamil> <https://launchpad.net/bugs/795480>
[22:56] <Informat1Q> my code https://code.launchpad.net/~rhanna/charms/precise/trac/trunk
[22:56] <marcoceppi> Informat1Q: cool, so you want to merge ~rhanna/charms/precise/trac/trunk in to lp:charms/trac?
[22:57] <Informat1Q> marcoceppi: i think this is the correct path, or is it not?
[22:57] <marcoceppi> Informat1Q: sounds good so far
[22:57] <marcoceppi> Informat1Q: OHHH
[22:57] <marcoceppi> Informat1Q: This is a new charm
[22:58] <Informat1Q> marcoceppi: yup that is the problem
[22:59] <marcoceppi> Informat1Q: Okay, no problem. What you want to do is assign the bug to you, then click on "Link a related branch" and link your lp:~rhanna/charms/precise/trac/trunk branch; then move the bug status to "Fix Committed". That'll put it in the review queue for a charmer to review and provide feedback, then eventually put it in the store for you
[23:00] <Informat1Q> marcoceppi: thanks
[23:00] <Informat1Q> marcoceppi: you're the man
[23:00] <marcoceppi> Informat1Q: after you do that, in about 10 mins you'll see it listed in the review-queue: manage.jujucharms.com/tools/review-queue
[23:00] <marcoceppi> https://manage.jujucharms.com/tools/review-queue
[23:03] <Informat1Q> done
[23:03] <Informat1Q> waiting for my first charm review
[23:08] <marcoceppi> Informat1Q: Awesome! Given the size of the queue we might not get to it this week
[23:09] <marcoceppi> Informat1Q: check the page again in about 5 mins, you should see it right below the haproxy entry
[23:10] <Informat1Q> cool i'll get some sleep now
[23:10] <Informat1Q> good night all
[23:11] <marcoceppi> Informat1Q:  thanks for the submission! o/
[23:15] <marcoceppi> Informat1Q: confirmed, it's in the queue!