[00:00] <SpamapS> cowmix: hm
[00:00] <SpamapS> cowmix: it's definitely *far* better than when that statement was written
[00:00] <SpamapS> cowmix: I think for certain use cases, it's a good choice in production
[00:00] <SpamapS> cowmix: https://bugs.launchpad.net/juju/+bugs?field.tag=production
[00:01] <SpamapS> cowmix: those are known issues that should be considered before putting it there. Perhaps we should update the docs to point at that list.
[00:11] <imbrandon> .win 15
[00:40] <cowmix> SpamapS: thanks... I'm *very* excited about juju.. I'm trying out the 12.04 beta as soon as it's out
[00:47] <SpamapS> cowmix: beta1 is out. :)
[00:47] <SpamapS> cowmix: I'd recommend using juju from the ppa though.. we should land a new version in beta2, and the PPA is much closer to that.
[00:50] <cowmix> Awesome.. when 12.04 is fully baked.. use the release version then?
[00:51] <cowmix> It's interesting because I have some friends that do HUGE EC2 deployments on Ubuntu and they didn't know about juju.. I turned them on to it and they seemed jazzed.
[00:52] <cowmix> Canonical really needs to pump this up harder.. it solves an issue everyone is trying to solve on their own.
[00:55] <SpamapS> cowmix: a little over a year ago, juju didn't exist.. so.. give us some time. :)
[00:56] <SpamapS> cowmix: there's a charm contest going on right now.. you should get some of them to enter it. :)
[01:10] <cowmix> SpamapS: is Canonical dog-fooding it now in their own infrastructure?
[01:39] <sloth_> anyone got time for a really basic question
[01:46] <SpamapS> cowmix: here and there, yes.. nothing important, as we only run that on released LTS's, and lucid can't run juju
[01:47] <SpamapS> sloth_: just ask, and hang out for an answer, somebody will get back to you. Also we watch the juju tag on askubuntu.com a lot, so asking there might be a good idea
[01:49] <sloth_> cool.  so I'm totally new to juju (and not an Ubuntu guy normally).  I'd like to use it to provision a mongo cluster on amazon.  I'm not sure whether I have to create an ubuntu EC2 instance to use as the provisioning host (one that wouldn't be part of the 3-machine replica set), or whether I can do it from my mac, or where to start
[01:49] <sloth_> I looked at the introduction to juju but I suspect I'm missing a few pieces of knowledge on how to get to where it starts
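For readers with the same question: the juju client bootstraps its own control instance (machine 0) in the cloud, so no separate provisioning host is needed; you run the client wherever juju is installed. The EC2 provider configuration of this era looked roughly like the sketch below (key names as commonly documented at the time; all values are placeholders):

```yaml
# ~/.juju/environments.yaml -- illustrative sketch only; values are placeholders
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    control-bucket: juju-some-unique-bucket-name
    admin-secret: some-random-passphrase
    default-series: oneiric
```

After `juju bootstrap`, the mongo units would come up as additional EC2 instances, separate from machine 0.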
[02:02] <_mup_> juju/status-changes r484 committed by kapil.thangavelu@canonical.com
[02:02] <_mup_> collapse relation display in units unless error, all relation names are multi-valued
[02:04] <hazmat> i'm going to hit a devops meetup tomorrow, we're hitting a few conferences this year
[02:21] <_mup_> juju/status-changes r485 committed by kapil.thangavelu@canonical.com
[02:21] <_mup_> status uniques when the same service has multiple relations to the same endpoint
[02:42] <_mup_> Bug #959884 was filed: Improved status output <juju:In Progress by hazmat> < https://launchpad.net/bugs/959884 >
[03:03] <_mup_> juju/force-upgrade r465 committed by kapil.thangavelu@canonical.com
[03:03] <_mup_> finish upgrade-charm --force cli tests
[03:04] <_mup_> juju/force-upgrade r466 committed by kapil.thangavelu@canonical.com
[03:04] <_mup_> add to ignores
[03:07] <hazmat> are pkg task sets addressable, ie. can i install the cloud/server task set into a container, or is it more of a preseed thing
[03:08] <hazmat> ah.. tasksel
[03:11] <hazmat> hmm.. doesn't want to work without a tty
[03:59] <_mup_> juju/series-from-charm r464 committed by kapil.thangavelu@canonical.com
[03:59] <_mup_> charm series is used as service system constraint
[04:19] <_mup_> Bug #959914 was filed: Charm series is used for service series constraint. <juju:In Progress by hazmat> < https://launchpad.net/bugs/959914 >
[06:35] <bkerensa> SpamapS: Uhh lots of suggestions and I don't know where to begin :) I guess I will bug jcastro for guidance tomorrow :P I must be off to bed now
[06:39] <SpamapS> bkerensa: Sorry.. I am really excited about your charm which is why I'm being so picky about it. I will likely be a user. :)
[06:40] <SpamapS> bkerensa: bug me for help too, I'm happy to assist. :)
[11:29] <jamespage> whats the state of juju on lucid?
[11:30] <jamespage> might be a funny question but I'd like to see whether I can get apache bigtop running with the hadoop charm
[13:07] <hazmat> jamespage, it hasn't been tried in some time, the libzk libs need a more recent version to avoid some bugs there afaicr. we may have picked up some py 2.7isms
[13:08] <jamespage> hazmat, ack
[13:08] <jamespage> it was a 'if i get time' type activity TBH
[13:53] <yolanda> hi all, i'm trying to use juju in openstack, but i receive an error in the machine every time i try to create one. does anyone know about this problem?
[14:07] <yolanda> hi, can anyone help with juju and openstack?
[14:18] <james_w> yolanda, hi, what's the error?
[14:18] <yolanda> hi, james_w, just my openstack instance is created, but the status is "error"
[14:20] <james_w> yolanda, hmm, I'm not sure how to find out what caused the error
[14:21] <jamespage> yolanda, could be a number of things
[14:21] <yolanda> james_w, i see that the image used is ami-00000049, is that ok?
[14:21] <jamespage> it might be best to take a look at the instance directly in openstack to see if you can get more info
[14:22] <yolanda> jamespage, how can it be done? i can only see the instance when i do euca-describe-instances, and see that status
[14:22] <jamespage> yolanda, does it show error there as well?
[14:22] <yolanda> no
[14:22] <jamespage> yolanda, hmm
[14:23] <jamespage> euca-get-console-output <instanceid> might show you a bit more
[14:23] <yolanda> mm, let me see
[14:23] <yolanda>  euca-get-console-output i-00001b40
[14:23] <yolanda> UnknownError: An unknown error has occurred. Please try your request again.
[14:24] <jamespage> yolanda, oops
[14:24] <yolanda> not a very clear error :)
[14:24] <jamespage> yolanda, that's an "openstack is broken" error
[14:25] <yolanda> jamespage, but i'm running some other instances in openstack right now
[14:25] <yolanda> i'll try to create that one manually
[14:26] <jamespage> yolanda, you might be able to ssh to it - please try that
[14:26] <jamespage> might be able to get some good debug out of it yet...
[14:26] <yolanda> jamespage, cannot ssh, i can't see any internal ip
[14:26] <jamespage> yolanda, what is its status? running?
[14:26] <yolanda> i have 3 instances, i can see the internal ip for the previous two, but this doesn't show any ip
[14:26] <yolanda> status is error
[14:27] <yolanda> the privateIpAddress field is empty
[14:27] <jamespage> yolanda, so basically it completely failed to start
[14:28] <yolanda> yes, it seems
[14:28] <yolanda> will try to create same image manually
[14:30] <yolanda> ok, same image created with euca-run-instances works
[14:30] <yolanda> so it's something with juju process
[14:32] <yolanda> btw, what i want is to test some juju charms. if i cannot do it with openstack, what can i use? i did some tests with a personal EC2 account, but i need a large machine, and they started charging me a sizable bill, so i stopped
[14:33] <jamespage> yolanda, I think I know what it might be - lemme just check
[14:33] <yolanda> ok, jamespage, thanks
[14:36] <jamespage> yolanda, can you try with ami-00000048 please
[14:36] <yolanda> jamespage, that can be specified with the default-image-id param, right?
[14:37] <jamespage> yolanda, it can
[14:37] <jamespage> yolanda, have you tried using the local provider for juju?
[14:38] <yolanda> jamespage, yes, but gives another error: error: Falló al iniciar la red default
[14:38] <yolanda> error: Requested operation is not valid: network is already active
[14:38] <yolanda> Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1
[14:38] <yolanda> 2012-03-20 15:38:29,472 ERROR Command '['virsh', 'net-start', 'default']' returned non-zero exit status 1
[14:39] <jamespage> yolanda, ah - I know what this is
[14:39] <yolanda> i saw that i had to logout/login, but i already did it
[14:39] <jamespage> yolanda: what does virsh list-network give you
[14:40] <yolanda> unknown command "list-network"
[14:40] <jamespage> sorry net-list
[14:40] <jamespage> not list-network
[14:41] <SpamapS> yolanda: did you logout/back in after installing juju and especially libvirt-bin?
[14:41] <jamespage> yolanda, can you check that the account you are using is in the libvirtd groups as well please
[14:41] <yolanda> SpamapS, yes
[14:42] <yolanda> Nombre               Estado     Inicio automático
[14:42] <yolanda> -----------------------------------------
[14:42] <yolanda> default              activo     si
[14:42] <yolanda> sorry but it's spanish output
[14:42] <SpamapS> no problemo, yo hablo un poquito ;)
[14:42] <yolanda> genial :)
[14:44] <yolanda> jamespage, how can i check it?
[14:44] <jamespage> yolanda, type 'groups'
[14:45] <yolanda> groups
[14:45] <yolanda> yolanda adm dialout cdrom plugdev lpadmin admin sambashare libvirtd
[14:45] <yolanda> that?
[14:45] <jamespage> yes - looks OK
[14:45] <jamespage> 'libvirtd' being the critical bit...
[14:45] <jamespage> hmm
[14:46] <jamespage> I wonder
[14:48] <jamespage> yolanda, I think thats a bug - juju parses the output of net-list and looks for 'active'
[14:48] <jamespage> 'active' != 'activo'
[14:48] <yolanda> damn
[14:48] <yolanda> so how can i fix it? is it possible?
[14:49] <SpamapS> wait
[14:49] <yolanda> please don't say: use Ubuntu in english :)
[14:49] <SpamapS> yolanda: dpkg -l juju
[14:49] <SpamapS> That bug was fixed... we set LANG=C before calling net-list
[14:50] <yolanda> Deseado=Desconocido/Instalar/Eliminar/Purgar/Retener
[14:50] <yolanda> | Estado=No/Instalado/Config-files/Desempaquetado/Medio-conf/Medio-inst/espera-disparo/pendiente-disparo
[14:50] <yolanda> |/ Err?=(ninguno)/Requiere-reinst (Estado,Err: mayúsc.=malo)
[14:50] <yolanda> ||/ Nombre                            Versión                          Descripción
[14:50] <yolanda> +++-[14:50] <yolanda> ii  juju                              0.5+bzr398-0ubuntu1               next generation service orchestration system
[14:50] <jamespage> SpamapS, I was just looking at that
[14:51] <jamespage> yolanda, I would recommend using juju from the team PPA - that is a very old version
[14:51] <jamespage> (the one that is in oneiric)
[14:52] <yolanda> jamespage, how can i config it? in which repo is?
[14:52] <jamespage> yolanda, https://launchpad.net/~juju/+archive/pkgs
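For reference, adding that PPA is the usual two-step dance (a sketch; the `ppa:juju/pkgs` short form corresponds to the URL jamespage pasted):

```
$ sudo add-apt-repository ppa:juju/pkgs
$ sudo apt-get update && sudo apt-get install juju
```

On releases of that era, `add-apt-repository` is provided by the python-software-properties package.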
[14:52] <yolanda> let me see
[14:52] <yolanda> i wonder if that's the cause of all my problems
[14:53] <jamespage> yolanda, not all of them but this one - probably
[14:53] <jamespage> SpamapS, good catch
[14:54] <jcastro> krondor, how's it coming along?
[14:54] <jcastro> krondor, hey do you guys plan to use nginx in the moodle charm?
[14:54] <yolanda> let me test
[14:59] <yolanda> i updated it, now i have this error:
[14:59] <yolanda> Starting storage server...
[14:59] <yolanda> could not connect before timeout
[14:59] <yolanda> 2012-03-20 15:58:41,768 ERROR could not connect before timeout
[15:01] <bac> SpamapS: gmb said he talked to you about packaging for python-shelltoolbox.  will your packaging work for a lucid build?
[15:01] <SpamapS> We really should just push a newer juju into 11.10 and forgo the proposed testing requirements. :-P
[15:02] <SpamapS> bac: it might, but you can't deploy charms on lucid at the moment, so thats sort of moot. ;)
[15:03] <krondor> jcastro:  it's going well so far.  We're going with apache because the thought is most moodle users would need the auth modules.
[15:03] <jamespage> yolanda, please try 'juju --verbose bootstrap'  - you might need to juju destroy-environment first
[15:03] <jamespage> SpamapS, +1 it's next to useless as it is ATM
[15:03] <bac> SpamapS: our buildbot slaves will be running lucid containers and they need to use the package too, that's why i ask.
[15:04] <jamespage> jcastro, looking at the jetty/spdy stuff ATM - have a few ideas
[15:04] <SpamapS> bac: I can just use python-support.. that will work back to lucid
[15:04] <yolanda> jamespage, no timeout error now, seems i got it!
[15:04] <yolanda> i'll check with status
[15:04] <bac> SpamapS: that would be great
[15:04]  * SpamapS goes afk for a bit
[15:06] <hazmat> more recent versions set the locale to C explicitly before calling out to libvirt
[15:06] <hazmat> via env vars
[15:06] <jcastro> jamespage, right so I was thinking, something simple, that would end up being "this is a fast way to mess with SPDY" and then tell people about it. As a technology showcase, nothing like serious or something people would actually use.
[15:07] <jamespage> jcastro, how about jenkins on SPDY?  I'm going to give it a whizz
[15:08] <yolanda> jamespage, SpamapS, thanks a lot for the help
[15:08] <jamespage> yolanda, np
[15:08] <marcoceppi> I noticed something with hooks firing. My db-joined hook takes quite a while to execute, but as soon as I add-relation it shows as "state: up"; if the hook eventually fails, the failure only shows up then. So I know the relation hook fired, but status doesn't wait for the hook to finish executing. Is this a bug or intended behaviour?
[15:10] <jcastro> jamespage, ok so that is sexy, yes, absolutely!
[15:15] <m_3> marcoceppi: that's a known issue
[15:16] <marcoceppi> m_3: Cool, I'm fine with just add-relation, expose, then juju status until the port shows as open (meaning the end of the script was reached)
[15:19] <m_3> marcoceppi: yeah, we've had several discussions of this problem, but no solns afaik
[15:33] <krondor> jcastro:  maybe I can let them pick nginx or apache w/ a config directive.  Was thinking the same with postgresql / mysql.
[15:36] <jcastro> krondor, yep
[15:38] <_mup_> juju/trunk r486 committed by kapil.thangavelu@canonical.com
[15:38] <_mup_> merge setuppy-fixes from paul [a=][r=hazmat,clint]
[15:46] <marcoceppi> m_3: I wish I knew more about how juju launched hooks :\ one day I'm going to start digging in to the source code
[15:47] <SpamapS> marcoceppi: open-port is actually supposed to be the signal that your service is ready.
[15:48] <marcoceppi> SpamapS: yeah, that's what I've been using, it's just a wee bit confusing to see the relation state as up when the hook is still running, because then it goes from up to error halfway through
[15:48] <marcoceppi> Merely commenting on how it'd be nice if it went from state: "adding", to "up" or "error"
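SpamapS's open-port convention above can be made explicit in a hook. A hedged sketch (not from any actual charm; `nc`, the 30-try limit, and the port are illustrative) that only signals readiness once something is actually listening:

```shell
# Poll until the service answers on its port, then tell juju the unit
# is ready. open-port is juju's hook tool; everything else here is
# an illustrative assumption.
wait_then_open() {
    port="$1"
    tries=0
    until nc -z localhost "$port" 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1   # give up after ~30s
        sleep 1
    done
    open-port "$port/tcp"
}
```

A charm's start hook could end with `wait_then_open 80`, so that an exposed port implies a working service.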
[15:53] <ninjix> I am running into ssh host key errors after I destroy my environment and try to bootstrap a new one
[15:53] <SpamapS> marcoceppi: yeah, I think a more appropriate word would be "active"
[15:53] <SpamapS> ninjix: thats common
[15:53] <SpamapS> ninjix: there's an open bug to have juju ssh use a generated known_hosts file to avoid that..
[15:53] <jcastro> lmorchard, hey I heard you were having some juju problems, anything I can do to help?
[15:53] <ninjix> how do you guys run your ssh_config?
[15:54] <ninjix> did you set StrictHostKeyChecking off?
[15:54] <SpamapS> ninjix: for production, I just deal with the pain. For testing I wildcard the test hosts and do StrictHostKeyChecking no
[15:54] <ninjix> ;)
[15:55] <SpamapS> the way I do that, which might be a bit evil, is by wildcarding by amazon region. All my production instances are in us-west-1 .. I test in us-east-1
[15:55] <ninjix> ok, cool. just wanted to make sure I hadn't missed something basic
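The region-wildcard trick SpamapS describes translates into an ~/.ssh/config stanza like this sketch (the host pattern and the UserKnownHostsFile line are illustrative additions, not quoted from the discussion):

```
# Only relax host-key checking for throwaway test instances.
Host *.us-east-1.compute.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```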
[15:56] <ninjix> SpamapS: btw, thanks for your help over the last few days. I've now got juju and cobbler driving our Proxmox cluster
[15:57] <ninjix> dev team is going to be very pleased when I unveil this new cloud environment
[15:58] <jcastro> this is quite cool
[15:59] <jcastro> ninjix, when you're not so slammed it'd be nice to send this to the list.
[15:59] <m_3> yeah, I'd love to see a writeup of that
[15:59] <ninjix> sure
[15:59] <SpamapS> ninjix: *NICE*
[16:00] <jcastro> http://charms.kapilt.com/~patrick-hetu/oneiric/openerp-server
[16:00] <jcastro> look at this readme
[16:00] <jcastro> it's already better than all the stuff we wrote in the initial set of charms. :)
[16:00] <jcastro> patrick and jseutter are killing it. :)
[16:01] <SpamapS> jcastro: I'm so ashamed of mediawiki now.. I almost want to just nuke it and re-do the whole thing. ;)
[16:01] <jseutter> huh?
[16:01] <jcastro> jseutter, it's exciting to see you guys submitting these charms
[16:01] <jseutter> jcastro: ah :)
[16:02] <jcastro> shazzner's working on gitolite too
[16:03] <ninjix> my service units keep appearing with <hostname>.localdomain - why are they not using the domain delivered by DHCP?
[16:08]  * SpamapS digs through the code to see where that value comes from
[16:09]  * SpamapS goes a 4th level of indirection deeper and wonders what he has wandered into
[16:12] <SpamapS> UGH
[16:12] <SpamapS>         output = subprocess.check_output(["hostname", "-f"])
[16:12] <SpamapS> ninjix: ^^
[16:12] <SpamapS> that is just so wrong.. :-P
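The check_output call SpamapS pasted means the unit name is whatever `hostname -f` returns, so a `.localdomain` suffix usually points at /etc/hosts (or the DHCP/resolver setup) rather than at juju itself. Comparing the two by hand:

```shell
# juju records the fully-qualified name; compare it with the short one.
hostname                                  # short host name
hostname -f || echo "(no FQDN configured)"  # the value juju picks up
```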
[16:18] <ninjix> hmm... ok I'll work with the -f arg
[16:38] <ninjix> just had a 99% successful wordpress test run. Only thing that didn't work was that the 000-default site was left enabled
[16:38] <ninjix> is that normal? I'm running today's PPA
[16:39] <SpamapS> ninjix: its more complicated than that
[16:40] <SpamapS> ninjix: basically the wordpress charm expects you to use the 'public-address' to access it..
[16:40] <SpamapS> ninjix: I believe marcoceppi is working on fixing that
[16:40] <SpamapS> ninjix: glad to hear you got it deploying though! :)
[16:40] <SpamapS> marcoceppi: were you going to try and tackle the Host: header weirdness w/ wordpress?
[16:40] <ninjix> the public address started working once I a2dissite 000-default
[16:41] <SpamapS> ninjix: oh, hm, that sounds different
[16:41] <jamespage> jcastro, that was painful - looks like upstream managed to not release spdy as part of the latest jetty distro
[16:41] <jamespage> I had to hack it in
[16:42] <jcastro> nice
[16:50] <ninjix> SpamapS: this seems to be Apache2 strangeness. The FQDN site looks properly configured but Apache stops using it as soon as I re-enable the default
[16:51] <ninjix> ahh... this is happening because the 000-default is using the same FQDN as the juju created site
[16:51] <SpamapS> ninjix: sounds like a bug in the charm really.. should just dissite the default
[16:52] <SpamapS> ninjix: https://launchpad.net/charms/+source/wordpress/+filebug
[16:52] <SpamapS> If you wouldn't mind. :)
[16:52] <ninjix> no problem
[17:00] <marcoceppi> SpamapS: That's a different issue with using HAProxy in front of WP
[17:09] <ninjix> marcoceppi: I just appended the a2dissite to my local oneiric/wordpress/hooks/install
[17:09] <ninjix> marcoceppi: will the next juju add-unit wordpress pick up the change?
[17:11] <SpamapS> ninjix: no
[17:12] <SpamapS> ninjix: you have to use upgrade-charm
[17:13] <ninjix> does that imply that my current env has the wordpress charm cached on the bootstrap instance?
[17:13] <SpamapS> ninjix: yes, there's a webdav server that hosts the charm bundles
[17:13] <ninjix> :)
[17:14] <marcoceppi> actually, SpamapS I had a question about add-unit. When I add-unit it executes the charm without any real idea that it's an additional unit, correct? So it would be just as if I had done a deploy + add-relation + expose + whatever else I did to the previous unit
[17:14] <SpamapS> marcoceppi: right
[17:14] <SpamapS> marcoceppi: to make units aware of one another use peer relations
[17:14] <marcoceppi> figured, follow up question
[17:14] <marcoceppi> and answered.
[17:15] <SpamapS> marcoceppi: though that one will proceed all the way through install -> started before being aware of the peer relationships
[17:16] <SpamapS> marcoceppi: which has been a problem in the past for charms that need to work slightly different between 1 and 1+ nodes ;)
[17:16] <marcoceppi> Lastly, is there a way within a non-peer relation hook to tell if there is another unit?
[17:16] <marcoceppi> I'm trying to avoid having the db-relation-joined hook run a MySQL import when it already exists from the first instance
[17:17] <SpamapS> marcoceppi: I'd use the database itself for that... not rely on juju for it
[17:17] <marcoceppi> I guess just mysqladmin create and capture a non 0 exit or something?
[17:17] <m_3> marcoceppi: there's a few different ways to do that... right... what SpamapS just said... check the db for a schema version or some content
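That advice - ask the database, not juju - can be sketched as an idempotent helper for a db-relation-joined hook. Everything here (the `schema_version` table, the schema path, the function name) is an illustrative assumption, not code from any charm:

```shell
# Import the schema only if an application table is not already there,
# so reruns on additional units are harmless.
import_schema_if_missing() {
    db="$1"
    if mysql -N -e "SELECT 1 FROM schema_version LIMIT 1" "$db" >/dev/null 2>&1; then
        echo "schema already present in $db; skipping import"
    else
        mysql "$db" < /path/to/schema.sql   # one-time import by the first unit
        echo "schema imported into $db"
    fi
}
```

This matches marcoceppi's "capture a non-zero exit" idea, but probes an application table instead of relying on `mysqladmin create`, so the check stays true even if the database was created empty.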
[17:26] <_mup_> juju/subordinate r522 committed by kapil.thangavelu@canonical.com
[17:26] <_mup_> update unit deploy signature
[17:31] <ninjix> I am getting an SSH key error when I execute upgrade-charm
[17:32] <ninjix> ssh works manually to the remote_host value
[17:57] <negronjl> jamespage: ping
[17:58] <jamespage> negronjl, pong!
[17:58] <SpamapS> ninjix: all commands have to ssh to machine 0 so they can talk to zookeeper securely..
[17:58] <negronjl> jamespage:  Here is what I have so far for a combined tomcat6 and tomcat7 charm ( lp:~negronjl/+junk/tomcat )
[17:58] <negronjl> jamespage:  if/when you get a chance, I would appreciate some feedback.
[17:59] <SpamapS> negronjl: +junk?! ;)
[17:59] <jamespage> negronjl, yep - I'll take a look tomorrow
[17:59] <negronjl> SpamapS: It's a work in progress and I don't want to confuse anyone that may think that this is an actual usable charm yet.
[17:59] <negronjl> jamespage: thx
[18:00] <negronjl> SpamapS: I'll put it in the proper place and put it through the proper process once I get done with it.
[18:00] <SpamapS> negronjl: roger. :)
[18:25] <_mup_> juju/relation-id r488 committed by jim.baker@canonical.com
[18:25] <_mup_> Pass through relation id
[19:12] <ninjix> SpamapS: how does one add a second mysql unit as a slave?
[19:13] <ninjix> I noticed the mediawiki charm utilizes mysql slaves
[19:51] <SpamapS> ninjix: currently the mysql charm only supports one-way replication, so you 'juju deploy mysql slave-service' and then 'juju add-relation master-service slave-service'
[20:17] <ninjix> starting to get the hang of this. So it is the relation call that determines which hooks fire
[20:17] <SpamapS> ninjix: right
[20:19] <ninjix> so master-master would mean creating hooks that understand how to handle 'juju add-relation master-1 master-2' and then a 'juju add-relation master-2 master-1'
[20:20] <ninjix> that's going to dial up the complexity of the charm
[20:22] <ninjix> maybe I can add a hook for mysql-mmm pkg
[20:23] <SpamapS> ninjix: you'd only have one 'add-relation'
[20:23] <SpamapS> ninjix: they're bi-directional
[20:23] <SpamapS> ninjix: you might be able to achieve it just by doing that now though.. hmm.. hadn't thought of trying that
[20:24] <ninjix> how would it discriminate between simple slave and master-master with only one add-relation
[20:31] <ninjix> ahh... answered my own question. Just learned how to use the <service>:<relation> syntax :)
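The `<service>:<relation>` syntax disambiguates which endpoint to join when a charm offers several; for example (the first line uses the conventional wordpress/mysql names, the second line's relation names are purely hypothetical):

```
$ juju add-relation wordpress:db mysql:db          # pick mysql's "db" endpoint
$ juju add-relation master-svc:master slave-svc:slave   # hypothetical names
```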
[20:48] <ninjix> very cool. I'm now scaling the database service with total ease.
[20:49] <SpamapS> ninjix: I think thats our next t-shirt ;)
[20:49] <SpamapS> jcastro: ^^
[20:49] <ninjix> really liking the low barrier to entry too
[20:49] <jcastro> man nice dude
[20:50] <jcastro> ninjix, send me a mail, we should  send you a juju shirt, jorge@ubuntu.com
[20:51] <ninjix> me like schwag
[21:37] <shazzner> hello
[21:38] <shazzner> quick question, what could be up when after deploying a charm the newly created machine still has a 'not-started' state
[21:38] <shazzner> and I can't ssh into it yet
[21:38] <shazzner> just need to be patient?
[21:52] <shazzner> well I destroyed it, then recreated it
[21:52] <shazzner> now it works, weird
[22:09] <bkerensa> jcastro: when does this contest end? I have a busy week and probably can't do work on the charm till this weekend
[22:09] <marcoceppi> bkerensa: the 23rd
[22:10] <jcastro> bkerensa, it's ok we can be flexible
[22:10] <jcastro> the deadline is the 23rd
[22:10] <jcastro> but then there's a week of reviews, etc.
[22:10] <shazzner> cool
[22:12] <jcastro> this is the first one so it'd be dumb to make it "pencils down everyone!"
[22:30] <_mup_> juju/relation-id r489 committed by jim.baker@canonical.com
[22:30] <_mup_> Completed relation id refactoring
[23:13] <SpamapS> I think if you already submitted the charm..
[23:13] <SpamapS> and you are just dealing with my nit-picks..
[23:13] <SpamapS> your entry will be considered positively ;)
[23:13] <SpamapS> bkerensa: ^^
[23:14] <bkerensa> kk