[00:01] <adam_g> m_3: oh jeez, ya..
[00:06] <m_3> there's wiring inside for the EnvironmentConfig to take another path... but the cli itself doesn't pass it thru
[01:09] <_mup_> Bug #922398 was filed: cli should accept alternate config file path <juju:New> < https://launchpad.net/bugs/922398 >
[12:35] <niemeyer> Good Friday, Magicians
[12:44] <jorge> Hi folks! I would like some help with the same problem from yesterday: juju can't connect to instances when i call juju status. the complete description of the problem is here http://pastebin.com/KkTZkEjv ... thanks.
[13:02] <niemeyer> jorge: Wow, that's an awesome debugging session, thanks
[13:03] <niemeyer> jorge: So if you actually connect to the host, does it close on you like your verification suggests?
[13:05] <jorge> you're welcome. i'm going to try with the ubuntu user and other... just a moment.
[13:07] <niemeyer> jorge: There's another trick worth trying to investigate this as well
[13:07] <niemeyer> jorge: We have a little-known "open-tunnel" command
[13:08] <niemeyer> jorge: You can use it by opening a separate terminal, and running the command "juju open-tunnel"
[13:08] <niemeyer> jorge: This will open a connection to the host, and block
[13:08] <niemeyer> jorge: Then, on your first terminal, try running "juju status"
[13:08] <niemeyer> jorge: It should tunnel the command over the ssh session from the blocked open-tunnel command
[13:19] <jorge> niemeyer: that worked! But something strange still happens: the first attempt gets an error, and the connection is only established on the second attempt.
[13:19] <niemeyer> jorge: There's something really bizarre going on there
[13:20] <niemeyer> jorge: Do you have anything in the logs of the server saying why sshd is closing the connection abruptly?
[13:20] <jorge> the first attempt is always closed. the second works (using open-tunnel). really strange!
[13:20] <niemeyer> jorge: Sounds like a networking issue
[13:20] <jorge> no, just these lines indicating opened session and the next second, closed
[13:21] <niemeyer> jorge: Hmm
[13:23] <niemeyer> jorge: Try to ssh onto the machine
[13:23] <niemeyer> jorge: juju ssh 0
[13:23] <niemeyer> (that's a zero at the end)
[13:24] <jorge> oops... looks like it was just luck... now I'm getting the same error with open-tunnel
[13:24] <niemeyer> With the open-tunnel command live
[13:24] <niemeyer> Ok
[13:24] <niemeyer> So, close the open-tunnel
[13:24] <niemeyer> and try to ssh onto it using "juju ssh 0"
[13:24] <jorge> i'm going to try that when the open-tunnel comes back hehe.
[13:25] <jorge> curious: if I leave the machine idle for a while and then test the command, it works. After the first attempt, it stops working.
[13:31] <niemeyer> jorge: It really looks like there's something weird going on with the machine/networking there, but I'm not sure what
[13:31] <niemeyer> jorge: Have you tried to juju ssh 0?
[13:32] <jorge> hum, juju ssh 0 not working...
[13:33] <niemeyer> jorge: What happens?
[13:34] <jorge> the same error. i think because open-tunnel is not alive. i'm trying to re-establish it
[13:35] <niemeyer> jorge: Nope
[13:36] <niemeyer> jorge: Please forget open-tunnel for the moment
[13:36] <niemeyer> jorge: It's unrelated to the problem
[13:36] <niemeyer> jorge: What is the error message you get?
[13:37] <jorge> ERROR SSH forwarding error: bind: Cannot assign requested address
[13:40] <niemeyer> jorge: Is that in the first call, or in the second one?
[13:41] <niemeyer> I suspect the TCP port is simply lingering
[13:41] <niemeyer> jorge: Do you have lsof installed?
[13:42] <jorge> second call
[13:42] <jorge> yes, i do
[13:42] <niemeyer> jorge: I think I know what's up
[13:42] <niemeyer> jorge: Your kernel is likely configured to linger TCP ports in a different way
[13:43] <niemeyer> jorge: Are you using juju from source or from the package?
[13:44] <jorge> i don't know about 'lingering' ports... juju was installed from the ubuntu repo, 11.10 oneiric
[13:44] <niemeyer> jorge: Ok
[13:44] <jorge> juju 0.5+bzr398-0ubuntu1
[13:45] <jorge> Linux  3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
[13:45] <jorge> default from ubuntu server 11.10
[13:45] <niemeyer> jorge: Please run "dpkg -L juju | grep 'state\/utils.py'"
[13:45] <niemeyer> jorge: and open this file for editing as root
[13:46] <jorge> hey, i'll be back in 1 hour... sorry...
[13:46] <jorge> and i'll try this
[13:46] <jorge> thanks
[13:46] <niemeyer> jorge: Have a good lunch
[14:57] <robbiew> can anyone tell me why I have a virbr0 *and* lxcbr0 interface now? (running precise)
[14:57] <robbiew> is that related to juju local provider somehow?
[15:04] <m_3> robbiew: juju's still using virbr0 (installed from libvirt)... I assume the other's newly installed along with precise lxc
[15:05] <m_3> they've both got their own little dnsmasqs tho
[15:05] <robbiew> interesting
[15:09] <robbiew> seems if lxc now provides a device, we could drop virbr0, no?
[15:09] <robbiew> just seems to be a waste of resources to have both virbr0 and lxcbr0
[15:10] <robbiew> bcsaller:  hazmat: ^ ?
[15:11] <hazmat> robbiew, precise lxc includes its own bridge?
[15:11] <robbiew> I suppose users can easily disable lxcbr0 in /etc/defaults/lxc , so maybe we have the juju package do that when installed
[15:11] <robbiew> hazmat: yeah
[15:11] <robbiew> hazmat: hallyn said it just recently landed in precise
[15:13] <hazmat> robbiew, cool, the problem is backwards compatibility, i'll try it out.. but for now i'd rather just leave it.. unless its a problem/conflict.
[15:14]  * hazmat fires up a precise instance
[15:15] <robbiew> just thinking folks will get a bit pissy when they see a virbr0 and lxcbr0 created....waste of resources (though small)
[15:19] <robbiew> eh...I guess it's no huge deal...just surprised me
[15:20] <hazmat> robbiew, we shouldn't really be installing libvirt-d and lxc as recommends
[15:20] <hazmat> they only apply to local development
[15:21] <hazmat> installing those packages on orchestra or ec2 machines doesn't serve any purpose
[15:21] <hazmat> local provider usage already checks and warns appropriately if run without required packages
[15:21] <hazmat> SpamapS, ^
[15:27] <m_3> the 'juju-origin: distro' setting
[15:28] <m_3> seems to be the default for oneiric... and it's _old_
[15:28] <m_3> is there any way to bump that up?  or do we always need to use 'ppa' for oneiric?
[15:29] <m_3> my units are going straight to 'start_error' with version 398
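Switching juju-origin to the PPA (the workaround m_3 lands on later in the log) is set per environment in environments.yaml; a sketch, where the environment name and provider type are placeholders:

```yaml
# ~/.juju/environments.yaml (sketch; "sample" and the ec2 type are placeholders)
environments:
  sample:
    type: ec2
    juju-origin: ppa   # install juju on new machines from the PPA, not the distro archive
```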
[15:41] <SpamapS> hazmat: we should be turning off recommends when juju installs itself actually
[15:42] <SpamapS> hazmat: the idea behind recommends is that if you let your machine install recommends, as most Ubuntu users do, you get the full functionality of the package.
[15:43] <SpamapS> hazmat: so for automated things, usually --no-install-recommends is advisable
[15:43] <SpamapS> in fact we should probably make most charms do that.
[15:44] <m_3> it'd be strange to have different default behavior for apt in the charm -vs- laptop... it should still be an explicit --no-install-recommends
[15:53] <SpamapS> m_3: agreed, I'm suggesting we should consider making that the recommended way to call apt
[15:59] <bac> this page needs to be updated wrt 'charm' vs 'charms' in lp.  the bzr examples fail if you use 'charm'.  should i file a bug, if so against what?
[16:00] <bac> https://juju.ubuntu.com/Charms
[16:02] <SpamapS> bac: done, thanks!
[16:10] <bac> great, thanks SpamapS
[16:23] <jorge> Hi all, i'm back.
[16:24] <jorge> i made a loop to detect which command juju is calling:
[16:24] <jorge> while [ 1 ]; do ps aux | grep ssh | grep juju ; ls /root/.juju/ssh/ ; sleep 1; done
[16:24] <jorge> so, I call juju status or juju open-tunnel
[16:25] <jorge> the result is the following process
[16:25] <jorge> root     29772  0.2  0.1  41176  2712 pts/0    S+   14:22   0:00 ssh -T -o ControlPath /root/.juju/ssh/master-%r@%h:%p -o ControlMaster auto -o PasswordAuthentication no -Llocalhost:33511:localhost:2181 ubuntu@172.16.0.2
[16:26] <jorge> if i try to run this command in a terminal i get "command-line line 0: Missing argument."
[16:28] <jorge> So the problem occurs when juju tries to connect to a local port on localhost that should be forwarded to port 2181 on the instance.
[16:29] <jorge> if this command is not successful the local port doesn't open... right? could this be the problem?
[16:38] <SpamapS> jorge: right
[16:38] <SpamapS> jorge: for future reference, btw, you can use 'strace -e trace=exec -f juju status'
[16:39] <jorge> great
[16:42] <jorge> does trace=exec work on linux? i get an error here. looking at the man page, there is no exec parameter for trace=
[16:45] <jorge> -e trace=process :)
[16:47] <jorge> execve("/usr/bin/ssh", ["ssh", "-T", "-o", "ControlPath /root/.juju/ssh/mast"..., "-o", "ControlMaster no", "-o", "PasswordAuthentication no", "-Llocalhost:55700:localhost:2181", "ubuntu@172.16.0.2"], [/* 36 vars */]) = 0
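The execve output above shows why copy-pasting the ps line failed: juju hands ssh an argv list in which "ControlPath /root/..." is a single element, and a shell splits it in two unless it is quoted. A sketch using the addresses and port from the log:

```python
import shlex

# The argv list juju passes to execve: option values containing spaces
# ("ControlPath ...") are single elements, so ssh sees them intact.
argv = [
    "ssh", "-T",
    "-o", "ControlPath /root/.juju/ssh/master-%r@%h:%p",
    "-o", "ControlMaster auto",
    "-o", "PasswordAuthentication no",
    "-Llocalhost:55700:localhost:2181",
    "ubuntu@172.16.0.2",
]

# Pasted into a shell unquoted, "ControlPath" and the path become two
# separate words, so ssh sees "-o ControlPath" with no value and fails
# with "Missing argument". shlex.join shows the quoting a shell needs:
print(shlex.join(argv))
```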
[16:48] <niemeyer> jorge: Ok, should we continue from where we stopped?
[16:49] <jorge> yes
[16:52] <jorge> root@sold016:~/.juju/ssh# dpkg -L juju | grep 'state\/utils.py' /usr/share/pyshared/juju/state/utils.py /usr/lib/python2.7/dist-packages/juju/state/utils.py /usr/lib/python2.6/dist-packages/juju/state/utils.py
[16:52] <niemeyer> jorge: So please do the suggested command, and open the file in your editor as root
[16:52] <niemeyer> jorge: Then, find the get_open_port function
[16:53] <niemeyer> jorge: Right above the line temp_sock = socket.socket(...)
[16:53] <niemeyer> jorge: Enter this, on the same indentation:
[16:53] <niemeyer>     import random; return random.randint(50000, 60000)
[16:53] <niemeyer> jorge: Then, try to run juju again, and let me know please
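niemeyer's one-liner short-circuits juju's port picker. A sketch of what the patched function likely looks like; the original bind-to-port-0 body below the early return is a reconstruction based only on the temp_sock line quoted above:

```python
import socket

def get_open_port(host=""):
    # niemeyer's workaround: skip the probe entirely and return a random
    # high port, avoiding a port left lingering (e.g. in TIME_WAIT) by a
    # previous run -- the suspected cause of the bind error above.
    import random; return random.randint(50000, 60000)
    # Original logic (reconstructed): ask the kernel for a free ephemeral
    # port by binding to port 0, read the port back, and close the socket.
    temp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    temp_sock.bind((host, 0))
    port = temp_sock.getsockname()[1]
    temp_sock.close()
    return port
```

The race in the original is that the probe socket is closed before ssh re-binds the port, leaving a window in which the kernel can consider the address unavailable.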
[16:54] <niemeyer> I'll step out to get a cup of coffee meanwhile.. brb
[16:54] <jorge> right
[17:00] <jorge> the first time I ran juju status it worked; now I'm getting the same error.
[17:08] <niemeyer> jorge: Please run "juju ssh 0" for testing
[17:09] <niemeyer> jorge: You're having some fundamental issues there that don't even involve juju itself.. it's failing well before getting into more interesting logic
[17:11] <jorge> well, the first time I ran juju ssh 0 it worked. Then I ran it again and got the error.
[17:11] <jorge> :/
[17:12] <jorge> yes, there is some network problem ...
[17:13] <SpamapS> jorge: is there any way you can run juju *not* on your openstack network controller?
[17:15] <jorge> I tried from other hosts but it didn't work because of the difference between versions.
[17:16] <jorge> But I've just gotten a new host! I need to test it... so i'm going to install the same version and try juju on it
[17:16] <niemeyer> jorge: Can you please run it 5 times in a row and paste the output for all of them?
[17:16] <jorge> when I do that I come back to report here.
[17:16] <jorge> yes
[17:17] <jorge> any of the commands? juju ssh 0 or juju status?
[17:19] <niemeyer> jorge: juju ssh 0
[17:19] <niemeyer> jorge: Is Embrapa deploying OpenStack internally?
[17:20] <jorge> yes, this is a test of openstack ... i want to run some other scenarios, simulations etc
[17:21] <niemeyer> jorge: Very nice
[17:21] <niemeyer> jorge: Glad to see you guys are putting juju to good use there
[17:21] <niemeyer> jorge: Please stop by if you have any other issues
[17:21] <jorge> do you know embrapa?
[17:21] <niemeyer> jorge: Yeah, my Dad worked there for most of his life until retiring
[17:22] <jorge> I'm trying to use juju to create (or use charms) aiming to set up clusters quickly.
[17:23] <jorge> niemeyer: cool!! so, you are from Brazil?
[17:23] <niemeyer> jorge: Cool, that's something it does well :)
[17:23] <niemeyer> jorge: Yep
[17:45] <jcastro> This charm could use a review: https://bugs.launchpad.net/charms/+bug/919907
[17:45] <_mup_> Bug #919907: new-charm salt-master <new-charm> <Juju Charms Collection:New> < https://launchpad.net/bugs/919907 >
[17:46] <jcastro> https://bugs.launchpad.net/charms/+bug/918803
[17:46] <_mup_> Bug #918803: charm for python-oops-tools <new-charm> <Juju Charms Collection:New> < https://launchpad.net/bugs/918803 >
[17:46] <jcastro> this one too!
[18:06] <niemeyer> Hmm, curious
[18:06] <niemeyer> Aha, ok..
[18:07] <niemeyer> https://code.launchpad.net/~yellow/charms/oneiric/buildbot-master/trunk
[18:07] <niemeyer> There are branches with no revisions.. that's ok I suppose
[18:07] <SpamapS> Weird.
[18:24] <niemeyer> SpamapS: Hmm
[18:25] <niemeyer> No, it's working.. I'm clearly doing something silly here
[18:26] <niemeyer> Ah, and I found what it is
[18:32] <adam_g> hazmat: hey, i had a chance to point juju at openstack essex, seemed to have bootstrapped just fine.
[18:33] <hazmat> adam_g, good to know, thanks
[18:38] <adam_g> if we start testing a cloud image on this openstack CI stuff instead of ttylinux, ill probably add at least a 'juju bootstrap' test of some kind
[19:00] <andrewsmedina> can someone explain to me what this juju code does? http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/juju/providers/ec2/launch.py#L91
[19:18] <SpamapS> andrewsmedina: the comment above it is accurate as far as I can read it
[19:21] <andrewsmedina> SpamapS: it doesn't work with my openstack setup =/
[19:23] <SpamapS> andrewsmedina: what version of openstack?
[19:24] <andrewsmedina> SpamapS: diablo
[19:24] <SpamapS> andrewsmedina: we test against diablo daily.. should work
[19:28] <andrewsmedina> SpamapS: here it returns an unexpected error in the ec2 client
[19:28] <andrewsmedina> should be a problem in my openstack setup.
[19:28] <andrewsmedina> I don't know where
[19:31] <SpamapS> andrewsmedina: what version of Ubuntu are you running juju on?
[19:31] <andrewsmedina> 11.10
[19:31] <SpamapS> andrewsmedina: ok, that should be fine. Hrm.
[19:32] <andrewsmedina> SpamapS: how you install your openstack setup?
[19:32] <SpamapS> andrewsmedina: do you have the juju from oneiric, or the one from our PPA?
[19:32] <SpamapS> andrewsmedina: I'm not sure, our IS team set it up for us.
[19:33] <andrewsmedina> SpamapS: from your PPA
[19:34] <SpamapS> andrewsmedina: maybe try updating openstack from oneiric-proposed
[19:34] <SpamapS> andrewsmedina: *lots* of fixes
[19:34] <SpamapS> python-nova | 2011.3+git20111117-0ubuntu1 | oneiric-proposed | all
[19:35] <andrewsmedina> ok
[19:35] <andrewsmedina> I installed openstack from source
[19:36] <SpamapS> uh
[19:36] <SpamapS> why?
[19:41] <andrewsmedina> SpamapS: I created a script like devstack
[19:42] <SpamapS> andrewsmedina: ah ok cool
[20:20] <hazmat> andrewsmedina, the fix for that might not have made it to the diablo release tarballs. There was an error with the internal group authorization in openstack previously, but I thought it was fixed for diablo.
[20:37] <jcastro> SpamapS: chords, strings, we brings .... charm review pls?
[21:11] <gary_poster> jcastro, hey.  I'm on a squad from the LP team using juju for a project.  We are writing a buildbot master recipe and a buildbot slave recipe ATM.  We have an internal review process, but we don't know juju well enough to be competent reviewers.  We want to get better at writing and reviewing.  Could we have a juju review mentor--someone who looks over our internal reviews of our charm branches to improve both the reviews and the branches?  If so... how would we set that up? :-)
[21:17] <jcastro> absolutely
[21:18] <jcastro> m_3: SpamapS: marcoceppi:  should we just have them tag their branches/bugs with new-charm and ping pong back and forth?
[21:19] <m_3> jcastro: yeah
[21:19] <m_3> jcastro gary_poster: is there any reason for those to be separate?
[21:20] <m_3> those charms should eventually be pushed to lp:charms too I assume?
[21:20] <gary_poster> m_3 definitely
[21:21] <gary_poster> m_3, jcastro, we'll use the new-charm tag and look forward to your help.  Thank you!
[21:21] <m_3> ok, then yeah... charm review queue = { charm bugs, with 'new-charm' tag, attached branch, and New/FixCommitted status }
[21:22] <m_3> you might have to kick some of us to get through reviews a little faster :)
[21:24] <gary_poster> m_3, heh, cool, understood.  Those three people jcastro mentioned are the ones to kick?  and charm bugs, I assume you mean https://bugs.launchpad.net/charms ?
[21:27] <m_3> gary_poster: anyone in ~charmers can review, but most are done by marcoceppi, nijaba, SpamapS, and myself.  yes, bugs.lp.net/charms
[21:27] <gary_poster> awesome, thanks again m_3
[21:27] <m_3> sure thank
[21:27] <m_3> s/thank/thang/
[21:27] <gary_poster> heh
[22:18] <SpamapS> jcastro: reviewing charms now, had to triage bugs this morning. :)
[22:18] <SpamapS> jcastro: cuase charming is life, and life is charming
[22:20] <jcastro> <3
[22:31] <SpamapS> heh.. salt calls servers minions
[22:31] <SpamapS> +1 for mirth
[22:31] <SpamapS> hrm.. people keep making the mistake of starting their charm as a subdir of the bzr branch, rather than just one branch for the whole charm
[22:32]  * SpamapS wonders if there was a recent change that is leading people to this
[22:35] <m_3> all of our charm dev docs/presentations should really assume they've never heard of bzr
[22:36] <m_3> start from scratch
[22:38] <SpamapS> hah, are you suggesting that bzr is obscure? ;)
[22:40] <m_3> I'd never suggest such a thing
[22:41] <m_3> rather that our user-base is so broad
[22:41] <m_3> :)
[22:41] <SpamapS> juju, user base, SOOOO BIG
[22:41] <SpamapS> HUGE
[22:42] <SpamapS> bzr user base, smarr
[22:42] <m_3> really though... the concept of putting even configuration under revision control isn't all that new
[22:43] <m_3> sorry... logic barf... isn't all that old
[22:43] <SpamapS> true
[22:43] <SpamapS> I wanted to make /etc a davfs mount of an SVN server on our servers back when I was wild and crazy and didn't know better....
[22:43] <m_3> ha!
[22:44] <SpamapS> I would have gotten away with it if it hadn't been for you meddling (git) kids
[22:44] <m_3> I still start long-lived hw and libvirt instances with (cd /etc; git init; git add -A; git commit -m 'initial revision')
[22:46] <m_3> itching to move them to juju
[22:46] <m_3> actually excited about the testing stack
[22:47] <m_3> learned a bunch of lessons about longer-term -vs- development stacks already!
[22:49] <m_3> really love to choose instance size per service... that bootstrap node can get expensive otherwise!
[22:50] <SpamapS> m_3: yeah, I'd like to have the option to consider the bootstrap node as available for deploys too
[22:50] <SpamapS> m_3: there's always my old branch which adds --machine to deploy and add-unit.. ;)
[22:50] <SpamapS> Probably out of date now
[22:50] <m_3> right
[22:51] <m_3> 398's not working with 447 clients... I'd imagine your branch pre-dates that
[22:51] <SpamapS> m_3: did you report that as a bug yet? ubuntu-bug juju is all you need to do on the box with r398 on it
[22:52] <m_3> haven't taken time to really isolate it... definitely happens with O running O, and P running P in local and O running O in ec2
[22:53] <m_3> there's a problem with config-get coming up empty (not even defaults)
[22:53] <m_3> but don't know what else is going on
[22:54] <m_3> juju-origin: ppa works around it
[22:56] <SpamapS> We really need to fix the bug where you can't upgrade-charm if config-changed or install or upgrade-charm has had an error.
[22:57] <SpamapS> My dev method for the mediagoblin charm has been juju scp'ing the new, fixed script up, --retry, then upgrade-charm again.
[22:57] <SpamapS> which is t3h lame
[22:59] <m_3> yup
[23:00] <SpamapS> Wow the error message on ambiguous relations is *SO much better* now
[23:00] <m_3> so I'm about to do something... and I want you to talk me out of it
[23:00] <SpamapS> 2012-01-27 14:58:12,782 ERROR Ambiguous relation 'oops-tools postgresql'; could refer to: 'oops-tools:db postgresql:db' (pgsql client / pgsql server) 'oops-tools:db postgresql:db-admin' (pgsql client / pgsql server)
[23:00] <SpamapS> m_3: DO IT YOU KNOW YOU WANT TO
[23:00] <SpamapS> err.. I mean.. what is it?
[23:00] <m_3> config-changed for jenkins just calls install again... I wanna catch certain config changes, and then pass it on to install again
[23:01] <m_3> the only way to do that is to save out the config set every time the hook is called and compare changes
[23:02] <m_3> wow!  yeah, that message is just nice... makes me want to go relate things just to see the message
[23:04] <SpamapS> m_3: config-get is always available, so why even have special logic in config-changed if you're just going to run the installer again?
[23:05] <m_3> only run it conditionally based on which config params are passed
[23:05] <m_3> that's what I want to know... which config params were passed to this particular call of config-changed
[23:06] <SpamapS> ah
[23:06] <SpamapS> you want to use it to kick things off. :)
[23:06] <m_3> well... and change the config without a reinstall
[23:06] <m_3> but yes, in general, I like the ability to use 'juju set' to control the service
[23:07] <m_3> add new test jobs
[23:07] <m_3> kick off tests
[23:07] <m_3> etc
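The cache-and-diff approach m_3 describes can be sketched as a config-changed hook. config-get is juju's real hook tool, but the --format=json flag, the cache path, and the per-key dispatch below are illustrative assumptions:

```python
import json
import os
import subprocess

CACHE = os.path.expanduser("~/.juju-config-cache.json")  # assumed location

def current_config():
    # config-get is juju's hook tool; JSON output format is assumed here
    out = subprocess.check_output(["config-get", "--format=json"])
    return json.loads(out)

def changed_keys(old, new):
    # keys whose values differ between the cached and current snapshots
    return sorted(k for k in set(old) | set(new) if old.get(k) != new.get(k))

def config_changed():
    new = current_config()
    old = {}
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            old = json.load(f)
    for key in changed_keys(old, new):
        print("changed:", key)  # dispatch a per-key handler, or re-run install
    with open(CACHE, "w") as f:  # save the snapshot for the next invocation
        json.dump(new, f)
```

changed_keys is a pure diff, so "which config params were passed to this particular call" falls out of comparing the two snapshots.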
[23:07] <SpamapS> juju set charmtester runstuff=(x=$(juju get charmtester runstuff); echo $(($x+1))))  ? ;)
[23:07] <m_3> juju set charmtester add_branch=lp:my/new/charm
[23:07] <m_3> ha!
[23:08] <SpamapS> If only it could feed data back..
[23:08] <m_3> you laugh... but there might be legitimate use for that in some relation vars
[23:08] <m_3> (varnish)
[23:09] <m_3> I totally want a 'juju get' too
[23:11] <m_3> right, like your 'runstuff' above
[23:12] <SpamapS> juju get exists!!
[23:13] <m_3> is it coupled to config-set from within the hooks?
[23:14] <m_3> what's it getting?
[23:14]  * m_3 digs through the code
[23:14] <SpamapS> the values
[23:14] <SpamapS> so you can retrieve what the current value of a config option is
[23:16] <m_3> cool... never used that one before
[23:16] <SpamapS> fairly recent feature
[23:17] <m_3> we're missing config-set tho
[23:17] <m_3> _that_'d be a really useful combo
[23:18] <SpamapS> Yeah actually it would
[23:18] <SpamapS> Hey I had a crazy idea while walking to/from lunch
[23:18] <m_3> I could get my api_token _back_ instead of forcing one forward
[23:19] <SpamapS> what if zookeeper just ran on client machines..and juju open-tunnel opened a tunnel to the machines it was managing, for the agent to connect *back* through
[23:20] <SpamapS> kind of like the local provider works.. but not local. :)
[23:22] <_mup_> Bug #922900 was filed: config-set callable from hooks <juju:New> < https://launchpad.net/bugs/922900 >
[23:22] <m_3> bug filed
[23:22] <SpamapS> http://ec2-184-73-125-5.compute-1.amazonaws.com/
[23:23] <SpamapS> oops-tools
[23:23] <SpamapS> *ALMOST* ready for promulgation
[23:23] <m_3> hmmm... my charmtester environment of xlarge's would be cheaper
[23:24] <m_3> actually when you think about it, that's a nice thing about the architecture as it stands... the zk node really could be moved around
[23:24] <m_3> might be some security-group nonsense
[23:24] <m_3> and you'd ack the fact that there might be a perf hit with really large infras
[23:26] <SpamapS> I think a service with 1000 units and 2 or 3 peer relationships is going to get ugly to scale up/down because of all the joined and departed churn that will go on.
[23:26] <m_3> that's gonna happen no matter where things live
[23:26] <SpamapS> 1000 nodes running relation-list with 1000 responses all at once :)
[23:27] <SpamapS> m_3: unless it becomes decentralized
[23:27] <m_3> every peer's connected to every other, so it'd be combinatorially explosive
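m_3's combinatorial worry is easy to quantify; a quick sketch (the assumption that each added unit fires a joined event on every existing peer follows SpamapS's description above):

```python
def peer_links(n):
    # a full mesh of n peers has n*(n-1)/2 pairwise connections
    return n * (n - 1) // 2

# Scaling a peer relation from 0 to 1000 units builds the full mesh,
# so roughly peer_links(1000) joined events fire along the way.
print(peer_links(1000))      # 499500
print(3 * peer_links(1000))  # with 3 peer relations: ~1.5 million
```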
[23:28] <m_3> yup
[23:28] <m_3> what would that look like
[23:28] <m_3> we'd need every node to have a 'torrent map' of the whole
[23:28] <m_3> be a big change
[23:29] <SpamapS> well unless we just used bittorent to keep the map updated ;)
[23:29] <m_3> hey, I'm a huge fan of peer-based infra
[23:30] <SpamapS> anyway, I want to get to the long living 10 unit service before I start thinking about 1000 unit services. :)
[23:33] <m_3> I just honestly think we need to make sure that services with peer relations _also_ have master-slave options so they can scale
[23:34] <m_3> there's "this is cool, the way it _should_ work" -vs- "make it work and be stable"
[23:39] <SpamapS> hm, should we enforce that readme always be called README or README.* ?
[23:40] <m_3> sure
[23:40] <m_3> it's really not hard to be flexible with it tho
[23:41] <m_3> I _do_ like having the option of README.rst, README.md, or just README
[23:42] <SpamapS> what I mean is, README(\..+)?
[23:42] <SpamapS> so, not readme.txt
[23:42] <SpamapS> or ReadMe.md
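SpamapS's README(\..+)? rule is straightforward to express and check; a sketch:

```python
import re

def valid_readme(name):
    # case-sensitive: exactly README, or README plus a non-empty extension
    return re.fullmatch(r"README(\..+)?", name) is not None

names = ["README", "README.md", "README.rst", "readme.txt", "ReadMe.md"]
print([n for n in names if valid_readme(n)])  # ['README', 'README.md', 'README.rst']
```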
[23:42] <m_3> the old gnu standard used to bug me... but amid the noise it was sometimes useful that it screamed INSTALL at you
[23:42] <SpamapS> heh, our INSTALL is hooks/install
[23:45] <m_3> but yeah, it's worth asserting emphatically that they should read it :)
[23:45] <m_3> so on this note, it bugs me to see non-hooks in hooks/
[23:45] <SpamapS> I agree on that
[23:45] <m_3> I'd rather bin/ or lib/ or scripts/
[23:46] <SpamapS> but I'm ok with "common.py" or a single "all-hooks.sh" that is symlinked to
[23:46] <m_3> scripts, templates, files for things you're installing elsewhere
[23:46] <m_3> bin/ lib/ for things your hooks are using
[23:47] <m_3> IMO that should be symlinked into bin/all-hooks.sh or lib/common.py... with only the hook links in hooks/
[23:47] <m_3> no biggie of course
[23:47] <m_3> just style
[23:48] <SpamapS> it gets REALLY hard with things like python though
[23:48] <m_3> huh?
[23:48] <SpamapS> now you're adding things to PYTHONPATH .. and hating the guy who kicked stuff out
[23:48] <SpamapS> import common will import ./common.py
[23:48] <SpamapS> as relative to the current python script...
[23:49] <m_3> CWD is pretty set tho
[23:49] <SpamapS> i am not sure its not going to do a readlink if $0 is a symlink..
[23:49] <SpamapS> not CWD
[23:49] <SpamapS> relative to the python script
[23:49] <m_3> right, but some people write wrt __file__
[23:49] <m_3> understand
[23:49] <SpamapS> just so complicated for a tiny win
[23:49] <m_3> yeah
[23:50] <m_3> agree... very tiny win
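The symlink ambiguity SpamapS raises can be sidestepped in hook code by resolving __file__ with os.path.realpath before computing charm-relative paths. A self-contained sketch that simulates a hooks/install symlink pointing into lib/ (the directory layout is the hypothetical one from the discussion):

```python
import os
import tempfile

# Simulate a charm tree where hooks/install is a symlink into lib/
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "lib"))
os.makedirs(os.path.join(base, "hooks"))
target = os.path.join(base, "lib", "common.py")
open(target, "w").close()
link = os.path.join(base, "hooks", "install")
os.symlink(target, link)

# realpath follows the symlink, abspath does not; a hook that computes
# its paths from realpath(__file__) is immune to the "does it readlink
# $0?" uncertainty, at the cost of landing in lib/ rather than hooks/.
print(os.path.dirname(os.path.realpath(link)))  # ends in .../lib
print(os.path.dirname(os.path.abspath(link)))   # ends in .../hooks
```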
[23:55] <SpamapS> m_3: nice job on the charm tester thing.. hopefully next week we can work on running charm-embedded tests with it. :)
[23:59] <m_3> SpamapS: thanks!  I'll roll it out to the list once we get a real url... then we can extend it