[00:04]  * negronjl is afk
[00:13] <marcoceppi> m_3: Thanks for the review and consolidating everything into one
[00:13] <marcoceppi> I was never able to wrap my head around Cheetah, which is why I still bow to the gods of sed :)
[00:15] <marcoceppi> SpamapS: you too ^^ (thanks!) :)
[00:18] <m_3> marcoceppi: yeah, I'm kinda trying to add discussion as part of the review too... those parts aren't crucial, just opinion, but it's good to talk them over
[00:19] <m_3> marcoceppi: I took some notes on cheetah from a shell script...
[00:19]  * m_3 digs around
[00:20] <SpamapS> I think most of the time heredocs are simpler than cheetah
[00:20] <marcoceppi> m_3: I just find cheetah to be, overkill? for the small configurations
[00:21] <SpamapS> cheetah is great when you want to loop over complex objects
[00:21] <SpamapS> but most of the time just >>'ing to a new version of the file is actually a better way to go IMO
[00:24] <marcoceppi> SpamapS m_3 are there any other lightweight configuration tools, that just use templating? Like Smarty for bash?
[00:26] <SpamapS> marcoceppi: the only time I see templating as an important option is when you can look at the template w/o the logic that fills it... like HTML templates that are worked on by designers.
[00:26] <SpamapS> marcoceppi: for config files.. just build the thing by appending.
[00:26] <SpamapS> where I've used cheetah.. I've felt it was overkill ;)
[00:26] <m_3> disagree... there's lots of use for lightweight templating
[00:27] <SpamapS> Its mostly a style choice
[00:27] <m_3> it's not that painful either... no code... it can be called from the command line
[00:27] <SpamapS> heredocs are just one form of templating
[00:27] <m_3> agree... totally a style choice
[00:28] <m_3> stitching various here-docs together can even do complex templating
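For instance, "stitching heredocs" can look like a loop emitting one heredoc fragment per item; this is a sketch (the `render_vhosts` name and the vhost stanza are our illustration, not from the discussion):

```shell
# Sketch: a loop of heredoc fragments acts as a lightweight template for
# repeated config stanzas (function name and stanza are examples)
render_vhosts() {
    for name in "$@"; do
        cat <<EOF
<VirtualHost *:80>
    ServerName $name
</VirtualHost>
EOF
    done
}
```

Called as `render_vhosts a.example.com b.example.com`, it prints one VirtualHost block per argument.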
[00:34] <marcoceppi> m_3: I'd be interested in your bash notes :)
[00:37] <SpamapS> DOWNLOAD=`ch_get_file "http://iweb.dl.sourceforge.net/project/phpmyadmin/phpMyAdmin/3.4.7.1/phpMyAdmin-3.4.7.1-all-languages.tar.gz" "726df0e9ef918f9433364744141f67a8"`
[00:37] <SpamapS> that is so awesome
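The real `ch_get_file` lives in charm-helpers; a minimal sketch of a helper in that spirit (the function body, `curl` usage, and error handling here are our guess, not the actual implementation) might be:

```shell
# Sketch of a ch_get_file-style helper: fetch a URL, verify its md5sum,
# and print the downloaded path on success (non-zero exit on mismatch).
ch_get_file_sketch() {
    url=$1
    want=$2
    out="/tmp/$(basename "$url")"
    curl -fsSL -o "$out" "$url" || return 1
    got=$(md5sum "$out" | cut -d' ' -f1)
    if [ "$got" != "$want" ]; then
        echo "checksum mismatch for $url" >&2
        return 1
    fi
    echo "$out"
}
```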
[00:38] <SpamapS> cat > $apache_config_file <<EOF
[00:39] <marcoceppi> SpamapS: <3
[00:39] <SpamapS> marcoceppi: so, for my money, its better to > to a tmpfile .. then mv -f
[00:40] <SpamapS> marcoceppi: and probably worth adding a charm helper function that takes a heredoc as its stdin and a filename to rename to as $1
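A helper along the lines SpamapS describes could be a minimal sketch like this (the name `install_from_stdin` is made up, and it assumes the hook runs with `set -e`, as charm hooks typically do):

```shell
# Hypothetical helper: read a config file's contents from stdin (typically
# a heredoc), write them to a temp file, then atomically rename onto the
# target path given as $1.
install_from_stdin() {
    target=$1
    tmp=$(mktemp "${target}.XXXXXX") || return 1
    # if the shell exits on error before the rename, remove the partial
    # temp file; note that traps in sourced files can clobber a caller's
    # trap, which is the ugliness mentioned later in this discussion
    trap 'rm -f "$tmp"' EXIT
    cat > "$tmp" || return 1
    mv -f "$tmp" "$target"
    trap - EXIT
}
```

Usage would mirror the `cat > $apache_config_file <<EOF` pattern above, e.g. `install_from_stdin /etc/apache2/sites-available/phpmyadmin <<EOF ... EOF`.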
[00:40] <marcoceppi> sh/file.sh sound good?
[00:40] <marcoceppi> or should there just be one file, helper.sh ?
[00:41] <SpamapS> not sure...
[00:41] <SpamapS> I like the idea of breaking them up into groups
[00:41] <marcoceppi> I may be trying to split too soon :\
[00:41] <SpamapS> but it also means people have to lookup which one each function is in...
[00:42] <SpamapS> maybe have an all.sh that just sources them all
[00:42] <marcoceppi> good idea
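Such an `all.sh` could be little more than a loop; this sketch assumes the grouped helper files sit in one directory (the layout is hypothetical):

```shell
# Sketch of all.sh: source every grouped helper file beside it,
# skipping itself.
source_all_helpers() {
    dir=$1
    for f in "$dir"/*.sh; do
        [ "$(basename "$f")" = "all.sh" ] && continue
        . "$f"
    done
}
```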
[00:42] <SpamapS> one thing that sucks is that we want to cleanup temp files on any fails..which is usually done w/ a trap.. but traps in sourced files can be ugly
[00:43] <SpamapS> marcoceppi: also.. VirtualHost with ServerName .. not sure thats a great idea
[00:43] <SpamapS> marcoceppi: for the most part I think we can think of these as standalone boxes
[00:44] <SpamapS> tho I guess the first VirtualHost is also the default one
[00:44] <marcoceppi> What about when co-location happens?
[00:44]  * marcoceppi looks longingly into the future
[00:44] <SpamapS> marcoceppi: that isn't something I think we should think about. co-location is supposed to be for things that don't conflict.
[00:45] <SpamapS> and anyway, if you co-located two things that did it this way, they'd have conflicting ServerName fields and break.
[00:46] <m_3> I kinda like adding apache vhosts w explicit aliases (to /phpmyadmin) instead of overloading root
[00:46] <marcoceppi> SpamapS: I guess, I keep envisioning there being an http relation with an apache interface. So an apache charm and charms like thinkup and phpmyadmin would require the apache charm and apache charm would setup the proper virtualhosts based on X things
[00:46] <m_3> results in the lame "It works" though :)
[00:46] <marcoceppi> via colocation
[00:47] <m_3> marcoceppi: more likely you'd use a reverse-proxy in any real situation though
[00:47] <SpamapS> that works now without charms.. just apt-get install the two things, they're both on port :80 at /foo and /bar ..
[00:48] <SpamapS> charms tho, are oriented around configuring the machine for the one thing they do
[00:48] <marcoceppi> right, it's because I have a slightly different use case that I look at warping juju to do that :)
[00:50] <SpamapS> I think the way subordinate/colocated charms is being implemented, it may work that way..
[00:50] <SpamapS> where you could have an apache charm.. and deploying apps that support the apache interface as subordinates of it, would give them a chance to tell apache that they want to be at /foo
[00:52] <SpamapS> open-port 80/tcp
[00:52] <SpamapS> chown -R www-data:www-data /var/www
[00:52] <SpamapS> marcoceppi: open the port only when the service is configured
[00:52] <marcoceppi> kk, thanks
[00:52] <SpamapS> Since phpmyadmin is kind of nothing w/o databases.. I'd say it should be opened in the relation hook, not install
[00:53] <SpamapS> Or, can it work w/o any sources configured?
[00:55] <m_3> marcoceppi: simple cheetah example from the command line: http://paste.ubuntu.com/753201/
[00:57] <m_3> marcoceppi: and the other extreme... a more complex template using eruby https://gist.github.com/1402781
[00:59] <marcoceppi> SpamapS: It _can_ be run without a db, but it's pointless without one
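Concretely, the advice above amounts to moving `open-port` out of `hooks/install` and into the relation hook. A sketch of the relevant part of `hooks/db-relation-joined` (written as a function so it can be exercised outside juju; the configuration step is elided):

```shell
# Sketch of a db-relation-joined hook body. relation-get and open-port are
# juju's hook tools and only exist while juju is running the hook.
db_relation_joined() {
    database=$(relation-get database)
    # ... render phpmyadmin's config to point at "$database" here ...
    # only now is the service useful, so only now expose it:
    open-port 80/tcp
}
```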
[01:20] <SpamapS> marcoceppi: I don't understand this change...
[01:20] <SpamapS> 8	-if not slave and not broken and not admin:
[01:20] <SpamapS> 9	+if not slave and not broken:
[01:21] <SpamapS> marcoceppi: seems like we'd still want that.. who cares if the database already exists?
[01:22] <SpamapS> marcoceppi: anyway, I have to run.. but that does seem confusing
[01:22] <SpamapS> marcoceppi: oh sorry, in relation to https://code.launchpad.net/~marcoceppi/charm/oneiric/mysql/db-admin-relation-fix/+merge/83690
[01:28] <marcoceppi> SpamapS: In my trials, if you destroy phpMyAdmin then deploy the service again, MySQL would error on join because it tries to create the database again
[01:33] <SpamapS> marcoceppi: we should disable the creation on admin
[01:34] <SpamapS> marcoceppi: they have root.. let them create their own db :)
[01:34] <marcoceppi> SpamapS: that was my first course of action, then I realized that phpMyAdmin requires a database
[01:34] <SpamapS> oh haha
[01:34] <marcoceppi> So I put it back, looking back on it now it should be disabled though
[01:34] <marcoceppi> I'll update my branch for the merge proposal
[01:34] <SpamapS> just use 'mysql'
[01:35] <SpamapS> or create one conditionally.. if not exists..
[01:35] <SpamapS> anyway.. GONE
[01:35] <marcoceppi> SpamapS: I'll just have the db-admin hook create it, no bigs
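The "create one conditionally" option maps directly onto MySQL's `IF NOT EXISTS`; a sketch of what the db-admin hook could run (using `/etc/mysql/debian.cnf` for credentials is our assumption):

```shell
# Sketch: create phpmyadmin's bookkeeping database only if it is missing,
# so a re-deploy that finds it already there doesn't error out on join.
ensure_database() {
    db=$1
    # the credentials file path is an assumption, not from the discussion
    mysql --defaults-file=/etc/mysql/debian.cnf \
        -e "CREATE DATABASE IF NOT EXISTS \`$db\`;"
}
```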
[01:37] <marcoceppi> SpamapS: Damn, well when you get back I've pushed up the changes
[02:46]  * marcoceppi wanders away to play Zelda
[03:23] <hazmat> zelda +1
[03:38]  * SpamapS creates ppa:charmers/charm-helpers
[03:41] <marcoceppi> sweet
[03:42] <SpamapS> The more I think about it, the more I think we need a charm-helpers addon that reads metadata.yaml for declarative stuff
[03:43] <SpamapS> or a juju add on I guess
[03:43]  * SpamapS loathes writing specs, so would prefer charm-helpers ;)
[03:44] <marcoceppi> How would it read metadata.yaml? as in metadata defining what helpers needed to be loaded into the environments?
[03:45] <SpamapS> Just have a single command at the outset that bootstraps in charm-helpers
[03:45] <SpamapS> and after it does the declarative bits, runs hooks/install.charmhelper or something
[03:46] <SpamapS> actually you can just have install be something like
[03:47] <SpamapS> #!/bin/sh
[03:47] <SpamapS> charm-helper
[03:47] <SpamapS> ... install steps
[03:48] <SpamapS> charm-helper would need to be in the OS
[03:48] <SpamapS> or in juju's cloud init
[03:50] <SpamapS> otherwise for now just add-apt-repository -y ppa:charmers/charm-helper && apt-get update && apt-get install -y charm-helper
[04:44] <flaccid> sweet
[05:43] <zoomzz> Hey folks i am having a little trouble bootstrapping
[05:44] <zoomzz> Fails to find and register any hosts
[05:45] <zoomzz> Even though the hosts are alive and pingable
[05:46] <zoomzz> My aim is to deploy openstack on servers deployed through juju and orchestra
[05:49] <zoomzz> Cobbler does its job and allows the hosts to build, but they do not register and juju bootstrap fails to find an orchestra server
[08:34] <koolhead11> hazmat: around
[09:37] <koolhead11> is zookeeper a compulsory pkg to be on the VM clients/cloud-image once they are started via juju-bootstrap?
[09:39] <koolhead11> if that is the case then i can never run juju in my openstack environment if the Internal IP has no internet access :(
[09:40] <rog> koolhead11: i should think so. zookeeper is used for all juju coordination (e.g. hook execution)
[09:40] <rog> koolhead11: is that because you can't install zookeeper with no internet access?
[09:40] <koolhead11> rog: http://paste.ubuntu.com/753467/
[09:40] <koolhead11> i confirmed it after the console log
[09:41] <koolhead11> rog: true
[09:41] <koolhead11> so if am running juju with openstack then i need to have internal network access to internet
[09:41] <rog> koolhead11: what command is that paste output from?
[09:42] <koolhead11> $ euca-get-console-output instance-id
[09:42] <rog> koolhead11: alternatively you could use an image with zookeeper preinstalled, presumably
[09:42] <koolhead11> rog: +1
[09:43] <koolhead11> so it means i have to re-create the image :D
[09:43] <koolhead11> with adding all these pkgs
[09:43] <rog> koolhead11: given that you've got no internet access, i don't see an alternative, other than setting up your own local PPA
[09:44] <rog> koolhead11: (and i've no idea how much work that would be)
[09:45] <rog> koolhead11: i imagine that the only package that you need to preinstall is juju, which will have all those other packages as dependencies
[09:45] <rog> koolhead11: i may be wrong though - i'm quite new to all this
[09:48] <koolhead11> rog: am going to add all those pkg and remaster the cloud-image i have just downloaded. I will have to check/work on repository issue :)
[09:48] <koolhead11> rog: thanks a lot :)
[09:54] <rog> koolhead11: no probs. hope it works!
[09:56] <koolhead11> rog: and i think the issue am facing is more to do with my internal network :(
[11:25] <koolhead11> hi all
[11:26] <koolhead11> i need one more bit of info: do i just need my ubuntu image along with cloud-init? am trying to build a custom ubuntu image because i have to provide network info in it
[11:32] <niemeyer> Morning all
[11:32] <niemeyer> koolhead11: Hi!
[11:32] <niemeyer> koolhead11: Custom Ubuntu images are generally not encouraged with juju
[11:33] <niemeyer> koolhead11: We use the charms to tweak the image to one's need
[11:34] <fwereade> morning niemeyer!
[11:47] <niemeyer> fwereade: Hey dude
[11:58] <koolhead11> niemeyer: hellos
[11:59] <koolhead11> actually i need to add proxy info in my apt so that zookeeper and other pkgs get downloaded once the instance starts
[12:03] <koolhead11> niemeyer: euca-get-console-output instance-id  is this
[12:03] <koolhead11> http://paste.ubuntu.com/753480/
[12:03] <koolhead11> i hope you agree with me on this ;D
[12:03] <hazmat> koolhead11, pong
[12:04] <koolhead11> hazmat: hola
[12:04] <koolhead11> its my running instance which has not connected to internet and failed to download zookeeper
[12:04] <hazmat> koolhead11, indeed you do need your cloud to have access to the internet
[12:04] <koolhead11> as result of that am having all the issue
[12:05] <hazmat> koolhead11, alternatively you need a proxy/cache setup
[12:05] <koolhead11> http://askubuntu.com/questions/83134/using-juju-on-openstack-returns-ssh-invalid-key-error/83698
[12:05] <koolhead11> hazmat: where will i do proxy/setup
[12:05] <koolhead11> because my nova via which my instance gets network routed needs proxy
[12:06] <koolhead11> so the only thing came to my mind is to add proxy info in the cloud-image i downloaded
[12:06] <hazmat> koolhead11, your question/problem is different than the original thread, its more appropriate to start a separate conversation regarding it
[12:06] <hazmat> at least its not clear to me that they're the same
[12:06] <hazmat> koolhead11, what do you expect juju to be able to do if you have no network access?
[12:07] <hazmat> just curious
[12:07] <koolhead11> hazmat: yes i got the reason for problem i was having.
[12:08] <koolhead11> by any chance juju during bootstrap has any option to define proxy? or i need to hardcode it inside my image and then upload it again to the bucket
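Hardcoding it into the image is straightforward: drop a file such as `/etc/apt/apt.conf.d/95proxy` into the image (the filename and proxy address here are examples) containing:

```
Acquire::http::Proxy "http://10.0.0.1:3128/";
```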
[12:08]  * hazmat checks if cloud-init supports it
[12:09] <hazmat> koolhead11, do you have a machine accessible from the vm/cloud network that has internet access? i guess you can setup an instance that way if you associate a public address to it?
[12:09] <hazmat> g'morning
[12:09] <hazmat> all ;-)
[12:10] <hazmat> fwereade, i was looking over the restart-transitions branch last night, in general it looks good, i had one major concern about it, in that it would automatically effect error transitions (from error states) to started and up without user intervention
[12:11] <koolhead11> hazmat: if you will not kick me then does juju has option like  " juju bootstrap --proxy= " :D
[12:11] <hazmat> which is a significant unintended change in behavior
[12:11] <hazmat> koolhead11, that seems reasonable
[12:11] <hazmat> i mean some option to that effect
[12:11] <fwereade> hazmat, reading code...
[12:11] <koolhead11> hazmat: so is it there, can i try that?
[12:12] <hazmat> koolhead11, its not there yet, i'm investigating how it can be done
[12:12] <koolhead11> hazmat: awesome!!
[12:12] <fwereade> hazmat, yes, you're absolutely right
[12:12] <koolhead11> hazmat: also you were correct, the internal network in openstack gets access to the internet routed via Nova
[12:12] <uksysadmin> cool - would be useful
[12:12] <koolhead11> uksysadmin: +1
[12:13] <fwereade> hazmat, am I right in thinking that all I actually need to fix is to verify sensible states before I attempt the transition?
[12:13] <koolhead11> but in my case even nova is running on proxy :(
[12:13] <hazmat> koolhead11, if that's the case the instances should have internet access for normal apt usage?
[12:13] <hazmat> fwereade, yeah.. i think just checking that 'error' not in current_state suffices
[12:13] <hazmat> fwereade, but its a little tricker than that
[12:14] <hazmat> hmm
[12:14] <hazmat> fwereade, just wondering if the unit is in error, do the unit rel states need to be initialized
[12:14] <fwereade> hazmat, I had wondered whether I could guarantee error state by "error" in current state
[12:14] <koolhead11> hazmat: so you got this trick issue :D
[12:14] <hazmat> probably not, they can be lazily initialized
[12:15] <hazmat> fwereade, yeah.. its a property of all the error states that they have an error in the name, i would add some strong wording to that effect in workflow.py
[12:15] <fwereade> hazmat, cool
[12:16] <hazmat> fwereade, actually just having a func or method in workflow.py is_error_state  should suffice to abstract
[12:16] <fwereade> hazmat, good point
[12:17] <fwereade> hazmat, I suspect that I'm still some way away from getting restart behaviour correct in all cases
[12:17] <fwereade> hazmat, I think I finally have a decent handle on all the workflows and lifecycles, and the hook scheduler
[12:17] <hazmat> fwereade, yeah.. there's several more corner cases
[12:17] <hazmat> fwereade, cool
[12:18] <fwereade> hazmat, but I have yet to translate all that into a coherent plan
[12:18] <hazmat> the scheduler still has transient state
[12:18] <hazmat> i'm still wary of trying to store that in zk, given the connection to zk may be dead
[12:20] <fwereade> hazmat, the scheduler certainly does; there's also some minor complication with the unit workflow, I think, in that it'll need to put the lifecycle into a sensible ._running state when it's recovering into a surprising state (if it's *only* error states, it's not too bad, but I'm not certain of that yet)
[12:21] <_mup_> Bug #897645 was filed: juju should support an apt proxy for private clouds <juju:Confirmed> < https://launchpad.net/bugs/897645 >
[12:21] <fwereade> hazmat, and I agree, I think we want the HS state on disk
[12:22] <fwereade> hazmat, the relation stuff does indeed seem to be ok if we do everything lazily
[12:22] <koolhead11> hazmat: any ubuntu image with cloud-init should work for juju i suppose
[12:24] <hazmat> fwereade, interesting re _running, we do need to ensure that the lifecycle is running, regardless of the transition to started in this case, it will need an additional public accessor/mutator, although i wonder if it needs to support external on/off or just on.
[12:24] <hazmat> needs more thought
[12:24] <hazmat> koolhead11, yes, but we do require fairly recent versions of cloud-init
[12:25] <fwereade> hazmat, as I understand it, the current transitions will start and stop the lifecycle as appropriate
[12:25] <koolhead11> hazmat: that comes with oneiric
[12:25] <fwereade> hazmat, what I'm lacking is certainty about the mapping from states to should-lifecycle-be-running
[12:25] <hazmat> fwereade, they do, but that's based on a notion that start is called on the lifecycle
[12:25] <koolhead11> hazmat: cloud-init             0.5.10-0ubuntu1.5 will do?
[12:26] <hazmat> fwereade, yeah.. it needs more thought, my thought in this case, is that regardless of state we'd want lifecycle running
[12:26] <hazmat> fwereade, maybe not..
[12:27] <fwereade> hazmat: hm, a config error, for example, explicitly stops lifecycle
[12:27] <hazmat> fwereade, thinking about error states like install_error, or start_error
[12:27] <fwereade> exactly
[12:27] <fwereade> hazmat, if it's *just* errors that do that, and we can guarantee that forever, that's easy, but I'm fretting about it
[12:28] <hazmat> fwereade, yeah.. its fine without manipulating _running, effectively the guarantee is that its only running after started, and error states will stop that
[12:28] <hazmat> if we come back in an error state, we can still recover... the unit rels are only active if started
[12:29] <hazmat> and we don't need to manipulate the lifecycle internals
[12:29] <hazmat> koolhead11, and that corresponds to what?
[12:29] <koolhead11> hazmat: is it the latest cloud-init pkg which juju needs :D
[12:30] <koolhead11> because am making my own image adding the proxy fix in it and cloud-init has to be installed too
[12:30] <fwereade> hazmat, then perhaps it's even simpler -- if we're in an error state do *nothing*, if we're already "started" just lifecycle.start(False), otherwise do the usual transitions into started?
[12:31] <koolhead11> i cannot log in to the machine which i booted and uses image http://uec-images.ubuntu.com/
[12:32] <hazmat> fwereade, yeah... that sounds about right, just wondering if that can be simplified
[12:38] <koolhead11> hazmat: 0.5.10-0ubuntu1.5  version of cloud-init get installed, you think its sufficient to run juju ?
[12:38] <hazmat> koolhead11, that corresponds to what distro release version?
[12:39] <hazmat> koolhead11, specifically for cloud-init there were fixes for the oneiric release regarding openstack
[12:40] <koolhead11> hazmat: so it will work :D
[12:40] <hazmat> koolhead11, the version in oneiric is 0.6.1-0ubuntu22
[12:41] <koolhead11> hmm. so i will get the latest oneiric one installed.
[12:41] <hazmat> koolhead11, which implies to me that no.. 0.5.10 won't work for openstack, there was specifically a problem around installing the ssh key in older versions
[12:41] <koolhead11> thanks. let me test this
[12:41] <koolhead11> hazmat: no my bad, i was on my natty system :)
[12:42] <koolhead11> and got that version
[12:42] <hazmat> koolhead11, no worries
[12:42] <hazmat> koolhead11, uksysadmin Bug #897645 is to track the apt-mirror/proxy support
[12:42] <_mup_> Bug #897645: juju should support an apt proxy for private clouds <juju:Confirmed> < https://launchpad.net/bugs/897645 >
[12:43] <uksysadmin> awesome, ta hazmat
[12:54] <niemeyer> mpl: ping
[13:00] <niemeyer> rog: ping
[13:00] <rog> niemeyer: hiya
[13:00] <niemeyer> rog: Yo
[13:00] <rog> niemeyer: how're them reviews coming on? :-) :-)
[13:00] <niemeyer> rog: Just looking at mpl's review, and I noticed you've replied via email, which interestingly went to the merge proposal and not to codereview
[13:01] <rog> niemeyer: interesting. i don't *think* i got an email from codereview about it.
[13:02] <rog> niemeyer: nope, i didn't. is this a flaw in the cunning plan?
[13:02] <niemeyer> rog: Not sure.. just thinking
[13:02] <niemeyer> rog: and yes, you certainly have reviews coming your way
[13:02] <rog> niemeyer: whee! i've been stacking up the merge requests, i'm afraid
[13:02] <rog> niemeyer: i hope i haven't gone way off track
[13:03] <niemeyer> rog: For now, let's start the review online in the codereview page.. this will ensure you get added to the reviewers list
[13:03] <rog> niemeyer: you mean, reply on that page rather than reply to the email?
[13:03] <niemeyer> rog: That's alright.. we'll sort it out.. please don't pile up things too much in the same branch, though
[13:03] <niemeyer> rog: That first ec2 branch took way too long, and I'm actually wondering if we'll have to break it down
[13:03] <niemeyer> rog: But let me go over it in a first pass before anything
[13:04] <rog> niemeyer: yeah, it was bigger than i wanted, but i couldn't see a way to break it down into nice easy steps
[13:04] <niemeyer> rog: That's right.. reply in the codereview page first
[13:04] <niemeyer> rog: That will add you to the reviewers list
[13:04] <rog> niemeyer: since that branch, i'm trying hard to keep the branches small and to the point
[13:04] <niemeyer> rog: That's awesome, thank you
[13:06] <mpl> niemeyer: pong
[13:06] <niemeyer> mpl: Hey there!
[13:06] <mpl> hi
[13:06] <niemeyer> mpl: Just going over your change and we're mostly ready for merging
[13:06] <niemeyer> mpl: One question: have you signed the contribution agreement from Canonical before, for any project?
[13:07] <niemeyer> mpl: I don't recall if I actually asked that before
[13:07] <hazmat> niemeyer, can lbox propose send an existing branch to codereview site ?
[13:07] <mpl> niemeyer: nope, I don't thing I did.
[13:07] <niemeyer> hazmat: Yep.. just propose it again with -cr
[13:07] <mpl> *think
[13:07] <hazmat> niemeyer, awesome
[13:07] <niemeyer> hazmat: It will find the existing merge proposal, and will do the delta
[13:07] <hazmat> niemeyer, does that work even if the merge proposal belongs to someone else?
[13:07] <niemeyer> hazmat: Hmm
[13:07] <mpl> niemeyer: same kind of procedure as with Google I suppose?
[13:07] <niemeyer> hazmat: Maaaaybe
[13:08] <hazmat> ie. does it actually resubmit it,or can it just do the diff and add the comment
[13:08] <niemeyer> hazmat: As long as you have editing permissions, I *think* it may work
[13:08] <hazmat> niemeyer, cool, i'll have to experiment
[13:08] <niemeyer> hazmat: Oh, actually, probably not
[13:08] <niemeyer> hazmat: Because it will attempt to push the branch
[13:08] <niemeyer> hazmat: But try anyway
[13:08] <hazmat> ah
[13:09] <niemeyer> mpl: Yeah, mostly
[13:09] <niemeyer> mpl: Except it's a single agreement for all Canonical projects
[13:09] <niemeyer> mpl: So you won't have to be doing this again for juju, etc
[13:09] <niemeyer> mpl: The details are here:
[13:09] <niemeyer> mpl: http://www.canonical.com/contributors
[13:09] <niemeyer> mpl: It's non-draconian, and quite straightforward
[13:09] <mpl> niemeyer: kthx, I'm on it right away then.
[13:12] <niemeyer> mpl: No problem
[13:12] <niemeyer> mpl: and thank you
[13:13] <mpl> sure, thank you for putting me on track.
[13:13] <niemeyer> mpl: I've just reviewed your change, and there's one minor nit to address..
[13:13] <niemeyer> mpl: Once you do it, just run "lbox propose -cr" again, and I'll merge it
[13:16] <mpl> I'm on a different machine now, I'll just have to setup things again.
[13:16] <mpl> CA submitted btw.
[13:16] <niemeyer> mpl: Beautiful, thanks!
[13:16] <niemeyer> Ok, I suggest going through the same procedure you did originally
[13:16] <mpl> one question regarding the submit procedure (open to anyone here):
[13:17] <niemeyer> mpl: and then run "bzr pull" once you're inside of the branch
[13:17] <mpl> when I lbox proposed, it took me to the browser page where I have to agree to let launchpad access things on my behalf
[13:17] <niemeyer> mpl: That's right
[13:17] <niemeyer> mpl: Well, almost right anyway
[13:17] <mpl> I selected the most permissive option
[13:18] <niemeyer> mpl: It's actually lbox asking Launchpad so lbox can do things on your behalf
[13:18] <niemeyer> mpl: Yeah, I think that's necessary in this case, since it's actually writing content on your behalf
[13:18] <mpl> yes. what's the minimum working level I was supposed to agree on there?
[13:18] <mpl> ok
[13:19] <niemeyer> mpl: Btw, if that's a machine you're not comfortable leaving your credentials around, make sure you kill "~/.lpad*" when you stop working there
[13:19] <mpl> ah, good to know, thx.
[13:19] <niemeyer> mpl: It's not your real credentials, but rather just a token
[13:19] <niemeyer> mpl: Even then, someone can impersonate you with it
[13:20] <mpl> ok. no worries though. both are my laptops and somehow secure.
[13:20] <mpl> *somewhat
[13:22] <mpl> niemeyer: btw, I'm new to bzr (ok with hg and pretty comfy with git) so I might make mistakes with it at first. I've read the minimum docs so far.
[13:22] <niemeyer> mpl: Not a problem by any means
[13:28] <mpl> niemeyer: so you're saying I should pull from the original branch. not simply 'bzr branch lp:~mathieu-lonjaret/gozk/zookeeper' ?
[13:29] <niemeyer> mpl: Yeah, I suggest you follow the original process, and once you have the local environment setup, get inside your feature branch and pull from your changes at that URL
[13:29] <niemeyer> mpl: The reason being that this way you'll preserve the parent link with the local trunk
[13:29] <niemeyer> mpl: You can also do it in other ways, but that one is simple to follow and explain :)
[13:30] <mpl> ah, I think I see, thx.
[13:34] <mpl> niemeyer: for further changes, I suppose it's cleaner if I append them to the current commit (#22), rather than creating new commits, right? I mean, if there's such a thing with bzr...
[13:34] <niemeyer> mpl: No, we generally don't encourage reorganization of history
[13:35] <niemeyer> mpl: Even more considering it has been published
[13:35] <niemeyer> mpl: This would be discouraged even with git
[13:35] <mpl> well published, but not yet merged.
[13:35] <niemeyer> mpl: Yeah, but it's published.. the review was made on a given revision, which we should be able to look at again if necessary
[13:36] <niemeyer> rog: Is Jacek the guy you told me about at UDS?
[13:36] <rog> niemeyer: no
[13:36] <niemeyer> rog: Ok, so he mailed us independently.. cool
[13:36] <mpl> ok, I just got used to doing it that way with brad for camlistore. but we're using gerrit, so nothing's "published" until it's merged yeah.
[13:37] <niemeyer> mpl: Ok, it feels like a bad idea even with gerrit
[13:37] <niemeyer> mpl: FWIW
[13:38] <niemeyer> mpl: Otherwise you have a review published, for which you don't have the codebase anymore
[13:38] <mpl> true. the only trace of the patchsets are in gerrit.
[13:39] <mpl> otoh it encourages committing often, with small changes in the same review, without fear of filling the log with ridiculous commits.
[13:40] <mpl> but anyway, not suggesting anything, was just asking. :)
[13:45] <niemeyer> mpl: Yeah, there are two religious camps.. we're on the side that doesn't really care about the history of feature branches
[13:45] <niemeyer> mpl: The history of trunk we care about, though
[13:45] <niemeyer> mpl: So we're generally more careful on the merge messages
[13:46] <mpl> uh, that's odd. the tests all passed on that machine and yet I have no recollection of installing/trying more zookeeper things on here.
[13:58] <mpl> niemeyer: so I've branched from lp:gozk/zk, made a feature branch, cded into it, pulled from lp:~mathieu-lonjaret/gozk/zookeeper, made my changes, made a new commit. now if I lbox propose -cr, it will know that we're still in the same CL, and not create a new one, right?
[13:58] <mpl> or am I forgetting something?
[14:02] <smoser> hazmat, i'm interested in what you think of my comments to bug 897645
[14:02] <_mup_> Bug #897645: juju should support an apt proxy for private clouds <cloud-init:New> <juju:Confirmed> < https://launchpad.net/bugs/897645 >
[14:05] <niemeyer> mpl: That's right
[14:05] <niemeyer> mpl: Just propose -cr again
[14:05] <niemeyer> mpl: Then, visit the codereview page, and mention it's ready
[14:06] <niemeyer> mpl: In the future, I'll change lbox propose so that it adds a note by itself
[14:12] <mpl> hmm I must have messed up somewhere:
[14:12] <mpl> bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/~mathieu-lonjaret/gozk/zookeeper/.bzr/branch/lock): Transport operation not possible: http does not support mkdir()
[14:14] <rog> mpl: did you kill a bzr process?
[14:15] <rog> mpl: actually, no that gives a different error
[14:17] <mpl> no, I don't think I did.
[14:21] <niemeyer> mpl: Ah, I see what's going on
[14:21] <niemeyer> mpl: try this:
[14:21] <niemeyer> mpl: within the branch
[14:21] <niemeyer> bzr push --remember lp:~mathieu-lonjaret/gozk/zookeeper
[14:21] <niemeyer> mpl: Then try to propose again
[14:21] <rog> niemeyer: maybe #juju is better for this conversation
[14:21] <niemeyer> rog: Yeah, both are cool
[14:21] <niemeyer> rog: Oh, no, I guess you're right
[14:22] <niemeyer> rog: We're moving onto juju territory
[14:22] <rog> niemeyer: that's what i thought
[14:22] <niemeyer> rog: +1
[14:22] <rog> niemeyer: i'm not sure of a good way to test the juju cloudinit code though
[14:22] <rog> niemeyer: it's a public function in the juju package, so it should have a test, but i'm not sure what level would be best.
[14:23] <rog> niemeyer: maybe just compare the output, as discussed
[14:23] <hazmat> smoser, replied on the bug
[14:23] <rog> niemeyer: at least that'll guard against regression
[14:23] <hazmat> stepping out to run some errands, back in 40m
[14:24] <niemeyer> rog: yeah, that sounds sane
[14:24] <niemeyer> hazmat: Cheers
[14:24] <mpl> niemeyer: whoa I hit a go runtime panic
[14:25] <niemeyer> mpl: Oh, I wanna know about that one
[14:25] <mpl> (while proposing). the command you gave me worked to deblock the situation though.
[14:25] <mpl> k, sending you the stack by e-mail then.
[14:25] <niemeyer> mpl: Can you please paste the output somewhere?
[14:25] <mpl> ok, a paste then.
[14:26] <smoser> hazmat, i think juju should expose cloud-init to end users.
[14:27] <mpl> niemeyer: http://paste.ubuntu.com/753711/
[14:27] <smoser> not by default, but at least allow the user to specify "start all my instances with these parts"
[14:27] <smoser> as that would generically solve local customization problems that you will undoubtedly run into.
[14:29] <niemeyer> mpl: Oh.. hmm
[14:29] <niemeyer> mpl: Damn.. I think that's the bug I fixed in Go's http package itself
[14:29] <mpl> niemeyer: I have some RL work to finish, so I might get a little laggy but I'll catch up
[14:29] <niemeyer> mpl: I'll have to recompile lbox with the latest tip
[14:30] <mpl> ok
[14:30] <niemeyer> mpl: Let me do that
[14:50] <hazmat> smoser, hmm
[14:51] <hazmat> niemeyer, what do you think about cloud-init for end users exposed directly via juju?
[14:51] <niemeyer> hazmat: Sounds bad is my first instinct
[14:52] <hazmat> smoser, it does make sense, its already a generic high level language for customization, its just whether or not it promotes different practices (machine think vs container think) than what we're aiming for
[14:52] <niemeyer> hazmat: It's an implementation detail.. we already have providers where cloud-init doesn't exist
[14:53] <hazmat> niemeyer, just one.. and in that one cloud-init doesn't necessarily make sense, and it's something we could add.. but that's a different context; cloud-init there would be in a container, not a machine
[14:54] <smoser> what is the provider where you have no cloud-init ?
[14:54] <hazmat> smoser, local provider
[14:54] <smoser> lxc runs as a container, no ?
[14:54] <hazmat> smoser, yup
[14:54] <smoser> you really *should* have cloud-init there.
[14:54] <smoser> imo
[14:55] <hazmat> smoser, we debated it; its absence there is mostly an optimization, it cuts down the unit creation time via lxc clone considerably
[14:55] <niemeyer> hazmat: That's not entirely true
[14:55] <smoser> lack of cloud-init does not cut down on unit creation time.
[14:55] <smoser> a pre-customized lxc container does.
[14:55] <niemeyer> smoser: The real reason we don't use cloud-init there is because LXC containers run units, rather than mimicking EC2 machines
[14:55] <hazmat> bcsaller, ping
[14:56] <niemeyer> smoser: So we don't really want to perform the customizations cloud-init is about
[14:56] <hazmat> smoser, true, and we do use a pre-customized container, bcsaller advocated effectively for not using cloud-init there, and has some more details
[14:56] <niemeyer> smoser: cloud-init is a way to put the environment in a given state for juju to run
[14:56] <smoser> and to do other things.
[14:56] <niemeyer> smoser: It doesn't make sense to run cloud-init on your local laptop, for instance
[14:56] <smoser> that you could expose to a user
[14:57] <smoser> so you wouldn't have to modify juju to know things that the user might have to change.
[14:57] <smoser> it doesn't *not* make sense to run cloud-init there
[14:57] <smoser> if cloud-init has no data, it does nothing.
[14:57] <niemeyer> smoser: In many cases we want to know it, because we want juju to be in the loop
[14:57] <smoser> (ignoring the stupid ec2 timeout thing)
[14:57] <mpl> niemeyer: it looks like lbox propose panicked when doing the push to rietveld. as a result, the new changes have been pushed to launchpad but not to rietveld.
[14:58] <smoser> i'm suggesting that you do not want juju in the loop for all possible local customizations
[14:58] <smoser> apt-proxy is an example.
[14:58] <smoser> but you will undoubtedly run into other cases where the user needs to run some dynamic change to the image
[14:58] <niemeyer> smoser: E.g. if there's a proxy, the user should be telling juju about that fact, and we'll be accommodating not only the first machine creation, but we should also be *changing* the proxy in all existing machines if the user decides to change it
[14:58] <niemeyer> smoser: Same thing about ssh keys
[14:58] <niemeyer> smoser: cloud-init is an implementation detail for bootstrapping a machine
[14:58] <niemeyer> smoser: juju stays around forever
[14:59] <smoser> thats fine. i dont disagree.
[14:59] <smoser> but i think you need to expose *some* way that a user can modify the image on first boot possibly before juju takes over.
[14:59] <smoser> and ideally without modifying a golden image prior to boot.
[14:59] <niemeyer> mpl: That's fine, you can propose again without damages
[14:59] <niemeyer> mpl: and lbox was just recompiled
[15:00] <niemeyer> mpl: can you please apt-get update; apt-get upgrade;?
[15:00] <niemeyer> mpl: to get the new lbox
[15:00] <niemeyer> mpl: and try again?
[15:00] <niemeyer> smoser: charms is the way to modify the image
[15:00] <smoser> and i think that cloud-init is the thing that makes most sense there (yes, that's probably because it's "my baby") but i think you need something.
[15:00] <niemeyer> smoser: in other cases, juju itself should be in the loop
[15:00] <smoser> charms cannot modify the guest before juju runs
[15:00] <niemeyer> smoser: We already manage ssh keys, for instance
[15:00] <niemeyer> smoser: Proxy seems to fit in the same category
[15:01] <smoser> i think it is unreasonable to believe that juju will be all-knowing about image customization.
[15:01] <smoser> and that you should enable a generic method.
[15:02] <smoser> even if it is only "i will run this for you in the image before juju runs"
[15:02] <smoser> (which would not be cloud-init specific)
[15:02] <smoser> but would allow the user to make modifications that juju doesn't have to know about. possibly they would only need that until juju grew appropriate legs.
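The generic "run this for you in the image before juju runs" hook smoser argues for could be sketched as a cloud-init user-data part. A minimal sketch, assuming cloud-init's `apt_proxy` and `runcmd` cloud-config keys; the proxy URL and marker file are illustrative, not from the log:

```shell
# Hypothetical user-data part for pre-juju customization: set an apt
# proxy and run an arbitrary command at first boot, before juju takes
# over. Values here are examples only.
cat > user-data <<'EOF'
#cloud-config
apt_proxy: http://squid.internal:3128
runcmd:
 - [sh, -c, "touch /tmp/pre-juju-customization-ran"]
EOF
```

A provider that supports cloud-init would pass this file (or merge it with juju's own parts) as instance user data.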
[15:03] <niemeyer> smoser: Maybe.. I'm certainly interested in learning about those cases.
[15:03] <smoser> apt-proxy is the first.
[15:03] <smoser> apt-mirror is the second
[15:03] <niemeyer> smoser: Proxy is the first one we find out of the ordinary that should be supported.
[15:03] <niemeyer> smoser: and in that case, it *should* be built-in
[15:03] <niemeyer> smoser: E.g. we want to set up the proxy within LXC units in a local deployment
[15:03] <niemeyer> smoser: cloud-init doesn't run there
[15:04] <niemeyer> smoser: Same thing about apt-mirror
[15:04] <smoser> (there is no reason that cloud-init does not run there)
[15:04] <smoser> (but even if it wasn't cloud-init, juju could still make that promise)
[15:05] <smoser> if you want a list of other things that you may need to configure try 'man apt.conf'
[15:05] <smoser> or 'man resolv.conf'
[15:05] <niemeyer> smoser: co-located charms can fix those problems
[15:05] <niemeyer> smoser: Some of them, anyway
[15:05] <niemeyer> smoser: But I digress..
[15:06] <niemeyer> smoser: I don't disagree that some generic customization mechanism may be needed.
[15:06] <niemeyer> smoser: I'd tend to make it more generic, though.. such as a script
[15:06] <smoser> cloud-init "parts" can be a script.
[15:06] <smoser> its a generic mechanism.
[15:06] <niemeyer> smoser: but all of that is irrelevant for the current debate.  Proxy should be a first class option.
[15:06] <smoser> i'm curious as to why cloud-init does not exist in your lxc containers
[15:07] <smoser> i really dont think that proxy should be a first class citizen to a service orchestration solution.
[15:07] <smoser> i would think that is way below the level of something that juju should care about, but i'm willing to accept your opinion there.
[15:07] <niemeyer> smoser: juju bootstrap --http-proxy=<url>
[15:07] <niemeyer> smoser: Should be as simple as that.
[15:08] <niemeyer> smoser: No excuses for being harder.
[15:08] <smoser> its a somewhat arbitrary complication of your command line interface to me.
[15:08] <smoser> but, i'm not arguing that.
[15:08] <niemeyer> smoser: Ask koolhead11's opinion.. he was the user faced with the problem
[15:09] <niemeyer> Anyway, I have to head to lunch right now or I'll lose my wife.. biab
[15:12] <mpl> niemeyer: something odd happened. I got a 403 when it tried pushing to rietveld. I retried; it did _not_ ask for my google credentials again, and this time it worked.
[15:13] <rog> niemeyer: i've made some changes in response to your comments on ec2-error-fixes, and pushed them (with lbox propose) but i don't see the changes reflected on the codereview site
[15:13] <rog> niemeyer: i'm probably doing something wrong!
[15:17] <koolhead11> So juju wants a user Ubuntu to exist in my image/instance?
[15:18] <koolhead11> is there a way i can change it from configuration file
[15:20] <rog> niemeyer: aaargh, i forgot the -cr flag!
[15:22] <marcoceppi> m_3 SpamapS I don't think phpMyAdmin has a "read-only" mode. I think it'd probably need to create a MySQL database user with just select access which makes maintaining configuration of it kind of difficult
[15:25] <m_3> marcoceppi: cool... np.  was just thinking read-only mode would be cool
[15:26] <m_3> might be worth filing as a feature request on the charm once it's in lp:charm/phpmyadmin so we don't forget... somebody might pick it up over time
[15:26] <marcoceppi> Good idea. I'm going to try to wrestle with the package and see where that goes
[15:30] <niemeyer> rog: I've been thinking about that stuff.. I'll probably introduce support for a ".lbox" file within the branch, and take default options from there
[15:30] <niemeyer> rog: So that "lbox propose" will do the intended thing for the specific project
[15:30] <rog> niemeyer: i think that's a good idea
[15:30] <niemeyer> mpl: Superb, I'll check it again
[15:30] <rog> niemeyer: i was thinking of suggesting something like that
[15:30] <rog> niemeyer: ec2-error-fixes should be ready to rock BTW
[15:30] <niemeyer> rog: Looking at it right now
[15:34] <niemeyer> rog: Reviewed
[15:36] <niemeyer> mpl: Good stuff
[15:36] <niemeyer> mpl: Will get the zookeeper change submitted
[15:37] <niemeyer> mpl: I'll be back in 5 mins
[15:37] <niemeyer> mpl: Please let me know when you have a moment for us to talk about the next task
[15:37] <rog> niemeyer: i don't think IsNil works for checking if the length of an array is zero
[15:37] <niemeyer> rog: It works in this case, because the length is zero because it's actually a []string(nil)
[15:38] <rog> niemeyer: hmm, i'm sure i was bitten by this - DeepEqual used to return true and now it doesn't
[15:38] <rog> niemeyer: i'll check
[15:39] <rog> niemeyer: ah, i see, i did the naive translation
[15:40] <rog> niemeyer: why wasn't it IsNil before?
[15:42] <niemeyer> rog: I was misguided to think it was an empty slice due to the poor printing we had just a couple of weeks ago
[15:43] <niemeyer> rog: Thankfully, fmt was changed and it now shows the fact it's nil
[15:44] <mpl> niemeyer: cool, thx. if it's not too long, pretty much anytime will be ok to discuss.
[15:44] <niemeyer> mpl: Ok, let's go then
[15:44] <rog> niemeyer: ah, cool. done and pushed.
[15:44] <niemeyer> mpl: We don't have any kind of mechanism for logging in the juju packages right now
[15:44] <niemeyer> rog: Awesome, please feel free to submit
[15:45] <niemeyer> mpl: It'd be cool to have something _very_ simple going..
[15:45] <mpl> niemeyer: like http basic auth, with login and pass?
[15:45] <niemeyer> mpl: Oh, no, I mean just logging, not "login" :)
[15:45] <mpl> oh sorry.
[15:45] <mpl> ok.
[15:47] <niemeyer> mpl: I'm thinking about something like this:
[15:47] <niemeyer> mpl: Imagine an interface in the juju package like so:
[15:48] <niemeyer> mpl: func SetLogger(l Logger); func SetDebug(enabled bool); func Logf(format string, args ...interface{}); func Debugf(...)
[15:49]  * marcoceppi reinvents the wheel
[15:49] <niemeyer> marcoceppi: As long as it's rounder, that's fine ;-)
[15:49] <niemeyer> mpl: So one can call juju.Debugf("doing something: %v", whatever)
[15:50] <niemeyer> mpl: etc
[15:50] <niemeyer> mpl: Same thing about Logf
[15:51] <mpl> what's the diff between LogF and Debugf? Debugf is only used when debug is enabled?
[15:51] <niemeyer> mpl: Precisely
[15:52] <niemeyer> mpl: Well, it's only _effective_ when ...
[15:52] <rog> niemeyer: bzr: ERROR: Cannot lock LockDir(lp-85338896:///%2Bbranch/goamz/ec2/.bzr/branchlock): Transport operation not possible: readonly transport
[15:52] <niemeyer> mpl: I suggest pulling from lp:goetveld
[15:52] <niemeyer> rog: You've been there before, I think :)
[15:52] <rog> niemeyer: i tried rebranching
[15:53] <niemeyer> rog: Yeah, it's the same issue.. we have to rename
[15:53] <rog> niemeyer: perhaps you haven't updated the alias for ec2?
[15:53] <rog> ah yes
[15:53] <niemeyer> rog: Please push to the same URL, but under ~gophers
[15:53] <mpl> niemeyer: ok, got it, thx. anything else?
[15:53] <niemeyer> rog: I'll switch the official URL
[15:53] <niemeyer> mpl: Yeah, I suggest having a look at lp:goetveld
[15:53] <niemeyer> mpl: Check the log.go file
[15:54] <rog> niemeyer: pushed
[15:54] <niemeyer> mpl: There's a lot you won't care about, but there are some things you can mimic
[15:54] <niemeyer> rog: Thanks, switching now
[15:55] <niemeyer> rog: Done
[16:48] <niemeyer> rog: exp/ssh is coming along pretty well
[16:48] <niemeyer> rog: I'm hoping we'll be able to use it by the time we get there
[16:48] <rog> niemeyer: yeah, i'm really happy to see that
[16:48] <rog> niemeyer: 'cos we need it!
[16:48] <rog> niemeyer: and i wasn't looking forward to writing it...
[16:49] <niemeyer> rog: Well, we'd definitely not write an *ssh* package in a juju context :-)
[16:49] <niemeyer> rog: We'd do the same we do in the current impl.. just wrap ssh
[16:49] <niemeyer> rog: and that may actually turn out to be a better idea in the long run, anyway
[16:49] <rog> niemeyer: ah, i didn't realise that's what it did
[16:49] <niemeyer> rog: But.. I'm hopeful at least
[16:50] <niemeyer> rog: Yeah, it's very straightforward actually
[16:50] <niemeyer> rog: Well, hmmm.. ok.. not _that_ straightforward if we take into account the error handling and retry logic
[16:51] <rog> niemeyer: does it just pipe into ssh? or use a local proxy port?
[16:51] <niemeyer> rog: It manages the process
[16:51] <niemeyer> rog: Ok.. what's the next MP in the pipeline?
[16:52] <niemeyer> rog: I've been looking at the tip, but it looks like it already includes the ec2 stuff
[16:52] <rog> niemeyer: you mean, what am i working on next, or what should you review next?
[16:52] <niemeyer> rog: What's the tip of the iceberg? :-)
[16:52] <niemeyer> rog: The latter
[16:52] <rog> niemeyer: probably ec2-ec2test
[16:52] <niemeyer> rog: Cool
[16:52] <niemeyer> rog: Btw, can you please give me a hand merging https://codereview.appspot.com/5445048/?
[16:53] <niemeyer> rog: It'd be good to run tests before merging.. I think mpl didn't have the env setup for that
[16:53] <rog> niemeyer: ok. what about things that depend on it?
[16:53] <niemeyer> rog: They'll have to be fixed in due time as well
[16:53] <niemeyer> rog: But there's no way to break that circle besides doing this
[16:54] <rog> niemeyer: indeed - i just wondered if we should push a load of stuff together
[16:55] <niemeyer> rog: Pushing loads of stuff together never sounds great ;-)
[16:55] <rog> niemeyer: ok, seems good to me
[16:55] <niemeyer> rog: What depends on it right now, either way?
[16:55] <rog> lol
[16:55] <rog> maybe... nothing
[16:56] <niemeyer> 8)
[16:56] <niemeyer> rog: Btw, there's an update for lbox
[16:57] <rog> niemeyer: cool. what new goodies in the box today?
[16:58] <niemeyer> rog: Trivial changes. Besides recompiling with the Go http fix, it allows leaving the description empty, which is something I've noticed you missing
[16:59] <rog> niemeyer: yeah - sometimes the single line summary is enough
[16:59] <niemeyer> rog: Agreed
[16:59] <rog> niemeyer: thanks
[17:00] <rog> niemeyer: hmm, should i compile zookeeper against weekly or release?
[17:00] <rog> niemeyer: i think i should probs stick to release and then tag a weekly version too
[17:01] <niemeyer> rog: I vote for weekly/tip for the moment
[17:01] <niemeyer> rog: Since that's what both you and me are using
[17:01] <niemeyer> rog: That said,
[17:01] <niemeyer> rog: let's not announce the new interface right now
[17:02] <niemeyer> rog: So we can move that onto lp:gozk by the time the next stable is out
[17:02] <niemeyer> rog: So everybody is happy
[17:02] <rog> niemeyer: ah, of course this still isn't public
[17:02] <rog> niemeyer: cool.
[17:02] <niemeyer> rog: Or rather, make it well known that the pkg name is now gozk/zookeeper
[17:02] <rog> niemeyer: the official path is currently launchpad.net/gozk, right?
[17:03] <niemeyer> rog: It is
[17:03] <rog> niemeyer: the only problem is that i won't be able to straight-merge mathieu's patch, 'cos i'll have to do error-fixes first
[17:03] <rog> niemeyer: but i can do that
[17:05] <niemeyer> rog: Hmm..
[17:06] <SpamapS> https://launchpadlibrarian.net/86149655/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr418-1juju1~precise1_FAILEDTOBUILD.txt.gz
[17:06] <niemeyer> rog: Wasn't that fixed?
[17:06] <SpamapS> juju hasn't been able to run its test suite in a clean chroot on precise for a while.. :-P
[17:06] <niemeyer> rog: Or is it still in the queue?
[17:06] <rog> niemeyer: i don't think i did an error-fixes on zk
[17:06] <rog> or...
[17:07] <niemeyer> rog: If you want to quickly push that change, I'm happy to do a fast review on it
[17:07] <rog> niemeyer: error-fixes before rename?
[17:07] <rog> rather than combining?
[17:07] <rog> currently i'm combining, but i can do two merges
[17:07] <SpamapS> hazmat: think you can take a look at that build failure? we haven't had a good build on precise since r413
[17:08] <SpamapS> hazmat: something with argparse and stale .pyc's
[17:08] <hazmat> SpamapS, sure just wrapping up an email and i'll take a look
[17:08] <rog> niemeyer: i think i'll do that. mathieu's patch should merge fine.
[17:08] <niemeyer> rog: Yeah
[17:09] <niemeyer> rog: I'm happy with either
[17:09] <rog> i'll keep 'em separate.
[17:11] <rog> niemeyer: that way mathieu gets his name on the merge
[17:13] <niemeyer> rog: Good one
[17:15] <rog> niemeyer: one small request BTW, when it gets stable, could we make lbox a little less verbose please?
[17:16] <niemeyer> rog: +1.. I also had that in mind
[17:16] <rog> niemeyer: one or two lines of output would be nice... (and a -verbose flag for debugging)
[17:19] <hazmat> SpamapS, not sure what thats about, i'll try running the tests  on a precise image
[17:20] <niemeyer> rog: Agreed
[17:20] <niemeyer> rog: We may need more lines, due to the URLs, but not much else
[17:20] <rog> niemeyer: cool
[17:26] <rog> niemeyer: it'd be good if lbox could use the parent branch for the target URL if the push branch of the target isn't there.
[17:26] <rog> (maybe it does now)
[17:29] <rog> niemeyer: https://codereview.appspot.com/5440056/
[17:29]  * rog loves gofix
[17:30] <niemeyer> rog: It does already
[17:30] <rog> niemeyer: cool
[17:30] <rog> niemeyer: i'll update my lbox now
[17:31] <rog> "lbox is already the newest version."
[17:31] <rog> ah, apt-cache!
[17:32] <niemeyer> rog: Reviewed
[17:32] <niemeyer> rog: I love codereview :)
[17:32] <hazmat> rog +1
[17:32] <niemeyer> rog: apt-get update as well, maybe
[17:33] <rog> niemeyer: that's what i was intending to mean, if i'd remembered the magic commands properly
[17:37] <rog> niemeyer: another minor thing: it'd be nice if lbox didn't add a new diff to a codereview file if nothing has changed in that file.
[17:41] <mpl> rog, niemeyer: catching up: I don't care about my name on the merge btw. what only matters to me is to learn and that the other guys doing the work (ie you), know what I do too. :)
[17:41] <rog> niemeyer: pushed
[17:41] <rog> mpl: yeah, but it's nice as a historical record :-)
[17:45] <rog> niemeyer: how do i create lp:gozk/zookeeper?
[17:46] <niemeyer> rog: We just have to rename the series.. let me do that
[17:47] <rog> niemeyer: ah, ok. i just pushed to ~gophers/gozk/zookeeper
[17:47] <niemeyer> mpl: Thanks, but it's just honest that we have your name in the merge proposal
[17:48] <niemeyer> rog: Beautiful, if I haven't screwed up anything, it should all be working
[17:49] <mpl> niemeyer: sure. as long as it's not a burden on the workflow.
[17:49] <rog> niemeyer: all seems good. goinstall, cd $GOROOT/...; gotest, all clean
[17:52] <niemeyer> Woohay
[18:05] <mpl> niemeyer: I grepped for zk and zookeeper in lp:juju/go and found nothing relevant. I suppose I shall leave that one alone then?
[18:11] <m_3> jcastro: yo... about to talk to upstream voltdb devs about charming... you wanna include anything?
[18:12] <rog> i'm off now, see y'all tomorrow.
[18:12] <m_3> rog: 0/
[18:18] <mpl> bai
[18:22] <rog> niemeyer: i said ec2-ec2test was next, but actually go-juju-initial-ec2 is before that, and is more central (there are some comments there from fwereade that need thinking about too)
[18:33] <niemeyer> rog: No worries.. will look at that one
[18:39] <marcoceppi> So, apparently you can't pass assoc arrays to bash functions. My whole morning is shot
[18:45] <niemeyer> Folks, I have a doctor appointment, so leaving a bit early today.
[18:45] <niemeyer> Wish me luck. ;)
[18:48] <m_3> niemeyer: luck!
[18:48] <SpamapS> marcoceppi: wha?
[18:48] <jcastro> m_3: not really, invite them along to charm school if they want
[18:49] <SpamapS> q/win 45
[18:49] <SpamapS> doh
[18:49] <marcoceppi> SpamapS: can't pass associative arrays to a bash function
[18:50] <SpamapS> marcoceppi: the moment you use arrays in bash.. you should start thinking about the rewrite in python/ruby/perl ;)
[18:51] <marcoceppi> SpamapS: heh, yeah. I tried moving what I was doing with templating into a bash function. If only I could pass assoc arrays it would be perfect.
[18:56] <SpamapS> marcoceppi: you may be overthinking it
[18:56] <marcoceppi> This is what I have now, if you want to take a look
[18:56] <m_3> marcoceppi: unfortunately the only associative array available is the environment itself... :(  I guess that gets "passed" everywhere tho
[18:57] <SpamapS> yeah don't be mucking with global
[18:57]  * m_3 sigh...
[18:57] <SpamapS> no sigh.. this is why we have better languages. :)
[18:57] <m_3> true
[18:57] <SpamapS> Oh and you can, btw, pass in a bunch of local variables to be set.. it's evil.. but eval lets you do basically anything. ;)
[18:57] <m_3> it's frustrating sometimes... because it's _almost_ a real lang :)
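The trick SpamapS alludes to can be sketched as below. The function and variable names are made up for illustration; note that bash's `local` builtin accepts `name=value` arguments directly, which sidesteps a bare `eval`:

```shell
# Sketch of passing name=value pairs into a bash function and binding
# them as locals. with_vars, host, and port are illustrative names.
with_vars() {
    local pair
    for pair in "$@"; do
        # 'local name=value' declares the variable for this function;
        # quoting keeps values with spaces intact.
        local "$pair"
    done
    echo "host=$host port=$port"
}

with_vars host=db1 port=3306   # prints: host=db1 port=3306
```

This avoids mucking with globals, though as noted in the log, once you're here it's worth asking whether a richer language would serve better.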
[18:58] <marcoceppi> http://paste.ubuntu.com/753978/
[18:58] <marcoceppi> It was an attempt to make a quick template parser for charm-helper
[18:58] <marcoceppi> sadly, it stops just short of working
[18:59] <SpamapS> so...there may be a way yet to do this
[18:59] <SpamapS> remember that functions have their own stdin/stdout
[18:59] <SpamapS> so don't bother separating the variables/template
[18:59] <marcoceppi> I thought about using a json parser, but then backed away slowly
[18:59] <SpamapS> what you really want to do is handle the file bit of the template.. not the template itself
[19:00] <SpamapS> replace_file /etc/mysql/my.cnf <<EOF
[19:00] <SpamapS> or rather
[19:00] <SpamapS> ch_replace_file
[19:01] <SpamapS> marcoceppi: really tho.. I'm hesitant to encourage this level of logic in shell :)
[19:08] <marcoceppi> So have the file come via stdin then each param is a string replace? ch_parse_tpl key1 val1 key2 val2 < /file/path.tpl
[19:09] <marcoceppi> I think this idea is a bust though, at least in charm-helper terms for now
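The `ch_parse_tpl key1 val1 key2 val2 < /file/path.tpl` idea floated above could look something like this. A minimal sketch only: the helper name comes from the log, but the `{{key}}` token syntax is an assumption, and values must not contain `|` in this naive version:

```shell
# Hypothetical sketch of ch_parse_tpl: reads a template on stdin,
# takes key/value pairs as arguments, and substitutes {{key}} tokens.
ch_parse_tpl() {
    # Build one sed expression per key/value pair. Naive: values
    # containing '|' or sed metacharacters would break this.
    local -a exprs=()
    while [ "$#" -ge 2 ]; do
        exprs+=(-e "s|{{$1}}|$2|g")
        shift 2
    done
    sed "${exprs[@]}"
}

printf 'host={{host}} port={{port}}\n' | ch_parse_tpl host db1 port 3306
# prints: host=db1 port=3306
```

This stays close to SpamapS's advice: the function only handles substitution, and the caller decides where the output file goes.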
[19:12] <jcastro> SpamapS: hey let's clear up the scheduling for SCALE, he has the room set aside all day but we don't need it that long right?
[19:12] <jcastro> I was thinking like, 2-5pm or something like that?
[19:13] <SpamapS> jcastro: indeed, also do we have any idea if we'll have a chance to talk about it earlier in the day?
[19:13] <SpamapS> marcoceppi: no, just do the variable substitution in calling code
[19:13] <jcastro> SpamapS: ok, I will fire off another email to him, but we'd want only like a few hours in the afternoon right?
[19:13] <SpamapS> marcoceppi: for looping.. you're already pushing the limits of shell.
[19:14] <SpamapS> jcastro: I think so yes
[19:14] <SpamapS> jcastro: make it easy for people to attend for 1 hour and still get to the other talks going on at the same time.
[19:14] <SpamapS> jcastro: I feel like we can make use of the room earlier in the day too though.. call it the Juju lounge or something. :)
[19:14] <marcoceppi> SpamapS: Bash can handle it!! http://www.youtube.com/watch?v=BhsTmiK7Q2M
[19:15] <SpamapS> please summarize
[19:15] <SpamapS> I refuse to watch videos about anything except babies/puppies/cats
[19:15] <jcastro> charmers: incoming status.net charm!
[19:16]  * SpamapS braces
[19:19] <jcastro> SpamapS: ... and time for newbie question.
[19:19] <jcastro> SpamapS: so you know how mediawiki we can add haproxy and scale the web part with add-unit.
[19:19] <SpamapS> jcastro: yes
[19:19] <jcastro> do we have a special designation for charms that allow that vs. charms that are just "single-stack" I guess
[19:20] <jcastro> I was thinking for best practice for charms, we should encourage relationships with things that allow charms to scale up
[19:20] <jcastro> so like "be add-unit friendly"
[19:22] <SpamapS> well mediawiki is an app that has this built in
[19:22] <jcastro> ah ok
[19:22] <SpamapS> it has nothing to do with the charm
[19:22] <SpamapS> we have discussed giving charms some metadata suggesting min/max units
[19:23] <SpamapS> so that people don't add-unit on a stateful service and screw themselves over
[19:45] <hazmat> SpamapS, wrt to that.. take something like mysql which can assume either a master or slave, in master mode it really only accepts a single unit, but slave can have multiple
[19:45] <hazmat> although in that context, it seems like it might be better to separate out the slave as a separate charm
[19:48] <_mup_> Bug #897834 was filed: charms should be able to spec max unit and maybe step <juju:New> < https://launchpad.net/bugs/897834 >
[19:51] <SpamapS> hazmat: the way replication works now, same charm, separate service
[19:51] <hazmat> SpamapS, right, but if we wanted to support that max unit metadata for the charm it would mean splitting it out
[19:55] <SpamapS> hazmat: I think for mysql, such a thing is not as useful.
[19:56] <SpamapS> hazmat: but for say, game server charms it makes some sense.
[20:08] <hazmat> SpamapS, why wouldn't it make sense for the mysql master, isn't the reasoning the same?
[20:09] <SpamapS> hazmat: I don't want two charms for that. I'd prefer to have a mysql charm which is capable of being morphed from a slave into a master.
[20:11] <hazmat> SpamapS, hm.. morphing is a good point, but if it has more than one unit when it morphs it's in an unknown state afaics
[20:13] <SpamapS> hazmat: thats ok, we can promote the leader, and let the other two pound sand.
[20:13] <_mup_> juju/rest-agent-api r403 committed by kapil.thangavelu@canonical.com
[20:13] <_mup_> rest spec
[20:49] <_mup_> juju/support-num-units r417 committed by jim.baker@canonical.com
[20:49] <_mup_> Merged trunk
[21:45] <jimbaker>  hazmat, with ref to https://code.launchpad.net/~jimbaker/juju/support-num-units/+merge/81660 and your point about error handling on removing service units: using the analogy of rm in the shell, juju remove-unit should remove all the units it can in the list, logging any it can't remove (invalid name, not found, etc)
[21:54] <hazmat> jimbaker, i'm not sure rm is a good analogy, but that behavior seems reasonable, currently its an exception on first failure
[21:54] <jimbaker> hazmat, exactly
[21:54] <hazmat> jimbaker, i don't follow
[21:55] <jimbaker> hazmat, i'm just agreeing with your statement that it stops on first failure for juju remove-unit
[21:55] <hazmat> rm will also do the same
[21:56] <jimbaker> hazmat, no, it will remove the files it can in a list, reporting the ones it cannot
[21:58] <jimbaker> hazmat, re whatever way we will go, i will simply add to the api change thread so it can be properly agreed upon
[21:59] <hazmat> jimbaker, rm will error out if it encounters a file it can't remove
[22:01] <hazmat> jimbaker, i guess not
[22:01] <hazmat> nm
[22:02] <hazmat> jimbaker, but yes that sounds like reasonable behavior, report errors after acting upon those that can be acted upon
[22:02] <jimbaker> hazmat, so we have two things: 1) remove files that it can; 2) report a non-zero status code
[22:02] <jimbaker> hazmat, cool
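The semantics agreed on above — act on every name you can, report the failures, and exit non-zero if any failed — can be sketched with plain `rm` as the analogy. `remove_all` is an illustrative name, not a juju command:

```shell
# Sketch of remove-what-you-can semantics: try every argument,
# report each failure on stderr, and return non-zero if any failed.
remove_all() {
    local status=0 name
    for name in "$@"; do
        if ! rm -- "$name" 2>/dev/null; then
            echo "could not remove: $name" >&2
            status=1
        fi
    done
    return $status
}
```

This is the behavior jimbaker describes for `juju remove-unit`: partial success is still performed, and the non-zero status flags that something in the list went wrong.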
[22:02] <SpamapS> hazmat: any luck on precise? juju seems totally broken on my precise box
[22:03] <jimbaker> i mean of course, remove service units that it can :)
[22:19] <SpamapS> hazmat: http://paste.ubuntu.com/754204/ .. thats just running 'juju status' on my precise box
[22:20] <marcoceppi> m_3: (or anyone) how did you deal with unattended installs and debconf?
[22:21] <SpamapS> marcoceppi: the default frontend is noninteractive, so it should just choose the default
[22:21] <SpamapS> marcoceppi: or rather, it is explicitly set that way in juju
[22:21] <marcoceppi> I need to override a default selection
[22:22] <SpamapS> debconf-set-selections
[22:22] <marcoceppi> how do I know what selection to set?
[22:22] <SpamapS> debconf-get-selections ? ;-)
[22:22] <marcoceppi> I guess that's the question I meant to ask
[22:22] <marcoceppi> I checked :)
[22:23] <SpamapS> /var/lib/dpkg/info/$packagename.template should have the questions that will be asked
[22:23] <SpamapS> after unpack of the package
[22:23] <hazmat> SpamapS, doh.. forgot about it, my instance is still running though, one moment
[22:25] <marcoceppi> interesting
[22:35] <m_3> marcoceppi: notice that it does this in your fork https://gist.github.com/1336585
[22:35] <marcoceppi> yeah, but I can't figure out where this selection is. It doesn't show up in the template
[22:37] <m_3> debconf-get-selections | grep <someapp>
[22:38] <m_3> might catch it ift's in an aux package
[22:38] <marcoceppi> Ah, there we go. I needed debconf-utils
[22:39] <marcoceppi> Coooool beans lets try this out.
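The debconf workflow worked out above can be sketched end to end. The package and question names are illustrative, not from the log; `debconf-get-selections` lives in the `debconf-utils` package, as marcoceppi discovered:

```shell
# Sketch of preseeding a debconf answer for an unattended install.
# Package name, question name, and value below are examples only.
export DEBIAN_FRONTEND=noninteractive

# Find the question name. /var/lib/dpkg/info/<pkg>.templates lists a
# package's own questions; debconf-get-selections (from debconf-utils)
# also catches questions owned by auxiliary packages:
#   apt-get install -y debconf-utils
#   debconf-get-selections | grep mysql-server

# Preseed the answer before the package is configured:
echo "mysql-server mysql-server/root_password password s3cret" \
    | debconf-set-selections

apt-get install -y mysql-server
```

With the noninteractive frontend set (as juju does by default), any question that isn't preseeded simply takes its default.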
[22:40] <jimbaker> hazmat, i'm going to suggest that the behavior in the branch be maintained. the reason is that there's precedent in juju terminate-machines to fail upon the first error
[22:46] <hazmat> jimbaker, fair enough
[22:50] <SpamapS> oh man, I miss my 40 core openstack cloud
[22:50] <SpamapS> provisioned hadoop so fast
[22:50] <_mup_> juju/support-num-units r418 committed by jim.baker@canonical.com
[22:50] <_mup_> Review points
[23:04] <moozz> bootstrap is giving me some pain
[23:05] <moozz> Could not find any Cobbler systems marked as available and configured for network boot.
[23:05] <moozz> anyone help?
[23:05] <moozz> sux because cobbler is pushing them out
[23:05] <_mup_> juju/trunk r419 committed by jim.baker@canonical.com
[23:05] <_mup_> merge support-num-units [r=fwereade,hazmat][f=809599]
[23:05] <_mup_> Modified juju add-unit and juju deploy to take --num-units/-n option;
[23:05] <_mup_> juju remove-unit takes a list of service unit names to be removed.
[23:06] <moozz> I am using latest ubuntu server btw
[23:07] <moozz> the error comes in when I run juju bootstrap
[23:08] <moozz> anyone able to offer some pointers?
[23:15] <moozz> So I would have thought that to make a cobbler system available I would enable the network boot flag for a system and then run juju bootstrap once
[23:15] <moozz> ?
[23:16] <moozz> anyone know much about this stuff?
[23:28] <_mup_> juju/ssh-passthrough r413 committed by jim.baker@canonical.com
[23:28] <_mup_> Review points
[23:33] <_mup_> juju/ssh-passthrough r414 committed by jim.baker@canonical.com
[23:33] <_mup_> Resolved conflicts
[23:36] <hazmat> moozz, you also need to set it up with a management class that juju is using
[23:37] <_mup_> juju/trunk r420 committed by jim.baker@canonical.com
[23:37] <_mup_> merge ssh-passthrough [r=fwereade,hazmat][f=812441]
[23:37] <_mup_> Modified juju ssh to enable standard ssh arguments, by passing them
[23:37] <_mup_> through to the underlying ssh command.
[23:46] <jimbaker> hazmat, originally i proposed (i think from a recommendation by m_3 ) that juju scp canonicalize relative paths under /var/lib/juju/units/SERVICE-UNIT. however, this has two issues. 1) this path is owned by root, not the ubuntu user. 2) this is different for machine IDs, which uses /home/ubuntu, since there's no real good default
[23:47] <jimbaker> so i propose a simple fix: all relative paths should be with respect to /home/ubuntu, that is default scp behavior
[23:47] <m_3> moozz: sorry, haven't worked with juju on bare metal yet
[23:47] <SpamapS> moozz: acquired-mgmt-class and available-mgmt-class need to be setup .. juju will only take machines if they are in available-mgmt-class
[23:51] <hazmat> SpamapS, think i've got most of the test failures on precise fixed; mostly it was timeouts due to the status test setting up a large tree. i did find two minor fixes for tests that weren't running predictably 100%
[23:51] <hazmat> doing one more full run to verify
[23:52] <hazmat> jimbaker, sounds reasonable
[23:52] <hazmat> jimbaker, i believe ssh does the same
[23:52] <hazmat> juju ssh that is
[23:53] <hazmat> SpamapS, what kind of box was that 40 core? and where can i get one? ;-)
[23:53] <jimbaker> hazmat, cool. i will send out the email about this command, just found it when i was verifying some examples on the proposed api change. i had full mocks, but apparently missed it when i actually tested against ec2