[00:03] <jimbaker> hazmat, ok, just making sure
[00:04] <jimbaker> hazmat, i did just triage bug 846055 as invalid after some more analysis
[00:04] <_mup_> Bug #846055: Occasional error when shutting down a machine from security group removal <juju:Invalid> < https://launchpad.net/bugs/846055 >
[00:04] <hazmat> jimbaker, cool
[00:05] <jimbaker> (related to my fix of bug 863510 a little bit ago)
[00:05] <_mup_> Bug #863510: destory-environment errors and hangs forever <juju:Fix Released by jimbaker> < https://launchpad.net/bugs/863510 >
[00:18]  * hazmat noodles on a charm browser
[02:38] <SpamapS> ifup /win 20
[12:02] <jamespage> morning all
[12:02] <jamespage> local provider is working really well for me
[12:03] <jamespage> however can't upgrade charms - http://paste.ubuntu.com/703911/
[12:03] <jamespage> not the end of the world as destroying and restarting takes only a couple of minutes :-)
[12:04] <hazmat> jamespage, g'morning
[12:04] <jamespage> morning hazmat
[12:05] <hazmat> oh.. ugh
[12:06] <hazmat> i forgot that the unit agent downloads the charm...
[12:06] <hazmat> so a filesystem solution isn't going to work very well
[12:06] <hazmat> since the unit and machine agent both need access and are on separate fs mounts
[12:07] <hazmat> jamespage, yeah.. upgrade is definitely broken
[12:28] <_mup_> Bug #869945 was filed: upgrade broken for local provider <juju:New> < https://launchpad.net/bugs/869945 >
[12:43] <rog> can i ask a quick question about the juju source again, please?
[12:43] <rog> i'm trying to understand the machine startup process
[12:43] <hazmat> rog, go for it
[12:44] <rog> in ec2/__init__.py, there are these two lines:
[12:44] <rog>         constraints = machine_data.get("constraints", {})
[12:44] <rog>         return EC2LaunchMachine(self, master, constraints).run(machine_id)
[12:45] <hazmat> rog, so contraints are how we get the image
[12:45] <hazmat> or what sort of machine we run, or where we run it
[12:46] <rog> i'm wondering where the zookeepers argument to EC2LaunchMachine.start_machine comes from
[12:47] <hazmat> the second line starts the machine, the master param is whether we start a zookeeper and provisioning agent on the node by default. the machine id is to inform the machine of its zk machine id so its machine agent can connect back to the right place.
[12:47] <rog> (and i think i've got confused over the two start_machine implementations... let me have another look)
[12:47] <rog> ah, machine_id is the machine id of the zookeeper machine, not the new machine?
[12:48] <hazmat> rog yes..
[12:48] <hazmat> the provider machine id is only known after the instance has been launched
[12:49] <hazmat> rog, some of it is a little confusing because of the desire to reuse implementation and lots of similarly named things... it looks like constraints doesn't actually determine machine type now that i look at it, just image selection.
[12:50] <hazmat> rog, start_machine gets called by a base class in common/launch.py
[12:50] <hazmat> from the launchmachine base class run method
[12:51] <rog> hazmat: yeah, it's winding in and out of the base class.
[12:51] <rog> i was confused
[12:51] <rog> i think i see it now
[12:51] <hazmat> rog, it uses the findzookeeper class to get the zks to populate the arg to start_machine
[12:51] <hazmat> rog, cool
[12:51] <rog> yeah, i had looked at that before, but hadn't made the connection
[12:51] <rog> thanks
[12:51] <hazmat> np
[12:52] <_mup_> juju/hooks-with-noninteractive-apt r395 committed by kapil.thangavelu@canonical.com
[12:52] <_mup_> set debian noninteractive
[12:53] <rog> hazmat: one last Q: where's the run(machine_id) method defined?
[12:54] <hazmat> rog which one?
[12:54] <rog> the one in  return EC2LaunchMachine(self, master, constraints).run(machine_id)
[12:54] <hazmat> rog, it's the primary entry point into a LaunchMachine class; it's invoked by the provider facade's start_machine method defined in each provider package
[12:55] <hazmat> rog, its defined on the common/launch.py LaunchMachine class
[12:56] <rog> so it is. grep fail.
[12:57] <rog> ah, so machine_id *is* the id of the new machine, not of the zookeeper machine
[12:59] <rog> thanks again
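The flow sketched in the conversation above (provider facade → LaunchMachine.run → provider-specific start_machine, with the zookeeper list resolved in between) can be illustrated roughly as follows. The class and method names come from the discussion; the bodies and the Fake* classes are hypothetical simplifications of the real (twisted/deferred-based) code in common/launch.py:

```python
class LaunchMachine(object):
    """Rough sketch of the common/launch.py base class discussed above.

    Names follow the conversation; bodies are simplified guesses, not
    the actual implementation.
    """

    def __init__(self, provider, master, constraints):
        self.provider = provider
        self.master = master            # also start zookeeper/provisioning agent?
        self.constraints = constraints  # drives image selection

    def run(self, machine_id):
        # machine_id is the juju id of the *new* machine; the provider's
        # own instance id exists only after launch. The currently running
        # zookeeper machines are looked up here and handed to start_machine.
        zookeepers = self.provider.find_zookeepers()
        return self.start_machine(machine_id, zookeepers)

    def start_machine(self, machine_id, zookeepers):
        # Each provider package supplies this (e.g. EC2LaunchMachine).
        raise NotImplementedError


class FakeProvider(object):
    """Hypothetical stand-in for a provider, for illustration only."""

    def find_zookeepers(self):
        return ["zk-machine-0"]


class FakeLaunchMachine(LaunchMachine):
    """Hypothetical provider-specific subclass, for illustration only."""

    def start_machine(self, machine_id, zookeepers):
        return {"machine_id": machine_id, "zookeepers": zookeepers}
```

So `EC2LaunchMachine(self, master, constraints).run(machine_id)` ends up invoking the provider-specific start_machine with the zookeeper list already populated, which is the connection rog was looking for.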
[13:01] <smoser> bug 863629
[13:01] <_mup_> Bug #863629: libvirt-lxc: virFileOpenTtyAt can't be called on /some/other/dev/pts <patch> <server-o-nrs> <libvirt (Ubuntu):Confirmed> < https://launchpad.net/bugs/863629 >
[13:03] <fwereade> hazmat: lp:810649 (Revision number should be optional in metadata) has now been fully addressed, I think; but not in the manner suggested in the bug
[13:03] <fwereade> hazmat: shall I mark it invalid?
[13:03] <hazmat> fwereade, i'm not sure
[13:03] <hazmat> fwereade, the common case the bug is raising isn't addressed
[13:04] <hazmat> which is: i modify a formula, go to deploy it, and transparently the one in storage is used instead
[13:04] <fwereade> hazmat: hmm, you're right
[13:23] <hazmat> fwereade, so i started to look at exposing the provider storage over http to allow units to download for upgrades
[13:23] <fwereade> hazmat: oh yes?
[13:24] <hazmat> fwereade, yeah.. i forgot that the unit agents download the charms directly for upgrades only
[13:24] <hazmat> but i ran into the issue that the charm urls (from provider storage) aren't the same if i bind the webserver on localhost in the host
[13:24] <hazmat> its localhost in the provider and 192.168.122.1 in the unit
[13:25]  * hazmat wonders if he should break for coffee
[13:26] <fwereade> what does 192.168.122.1 resolve to in the host?
[13:27] <hazmat> ah.. i can bind it explicitly to that interface probably
[13:27] <hazmat> fwereade, it is the host
[13:27] <hazmat> yeah.. that's the ticket
[13:27] <hazmat> fwereade, thanks
[13:32] <fwereade> hazmat: yw
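The fix hazmat lands on above can be sketched as a tiny helper: address (and bind) the storage webserver via the libvirt bridge address rather than localhost, so the same charm URL resolves correctly both on the host and inside the units. The function name and port are hypothetical; 192.168.122.1 is libvirt's default bridge address, i.e. "the host" as seen from the containers:

```python
def storage_url(path, host="192.168.122.1", port=8080):
    """Build a provider-storage URL reachable from inside containers.

    192.168.122.1 is the host on libvirt's default network; the port is
    an arbitrary example. Binding the storage webserver to this interface
    (instead of localhost) makes the URL valid from both sides.
    """
    return "http://%s:%d/%s" % (host, port, path.lstrip("/"))
```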
[13:39] <niemeyer> Morning all!
[13:39] <niemeyer> Sorry, a bit late.. there was a fierce fight with bed this morning
[13:39] <fwereade> heya niemeyer
[13:40] <niemeyer> fwereade: yo!
[13:40] <niemeyer> fwereade: Some of the fight was useful.. I woke up with a thought in my head about errors and the store
[13:40] <fwereade> niemeyer: we could certainly make the error handling much more sophisticated
[13:40] <niemeyer> fwereade: We need to tell people about non-existent and bad charms somehow
[13:41] <fwereade> niemeyer: bad in what sense?
[13:41] <niemeyer> fwereade: I think it's straightforward, but we need a patch soonish, and support on the fake thingy to see if it's working
[13:41] <niemeyer> fwereade: bad as in there's content in a branch that the store can't pack
[13:41] <fwereade> niemeyer: ah, ok
[13:42] <fwereade> niemeyer: the fake needs some work anyway, it's only barely good enough to tell that it ought to work
[13:42] <niemeyer> fwereade: My suggestion is this:
[13:42] <fwereade> niemeyer: and doesn't even consider usernames
[13:43] <niemeyer> fwereade: let's introduce a couple of additional keys for each entry returned through /charm-info
[13:43] <niemeyer> fwereade: "warning", and "error"
[13:43] <niemeyer> fwereade: The store would take these like that:
[13:43] <niemeyer> Erm, sorry
[13:43] <niemeyer> fwereade: The client would take these like that:
[13:44] <niemeyer> 1) If there's a "warning", print it as a warning (duh) and continue using the received info normally
[13:44] <fwereade> niemeyer: daring and unorthodox, but I can get behind that
[13:44] <fwereade> :p
[13:44] <niemeyer> 2) If there's an "error" raise a CharmError with the received string and the given charm URL
[13:44] <fwereade> niemeyer: fair enough; what about multiple errors?
[13:45] <fwereade> niemeyer: well, I guess we don't need to worry about them yet
[13:45] <fwereade> niemeyer: API sounds sensible though
[13:45] <niemeyer> fwereade: Yeah, we'll sort them out in the server side for now
[13:45] <niemeyer> fwereade: Please note these go inside each individual charm's json doc
[13:45] <niemeyer> fwereade: So, e.g.:
[13:45] <niemeyer> fwereade: {charm_url: {"error": "no metadata.yaml found"}}
[13:45] <fwereade> niemeyer: yep, they're charm info not request info
[13:45] <niemeyer> fwereade: Yeah, +1
[13:46] <fwereade> niemeyer: sounds good, I'll have a go at that now then
[13:46] <niemeyer> fwereade: Thanks!
[13:46] <niemeyer> SpamapS: We have a couple of bug fixes in the pipeline, FYI
[13:47] <niemeyer> SpamapS: One is merged, the other fwereade is working on right now
[13:48] <_mup_> Bug #870000 was filed: client should understand errors and warnings from the charm store <juju:In Progress by fwereade> < https://launchpad.net/bugs/870000 >
[13:50] <fwereade> niemeyer: suggestion: list of warnings, rather than restricting ourselves to just one?
[13:51] <niemeyer> fwereade: List of warnings and list of errors? Hmm
[13:51] <fwereade> niemeyer: I'd imagined an error to be a "you're boned, processing stops now" condition
[13:52] <fwereade> niemeyer: whereas if a warning doesn't stop anything, more warnings are possible
[13:52] <niemeyer> fwereade: Yeah, but there is always the "you're _seriously_ boned" case
[13:52] <niemeyer> fwereade: +1 on lists for both
[13:53] <fwereade> niemeyer: ok, sounds good
[13:53] <hazmat> niemeyer, are you planning on doing a web ui on the store to start?
[13:53] <niemeyer> fwereade: and calling them "warnings"/"errors" instead
[13:53] <fwereade> niemeyer: indeed
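The client-side rules just agreed on (print warnings and continue; raise on errors, with lists for both) might look like this in sketch form. The "warnings"/"errors" field names are from the discussion; the function and exception names are hypothetical:

```python
class CharmError(Exception):
    """Raised when the store reports errors for a charm (hypothetical name)."""

    def __init__(self, charm_url, messages):
        self.charm_url = charm_url
        self.messages = messages
        super(CharmError, self).__init__(
            "%s: %s" % (charm_url, "; ".join(messages)))


def handle_charm_info(charm_url, info, warn=print):
    """Apply the agreed rules to one charm's entry from /charm-info.

    1) Print each warning and keep using the received info normally.
    2) If there are any errors, raise CharmError with the charm URL.
    """
    for message in info.get("warnings", []):
        warn("WARNING: %s: %s" % (charm_url, message))
    errors = info.get("errors", [])
    if errors:
        raise CharmError(charm_url, errors)
    return info
```

Note these keys live inside each individual charm's json doc, e.g. `{charm_url: {"errors": ["no metadata.yaml found"]}}` — they describe the charm, not the request.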
[13:53] <niemeyer> hazmat: Not to start.. I'm planning to maybe get it in time at all :-)
[13:53] <hazmat> niemeyer, i was playing around with something yesterday, just because i needed a list of interfaces available from other formulas
[13:54] <niemeyer> hazmat: Nice
[13:54] <niemeyer> hazmat: Did you put the client interface to test?
[13:54] <hazmat> niemeyer, i'm just querying lp and scanning bzr branches
[13:54] <niemeyer> hazmat: Ah, ok
[13:54] <hazmat> niemeyer, is there a store endpoint up already?
[13:54] <niemeyer> hazmat: Nope
[13:55] <hazmat> SpamapS, there's one more bug in progress on a fix for local provider upgrades as well
[13:55] <hazmat> SpamapS, feel free to merge the deb dir removal as well
[13:56] <niemeyer> hazmat, SpamapS: Erm, hold on?
[13:56] <niemeyer> hazmat, SpamapS: Please don't remove the debian dir now.. the PPA depends on it, this isn't important right now I'd guess?
[13:57] <hazmat> niemeyer, its not to me.. but SpamapS had a pending branch out for a while regarding it
[13:57] <hazmat> niemeyer, i pushed it to the review queue, and its currently awaiting a merge
[13:58] <niemeyer> hazmat: Ok, I'm pushing it back then
[13:58] <niemeyer> hazmat: and retargetting to the florence milestone
[13:59] <niemeyer> hazmat: There's no reason for us to rush this in and have to fix the PPA _right now_
[13:59] <hazmat> niemeyer, fine by me
[14:20] <niemeyer> Review queue is empty
[14:21] <niemeyer> hazmat: and man, good catch on the DEBIAN_FRONTEND
[14:22] <niemeyer> Totally forgot about it
[14:35] <rog> niemeyer: i'm still waiting for some feedback on the changes i made on my merge proposals in response to your comments, BTW. not that it's that crucial.
[14:35] <niemeyer> rog: Yeah, I know.. I've been focusing on the release since yesterday
[14:36] <rog> niemeyer: that's fine, just checking.
[14:36] <niemeyer> rog: The changes to the Server interface I think should really be postponed, btw
[14:36] <rog> niemeyer: you mean the factoring out of the service package?
[14:36] <niemeyer> rog: Yeah
[14:36] <niemeyer> rog: I'll check your branches now
[14:36] <rog> niemeyer: i've gone with you on that, yes
[14:36] <niemeyer> rog: Cool, cheers
[14:37] <rog> niemeyer: i've merged back in the fixes that were in that branch
[14:37] <niemeyer> rog: Sweet, checking it out
[14:38] <niemeyer> rog: Your juju branch is ready for action, btw
[14:38] <rog> ?
[14:40] <rog> ah, you mean fix-tutorial-with-expose?
[14:44] <rog> niemeyer: when i try to push to lp:juju, i get:
[14:44] <rog> bzr: ERROR: Cannot lock LockDir(lp-82305296:///%2Bbranch/juju/.bzr/branchlock): Transport operation not possible: readonly transport
[14:44] <niemeyer> rog: You probably have a wrong url there
[14:44] <rog> ok
[14:44] <niemeyer> rog: What's "bzr info" telling you about the push location?
[14:44] <rog> i did an explicit push
[14:45] <rog> % bzr push lp:juju
[14:46] <rog> http://paste.ubuntu.com/703998/
[14:59]  * hazmat is annoyed by twistd
[15:05] <rog> odd
[15:06] <jamespage> hazmat: http://paste.ubuntu.com/704017/
[15:06] <jamespage> not sure libzookeeper-java is going to give us quite enough for the local provider
[15:06] <hazmat> jamespage, ah.... that's why we wanted zookeeperd
[15:06] <hazmat> jamespage, we can work around that
[15:06] <jamespage> well zookeeper is actually enough
[15:06] <hazmat> but its a bug
[15:07] <jamespage> that way nothing starts - but you still get the configuration files
[15:08] <hazmat> jamespage, i think zookeeperd actually sets up /etc/zookeeper/conf
[15:08] <hazmat> we source it for env variables
[15:08] <jamespage> I just tried on a clean server install - zookeeper is enough
[15:08] <hazmat> jamespage, cool
[15:08] <jamespage> zookeeperd just installs the init scripts I think
[15:09] <rog> QQs: what's the difference between machine_id and instance_id, and why does ec2.securitygroup.open_provider_port take both a machine and machine_id?
[15:09] <rog> (can't you get a machine_id from a machine?)
[15:09] <hazmat> jamespage, i'm not sure what sets it up. i see /etc/zookeeper/conf_example from zookeeper .. but it looks like some sort of script sets up the actual directory /etc/zookeeper/conf
[15:10] <jamespage> hazmat, the configuration is managed by the alternatives system
[15:10] <jamespage> (I think)
[15:10]  * jamespage goes to look
 * jamespage worries his memory is not what it used to be
[15:12] <jamespage> hazmat: yep - http://tinyurl.com/6a88tag
[15:12]  * jamespage not so worried anymore
[15:13] <jamespage> I personally don't like that much - inherited from previous package maintainer
[15:15] <hazmat> jamespage, well you're the maintainer now.. :-)
[15:15] <hazmat> jamespage, thanks, i'll update the pkg check and docs for zookeeper
[15:16] <_mup_> juju/local-provider-docs r394 committed by kapil.thangavelu@canonical.com
[15:16] <_mup_> update dependency s/libzookeeper-java/zookeeper
[15:16] <hazmat> jamespage, alternatively we could manually scan /usr/share/java for the ones we need (minus version numbers).. but its more of a slippery slope
[15:16] <hazmat> libs we need that is
[15:16] <hazmat> bcsaller, you mentioned you might have some additions to the local provider docs?
[15:17] <hazmat> for some reason twistd won't daemonize for me..
[15:17]  * hazmat smells a rabbit hole
[15:28] <hazmat> wrong cli arg
[15:35] <rog> niemeyer:
[15:35] <rog> package provider
[15:35] <rog> // Machine represents a running machine instance.
[15:35] <rog> type Machine interface {
[15:35] <rog> 	Id() string
[15:35] <rog> 	DNSName() string
[15:35] <rog> 	PrivateDNSName() string
[15:35] <rog> }
[15:35] <rog> type Port struct {
[15:35] <rog> 	Proto string
[15:35] <rog> 	Number int
[15:35] <rog> }
[15:35] <rog> type Interface interface {
[15:35] <rog> 	// StartMachine asks for a new machine instance to be created.
[15:36] <rog> 	// The id of the new machine is given by id.
[15:36] <rog> 	// The currently running list of zookeeper machines
[15:36] <rog> 	// is given by zookeepers.
[15:36] <rog> 	// It returns the new Machine (which is not necessarily
[15:36] <niemeyer> rog: paste.ubuntu.com
[15:36] <rog> 	// running yet).
[15:36] <niemeyer> rog: paste.ubuntu.com
[15:36] <niemeyer> rog: paste.ubuntu.com
[15:36] <niemeyer> rog: paste.ubuntu.com
[15:36] <rog> 	StartMachine(id string, zookeepers []Machine) (Machine, os.Error)
[15:36] <rog> 	// Machines returns the list of currently started instances.
[15:36] <rog> 	Machines() ([]Machine, os.Error)
[15:36] <rog> 	// OpenPort opens a new port on m to the outside world.
[15:36] <niemeyer> WTF
[15:36] <rog> 	OpenPort(m Machine, p Port) os.Error
[15:36] <rog> 	// ClosePort closes the port on m.
[15:36] <rog> 	ClosePort(m Machine, p Port) os.Error
[15:36] <rog> 	// OpenedPorts returns the list of currently open ports
[15:36] <rog> 	// on m.
[15:36] <rog> 	OpenedPorts(m Machine) ([]Port, os.Error)
[15:36] <rog> 	// URL returns a URL that can be used to access the given file.
[15:36] <rog> 	URL(file string) (string, os.Error)
[15:36] <rog> 	// Get returns the contents of the given file as a string.
[15:36] <rog> 	Get(file string) (string, os.Error)
[15:36] <rog> 	// Put writes contents to the given file.
[15:36] <rog> 	Put(file string, contents string) os.Error
[15:36] <rog> 	// Destroy shuts down all machines and destroys the environment.
[15:36] <rog> 	Destroy() os.Error
[15:36] <rog> }
[15:36] <rog> // Register registers a new provider. Name gives the name
[15:36] <rog> // of the provider. The connect function is to be used to connect
[15:36] <rog> // to the given provider type; attrs gives any provider-specific
[15:36] <rog> // attributes; and it should return the newly created provider.Interface.
[15:36] <rog> //
[15:48] <rog> muchos apologies folks
[15:48] <rog> it seems that xclip does not work
[15:50] <niemeyer> rog: It worked very well apparently! ;-)
[15:50] <niemeyer> rog: Btw, your url was indeed wrong
[15:50] <niemeyer> rog: The trunk branch belongs to the juju team
[15:50] <niemeyer> rog: which you're part of
[15:50] <niemeyer> rog: So to be able to commit/push, you need to be using it as lp:~juju/juju/trunk
[15:51] <niemeyer> rog: Please be extra careful there
[15:52] <rog> niemeyer: no, xclip didn't work. i'd told it to hold the URL!
[15:52] <niemeyer> rog: LOL
[15:52] <niemeyer> rog: update-server-interface reviewed
[15:52] <niemeyer> rog: A few comments, but good stuff overall
[15:52] <rog> it seems that X clipboards are fundamentally broken, a fact which i knew once, but had forgotten.
[15:52] <niemeyer> rog: Yeah, I do know that one
[15:52] <niemeyer> rog: For a while!
[15:52]  * niemeyer => lunch
[15:53] <rog> niemeyer: i'm also off for the weekend
[15:53] <niemeyer> rog: Nice, enjoy!
[15:53] <rog> niemeyer: have a good one!
[15:53] <niemeyer> rog: Btw, warm +1 on "error"
[15:53] <niemeyer> rog: Let's try to get this one in
[15:53] <rog> niemeyer: yeah, i think it works ok
[15:53] <rog> niemeyer: when i thought of it, i was "yeah, that works"
[15:53] <niemeyer> rog: I was looking for an alternative to the ugly error.Value
[15:54] <rog> niemeyer: me too.
[15:54] <niemeyer> rog: But couldn't find anything else.. a standard "error" would be delicious
[15:54] <rog> yup
[15:54] <niemeyer> Hmmm.. delicious.. lunch!
[15:54] <niemeyer> Cheers!
[15:54] <niemeyer> :)
[15:54] <rog> ttfn
[15:54] <rog> have a good w/e
[15:59] <rog> ha ha! it seems that my editor uses xclip's default clipboard, but everything else uses a different one. so i can't make xclip work with both. for god's sake.
[16:04] <rog> force majeure: http://paste.ubuntu.com/704053/
[16:05] <rog> that seems to work. i promise that i will try very hard not to do multiline pastes again
[16:09] <hazmat> plan9?
[16:10] <rog> hazmat: a plan 9 compatibility library i use to introduce some sanity into my command line
[16:11] <rog> hazmat: (and my C programs, when i write them)
[16:11] <rog> rc is a nice minimal shell for scripting in.
[16:11] <rog> it fixes some of the fundamental problems with direct sh/csh derivatives
[16:13] <rog> niemeyer: here's what i was originally trying to paste
[16:13] <rog> http://paste.ubuntu.com/704063/
[16:14] <rog> a sketch of what the juju provider interface might look like in Go.
[16:15] <rog> any inputs as to whether or not that might be approaching sufficient would be much appreciated.
[16:15] <rog> right, gotta go. see y'all on monday.
[16:16] <rog> i'll leave the machine on IRC for a while though, so i'll see any comments.
[16:28] <hazmat> rog have a good one
[16:55] <niemeyer> rog: THe interface looks pretty good
[17:02] <niemeyer> The new logo is quite neat
[17:02] <niemeyer> https://juju.ubuntu.com/
[17:05] <hazmat> niemeyer, but i liked the juju man ;-)
[17:06] <niemeyer> hazmat: Yeah, I liked it as well, but some people didn't
[17:15] <niemeyer> hazmat: Do you want to have a quick look at this, given we don't have much time to get it wrong: https://code.launchpad.net/~fwereade/juju/charm-store-errors/+merge/78635
[17:16] <hazmat> niemeyer, looking
[17:16] <hazmat> niemeyer, i've got one last branch that i need to push as well
[17:17] <hazmat> niemeyer, also if you have a chance to look over the local provider docs
[17:18] <niemeyer> hazmat: I've already reviewed everything this morning
[17:18] <niemeyer> hazmat: If you have a few comments in the branches that were up
[17:18] <niemeyer> s/If//
[17:18] <hazmat> niemeyer, nice
[17:21] <hazmat> niemeyer, fwiw empty tuple is also a single allocation in python ()
[17:22] <hazmat> hmm.. maybe not
[17:24] <hazmat> interesting.. id func in python has some strange behavior id(object()) == id(object())
[17:26] <jimbaker> hazmat, that's almost certainly because the memory is immediately reclaimed, then used again
[17:27] <jimbaker> if you hold a ref to the first object(), no such equality
[17:27] <hazmat> jimbaker, thanks.. that was rather confusing
[17:28] <hazmat> so indeed python does a single allocation for the empty tuple
[17:28] <jimbaker> such object pooling is an important optimization in cpython. also  in part why unladen swallow was doomed
[17:29] <jimbaker> hazmat, correct, i believe that's the behavior in jython too
[17:29] <jimbaker> again, just an optimization
[17:30] <jimbaker> http://docs.python.org/library/functions.html#id - Two objects with non-overlapping lifetimes may have the same id() value
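The behavior being discussed is easy to demonstrate. Note this is CPython-specific: the id() reuse and the interned empty tuple are implementation details, exactly as the linked docs warn:

```python
# Two objects with non-overlapping lifetimes may share an id: the first
# object() is freed immediately, so the second can reuse its memory.
# (Typically True under CPython, but not guaranteed by the language.)
reused = id(object()) == id(object())

# Holding a reference keeps the first object alive, so the ids must differ.
a = object()
b = object()
assert id(a) != id(b)

# The empty tuple is a single shared allocation in CPython.
assert () is ()
```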
[17:31] <niemeyer> hazmat: Hmm.. that was actually my point
[17:31] <hazmat> niemeyer, oh.. i thought you were just referencing go
[17:31] <niemeyer> hazmat: No, it was a brain hiccup
[17:32] <niemeyer> hazmat: There's no such thing as a () object in Go
[17:33] <niemeyer> hazmat: The empty tuple is a rock. :-)
[17:33] <hazmat> niemeyer, this interface is a little.. we're expecting to get back json errors and warnings from the charm server?
[17:33] <hazmat> embedded in a charm info?
[17:33] <niemeyer> hazmat: Hmm
[17:33] <hazmat> oh.. its a collection url
[17:33] <niemeyer> hazmat: I'd put it slightly differently
[17:34] <niemeyer> hazmat: We're expecting to get errors and warnings related to the charm as part of the charm info
[17:34] <hazmat> niemeyer, what sort of errors and warnings?
[17:34] <niemeyer> hazmat: Completely broken charms, for instance
[17:35] <niemeyer> hazmat: Since it's a bazaar branch, there's no way to prevent them from being pushed
[17:35] <hazmat> niemeyer, i noticed :-).. all kinds of style variations on the pushes
[17:35] <hazmat> trunk-1, trunk, random stuff .. its really all over the place already
[17:35] <hazmat> it looked like trunk-1 was an attempt to add series to existing trunks
[17:36] <niemeyer> hazmat: Yeah.. the store will help making them more even
[17:36] <niemeyer> hazmat: That's not entirely strange.. hmm.. I suspect it may actually have been done by LP itself at some point
[17:36] <niemeyer> hazmat: Either way, we'll only be looking at /trunk for now
[17:36] <hazmat> that leaves about 22 charms
[17:37] <hazmat> from ~charmers
[17:37] <hazmat> out of 70 some
[17:37] <hazmat> numbers change.. practices evolve
[17:37] <niemeyer> hazmat: Sounds ok.. easy to fix
[17:38] <niemeyer> hazmat: In practice, we'll want the branch name to become invisible in the future
[17:38] <hazmat> niemeyer, definitely
[17:38] <hazmat> niemeyer, sometimes the end segment is actually the charm name.. only for collectd and collectd-node that i saw
[17:39] <niemeyer> hazmat: You mean it's repeated? Like collectd/collectd?
[17:39]  * hazmat digs out his script
[17:39] <hazmat> niemeyer, yup
[17:39] <hazmat> skipping r-B ~charmers/charm/oneiric/collectd/collectd
[17:39] <hazmat> skipping r-B ~charmers/charm/oneiric/collectd/collectd-node
[17:40] <niemeyer> hazmat: Cool.. these should be collectd/trunk and collectd-node/trunk
[17:40] <niemeyer> hazmat: It also highlights the importance of a strong convention there
[17:40] <hazmat> niemeyer, well i expect juju push charm_name .. will help a lot
[17:40] <niemeyer> hazmat: +1
[17:42] <hazmat> review in.. back to networking local provider storage
[17:42] <hazmat> fwereade, ^
[17:43] <niemeyer> hazmat: Cheers!
[18:19] <_mup_> juju/go-charm-bits r14 committed by gustavo@niemeyer.net
[18:19] <_mup_> Moved expanding logic to its own function as suggested by Rog
[18:19] <_mup_> in the review.
[18:19] <_mup_> juju/local-provider-storage r395 committed by kapil.thangavelu@canonical.com
[18:19] <_mup_> web access to the local provider disk storage
[18:40] <_mup_> juju/local-provider-storage r396 committed by kapil.thangavelu@canonical.com
[18:40] <_mup_> wire in storage server into provider bootstrap and destroy-env
[18:42] <_mup_> juju/go-charm-bits r15 committed by gustavo@niemeyer.net
[18:42] <_mup_> Remove internal filepath.Rel. It's now upstream.
[18:44] <_mup_> juju/go r12 committed by gustavo@niemeyer.net
[18:44] <_mup_> Merged go-charm-bits branch [r=fwereade,rogpeppe]
[18:44] <_mup_> This fixes several problems in the bundling and expansion of
[18:44] <_mup_> charms in the Go port.
[18:50] <hazmat> niemeyer, jimbaker, bcsaller if you have a moment the network'd local provider storage could use a look
[18:51] <jimbaker> hazmat, i will take a look
[18:51] <hazmat> jimbaker, thanks
[19:01] <hazmat> niemeyer, just a fwiw, the zk project ended up linking directly to our out of date docs on zookeeper usage within juju
[19:02] <bcsaller> hazmat: http://pastebin.ubuntu.com/702946/
[19:26] <niemeyer> Hmm
[19:26] <niemeyer> hazmat: Which docs are that?
[19:26] <niemeyer> SpamapS: How is the openstack conf going?
[19:32] <hazmat> niemeyer, the ones linked to from the zookeeper poweredby wiki page.. they linked (not at my request) to https://juju.ubuntu.com/docs/internals/zookeeper.html
[19:32] <hazmat> i gave them juju.ubuntu.com as a link.. but oh well.. i did notice how out of date those docs are
[19:33] <niemeyer> hazmat: Cool.. still a quite reasonable overview I guess, from the perspective of someone interested in a vague feeling of what we do
[19:34] <niemeyer> hazmat: From our end, though, yeah, that's quite out of date
[19:42] <_mup_> juju/hooks-with-noninteractive-apt r396 committed by kapil.thangavelu@canonical.com
[19:42] <_mup_> also capture APT_LISTCHANGES_FRONTEND for noninteractive hook usage
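The two hooks-with-noninteractive-apt commits above boil down to exporting a pair of environment variables before hooks invoke apt, so package installs never block on interactive prompts. A sketch (the package name is a placeholder):

```shell
# Make apt fully noninteractive inside hook execution:
export DEBIAN_FRONTEND=noninteractive     # debconf: never prompt
export APT_LISTCHANGES_FRONTEND=none      # apt-listchanges: never page/prompt

# A hook can then install packages unattended, e.g.:
# apt-get -y install some-package
```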
[19:44] <hazmat> bcsaller, +1 on the trivial.. we should have someone else look as well
[19:44] <hazmat> jimbaker, thanks for the review
[19:45] <hazmat> niemeyer, jimbaker could either of you look at that trivial ben just posted.. it forces the dnsmasq address into the resolver config by inserting into head.. both jamespage and bcsaller had problems getting dns resolution working without it.. i don't understand why it's needed as it should be picked up from dhcp.. but it works and solves a real issue for some.
[19:46] <hazmat> er.. fixes the issue for those who had it
[19:47] <niemeyer> hazmat: Isn't it because it's being regenerated?
[19:47] <hazmat> niemeyer, so we set it manually for the chroot customize script.. but when the container boots, it should pick it up from the dhcp server (dnsmasq)
[19:48] <niemeyer> hazmat: Ok.. I don't understand it either, but if it fixes the problem it looks good for the moment
[19:48] <niemeyer> bcsaller: Extra space after the ">" please, as usual for other similar lines in the same file
[19:48] <hazmat> but for some reason its not.. doing this insertion into /etc/resolvconf/resolve.conf.d/base  assures that its *always* included in the generated resolve.conf
[19:49] <hazmat> i wonder if its a race condition exposed by ssd vs rotating disk.
[19:49] <hazmat> jimbaker, does your desktop you tested local provider with have a rotating disk?
[19:51] <niemeyer> hazmat: What content ends up in the file when it doesn't work?
[19:51] <hazmat> niemeyer, resolv.conf is empty
[19:51] <niemeyer> Hmm
[19:51] <hazmat> which makes no sense.. because the container's networking is running, and dnsmasq handed out the address, and can resolve the container name..
[19:51] <niemeyer> hazmat: Has someone tried to re-get the dhcp information after that?
[19:51] <niemeyer> hazmat: I'm wondering if the started dhcp is actually not answering DNS requests properly
[19:51] <hazmat> niemeyer, not afaik
[19:52] <jimbaker> hazmat, it has an ssd
[19:52] <hazmat> oh well there goes that idea..
[19:53] <hazmat> bcsaller, ^ you want to try and debug root cause
[19:54] <jimbaker> hazmat, i certainly have not needed ben's trivial, not certain what makes my env different
[19:54] <hazmat> jimbaker, it worked for you, me, and kim0|holiday without the change.. but it didn't work for jamespage or bcsaller without it
[19:54] <bcsaller> hazmat: I can look at it again, sure
[19:55] <hazmat> bcsaller, we should still go ahead and commit, i'm curious if we can get a better understanding of the problem, else its a chicken ;-)
[19:56] <hazmat> bcsaller, might need tcpdump to look at the queries on the wire
[19:57]  * hazmat gives up on sup
[19:57] <hazmat> too many random messages won't load
[20:04] <_mup_> juju/local-provider-docs r395 committed by kapil.thangavelu@canonical.com
[20:04] <_mup_> address review comments
[20:06] <_mup_> juju/trunk r396 committed by kapil.thangavelu@canonical.com
[20:06] <_mup_> merge local-provider-docs [r=jimbaker,niemeyer][f=867991]
[20:06] <_mup_> Basic usage docs for using the local provider.
[20:07] <_mup_> juju/go-charm-url r16 committed by gustavo@niemeyer.net
[20:07] <_mup_> Syncing regex from Python code.
[20:10] <hazmat> niemeyer, btw.. it looks like the resolved install_error does indeed go all the way through to start, i must have fixed it and forgotten about it
[20:11] <niemeyer> hazmat: Ah, phew.. sweet
[20:12] <hazmat> so afaics the pending for eureka is good, just two important items in the review queue.
[20:12] <hazmat> actually i'd consider the orchestra docs to be important but i think andreas is still at openstack conf
[20:13]  * hazmat goes back to playing with a charm browser
[20:35] <_mup_> juju/go-charm-url r17 committed by gustavo@niemeyer.net
[20:35] <_mup_> - Changed error messages as suggested by Rog.
[20:35] <_mup_> - More tests on charm URL parsing.
[20:35] <_mup_> - Fixed parsing bug.
[20:57] <_mup_> juju/go r13 committed by gustavo@niemeyer.net
[20:57] <_mup_> Merged go-charm-url branch [r=rogpeppe,fwereade]
[20:57] <_mup_> This introduces support for charm URLs in the Go port.
[21:20] <niemeyer> I'll head out for some exercising
[21:20] <jimbaker> niemeyer, enjoy!
[21:20] <niemeyer> Will try to work a bit on that stuff over the weekend
[21:20] <niemeyer> jimbaker: Cheers
[21:26] <hazmat> niemeyer, cool, have a good one
[21:57] <hazmat> hmm. bzr has to have a command to get out the rev id
[22:04] <jimbaker> hazmat, you might want to look at the butler code in ftests
[22:16] <hazmat> jimbaker, probably not
[22:16] <hazmat> but thanks
[22:17] <hazmat> jimbaker, i'm working against revids not revnos
[22:17] <hazmat> its a hidden command in bzr.. revision-info
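`bzr revision-info` prints the revno followed by the revision id for the branch tip. A small helper to shell out to it and parse the result might look like this (the function names are hypothetical, and running it of course requires bzr to be installed):

```python
import subprocess

def parse_revision_info(output):
    """Split `bzr revision-info` output ("<revno> <revision-id>")."""
    revno, revid = output.split(None, 1)
    return revno, revid.strip()

def branch_revision_info(branch_dir="."):
    """Return (revno, revision_id) for a bzr branch's tip.

    Shells out to the revision-info command mentioned above.
    """
    out = subprocess.check_output(["bzr", "revision-info"], cwd=branch_dir)
    return parse_revision_info(out.decode("utf-8"))
```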
[22:28] <jimbaker> hazmat, sounds good
[22:49]  * niemeyer observes hazmat implementing a second store
[22:51] <hazmat> race? ;-)  .. more seriously i just want a web interface to see formulas and interfaces, its rather hard to make something that can connect to multiple things without some sort of interface repo
[22:51] <hazmat> unless you're writing all the charms yourself
[22:52] <hazmat> niemeyer, also playing around with redis as a queue server
[22:52] <hazmat> fairly lightweight.. still not sure i'm using it usefully.. but dropping everything into mongo at the end
[22:52] <niemeyer> hazmat: Sure, as long as you're not planning to put that online for people to consume, that's fine
[23:10] <SpamapS> niemeyer: pretty insane response to the demo w/ Jane
[23:10] <niemeyer> SpamapS: Ohhh.. please tell me about it!
[23:10] <SpamapS> niemeyer: the talk later was a little bit disorganized but the buzz was *HUGE*
[23:11] <SpamapS> niemeyer: OpenStack people are very excited. Cisco has a similar project called Donabe that focuses more on how to define the network resources needed between apps but clearly has the same direction..
[23:11] <niemeyer> SpamapS: That's very exciting for us too!
[23:11] <SpamapS> niemeyer: anyway, we deployed hadoop on openstack in 5 minutes.. people were *very* impressed.
[23:12] <SpamapS> and the status2gource visualization was wowing people ;)
[23:13] <SpamapS> niemeyer: we also made it clear that we had deployed openstack using juju
[23:13] <SpamapS> niemeyer: https://launchpad.net/donabe btw ;)
[23:14] <niemeyer> SpamapS: 5 minutes?  Woah
[23:14] <SpamapS> niemeyer: to be clear, openstack was already deployed.. we just spun up hadoop-master, hadoop-slave, and ganglia (and 5 additional units with add-unit hadoop-slave)
[23:15] <SpamapS> niemeyer: but the commands all seemed to resonate with the audience
[23:15] <SpamapS> I believe there will be video
[23:15] <niemeyer> SpamapS: juju making its magic.. neat :-)
[23:16] <SpamapS> niemeyer: yeah, it helped that all the nodes were on a single machine, so network transfer speed was basically 1/2 of RAM bus speed
[23:16] <niemeyer> SpamapS: I bet
[23:16] <SpamapS> and we had a pretty beefy 6x15krpm RAID5 backing all instance volumes
[23:17] <SpamapS> niemeyer: the box we had was actually a 40 core machine w/ 128G of ram
[23:17] <SpamapS> really fun to play with
[23:19] <SpamapS> definitely a lot to do to make the juju on openstack experience more robust.. the EC2 stuff on nova works ok.. but often gives back responses that txaws gets confused by
[23:20] <SpamapS> anyway, about to board flight back to LA
[23:24] <niemeyer> SpamapS: Woah
[23:24] <niemeyer> SpamapS: I've never been even close to such a machine :)
[23:24] <niemeyer> SpamapS: Cool, have a good flight (and weekend!)