[12:15] <nvucinic> hi guys, is there any way to find out what i can post in phone-home?
[12:24] <harmw> what do you mean? just put in a URL and it'll GET (or HEAD) that, perhaps even POST :)
[12:25] <harmw> yea, I think it supports POST as well - you should just look in the src though, to make sure
[12:26] <nvucinic> yeah, it supports post, i got the data
[12:26] <nvucinic> my question was this
[12:26] <nvucinic> example: http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/891/doc/examples/cloud-config-phone-home.txt
[12:26] <nvucinic> what can i get except pub_keys and instance id ?
[12:27] <harmw> ah, you mean which items are available to post
[12:27] <harmw> or: keys
[12:28] <harmw> well, you could just look at L9 :P
[12:28] <harmw> post: all
[12:29] <harmw> just to get an idea of all available stuff, have it post everything it knows
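In cloud-config form, that suggestion looks roughly like the sketch below (based on the example file linked above; the URL is a placeholder, and `$INSTANCE_ID` is substituted by the module):

```yaml
#cloud-config
phone_home:
  # placeholder URL; INSTANCE_ID in the url gets replaced at runtime
  url: http://example.com/$INSTANCE_ID/
  # post everything the module knows about instead of naming keys
  post: all
```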
[12:39] <nvucinic> let's try all :)
[12:39] <harmw> :)
[12:45] <nvucinic> not much, id, hostname and keys
[12:47] <harmw> well, if you need more you could fiddle in some bash script in user-data
[12:47] <harmw> a little adventurous, but fun :)
[13:09] <smoser> nvucinic, yeah, i think there isn't much else. i'm open to adding things.
[13:10] <nvucinic> smoser: yeah, i looked at script now :) 
[13:11] <harmw> smoser: PING
[13:13] <smoser> harmw, whats up?
[13:13] <harmw> uhm, let me think for just a minute...
[13:13] <harmw> hm hm, what was it now...
[13:13] <harmw> :p
[13:14] <harmw> take a look at the fixes from last week and merge, pretty please :)
[13:15] <smoser> did you get the tests to pass on linux ?
[13:15] <harmw> harlowja fixed them
[13:15] <smoser> sweet
[13:15] <smoser> i will merge then. thanks.
[13:16] <harmw> awesome
[13:16] <harmw> and a version bump to a shiny new .tar.gz, so I can submit my fbsd port as well
[13:20] <smoser> you really need that, eh?
[13:20] <harmw> if possible, yes
[13:20] <nvucinic> smoser: is there any way where i can check  what can i get from cloud.get ?
[13:20] <harmw> even 0.7.5.1 is good enough, of course
[13:20] <nvucinic> like hostname, instance id ... 
[13:22] <smoser> harmw, ok. we'll get something this week.
[13:22] <smoser> nvucinic, let me look.
[13:22] <harmw> nice!
[13:23] <nvucinic> smoser: thx :)
[13:24] <smoser> nvucinic, http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/971/cloudinit/config/cc_phone_home.py
[13:24] <smoser> unfortunately, it looks like there isnt a lot. 
[13:25] <smoser> looks like the url can have 'INSTANCE_ID' replaced in it.
[13:25] <smoser> and the things you can post are listed at POST_LIST_ALL
[13:26] <nvucinic> smoser: yeah i replaced instance_id 
[13:26] <nvucinic> smoser: i see that hostname is defined here all_keys['hostname'] = cloud.get_hostname()
[13:26] <nvucinic> so cloud.get_hostname is how you get hostname 
[13:27] <nvucinic> cloud.get_instance_id is how you get instance id
[13:27] <nvucinic> is there cloud.get_xxx with list of things that i can get  ?
[13:27] <harmw> probably not that hard to add some stuff to the cloud object
[13:28] <nvucinic> harmw: yeah, but what else can you get with cloud.get 
[13:29] <harmw> its not cloud.get_BLA
[13:29] <harmw> its get_hostname()
[13:29] <harmw> or get_bla()
[13:29] <harmw> functions inside the cloud object (?)
[13:29] <harmw> so look inside for whatever functions are there to call :)
[13:29] <nvucinic> harmw: great, where can i get list of functions that i can use?: )
[13:30] <harmw> hehe, browsing the source code would be a great place to start
[13:30] <harmw> I don't think this is listed on readthedocs
[13:30] <nvucinic> also if you look at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/971/cloudinit/config/cc_phone_home.py
[13:30] <nvucinic> it's defined as cloud.get_hostname
[13:31] <harmw> def handle(name, cfg, cloud, log, args):
[13:31] <harmw> there's cloud
[13:32] <harmw> so you kinda just want to lookup what/where cc_phone_home is called from
[13:32] <harmw> well handle(), that is
[13:33] <smoser> harmw, nvucinic the cloud object is terribly not well defined. 
[13:34] <smoser> cloudinit/cloud.py
[13:34] <nvucinic> yes, i can see that :)
[13:34] <smoser> nvucinic, so look in there and you can see.
[13:34] <smoser> and you can get cloud.datasource also
[13:34] <nvucinic> smoser: great, ty
[13:34] <smoser> which likely has more (not terribly well defined) information.
[13:35] <smoser> we want to get those settled down to a reasonable structure that we can assume is available on all clouds
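Absent docs, one quick way to see what the cloud object exposes is to introspect it. A sketch with a stand-in class (FakeCloud and its methods are hypothetical illustrations; inside a cc_ module you would pass the real cloud object from handle() instead):

```python
import inspect

# Stand-in for the (admittedly not well defined) object in cloudinit/cloud.py;
# FakeCloud and its method bodies are made up for illustration only.
class FakeCloud:
    def get_hostname(self):
        return "host.example"

    def get_instance_id(self):
        return "i-abc123"

    def _private_helper(self):
        pass

def list_getters(obj):
    """Return the names of the public get_* callables an object exposes."""
    return sorted(name for name, member in inspect.getmembers(obj, callable)
                  if name.startswith("get_"))

print(list_getters(FakeCloud()))  # ['get_hostname', 'get_instance_id']
```

Running `list_getters(cloud)` from inside a module's handle() would list whatever getters the real object actually has.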
[13:51] <nvucinic> smoser: i would like to have the IP address as an argument in the phone-home script
[13:53] <smoser> nvucinic, you can probably get at it.
[13:53] <smoser> is this ec2, nvucinic  ?
[13:54] <nvucinic> smoser: no it's that god awful setup with NoCloud 
[13:55] <smoser> nvucinic, then it's not there. sorry.
[13:55] <nvucinic> smoser: so custom script would be the right way? 
[13:56] <smoser> yeah. 
[13:56] <smoser> you can do a part-handler, or a '#!' 
[13:57] <smoser> just run some code that reads 'ip' output (or does it in python)
[13:57] <nvucinic> yeah
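A hedged sketch of such a '#!' user-data part (the URL is a placeholder; it scrapes `ip route get` output for the source address, since the NoCloud datasource carries no IP):

```shell
#!/bin/sh
# Hypothetical user-data script: find the primary IPv4 address ourselves
# and phone it home, since NoCloud metadata doesn't include it.
primary_ip() {
    # 'ip route get' prints the source address the kernel would use
    ip -4 route get 1.1.1.1 2>/dev/null |
        awk '{for (i = 1; i < NF; i++) if ($i == "src") {print $(i + 1); exit}}'
}

IP=$(primary_ip)
# POST to a placeholder URL; failures are deliberately non-fatal in this sketch
[ -n "$IP" ] && curl -s -m 5 --data "ip=$IP" "http://example.com/phone-home" || true
```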
[16:21] <smoser> harmw, would you mind opening a bug for "here is one unresolved issue regarding path resolving of modules (?), which is why there is one DataSource explicitly enabled in the configuration file (and working/being used)."
[16:27] <smoser> harmw, merged. please test it.
[17:24] <harmw> smoser: sure thing
[17:24] <harmw> and thanks :)
[19:07] <JayF> harlowja: thanks for the comment, I'm going to pick up work on that again today or tomorrow :)
[19:08] <harlowja> kk
[19:08] <harlowja> cool
[19:12] <harlowja> btw, guys something scott and i just threw together, https://etherpad.openstack.org/p/cloud-init-next
[19:12] <harlowja> comments welcome
[19:12] <harlowja> additions welcome...
[19:12] <harlowja> harlowja JayF  ^
[19:12] <harlowja> harmw i mean
[19:12] <harlowja> ha
[19:16] <harmw> harlowja: looks fine to me, though I don't have any higher level additions to share
[19:16] <harlowja> k
[19:16] <harmw> well, Windows would be nice of course
[19:17] <harlowja> windows
[19:17] <harlowja> argggg
[19:17] <harlowja> lol
[19:17] <harmw> like merging some stuff from cloudbase
[19:17] <harlowja> ya, that one, tried that before :-/
[19:17] <harlowja> i'm not so looking forward to that
[19:17] <harmw> hehe
[19:26] <harmw> smoser: just filed that bug
[19:26] <smoser> thanks harmw 
[19:28] <harmw> ok, lets see if the trunk still works :)
[19:28] <harmw> smoser/harlowja, anyone of you using Designate btw?
[19:29] <harlowja> not me
[19:29] <harmw> oh ok
[19:34] <smoser> not me
[19:34] <smoser> harmw, bug number ?
[19:34] <harmw> https://bugs.launchpad.net/cloud-init/+bug/1364580 
[19:39] <harlowja> hmmm, odd
[19:41] <harlowja> can we get a log from that happening harmw ?
[19:41] <JayF> harlowja: is 'work with coreos on their go clone' useful input ;)
[19:42] <harlowja> that sounds right up there with the windows clone
[19:42] <harlowja> lol
[19:42] <JayF> harlowja: I'm 80% kidding, but I did put a note in there about making it easy to vendor your own versions of cloud-init (which go would do)
[19:42] <harlowja> what does mean 'vendor your own versions'?
[19:42] <JayF> so right now
[19:42] <JayF> For Rackspace OnMetal
[19:42] <JayF> every single one of our images
[19:42] <JayF> we package and deploy a custom cloud-init
[19:42] <harlowja> yup, same here
[19:43] <harlowja> *although not that custom, like 1 patch
[19:43] <harlowja> or 2
[19:43] <JayF> This is kind of a pain in the ass because you have to 'lock' the rpm/yum version in place
[19:43] <JayF> which may break ON dist-upgrade or may just BREAK dist-upgrade
[19:43] <harlowja> ok ,don't do that lock stuff then ;)
[19:43] <harlowja> use the vendor provided one ;)
[19:43]  * harlowja which is what we at y! are trying to get to (using the fedora one)
[19:43] <JayF> Vendor provided one doesn't support a huge stack of what we're doing
[19:44] <harlowja> isn't that just getting the code upstream that does that ? :-P
[19:44] <JayF> (which is why I'm here helping upstream our support, so we get out of this... but you and  I both know if it goes upstream now I'm still packaging my own for 5+ yrs)
[19:44] <harlowja> hopefully more like 2 years
[19:44] <harlowja> haha
[19:44] <harlowja> ;)
[19:44] <JayF> It's not :(
[19:44]  * JayF is also working on kernel version support for CentOS 6.5, which is Old Software(tm)
[19:44] <harlowja> overall yes though i know, avoid the fork hell at all possible
[19:45] <JayF> I'd say model that into upstream
[19:45] <harlowja> 6.5 isn't that old, thats brand new
[19:45] <harlowja> we have 5.8 images lol
[19:45] <JayF> cloud-init should assume that a lot of vendors will be packaging cloud-init and make it easier
[19:45] <harlowja> sure, so then what part not easy?
[19:45] <JayF> harlowja: Yeah, it doesn't seem bad, but even a few years with an older kernel makes huge perf differences.
[19:45] <JayF> Python generally doesn't make it easy :)
[19:46] <JayF> right now, like I said, we're just building a new RPM for a distro and 'lock'ing it there
[19:46] <harlowja> so more of what u are getting at is the go statically linking model?
[19:46] <JayF> but some supported 'omnibus' style install (like opscode uses for chef w/the ruby "omnibus" stuff)
[19:46] <JayF> I'm getting at I want to be able to deploy and run a 'tarball' or single file or something very isolated of cloud-init
[19:47] <harlowja> right, the same as statically linking
[19:47] <JayF> well except statically linking is an implementation of my requirement :)
[19:47] <harlowja> sure sure
[19:47] <JayF> I'm specifically not saying statically link it because I know that's mostly a nonstarter
[19:48] <harlowja> so then if we remove more of dependencies in http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/requirements.txt 
[19:48] <harlowja> what about that?
[19:48] <JayF> think about it more this way:
[19:48] <JayF> as a vendor providing a cloud server
[19:48] <JayF> *service
[19:48] <JayF> we need to be able to iterate quickly on cloud-init just like we can on our cloud
[19:49] <JayF> packaging is somewhat of a difficulty there currently, among a bucket of other things :)
[19:49] <harlowja> sure
[19:49] <harlowja> believe me i know, lol
[19:50] <harlowja> but it still feels like a hammer looking for a nail
[19:50] <JayF> the other thing is maybe a philosophical shift: we want a mechanism to update a *booted* system
[19:51] <JayF> i.e. I attach some kind of 'cloud storage' volume, update the metadata service to reflect that, and my cloud init data source sees that, and mounts the volume
[19:51] <harlowja> ya, smoser  has thoughts about that as well afaik
[19:51] <JayF> our use case gets more interesting because we want cloud-init (and other software) to do a lot of the tasks relegated today to a hypervisor
[19:52] <JayF> because hypervisors are for people who want to do things easy :P (just kidding, although it feels that way sometimes :D)
[19:52] <harlowja> what about helping build out something like https://github.com/cloudcadre/giftwrap
[19:53] <harlowja> https://github.com/cloudcadre/giftwrap#giftwrap
[19:53] <smoser> hm.. on a call, will look above later
[19:53] <harlowja> JayF afaik the above does something like what u want
[19:53] <harlowja> https://github.com/cloudcadre/giftwrap#how-it-works
[19:55] <harlowja> so thats why i wonder if its more of a problem thats outside cloud-inits scope
[19:55] <JayF> harlowja: IMO the problem is more on the other side
[19:55] <harlowja> how so?
[19:55] <JayF> harlowja: I build the package, install it in my default image, customer upgrades, gets a 'newer' version of cloud-init that doesn't have my patches, and cloud-init re-runs and does $unexpected_things because it doesn't support our data source
[19:55] <JayF> or if you do a package manager 'hold' in the distro, you restrict the customer from upgrading entire classes of packages
[19:56] <JayF> I don't know how to solve this problem, tbh, but I think something along the lines of chef-omnibus packages have to be the way
[19:56] <harlowja> hmmm, ok, sounds like u need smarter customers ;)
[19:56] <harlowja> problem solved!
[19:56] <harlowja> ha
[19:56] <JayF> because then I never install distro cloud-init in my images, I install cloud-init-omnibus which puts everything in /opt/cloud-init/ and depends on nothing outside that dir (except maybe libc) to run
[19:57] <JayF> harlowja: Our entire spiel at Rackspace is Managed [cloud|server|etc]... my job is to help write software to enable that.
[19:57] <JayF> harlowja: which means I'm the smarter customer :)
[19:57] <harlowja> :)
[19:59] <harlowja> i just start to wonder if there is something wrong though, like amazon afaik doesn't need to do this, yahoo has so far gotten along just fine without this kind of isolation thing, so is it really a problem that we have to resolve by isolating all of cloud-init (and its own version of python and dependencies) to say /opt/cloud-init
[19:59] <JayF> I'd venture to say I'm feeling this pain right now
[19:59] <JayF> because we're doing /new things/
[20:00] <JayF> nobody before us, afaik, has run a public, software-operated, bare metal API-driven provisioning system
[20:00] <harlowja> sure sure
[20:00] <JayF> We're using network configs that are different (vlans on top of bonding)
[20:00] <JayF> on top of all that :)
[20:01] <harlowja> ontop of nicira? 
[20:01] <harlowja> ;)
[20:01] <JayF> fwiw our first major cloud-init change was to stop excluding disk partitions from consideration when looking for configdrives
[20:01] <JayF> harlowja: today, no actually :( $vendor_reasons
[20:01] <harlowja> ya, i know about some of that i think ;)
[20:02] <JayF> We've blogged about some of it in various places
[20:02] <harlowja> ya, i just start to wonder if this is really a good way to go down
[20:03] <harlowja> all of yahoo typically gets installed in /home/y/ and that really has not turned out so well over the years
[20:03] <JayF> hah :)
[20:03] <harlowja> too many packages that forgot to get managed, that someone forked to adjust to fit into /home/y...
[20:03] <JayF> have you seen/used omnibus installed chef?
[20:03] <harlowja> not personally, some other people are
[20:03] <JayF> https://github.com/opscode/omnibus
[20:03] <JayF> as a pattern I don't think this is bad
[20:04] <JayF> kinda an uglier fork of the container pattern, really
[20:04] <JayF> just ship a package, with all its deps shoved into a dir away from the rest of the os
[20:04] <harlowja> ya, i'm not sure what yahoo is replacing it with (something chef afaik) 
[20:04] <JayF> it's ugly I'm sure when you get to the level of installing entire sub-operating systems in ~y/
[20:04] <harlowja> yup
[20:05] <JayF> but your configuration system (be it chef, cloud-init, whatever) being able to install and run regardless of what OS deps are installed
[20:05] <JayF> could be a valuable thing
[20:05] <JayF> again sorta goes to the world view of a tool like cloud-init being 'out of band' similar to a hypervisor... even though obviously it's not :)
[20:05] <harlowja> ya, i'd be +2 to cloud-init in the kernel
[20:06] <harlowja> like kvm 
[20:06] <harlowja> lol
[20:06] <harlowja> i'm sure linus would be fine with that
[20:06] <JayF> HAH
[20:06] <JayF> but you get the general worldview I'm looking with?
[20:06] <harlowja> ya
[20:06] <harlowja> i do
[20:06] <harlowja> although i still have mixed feelings about it, knowing past experiences :-P
[20:06] <harmw> harlowja: you broke my fbsd code!
[20:06] <JayF> the ability to configure the image to utilize $cool_things in my cloud means I need to be able to update cloud-init rapidly as I build $cool_things
[20:06] <harlowja> harmw i might have, ha, who knows
[20:07] <harmw> my branch was working, but trunk now doesn't
[20:07] <harlowja> JayF ya, sure, thats just a slippery slope of becoming your own distro ;)
[20:07] <harlowja> harmw i blame smoser 
[20:07] <harlowja> lol
[20:07] <harmw> :P
[20:07] <JayF> and while I don't expect that long-term to be a 'fork' as it is right now... I do expect to continue to want to run pretty close to 'master' cloud-init ... especially when talking about versions shipped in old (>1y old) distros
[20:08] <JayF> harlowja: heh, we'd not be A distro... we'd be a fork of /every/ distro which is why I'm talking to you guys upstream trying to solve the problem in a better way for everyone than just me doing something on my own
[20:08] <harlowja> isn't that called the 'JayF distro' (with description being a ' a fork of /every/ distro')
[20:08] <harlowja> lol
[20:08] <JayF> hah
[20:09] <JayF> It's called coroes
[20:09] <harlowja> but better!
[20:09] <JayF> *coreos
[20:09] <harlowja> lol
[20:09] <harlowja> jayfos
[20:09] <JayF> and all my 'forks' are containers
[20:09] <JayF> dude if I had my own distro
[20:09] <JayF> it'd be called Old OS
[20:09]  * JayF runs oldos.org
[20:09] <harlowja> lol
[20:09] <harlowja> nice
[20:10] <harmw> btw harlowja I put an example in that bug report
[20:10] <harlowja> harmw that log looks fine :-/
[20:10] <harlowja> 'Using distro class <class 'cloudinit.distros.freebsd.Distro'' ?
[20:11] <harlowja> looks like its trying the openstack one
[20:11] <harmw> 2014-09-02 19:47:06,862 - importer.py[DEBUG]: Failed at attempted import of 'freebsd' due to: No module named freebsd
[20:11] <harmw> 2014-09-02 19:47:06,892 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.freebsd.Distro'>
[20:11] <harlowja> ya, look at the line after that
[20:11] <harlowja> its basically logging the attempts
[20:12] <harlowja> ya, perhaps i should shutup that log
[20:12] <harmw> i posted another one
[20:12] <harmw> nah wait, let me post one with specifying the datasource in cloud.cfg
[20:13] <harmw> Ill do that later on
[20:13] <harlowja> kk
[20:13] <harlowja> i'm gonna shutup that log that says failed imports
[20:13] <harlowja> too confusing
[20:13] <harlowja> u aren't the first to say wtf it failed importing
[20:13] <harlowja> when it really didnt, ha
[20:13] <harmw> yea, Ill try and find out why the F fbsd is failing on me right now with trunk code
[20:14] <harlowja> kk
[20:22] <harlowja> JayF anyways, lets see what smoser thinks, he's from more of the world of distro-land than i am
[20:23] <harlowja> yahoo i know is trying to move forward with stuff like docker and all, so i can see how it might all work out 
[20:23] <JayF>  I am trying to move forward with stuff like docker and all :) 
[20:23] <JayF> but I have to support whatever reasonable patterns my customer wants
[20:23] <JayF> s/customer wants/customers want/
[20:25] <harlowja> ya, although sometimes customer not doing the right thinkg ;)
[20:25] <harlowja> *thing
[20:28] <JayF> that's where the 'reasonable' part comes in :)
[20:29] <harlowja> ;)
[20:30] <smoser> harmw, i'm being asked for cirros 0.3.3 with mtu fix.
[20:30] <harmw> don't we have that already?
[20:30] <harmw> the fix, that is
[20:31] <harmw> or is that to a pending merge?
[20:32] <smoser> i forget. i thought maybe i had it with other changes. let me look if we can easily do a 0.3.3 
[20:32] <smoser> that has it.
[20:32] <harmw> there is some code from me (?) in LP to address this
[20:32] <harmw> udhcpc wrapper stuff
[20:35] <harmw> smoser: https://bugs.launchpad.net/cirros/+bug/1301958
[20:39] <smoser> right
[20:39] <smoser> +## currently disabled as it depends on busybox and buildroot
[20:39] <smoser> +## version numbers (in the patch paths).
[20:39] <smoser> +## note, that without this the cirros-udhcpc is inert, but unoffensive
[20:39] <smoser> that was why it was non-trivial
[20:40] <harmw> ahyes, you had to upgrade buildroot first
[20:45] <harmw> hmgrr, setting the hostname is broken in my fbsd code... it doesn't set what it got from metadata, but instead re-uses `hostname`, or something
[20:51] <harmw> ah, so datasource.get_hostname() is returning that
[20:57] <harmw> bingo: why is it entering this... if self.metadata or 'local-hostname' not in self.metadata:
[20:57] <smoser> later all. i have to run.
[20:58] <gholms> o/
[21:08] <harmw> harlowja: http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1000.1.1/cloudinit/sources/__init__.py
[21:08] <harmw> I believe that has brutally broken the setting of hostnames
[21:08] <harmw> at least in my current testing of the fbsd module
[21:09] <harmw> since I just confirmed to have a perfectly fine hostname (host.bla.tld) inside self.metadata['local-hostname']
[21:10] <harmw> so please revert that smoser or harlowja :)
[21:10] <harlowja> ya, seems like it could
[21:10] <harmw> JayF: you broke it :p
[21:11] <harlowja> harmw lets see what JayF thinks, i wonder if that was just a mistake
[21:11] <harmw> I'm guessing thats a yes :)
[21:11] <harlowja> me to
[21:12] <JayF> wait what
[21:12] <harmw> oh well, it's always nice to get the opportunity to dive into the inner workings of c-i :P
[21:12] <harlowja> ;)
[21:12] <JayF> oh man I have a handful of changes like that :(
[21:13] <harmw> lets hope your hands are small, shall we :P
[21:13] <JayF> Ooooh, I see how this one changes the logic
[21:13] <harlowja> pretty sure its 'if not self.metadata or 'local-hostname' not in self.metadata: '
[21:13] <harlowja> or supposed to be ^
[21:13] <JayF> pep8 screams about that
[21:13] <harlowja> odd
[21:13] <harmw> the old line was fine though, just reverted it locally and *everything* just works now :)
[21:14] <JayF> harmw: make pep8 will fail though, I presume :(
[21:14]  * JayF might be too used to being able to trust unit tests
[21:14] <harmw> don't know, I don't have all the test tools in this instance to verify
[21:14] <harlowja> let me try
[21:18] <harlowja> harmw JayF https://code.launchpad.net/~harlowja/cloud-init/fixer-up/+merge/233126 
[21:18] <harlowja> also https://code.launchpad.net/~harlowja/cloud-init/better-errors/+merge/233125
[21:21] <harlowja> seems like pep8 doesn't complain about 233126
[21:21] <harlowja> and retains the same old logic
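Side by side, the one-character regression and its fix are easy to see in isolation. This is a minimal stand-in for the guard being discussed, not the real DataSource method (FALLBACK stands in for the system hostname lookup):

```python
FALLBACK = "fallback.local"  # stands in for falling back to the system hostname

def hostname_broken(metadata):
    # The regressed line: any non-empty metadata dict makes this condition
    # true, so the metadata-provided hostname is never used.
    if metadata or 'local-hostname' not in metadata:
        return FALLBACK
    return metadata['local-hostname']

def hostname_fixed(metadata):
    # Intended logic: fall back only when metadata is missing entirely
    # or lacks the 'local-hostname' key.
    if not metadata or 'local-hostname' not in metadata:
        return FALLBACK
    return metadata['local-hostname']

md = {'local-hostname': 'host.bla.tld'}
print(hostname_broken(md))  # fallback.local  (the symptom harmw hit)
print(hostname_fixed(md))   # host.bla.tld
```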
[21:21] <JayF> gimme one sec, fixing a network issue
[21:21] <JayF> should we also write a check to catch this instance?
[21:21] <JayF> s/check/test/
[21:22] <harlowja> probably
[21:22] <JayF> if you say pep8 works, then +1 to the change
[21:23] <harmw> harlowja: 126 is spot on
[21:24] <harlowja> ya, (cloud-init)harlowja@getlookcrowd[0]:~/dev/os/cloud-init $ pep8 cloudinit/sources/__init__.py 
[21:24] <harlowja> (cloud-init)harlowja@getlookcrowd[0]:~/dev/os/cloud-init $ 
[21:24] <harlowja> no errors/complaints
[21:24] <harmw> god, and this only took 2 precious hours of my tonight-time..
[21:24] <JayF> harmw: :( I'm sorry man
[21:25] <harmw> and you should!
[21:25] <harmw> :p
[21:25] <harmw> nah, I've seen worse
[21:26] <harmw> harlowja: you're merging this? or do you need permission from the supreme uber godlike master wizard?
[21:27] <harlowja> harmw i can merge, although i'll just let smoser to,  where'd he go
[21:27] <harlowja> i usually just delegate to him ha
[21:27] <harmw> :>
[21:27] <harlowja> :)
[21:28] <harmw> well, other than this the fbsd code works btw. Having it set the hostname and adding a user+ssh key works just fine
[21:29] <harlowja> cool
[21:34] <JayF> I can't pull this right now, merge it if you think it's right
[21:36] <JayF> sorry :( dealing with local network stuff 
[21:37] <harlowja> ya, i'll let smoser pull it in unless its so critical can't wait
[21:37] <harlowja> then i'll try to remember the merge stuff, haha
[21:38] <JayF> harmw: we were just talking around here the other day about how neat it'd be to have fbsd support :)
[21:38] <JayF> harmw: (here=rackspace onmetal)