[00:07] <skullone> anyone run into a 'No distribution found for distro rhel' error?
[00:10] <skullone> Running CentOS 6.5, and it seems a recent update in EPEL may have broken cloud-init
[00:22] <skullone> ah, found the issue
[00:22] <skullone> someone has a newer python-configobj in our internal repo, which seems to be buggy, and it trips up cloud-init
[00:39] <JayF> I don't think centos6.5 offers a cloud-init package officially?
[00:56] <smoser> skullone, can/should probably just drop the python-configobj usage.
[00:56] <smoser> that is really only for *very* legacy stuff.
[01:04] <harlowja> and for all that sysconfig stuff on rhel, lol
[01:04] <harlowja> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/distros/parsers/sys_conf.py#L56
[01:04] <harlowja> ;)
[02:33] <skullone> JayF: been on EPEL for a while now
[02:33] <skullone> not official, but about as official as itll be
[14:01] <smoser> harmw, 0.7.6 ?
[14:19] <harmw> yea well, I'm expecting my cloud to come back online so I can scavenge the fbsd port files from there :p
[14:20] <harmw> but nonetheless, I don't see a reason to keep you from releasing 0.7.6
[14:20] <harmw> well, I did the only testing so that kinda sucks
[14:22] <harmw> smoser: I think 0.7.6 would be nice, and just wait till next week with the proper announcement
[14:22] <harmw> the port should be ready by then as well
[14:22] <smoser> proper announcement ?
[14:23] <harmw> yea well, announcement or whatever
[14:23] <harmw> I don't even know if you do that though :p
[15:02] <harmw> hm, using neutron ML2 with openvswitch... brctl shouldn't display any bridges, right?
[15:02] <harmw> let's hope a node reboot will make sure the right neutron service is started this time
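[Editor's note: with the ML2/openvswitch driver, the bridges live in Open vSwitch rather than the kernel bridge layer, so an empty `brctl show` is expected and `ovs-vsctl show` is the place to look. A small sketch of pulling bridge names out of that output — the sample text and helper are illustrative, not captured from a real neutron node:]

```python
def ovs_bridges(show_output):
    """Extract bridge names from `ovs-vsctl show`-style output."""
    bridges = []
    for line in show_output.splitlines():
        line = line.strip()
        if line.startswith("Bridge "):
            # names may or may not be quoted depending on OVS version
            bridges.append(line.split(None, 1)[1].strip('"'))
    return bridges

sample = """\
0f1a2b3c-d4e5
    Bridge br-int
        Port br-int
    Bridge "br-tun"
        Port patch-int
"""
```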
[15:10] <smoser> harmw, i'd just send an email to cloud-init log
[15:10] <smoser> err blog
[15:10] <smoser> er.... mailing list
[15:11] <harmw> oh my, we have our very own ML?
[15:15] <smoser> :)
[15:18] <harmw> you really want me to google now, right?
[15:19] <smoser> https://lists.launchpad.net/cloud-init/maillist.html
[15:20] <smoser> harmw, i'm going to call 0.7.6
[15:21] <harmw> sure
[15:21] <harmw> 39 of 39 messages, page 1
[15:21] <harmw> really
[15:22] <harmw> that's all :p
[15:24] <harmw> 'IRC room dead? + Feature requests '
[15:24] <harmw> I'd hardly call this room dead
[15:27] <smoser> :)
[15:27] <smoser> thanks harm.
[15:27] <smoser> https://launchpad.net/cloud-init/trunk/0.7.6
[15:27] <harmw> wicked
[15:31] <smoser> email sent
[18:17] <harmw> smoser: connectivity with my cloud is restored :)
[18:27] <smoser> whoowhoo
[19:18] <harlowja> hiren_ how's the fork going
[19:18] <harlowja> u gonna be out for a while ?
[19:28] <harmw> harlowja: you ever played with serial console access on kvm/openstack?
[19:28] <harmw> eg. write to console.log, instead of just reading it?
[19:32] <smoser> harmw, ?
[19:32] <smoser> write to consolelog ?
[19:32] <harmw> yup, providing serial input 
[19:32] <harmw> but console.log is readonly, of course
[19:34] <smoser> from inside ?
[19:34] <smoser> or outside ?
[19:34] <harmw> outside
[19:34] <smoser> i think it might have been hiren_ who i was talking to at the last summit. it was someone from yahoo.
[19:35] <smoser> the issue is that libvirt puts it to a file
[19:35] <smoser> so that it can be logged.
[19:35] <harmw> yup
[19:35] <smoser> you'd need a multiplexer of some sort there as the target
[19:35] <smoser> if you wanted to get a console then.
[19:35] <smoser> such multiplexers do exist.
[19:35] <smoser> there is also bug 832507
[19:35] <smoser> that is related.
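[Editor's note: the file target smoser describes comes from the libvirt domain XML. A sketch of the difference — paths and UUIDs are placeholders — showing the read-only file device versus a Unix-socket device that a multiplexer could sit behind:]

```xml
<!-- read-only: libvirt appends guest console output to console.log -->
<serial type='file'>
  <source path='/var/lib/nova/instances/UUID/console.log'/>
  <target port='0'/>
</serial>

<!-- bidirectional: expose the console on a unix socket instead;
     a multiplexer listening here can both log output and accept input -->
<serial type='unix'>
  <source mode='bind' path='/var/run/console-UUID.sock'/>
  <target port='0'/>
</serial>
```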
[19:37] <harlowja> harmw i've not done much directly, someone here's done it
[20:35] <harmw> finally
[20:35] <harmw> victory
[20:36] <harmw> both disks in this instance were feeling ill, and because there was no vnc way of accessing 'em, I just mounted both to some other instance
[20:36] <harmw> +1 for virsh attach
[20:37] <harmw> smoser: cheetah is no longer a dependency, right?
[20:39] <smoser> correct.
[20:41] <harmw> ok, building freebsd port now
[20:50] <harlowja> JayF yt
[20:50] <JayF> yeah but pretty busy
[20:51] <JayF> what's up?
[20:51] <harlowja> np
[20:51] <JayF> irc is async so ask but I reserve the right to make you wait ;)
[20:51] <harlowja> a co-worker wanted to try out a rackspace baremetal thing, and i was like maybe JayF can help
[20:51] <JayF> the wonderful thing is
[20:51] <JayF> you don't need my help
[20:52] <harlowja> ?
[20:52] <JayF> just signup for a standard rackspace cloud account; when you spin up servers use the "OnMetal - *" images from image-list and onmetal-{io,compute,memory}1 flavors
[20:52] <JayF> or if you use mycloud.rackspace.com, there's a tab for it
[20:52] <harlowja> but then i like have to pay for that? :-P
[20:52] <JayF> I mean, free stuff you have to talk to folks other than me :P
[20:52] <harmw> couponcode: JayF-R0XXX
[20:53] <JayF> for purposes of "look at the shiny thing we did with ironic/cloud-init" I would spin up a node for you
[20:53] <harlowja> hmmmm, sureeeeeee, would that be possible :)
[20:53] <JayF> but if it's a co-worker, that's a little far down the path
[20:53] <harlowja> nah, she just wants to look at what u guys did with cloud-init and ironic
[20:53] <JayF> I mean, our patched cloud-init is here : https://github.com/pquerna/cloud-init/pull/1/files
[20:54] <JayF> and our builders are: https://github.com/racker/cloud-init-docker-build which use https://github.com/racker/cloud-init-debian-pkg and https://github.com/racker/cloud-init-fedora-pkg
[20:54] <JayF> the only thing we're doing otherwise to enable it
[20:55] <JayF> is a patch to enable Configdrive in Ironic, which is upstream somewhere in Gerrit
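[Editor's note: on the cloud-init side, the ConfigDrive datasource reads JSON metadata from the mounted drive's OpenStack layout. A minimal sketch of that read — the helper name is made up, not cloud-init's actual datasource code; the `openstack/<version>/meta_data.json` path follows the OpenStack config-drive convention:]

```python
import json
import os

def read_configdrive_metadata(mount_point, version="latest"):
    """Load meta_data.json from an OpenStack-style config drive
    mounted at mount_point. Illustrative helper only."""
    path = os.path.join(mount_point, "openstack", version, "meta_data.json")
    with open(path) as f:
        return json.load(f)
```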
[20:56] <harlowja> kk, that was one of her questions, config drive and such, anyways
[20:57] <JayF> jroll: just in search of our upstream patches for configdrive
[20:57] <JayF> jroll: have they ever been split out from the giant agent patch of giantness?
[20:58] <jroll> JayF: oh, we haven't upstreamed them yet because process
[20:58] <jroll> openstack process, that is
[21:01] <JayF> jroll: so the only configdrive support we have in the open right now is in Josh's old giant agent patch?
[21:01]  * harlowja not this josh
[21:01] <harlowja> lol
[21:01] <JayF> harlowja: JoshNang is josh :)
[21:01] <harlowja> ;)
[21:02] <JayF> harlowja: Rackers running Ironic are slowly invading all your IRC haunts, better watch out
[21:02] <harlowja> :-P
[21:02] <jroll> JayF: yes
[21:02] <JoshNang> o/
[21:03] <spandhe_> hey JayF, are u guys using config drive for Ironic?
[21:03] <harlowja> spandhe_ (the coworker)
[21:03] <harlowja> ;)
[21:03] <JayF> hi spandhe_ 
[21:03] <JayF> yeah, we hope to have it upstream'd in K
[21:04] <JayF> there's a spec up
[21:04] <JayF> and I can link you to a really old patch that has a lot of other stuff in it that has basically what we're running
[21:04] <spandhe_> JayF: thats cool! can you give me the link for the spec?
[21:04] <JayF> https://review.openstack.org/#/c/98930/ + https://review.openstack.org/#/c/99235/
[21:06] <JayF> spandhe_: and some details on the general way we run onmetal -> http://developer.rackspace.com/blog/how-we-run-ironic-and-you-can-too.html -- that includes a link to the large older patch that'll never go upstream (https://review.openstack.org/#/c/84795/) which contains the configdrive bits
[21:06] <JayF> just one of those things where that patch reflected, more or less, what we were doing at the time, but to get it upstream we had to chop it into smaller pieces
[21:07] <JayF> we're all also in #openstack-ironic and glad to answer any questions you have :)
[21:07] <spandhe_> thanks JayF ! I am going to go through these.. Will bug you in case I have any questions.. 
[21:08] <JayF> spandhe_: just bug #openstack-ironic in general, during PST business hours and sometimes outside of it the team is in there
[21:08] <JayF> and I think we all listed our IRC names on that blog jim wrote that I just linked
[21:36] <harmw> smoser: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194295
[22:58] <smoser> woot. thanks.
[22:59] <gholms> Nice
[23:00] <smoser> gholms, you going to re:invent?
[23:00] <smoser> or openstack summit?
[23:01] <gholms> There's apparently an all-hands during re:invent, so now I don't get to go to that.  Again.  :-\
[23:01] <gholms> Not sure about the openstack summit.  When was that?