[15:09] <ancoron_z> Hello everyone. I have a small question about errors during cloud-init execution
[15:10] <ancoron_z> is it possible to have them reported using OpenStack events or messages?
[15:20] <smoser> ancoron_z, you could have something do that.
[15:20] <smoser> recent cloud-init writes /run/cloud-init files
[15:21] <smoser> that you could read and pass back/post somewhere.
[15:21] <smoser> but there isn't a place to report back to in openstack
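A minimal sketch of what smoser describes, in Python 2 to match cloud-init of this era: read the result.json that recent cloud-init writes under /run/cloud-init and POST any recorded errors somewhere. The collector URL is hypothetical, since, as noted, OpenStack offers no standard place to report back to; verify the "v1"/"errors" layout against your cloud-init version.

    import json
    import urllib2

    with open('/run/cloud-init/result.json') as f:
        result = json.load(f)

    errors = result.get('v1', {}).get('errors', [])
    if errors:
        # POST the error list to a made-up collector endpoint
        req = urllib2.Request('http://collector.example/cloud-init-errors',
                              data=json.dumps(errors),
                              headers={'Content-Type': 'application/json'})
        urllib2.urlopen(req)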
[15:23] <ancoron_z> smoser: I was digging through the code a bit and found the notion of a "PublishErrorsHandler" and a fallback to some OSLO-related handler: https://github.com/stackforge/cloudbase-init/blob/master/cloudbaseinit/openstack/common/log.py#L503
[15:23] <ancoron_z> I just didn't find any documentation what this actually means or is supposed to do
[15:27] <JayF> That's not cloud-init.
[15:27] <JayF> That's something else.
[15:31] <ancoron_z> Ah, sorry. Yes, I see.
[15:32] <JayF> No problem, just letting you know you're probably in the wrong place.
[15:32] <JayF> That's also a pretty new project, I'm curious how much of cloud-init's functionality they've dup'd already
[15:33] <ancoron_z> Yes, I'm trying my first steps with it currently.
[15:34] <JayF> If you're interested in cloud-init, it's hosted in launchpad, and has packages for most distributions
[15:34] <ancoron_z> yes, I found it already - looking at trunk branch currently
[15:34] <JayF> And generally either cloud-init or nova-agent is used by most folks to configure cloud instances on boot for Openstack.
[15:35] <JayF> I think cloudbase-init is just trying to generally add competition to that space; I know nothing other than the openstack-dev email about it.
[15:35] <ancoron_z> My first use case is support for Windows - for whatever cloud use case (I wouldn't have one...)
[15:37] <alexpilotti> JayF: cloudbase-init is not trying to add competition
[15:37] <alexpilotti> JayF: we simply started the project to support Windows in OpenStack and other clouds
[15:37] <JayF> alexpilotti: aha, awesome :)
[15:37] <alexpilotti> JayF: and we’d be happy to merge with cloud-init under the proper conditions
[15:37] <JayF> alexpilotti: in that case, I'm sure one day I'll be using your stuff too, haha
[15:37] <alexpilotti> heh
[15:38] <JayF> alexpilotti: I work on Openstack Ironic w/Rackspace. We use cloud-init extensively for our stuff (which is all linux-images atm)
[15:38] <alexpilotti> JayF: nice, I know that the HP folks are working on Ironic on the Windows side (using cloudbase-init)
[15:39] <alexpilotti> JayF: we also started to work with Ironic, it’s (ahem, ironically) the only major bare metal deployment project we didn’t contribute to
[15:39] <JayF> Spiffy, like I said, knowing it's focused on windows means I'll likely end up chatting and working with you as well at one point :)
[15:39] <alexpilotti> JayF: as we ported Ubuntu MaaS and Crowbar to Hyper-V/Windows
[15:39] <JayF> Ah, I don't know things about Crowbar
[15:40] <JayF> I mean, we wrote a deploy driver for ironic, we're pretty embedded into that, haha
[15:40] <alexpilotti> JayF: SUSE is using it, we did the port with them more than 1 year ago
[15:40] <JayF> nice
[15:41] <alexpilotti> but considering that Crowbar as a project is getting nowhere
[15:41] <alexpilotti> the remaining alternatives are Ironic and MaaS
[15:41] <alexpilotti> we just finished the latter, so it’d be time to start helping out on Ironic as well :-)
[15:41] <JayF> I mean, I have trouble finding affection for MaaS
[15:41] <alexpilotti> c’mon, it’s a nice tool
[15:42] <JayF> since it's very limited scope and was basically developed to integrate with Openstack ... and all that dev work went into something Canonical specific instead of Ironic :(
[15:42] <alexpilotti> well, competition helps IMO
[15:43] <alexpilotti> anyway, we’d like to do some more work for Windows on Ironic
[15:43] <JayF> Well right now I'd think your likely best bet is the agent driver that we've written
[15:43] <JayF> since it's the only one that supports whole disk images
[15:43] <alexpilotti> k
[15:44] <JayF> I mean, pretty much it just puts a glance image onto a disk
[15:44] <alexpilotti> is it on stackforge?
[15:44] <JayF> it's in ironic proper
[15:44] <JayF> just the agent deploy driver
[15:44] <alexpilotti> cool
[15:44] <JayF> (the other one is called pxe and utilizes iscsi, but atm doesn't support anything but pxe booting forever)
[15:44] <alexpilotti> that should make things transparent, like curtin for MaaS
[15:44] <JayF> if you had an image that would run windows on your hardware
[15:45] <JayF> I'd imagine Ironic would and could deploy it today
[15:45] <JayF> using the agent driver
[15:45] <alexpilotti> once the image boots and cloud-init / cloudbase-init or whatever else starts
[15:45] <alexpilotti> you just offer standard Nova metadata via HTTP?
[15:46] <JayF> If your cloud is setup to expose a metadata service
[15:46] <JayF> Ironic is like a hypervisor in the ecosystem
[15:46] <JayF> Nova-compute has a driver which talks to the Ironic-API to provision nodes
[15:46] <alexpilotti> cool, I was expecting that
[15:46] <JayF> not much differently than it would, say, talk to a xenapi
[15:46] <alexpilotti> yep, I’m aware of the driver
[15:47] <alexpilotti> the only thing I didn’t check yet is if there are changes in the metadata model
[15:47] <JayF> I mean, not at all
[15:47] <JayF> we're using a downstream patch at Rackspace to support ConfigDrive
[15:47] <alexpilotti> perfect
[15:47] <JayF> but that's actively being upstreamed
[15:47] <JayF> and we're open about what/how we run even if it's not in Ironic proper yet
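For context on the metadata exchange above, a minimal sketch of consuming the standard Nova metadata service: the well-known 169.254.169.254 address and the /openstack/latest/meta_data.json path are standard Nova, but whether an instance can reach them depends on how the cloud is set up (the same JSON can also be read from a ConfigDrive).

    import json
    import urllib2

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    # fetch and parse the instance's metadata document
    meta = json.load(urllib2.urlopen(URL, timeout=10))
    print meta['uuid'], meta['hostname']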
[15:48] <alexpilotti> another dummy question, what lights out options does ironic support?
[15:48] <JayF> feel free to idle in #openstack-ironic and ask questions :)
[15:48] <alexpilotti> I mean IPMI, ATM, etc
[15:48] <JayF> alexpilotti: lights out?
[15:48] <JayF> alexpilotti: ah
[15:48] <JayF> alexpilotti: we call those currently "Management" or "Power" drivers
[15:48] <JayF> there's two for IPMI (one shells to ipmitool; this is what we run, one uses a native ipmi python library)
[15:49] <JayF> one for iLO that supports doing the deploy via virtual media instead of pxe (as well as just talking BMC to the iLO)
[15:49] <JayF> one for DRAC, one for iBoot, one for SNMP-driven PDUs
[15:49] <JayF> that's what I can think of off the top of my head
[15:49] <JayF> oh yeah, the AMD seamicro boxes too, they have a management driver
[15:49] <alexpilotti> ah ok, so no ATM?
[15:49] <JayF> ATM?
[15:50] <alexpilotti> the one that comes with Intel vPro
[15:50] <JayF> I don't know? Apparently not, then?
[15:50] <alexpilotti> AMT, sorry
[15:50] <JayF> Yeah, I haven't heard of it
[15:50] <JayF> but like I said, we use agent_ipmitool driver
[15:50] <JayF> so that's what I know the most about :)
[15:50] <alexpilotti> http://en.wikipedia.org/wiki/Intel_Active_Management_Technology
[15:51] <alexpilotti> let me switch to the ironic channel :-)
[15:51] <ancoron_z> Isn't AMT related to WS-MAN?
[15:54] <ancoron_z> Oh yes, found it again: https://software.intel.com/en-us/articles/ws-management-and-intel-active-management-technology-a-primer
[15:55] <ancoron_z> sometimes, it would really be nice if everyone would collaborate on a standard...
[18:46] <smoser> hey harlowja 
[18:46] <smoser> tell me how you'd do this.
[18:47] <harlowja> smoser sup
[18:47] <smoser> i need to edit a file in /etc/
[18:47] <smoser> as the nova user
[18:47] <smoser> and lock it while editing is taking place
[18:47] <smoser> and that file owned by root
[18:47] <smoser> one idea is a util program that does that. and put that program in /etc/nova/rootwrap.d/
[18:48] <smoser> but i want to know how the master would do it.
[18:48] <harlowja> lol
[18:48] <harlowja> lock it from whom?
[18:49] <smoser> advisory locking
[18:49] <smoser> as in the file can only ever exist in a correct state, and 2 processes editing it at once need to not collide.
[18:49] <smoser> hm.
[18:49] <smoser> thats assuming (i was) that nova compute is multi-threaded
[18:50] <smoser> but now i think i'm wrong on that.
[18:50] <harlowja> nah, u are right, it is
[18:50] <smoser> so maybe i dont need the locking per se.
[18:50] <harlowja> its eventlet multi-threaded, which for all intents and purposes is 'multi-threaded'
[18:51] <harlowja> so i think u need to make a rootwrap.d program (to get around the nova user problem)
[18:51] <harlowja> and then have that program use https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L55 
[18:51] <harlowja> (or similar)
[18:51] <harlowja> to avoid collisions
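A rough sketch of what harlowja suggests: a small helper, which would be whitelisted through /etc/nova/rootwrap.d so the nova user can run it as root, serializing itself with the lockutils module linked above. The helper logic and lock name are illustrative; the synchronized() signature matches oslo-incubator lockutils of this era.

    from openstack.common import lockutils

    # external=True takes a cross-process file lock, not just a thread lock
    @lockutils.synchronized('edit-config', 'nova-', external=True)
    def edit_file():
        # read the root-owned file, modify it, write it back
        pass

    if __name__ == '__main__':
        edit_file()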
[18:51] <smoser> yeah. i didn't know of os.common.lockutils
[18:51] <smoser> was just going to use python lockfile (which is probably what that uses)
[18:51] <smoser> hm..
[18:51] <harlowja> not really 
[18:51] <harlowja> python lockfile sorta not so good
[18:52] <harlowja> ^ is better
[18:52] <smoser> nice.
[18:52] <harlowja> 'Since the lock is always held on a file descriptor rather than outside of the process, the lock gets dropped automatically if the process crashes, even if __exit__ is not executed.'
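The property being quoted falls out of holding the lock on an open file descriptor: the kernel releases an flock()/fcntl lock when the descriptor is closed, which happens automatically when the process exits or crashes. A bare illustration with the stdlib (the lock path is made up):

    import fcntl

    lockf = open('/var/lock/demo.lock', 'w')
    fcntl.flock(lockf, fcntl.LOCK_EX)  # blocks until the lock is acquired
    # ... critical section ...
    lockf.close()  # closing releases the lock, even if the process crashes first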
[18:52] <harlowja> python lockfile (this is being taken over by openstack btw) is getting better
[18:52] <harlowja> *see https://review.openstack.org/#/c/122253/ 
[18:52] <harlowja> for that takeover
[18:53] <harlowja> which is still failing jenkins, arg
[18:53] <smoser> ok. 
[18:53] <harlowja> so there u go
[18:53] <harlowja> :-P
[18:53] <smoser> i'm pretty sure i could trick some program already in rootwrap.d to do what i wanted.
[18:54] <harlowja> likely
[18:54] <smoser> hopefully later today i'll forget that i opened /etc/nova/rootwrap.d/
[18:54] <harlowja> hahahahha
[18:54] <harlowja> i avoid looking there
[18:54] <harlowja> *scary*
[18:54] <smoser> kill_shellinaboxd: KillFilter, root, /usr/local/bin/shellinaboxd, -15, -TERM
[18:54] <harlowja> :-/
[18:54] <smoser> that one sounds good. :)
[18:55] <smoser> but 'dd', 'cp', all sorts of stuff that can do arbitrary things.
[18:55] <harlowja> yup
[18:55] <harlowja> thats why i avoid looking there
[18:55] <harlowja> i don't want to cry
[18:56]  * harlowja still wishes the openstack community would just make sudo better
[18:56] <harlowja> but that never seemed to happen
[18:56] <harlowja> or do something else, lol
[18:57] <smoser> well, sudo or not. if you have arbitrary code execution as the nova user, you have root.
[18:57] <harlowja> sure
[18:58] <smoser> so in a sense, you can just make it easier and run as root :)
[18:58] <harlowja> :)
[18:59] <smoser> so is there some way to do this wrapper thing other than like above ?
[18:59] <harlowja> run as root?
[18:59] <harlowja> lol
[18:59] <harlowja> then just do it in nova code (with a lock there)
[19:00] <smoser> no. i meant, is the 'kill_shellinaboxd' entry the type of thing i'd add.
[19:00] <harlowja> write to tempfile, run as root to copy it (do this in a locked block)
[19:00] <smoser> i just add a command to my program and then have to deal with knowing the path to my program
[19:00] <smoser> thats the pita part of it.
[19:00] <harlowja> write from nova -> tempfile, use rootwrap + cp, lol
[19:01] <smoser> yeah, but rootwrap wont get me atomic updates.
[19:01] <smoser> er.. cp wouldnt.
[19:01] <smoser> anyway, thanks for your help.
[19:01] <harlowja> how so, wrap the code in nova that is doing this in a lock
[19:01] <harlowja> with lock:
[19:02] <harlowja>     get tempfile
[19:02] <harlowja>     do stuff
[19:02] <harlowja>     call cp with rootwrap
[19:02] <smoser> right. but that wouldnt be atomic.
[19:02] <smoser> i need os.rename
[19:03] <harlowja> isn't mv the same (use mv instead of copy?)
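To make the last few lines concrete: os.rename() (and mv within one filesystem, which is the same rename(2) call, as harlowja points out) atomically replaces the destination, so readers only ever see the old or the new complete file. The tempfile must live in the same directory for that to hold; names here are illustrative.

    import os
    import tempfile

    def atomic_write(path, data):
        # create the tempfile next to the target so the rename stays atomic
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            os.write(fd, data)
            os.fsync(fd)  # flush contents to disk before the rename
        finally:
            os.close(fd)
        os.rename(tmp, path)  # atomically replaces the old file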
[19:09] <JayF> smoser: you should consider nova compute as multithreaded
[19:09] <JayF> smoser: because many clustered hypervisors (like Ironic/VMWare) can have multiple nova computes
[19:09] <JayF> smoser: even though it sounds like you have to do that anyway
[19:09] <harlowja> well anything, even using eventlet is still conceptually multi-threaded
[19:10] <harlowja> eventlet doesn't take that part away, it just hides it a little
[19:12] <harlowja> eventlet == temptress imho 
[19:12] <harlowja> tempts u into a pit that u can't get back out of
[19:35] <harlowja> durn systemd, lol
[19:35] <harlowja> https://code.launchpad.net/~harlowja/cloud-init/fixed-rhel7-test/+merge/238766
[19:40] <harlowja> guess i'm the first one to run tests on rhel7, lol
[19:59] <JayF> I mean, I don't know about running tests
[19:59] <JayF> we sure as hell use cloud-init on centos7 though
[22:29] <harlowja> ya, before building an rpm i have a little piece of bash script that runs the tests and such (then applies a few internal patches)
[22:32] <harlowja> and just got asked to build a custom one for rhel7 (instead of using the epel one)
[22:47] <harlowja> JayF how are u building cloud-init (using the epel one)? wondering cause using the builtin way of using tools/brpm required some tweaks for rhel7
[23:14] <gholms> harlowja: What if you just copypaste Fedora's spec file?
[23:14] <harlowja> nearly, i fixed it all up enough to get this to work
[23:14] <harlowja> using some parts of the fedora one
[23:14] <harlowja> https://code.launchpad.net/~harlowja/cloud-init/rpm-spec-fixups and others i'll push up
[23:14] <harlowja> https://code.launchpad.net/~harlowja/cloud-init/fixed-rhel7
[23:19] <harlowja> gholms check those out, trying to make it work in the easiest manner
[23:27] <JayF> harlowja: you can reproduce our build using  https://github.com/racker/cloud-init-docker-build
[23:27] <harlowja> git clone https://github.com/jayofdoom/cloud-init-fedora-pkg
[23:27] <harlowja> :-/
[23:27] <harlowja> questionable, lol
[23:27] <JayF> haha
[23:27] <JayF> those are all moved to /racker/
[23:28] <JayF> so sorry if the pointers are outta date a little
[23:28] <harlowja> hehe
[23:28] <harlowja> ideally u should be able to checkout cloudinit; run make rpm
[23:28] <JayF> but all our builds come from that docker stuff
[23:28] <JayF> Well we absolutely can't because we have our downstream patch still
[23:28] <harlowja> or if u have patches run packages/brpm -p $patch1 -p $patch2
[23:28] <JayF> but I agree with the general gist of what you say
[23:28] <JayF> I just like using docker builders like that because you can build on any machine that has docker
[23:28] <JayF> you're welcome to use that, fork it, etc
[23:29] <JayF> I'm going to go back to not looking at a computer screen and hoping it nukes the headache
[23:29] <harlowja> thx, although cloud-init building of rpms should work (instead of not) :-P
[23:29] <JayF> gl and if you have specific questions not sure I can answer them because all my knowledge is in those scripts and I recovered the space in my brain :P
[23:29] <harlowja> np
[23:29] <JayF> but feel free to ask anyway :P
[23:30] <harlowja> i have a similar script, although it doesn't use docker, trying to rid myself of any patches (which are similar to the ones there in that repo)
[23:30] <harlowja>  https://code.launchpad.net/~harlowja/cloud-init/rpm-spec-fixups ...
[23:30] <JayF> yeah I'm not 100% sure whether I initially made those cloud-init-{distro}-pkg repos or have only maintained them
[23:30] <JayF> but I'd suspect they all started from a shipped upstream rpm and then we made changes
[23:31] <JayF> but yeah, good luck and the work is appreciated :)
[23:31]  * JayF &
[23:31] <harlowja> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/packages/ :-P
[23:31] <harlowja> u should use that, ha