=== harlowja is now known as harlowja_away
[15:09] Hello everyone. I have a small question about errors during cloud-init execution
[15:10] is it possible to have them reported using OpenStack events or messages?
[15:20] ancoron_z, you could have something do that.
[15:20] recent cloud-init writes /run/cloud-init files
[15:21] that you could read and pass back/post somewhere.
[15:21] but there isn't a place to report back to in openstack
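A rough sketch of the approach smoser describes here: read the status files recent cloud-init drops under /run/cloud-init and forward any recorded errors yourself, since OpenStack offers nothing to report back to. The "v1"/"errors" layout shown matches recent cloud-init releases but should be checked against your version, and REPORT_URL is a hypothetical collector endpoint of your own, not anything OpenStack provides.

    #!/usr/bin/env python3
    # Sketch only: read cloud-init's result file and POST any recorded errors
    # to a collector of your own. REPORT_URL is hypothetical; the JSON layout
    # should be verified against the cloud-init version in your image.
    import json
    import urllib.request

    RESULT_FILE = '/run/cloud-init/result.json'
    REPORT_URL = 'http://collector.example.com/cloud-init-errors'  # hypothetical

    with open(RESULT_FILE) as fp:
        result = json.load(fp)

    errors = result.get('v1', {}).get('errors', [])
    if errors:
        req = urllib.request.Request(
            REPORT_URL,
            data=json.dumps({'errors': errors}).encode(),
            headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(req)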
[15:23] smoser: I was digging through the code a bit and found the notion of a "PublishErrorsHandler" and a fallback to some OSLO-related handler: https://github.com/stackforge/cloudbase-init/blob/master/cloudbaseinit/openstack/common/log.py#L503
[15:23] I just didn't find any documentation on what this actually means or is supposed to do
[15:27] That's not cloud-init.
[15:27] That's something else.
[15:31] Ah, sorry. Yes, I see.
[15:32] No problem, just letting you know you're probably in the wrong place.
[15:32] That's also a pretty new project as well, I'm curious how much of cloud-init's functionality they've dup'd already
[15:33] Yes, I'm trying my first steps with it currently.
[15:34] If you're interested in cloud-init, it's hosted on launchpad, and has packages for most distributions
[15:34] yes, I found it already - looking at the trunk branch currently
[15:34] And generally either cloud-init or nova-agent is used by most folks to configure cloud instances on boot for Openstack.
[15:35] I think cloudbase-init is just trying to generally add competition to that space; I know nothing other than the openstack-dev email about it.
[15:35] My first use case is support for Windows - for whatever cloud use-case (??? I wouldn't have one...)
[15:37] JayF: cloudbase-init is not trying to add competition
[15:37] JayF: we simply started the project to support Windows in OpenStack and other clouds
[15:37] alexpilotti: aha, awesome :)
[15:37] JayF: and we'd be happy to merge with cloud-init under the proper conditions
[15:37] alexpilotti: in that case, I'm sure one day I'll be using your stuff too, haha
[15:37] heh
[15:38] alexpilotti: I work on Openstack Ironic w/Rackspace. We use cloud-init extensively for our stuff (which is all linux images atm)
[15:38] JayF: nice, I know that the HP folks are working on Ironic on the Windows side (using cloudbase-init)
[15:39] JayF: we also started to work with Ironic, it's (ahem ironically) the only major bare metal deployment project we didn't contribute to
[15:39] Spiffy, like I said, knowing it's focused on windows means I'll likely end up chatting and working with you as well at one point :)
[15:39] JayF: as we ported Ubuntu MaaS and Crowbar to Hyper-V/Windows
[15:39] Ah, I don't know things about Crowbar
[15:40] I mean, we wrote a deploy driver for ironic, we're pretty embedded into that, haha
[15:40] JayF: SUSE is using it, we did the port with them more than 1 year ago
[15:40] nice
[15:41] but considering that Crowbar as a project is getting nowhere
[15:41] the remaining alternatives are Ironic and MaaS
[15:41] we just finished the latter, so it'd be time to start helping out on Ironic as well :-)
[15:41] I mean, I have trouble finding affection for MaaS
[15:41] c'mon, it's a nice tool
[15:42] since it's very limited in scope and was basically developed to integrate with Openstack ... and all that dev work went into something Canonical specific instead of Ironic :(
[15:42] well, competition helps IMO
[15:43] anyway, we'd like to do some more work for Windows on Ironic
[15:43] Well right now I'd think your likely best bet is the agent driver that we've written
[15:43] since it's the only one that supports whole disk images
[15:43] k
[15:44] I mean, pretty much it just puts a glance image onto a disk
[15:44] is it on stackforge?
[15:44] it's in ironic proper
[15:44] just the agent deploy driver
[15:44] cool
[15:44] (the other one is called pxe and utilizes iscsi, but atm doesn't support anything but pxe booting forever)
[15:44] that should make things transparent, like curtin for MaaS
[15:44] if you had an image that would run windows on your hardware
[15:45] I'd imagine Ironic would and could deploy it today
[15:45] using the agent driver
[15:45] once the image boots and cloud-init/cloudbase-init or whatever else starts
[15:45] you just offer standard Nova metadata via HTTP?
[15:46] If your cloud is set up to expose a metadata service
[15:46] Ironic is like a hypervisor in the ecosystem
[15:46] Nova-compute has a driver which talks to the Ironic-API to provision nodes
[15:46] cool, I was expecting that
[15:46] not much differently than it would, say, talk to a xenapi
[15:46] yep, I'm aware of the driver
[15:47] the only thing I didn't check yet is whether there are changes in the metadata model
[15:47] I mean, not at all
[15:47] we're using a downstream patch at Rackspace to support ConfigDrive
[15:47] perfect
[15:47] but that's actively being upstreamed
[15:47] and we're open about what/how we run even if it's not in Ironic proper yet
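To make the metadata exchange above concrete: once a node deployed by Ironic boots, cloud-init or cloudbase-init simply polls Nova's metadata service over HTTP (or reads a ConfigDrive). A minimal sketch, assuming the conventional link-local metadata address is reachable from the instance; the JSON path shown is the usual OpenStack one, but verify it against your deployment.

    #!/usr/bin/env python3
    # Sketch: roughly what cloud-init/cloudbase-init does on first boot when a
    # metadata service (rather than a ConfigDrive) is exposed to the instance.
    import json
    import urllib.request

    # 169.254.169.254 is the conventional link-local metadata address; the path
    # below is the usual OpenStack one, but check it against your release.
    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        meta = json.load(resp)

    print(meta.get('uuid'), meta.get('hostname'))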
[15:48] another dummy question, what lights-out options does ironic support?
[15:48] feel free to idle in #openstack-ironic and ask questions :)
[15:48] I mean IPMI, ATM, etc
[15:48] alexpilotti: lights out?
[15:48] alexpilotti: ah
[15:48] alexpilotti: we call those currently "Management" or "Power" drivers
[15:48] there are two for IPMI (one shells out to ipmitool, which is what we run; one uses a native IPMI python library)
[15:49] one for iLO that supports doing the deploy via virtual media instead of pxe (as well as just talking BMC to the iLO)
[15:49] one for DRAC, one for iBoot, one for SNMP-driven PDUs
[15:49] that's what I can think of off the top of my head
[15:49] oh yeah, the AMD SeaMicro boxes too, they have a management driver
[15:49] ah ok, so no ATM?
[15:49] ATM?
[15:50] the one that comes with Intel vPro
[15:50] I don't know? Apparently not, then?
[15:50] AMT, sorry
[15:50] Yeah, I haven't heard of it
[15:50] but like I said, we use the agent_ipmitool driver
=== zz_gondoi is now known as gondoi
[15:50] so that's what I know the most about :)
[15:50] http://en.wikipedia.org/wiki/Intel_Active_Management_Technology
[15:51] let me switch to the ironic channel :-)
[15:51] Isn't AMT related to WS-MAN?
[15:54] Oh yes, found it again: https://software.intel.com/en-us/articles/ws-management-and-intel-active-management-technology-a-primer
[15:55] sometimes, it would really be nice if everyone would collaborate on a standard...
=== gondoi is now known as zz_gondoi
=== zz_gondoi is now known as gondoi
=== harlowja_away is now known as harlowja
=== gondoi is now known as zz_gondoi
[18:46] hey harlowja
[18:46] tell me how you'd do this.
[18:47] smoser sup
[18:47] i need to edit a file in /etc/
[18:47] as the nova user
[18:47] and lock it while editing is taking place
[18:47] and that file is owned by root
[18:47] one idea is a util program that does that, and put that program in /etc/nova/rootwrap.d/
[18:48] but i want to know how the master would do it.
[18:48] lol
[18:48] lock it from whom?
[18:49] advisory locking
[18:49] as in it can only ever exist in a correct state, and when i'm editing it, then 2 processes editing it need to not collide.
[18:49] hm.
[18:49] that's assuming (i was) that nova compute is multi-threaded
[18:50] but now i think i'm wrong on that.
[18:50] nah, u are right, it is
[18:50] so maybe i don't need the locking per se.
[18:50] it's eventlet multi-threaded, which for all practical purposes is 'multi-threaded'
[18:51] so i think u need to make a rootwrap.d program (to get around the nova user problem)
[18:51] and then have that program use https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L55
[18:51] (or similar)
[18:51] to avoid collisions
[18:51] yeah. i didn't know of openstack.common.lockutils
[18:51] was just going to use python lockfile (which is probably what that uses)
[18:51] hm..
[18:51] not really
[18:51] python lockfile sorta not so good
[18:52] ^ is better
[18:52] nice.
[18:52] 'Since the lock is always held on a file descriptor rather than outside of the process, the lock gets dropped automatically if the process crashes, even if __exit__ is not executed.'
[18:52] python lockfile (this is being taken over by openstack btw) is getting better
[18:52] *see https://review.openstack.org/#/c/122253/
[18:52] for that takeover
[18:53] which is still failing jenkins, arg
[18:53] ok.
[18:53] so there u go
[18:53] :-P
[18:53] i'm pretty sure i could trick some program already in rootwrap.d to do what i wanted.
[18:54] likely
[18:54] hopefully later today i'll forget that i opened /etc/nova/rootwrap.d/
[18:54] hahahahha
[18:54] i avoid looking there
[18:54] *scary*
[18:54] kill_shellinaboxd: KillFilter, root, /usr/local/bin/shellinaboxd, -15, -TERM
[18:54] :-/
[18:54] that one sounds good. :)
[18:55] but 'dd', 'cp', all sorts of stuff that can do arbitrary things.
[18:55] yup
[18:55] that's why i avoid looking there
[18:55] i don't want to cry
[18:56] * harlowja still wishes the openstack community would just make sudo better
[18:56] but that never seemed to happen
[18:56] or do something else, lol
[18:57] well, sudo or not, if you have arbitrary code execution as the nova user, you have root.
[18:57] sure
[18:58] so in a sense, you can just make it easier and run as root :)
[18:58] :)
[18:59] so is there some way to do this wrapper thing other than like above?
[18:59] run as root?
[18:59] lol
[18:59] then just do it in nova code (with a lock there)
[19:00] no. i meant, is the 'kill_shellinaboxd' the type of thing.
[19:00] write to a tempfile, run as root to copy it (do this in a locked block)
[19:00] i just add a command to my program and then have to deal with knowing the path to my program
[19:00] that's the pita part of it.
[19:00] write from nova -> tempfile, use rootwrap + cp, lol
[19:01] yeah, but rootwrap won't get me atomic updates.
[19:01] er.. cp wouldn't.
[19:01] anyway, thanks for your help.
[19:01] how so, wrap the code in nova that is doing this in a lock
[19:01] with lock:
[19:02] get tempfile
[19:02] do stuff
[19:02] call cp with rootwrap
[19:02] right. but that wouldn't be atomic.
[19:02] i need os.rename
[19:03] isn't mv the same (use mv instead of copy)?
[19:09] smoser: you should consider nova compute as multithreaded
[19:09] smoser: because many clustered hypervisors (like Ironic/VMWare) can have multiple nova computes
[19:09] smoser: even though it sounds like you have to do that anyway
[19:09] well anything, even using eventlet, is still conceptually multi-threaded
[19:10] eventlet doesn't take that part away, it just hides it a little
[19:12] eventlet == temptress imho
[19:12] tempts u into a pit that u can't get back out of
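A minimal sketch of the pattern the discussion converges on: take an advisory inter-process lock, write the new contents to a temp file on the same filesystem, then swap it into place atomically with os.rename() (which is what mv amounts to on a single filesystem). fcntl.flock() here is a stand-in for the oslo lockutils external lock linked above, the paths are illustrative, and the rootwrap/privilege side of the problem is not addressed.

    #!/usr/bin/env python3
    # Sketch: advisory inter-process locking plus write-temp-file-and-rename,
    # the atomic update pattern discussed above. fcntl.flock() stands in for
    # oslo lockutils; TARGET and LOCKFILE are illustrative paths.
    import fcntl
    import os
    import tempfile

    TARGET = '/etc/nova/some-config.conf'    # illustrative
    LOCKFILE = '/var/lock/nova-config.lock'  # illustrative

    def atomic_update(new_contents):
        with open(LOCKFILE, 'w') as lock_fp:
            # The lock lives on the fd, so it is dropped automatically if the
            # process crashes (the lockutils property quoted above).
            fcntl.flock(lock_fp, fcntl.LOCK_EX)
            try:
                # Temp file in the same directory, so os.rename() stays on one
                # filesystem and is therefore atomic.
                fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(TARGET))
                with os.fdopen(fd, 'w') as tmp:
                    tmp.write(new_contents)
                os.rename(tmp_path, TARGET)
            finally:
                fcntl.flock(lock_fp, fcntl.LOCK_UN)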
[19:35] durn systemd, lol
[19:35] https://code.launchpad.net/~harlowja/cloud-init/fixed-rhel7-test/+merge/238766
[19:40] guess i'm the first one to run tests on rhel7, lol
[19:59] I mean, I don't know about running tests
[19:59] we sure as hell use cloud-init on centos7 though
[22:29] ya, before building an rpm i have a little piece of bash script that runs the tests and such (then applies a few internal patches)
[22:32] and just got asked to build a custom one for rhel7 (instead of using the epel one)
[22:47] JayF how are u building cloud-init (using the epel one)? wondering 'cause using the built-in tools/brpm way required some tweaks for rhel7
[23:14] harlowja: What if you just copy-paste Fedora's spec file?
[23:14] nearly, i fixed it all up enough to get this to work
[23:14] using some parts of the fedora one
[23:14] https://code.launchpad.net/~harlowja/cloud-init/rpm-spec-fixups and others i'll push up
[23:14] https://code.launchpad.net/~harlowja/cloud-init/fixed-rhel7
[23:19] gholms check those out, trying to make it work in the easiest manner
[23:27] harlowja: you can reproduce our build using https://github.com/racker/cloud-init-docker-build
[23:27] git clone https://github.com/jayofdoom/cloud-init-fedora-pkg
[23:27] :-/
[23:27] questionable, lol
[23:27] haha
[23:27] those are all moved to /racker/
[23:28] so sorry if the pointers are outta date a little
[23:28] hehe
[23:28] ideally u should be able to check out cloud-init and run make rpm
[23:28] but all our builds come from that docker stuff
[23:28] Well we absolutely can't because we have our downstream patch still
[23:28] or if u have patches run packages/brpm -p $patch1 -p $patch2
[23:28] but I agree with the general gist of what you say
[23:28] I just like using docker builders like that because you can build on any machine that has docker
[23:28] you're welcome to use that, fork it, etc
[23:29] I'm going to go back to not looking at a computer screen and hoping it nukes the headache
[23:29] thx, although cloud-init's rpm building should work (instead of not) :-P
[23:29] gl, and if you have specific questions, not sure I can answer them because all my knowledge is in those scripts and I recovered the space in my brain :P
[23:29] np
[23:29] but feel free to ask anyway :P
[23:30] i have a similar script, although it doesn't use docker, trying to rid myself of any patches (which are similar to the ones there in that repo)
[23:30] https://code.launchpad.net/~harlowja/cloud-init/rpm-spec-fixups ...
[23:30] yeah I'm not 100% sure I didn't initially make those cloud-init-{distro}-pkg repos, only have maintained them
[23:30] but I'd suspect they all started from a shipped upstream rpm and then we made changes
[23:31] but yeah, good luck and the work is appreciated :)
[23:31] * JayF &
[23:31] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/packages/ :-P
[23:31] u should use that, ha
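For completeness, the pre-build wrapper harlowja describes (run the tests, then build an rpm with any downstream patches) can be a few lines around cloud-init's own packages/brpm helper, using the -p flag quoted above. A loose sketch only: the checkout path and patch list are placeholders, and the 'make test' step is an assumption, so substitute whatever your tree actually provides.

    #!/usr/bin/env python3
    # Loose sketch of a pre-build wrapper: run the test suite, then build an
    # rpm via cloud-init's packages/brpm, passing downstream patches with -p
    # as mentioned above. TREE, PATCHES and the 'make test' step are
    # placeholders/assumptions; adjust them to your checkout.
    import subprocess

    TREE = '/path/to/cloud-init'               # placeholder checkout location
    PATCHES = ['/path/to/internal-fix.patch']  # placeholder downstream patches

    # Assumption: substitute however your tree actually runs its tests.
    subprocess.check_call(['make', 'test'], cwd=TREE)

    cmd = ['./packages/brpm']
    for patch in PATCHES:
        cmd += ['-p', patch]
    subprocess.check_call(cmd, cwd=TREE)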