[12:42] <smoser> jclift, you could just read /var/lib/cloud/instance/user-data
[12:43] <smoser> it wont have any meta-data tags, but will have the user-data.
[12:43] <smoser> and user-data.i is the mime-multipart "pre-processed" version (as in, it has consumed #include for you, if you were interested in that).
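A minimal sketch of reading those files from Python. The paths are the stock cloud-init defaults (on many versions the files are actually named user-data.txt / user-data.txt.i); they move if system_info/paths is customized in /etc/cloud/cloud.cfg.

```python
# Sketch: dump the cloud-init user-data files smoser mentions, if present.
# Paths below are the cloud-init defaults and are an assumption; adjust
# if system_info/paths is customized on your distro.
from pathlib import Path

DEFAULT_PATHS = [
    "/var/lib/cloud/instance/user-data.txt",
    "/var/lib/cloud/instance/user-data.txt.i",
]

def read_user_data(paths=DEFAULT_PATHS):
    """Return {path: contents} for whichever user-data files exist."""
    return {p: Path(p).read_text() for p in paths if Path(p).is_file()}

for path, body in read_user_data().items():
    print(f"== {path} ==\n{body}")
```

On a machine that isn't a cloud-init instance this simply prints nothing.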
[13:25] <jclift> smoser: Thanks.  Looked into that yesterday, but the way Rackspace does stuff, metadata doesn't get into userdata.
[13:25] <jclift> Figured out a way of doing it though.
[13:26] <jclift> smoser: Does this seem terrible? https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/snippets/metadata_retriever.py
[13:27] <jclift> Saved that to git for a bit after figuring it out yesterday.  Investigating potentially alternative approaches instead atm.
[13:28] <smoser> jclift, you're right. metadata isn't going to get into user-data.
[13:28] <smoser> a feature that i've wanted to add in the past is 'cloud-init query'
[13:29] <smoser> either a cmdline tool, or an explicit promise of how you can load data from the cloud in a datasource agnostic way.
[13:29] <smoser> ie, i think the answer might just be to dump json into /run somewhere.
[13:30] <smoser> but we dont have that now. so what you have isn't horrendous.
[13:30] <jclift> Yeah.  Could be useful.
[13:30] <jclift> Thanks. :)
[13:30] <smoser> and i doubt i would have even bothered suggesting reading the 'system_info'['paths']
[13:30] <smoser> rather than just loading /var/lib/cloud/instance/obj.pkl
[13:30] <jclift> Heh, was more of a "just in case", since I have 0 idea how things are structured on non CentOS.
[13:31] <jclift> And Gluster Community has people on all different kinds of OS's that may try it out
[13:31] <smoser> yeah. its better in that sense.
[13:31] <smoser> hm..
[13:31] <smoser> so one thing you could do, is not use metadata
[13:31] <smoser> but just user-data.
[13:31] <smoser> is there a reason for meta-data ?
[13:32] <jclift> Yeah, there's a 2K limit for user data, and I want to keep them separate
[13:32] <jclift> Also, I don't know how to do the multi-part stuff programmatically yet
[13:32] <smoser> the limit should be i think 16k, and it can be compressed.
[13:32] <jclift> It's definitely 2K with rackspace cloud
[13:32] <smoser> well, it can at least be compressed.
[13:33] <smoser> 2k is silly small
[13:33] <jclift> It's even documented as 10K in some places in the source... but in practice their API rejects anything over 2K
[13:33] <jclift> Yeah, agreed
[13:33] <smoser> (and you can use '#include')
[13:33] <jclift> Yep
[13:33] <jclift> Discovered that yesterday ;)
[13:33] <smoser> but #include increases complexity
[13:33] <jclift> eg https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/remote_centos6.cfg
[13:33] <jclift> Yeah
[13:33] <smoser> multipart is really easy though to do programmatically.
[13:33] <smoser> you just throw whatever "parts" you want into a yaml list
[13:33] <jclift> Ahhh cool.
[13:34] <jclift> k
[13:34] <smoser> (and yaml == json, since json is valid yaml)
[13:34] <jclift> Sure
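The "yaml list of parts" smoser describes is cloud-init's #cloud-config-archive format: one document whose top level is a YAML list, with each item's type either inferred from its starts-with prefix or set explicitly. A sketch (package name and script body are illustrative):

```
#cloud-config-archive
- type: text/cloud-config
  content: |
    packages:
     - git
- type: text/x-shellscript
  content: |
    #!/bin/sh
    echo "second part ran"
```

Because YAML is a superset of JSON, the same list can be generated with any JSON emitter and prefixed with the #cloud-config-archive line.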
[13:34] <jclift> I need to include a config file (eg as above), plus a script file to run afterwards (aka runcmd)
[13:35] <jclift> Tried listing both under an #include, but only the config file gets executed
[13:35] <smoser> ?
[13:35] <jclift> It's on my ToDo list to figure out later on today wtf is going wrong there
[13:35] <smoser> that should work.
[13:35] <smoser> oh.
[13:36] <smoser> it should work.
[13:36] <smoser> #include
[13:36] <smoser> http://url/1
[13:36] <smoser> http://url/2
[13:36] <smoser> you just have to get the "startswith" right.
[13:36] <smoser> ie, '#cloud-config' or '#!'
[13:36] <smoser> and cloud-init should do the right thing
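Putting smoser's fragment together, the #include document for jclift's case might look like this (URLs hypothetical, filenames borrowed from the repo above; lines starting with # inside an #include list are, as far as I know, treated as comments):

```
#include
# first line of the fetched file must be "#cloud-config"
http://example.com/remote_centos6.cfg
# first line must be "#!" plus an interpreter, e.g. "#!/bin/sh"
http://example.com/regression_test.sh
```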
[13:37] <smoser> anyway, other than being config-drive specific i dont think what you have there is horrendous
[13:44] <jclift> Yeah, that's what I tried yesterday. The config bit worked (first url), the 2nd didn't.  But I haven't looked through the cloud-init log to figure out why, even though I could see the 2nd URL was pulled down into user-data.i (or similar name, this is from memory)
[13:45] <jclift> I'll look into it later on today, it's probably something simple. :)
[13:45] <jclift> smoser: Does this seem like a valid startswith? https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/regression_test.sh
[13:46] <jclift> eg bash
[13:46] <jclift> Probably nothing bash specific in there, so changing to /bin/sh instead would likely work straight off
[13:46] <jclift> Meh, I'll investigate later.  Other things to finish first.
[13:46] <jclift> :)
[13:48] <smoser> it should, yeah, '#!' should be good enough.
[13:51] <smoser> jclift, regarding "obvious"
[13:51] <smoser> maybe that /usr/bin/bash doesn't exist ?
[13:52] <jclift> Hmmm, completely possible
[13:52] <jclift> I'll check in a sec.  Filling forms for other stuff atm.
[13:52] <smoser> also, one thing i'd suggest, is to set "cloud-init-output"
[13:52] <smoser> output: {all: '| tee -a /var/log/cloud-init-output.log'}
[13:52] <smoser> that is the default in trunk now, but is extremely useful.
[13:53] <smoser> that way any output of subprocesses of cloud-init goes there.
[14:00] <jclift> Useful tip.  I'll look into it shortly. :)
[18:11] <jclift> smoser: Interestingly, there's no flag file written by kernel updates for CentOS/RHEL/etc.
[18:11] <jclift> smoser: However, there's a yum "reboot_suggested" flag that gets written to yum metadata
[18:11] <jclift> Yum is written in Python
[18:12] <jclift> And various parts of yum can be imported for use in python
[18:12] <jclift> There's an example of something called PackageKit using it here: https://gitorious.org/packagekit/packagekit/source/945faa959f00e27d419517116c37e960d6093f56:backends/yum/yumBackend.py
[18:13] <jclift> In theory, it might be possible to just do something like "from yum.update_md import UpdateMetadata", and get it working from there
[18:15]  * jclift will experiment a bit, but my Python is very noobie level so you might be able to just glance at it and tell what to do. ;)
[19:42] <jclift> So, after speaking with the guys in the #yum channel, it looks like the reboot_suggested flag isn't that widely used.  So, may not actually be reliable.
[19:43] <jclift> They suggested just checking the version of the running kernel vs the latest installed one, and rebooting if they're different
[19:43] <jclift> I'll see if I can whip up a suitable patch to do that in a bit.  Kind of brain faded atm tho :/
[20:04] <smoser> jclift, that sounds fine. 
[20:05] <smoser> its non-trivial to determine "latest kernel" though.
[20:05] <smoser> i dont know how one is supposed to do that. you'll probably have to use yum to compare versions.
[20:05] <jclift> They pointed me towards a recent yum command addition that does it.
[20:06] <jclift> smoser: I'll hunt that down and see if it's feasible to copy.  Yum being written in python too, etc.
[20:06] <jclift> May not be tonight though.  Kind of needing a break atm. :)
[20:07] <smoser> oh, its not reasonable to copy.
[20:07] <smoser> you'd want to use the library. iirc rpm's "which version is greater" is massive spaghetti.
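The spaghetti smoser refers to lives in librpm's rpmvercmp(); from Python the safe route is rpm.labelCompare() or yum's rpmUtils.miscutils.compareEVR(). As a rough sketch of the core segment-by-segment rule only (epochs, '~' pre-release markers, and other corner cases deliberately ignored; do not use this in place of the real library):

```python
# Simplified sketch of rpm-style version comparison plus the
# "is the running kernel the newest installed one" check jclift wants.
import functools
import re

_SEG = re.compile(r"(\d+|[a-zA-Z]+)")

def rpm_vercmp(a, b):
    """Return -1, 0, or 1 comparing two version strings rpm-style."""
    sa, sb = _SEG.findall(a), _SEG.findall(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric segments beat alphabetic
        if x != y:
            return 1 if x > y else -1
    # more segments wins if the shared prefix is equal
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def needs_reboot(running, installed):
    """True if the newest installed kernel is newer than `uname -r`."""
    newest = max(installed, key=functools.cmp_to_key(rpm_vercmp))
    return rpm_vercmp(newest, running) > 0
```

In practice `running` would come from `uname -r` and `installed` from something like `rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n'`.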
[20:17] <jclift> Damn.  This is apparently a new yum command, in recent Fedoras.  I haven't yet looked into it.
[20:17] <jclift> You could be completely right though.
[20:17] <jclift> Guess I'll be finding out soon. ;)