[00:12] JayF hey, thanks, that should be a simple adjustment i think
[00:13] Yeah, I have an idea of what needs to be done. Haven't made much progress today other than reading the CLA :( and going to learn how to bzr
[00:13] ya, good ole bzr
[00:13] smoser when we moving to git ;)
[00:59] JayF, thats great. there are some outright bugs in it that i want to fix in short order, and get them back to 14.04 also.
[00:59] so help is definitely welcome.
[00:59] JayF, bzr is roughly equivalent really.
[00:59] bzr branch lp:cloud-init trunk
[00:59] cd trunk
[00:59] bzr commit -m "foo"
[00:59] bzr push lp:~jayf/cloud-init/my-fixes
[00:59] there is some doc in the tree that shows basically that work flow.
=== harlowja is now known as harlowja_away
=== harlowja_away is now known as harlowja
=== praneshp_ is now known as praneshp
=== harlowja is now known as harlowja_away
=== zz_gondoi is now known as gondoi
[13:49] harlowja_away, or harmw do you have any icehouse clouds up that have latest stable release ?
[13:49] or juno
[13:50] https://bugs.launchpad.net/nova/+bug/1361357
[13:56] I'm running icehouse, yes
[13:59] don't know if I'm suffering from that though, since my cloud sucks :p
[14:19] hm, I did notice it took ages to crawl the metadata.. just didn't think much of it at the time
[14:19] I'll see if I can look/confirm this tonight when I'm done here at $work
[14:39] harmw, i'm not sure i'm right. but something recently in our upgrade from 2014.1.1 to 2014.1.2 regressed that bad.
[14:39] i really hate pypi.
tried to run devstack, and transient failures related to
[14:39] DistributionNotFound: No distributions at all found for xstatic-rickshaw>=1.4.6.2 (from horizon==2014.2.0.dev366.g987c83a)
=== kwadrona1t is now known as kwadronaut
=== harlowja_away is now known as harlowja
[17:17] smoser think there is icehouse somewhere, although we don't use the metadata service :-P
[17:19] well you smell
[17:19] lol
[17:22] ah, I was wondering who's responsible for murdering my nose
[17:22] that explains it
[17:43] harlowja, ba.
[17:43] ??
[17:43] i didn't do it, lol
[17:44] pretty sure read_metadata_service is double loading everything it loads
[17:44] instant backup! +1
[17:45] ya, instant backup
[17:45] what if the bits get corrupted man
[17:45] the horror
[17:45] exactly, i save u
[17:45] thank you harlowja
[17:46] np
[17:46] np
[17:46] :)
[17:46] you're redundant on read *and* write
[17:46] man. thats good.
[17:46] lol
[17:55] smoser: I want cirros/ci to print MTU values some day, agreed?
[17:58] ok.
=== gondoi is now known as zz_gondoi
[18:09] smoser: just now getting around to seeing your messages, so there's no step to change branches before the commit with bzr? the only thing that makes a 'branch' (as I'd call it in git) is when you push your commits to another remote ref?
[18:09] smoser: sorry if I'm mixing the terms :)
[18:11] smoser: checkout my re.compile goodness :>
[18:18] I wonder who ever wrote test_generic.py
[18:22] Should I expect tests to pass against master? make test pylint pep8 as the hacking.rst file says to do shows a ton of failures
[18:22] it looks like there may be some unmocked stuff expecting certain physical attributes (like there's a test that appears to be running blkid, unmocked)
[18:24] Hmm. even just pylint/pep8 isn't passing...
[18:27] smoser: assuming you'd accept a pair of commits to fix pep8 and pylint?
[18:31] JayF, pylint just sucks.
[18:31] pep8 does too really.
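The fix the discussion eventually settles on for this checker churn is pinning the tool version so style results stay reproducible across machines. A minimal sketch of that idea, assuming a requirements-style exact pin; the helper name and regex below are hypothetical, not from the cloud-init tree:

```python
import re

# Hypothetical sketch of the "lock the version of pep8" idea:
# parse an exact '==' pin (as it might appear in a test-requirements
# file) and compare it against the installed checker's version, so a
# CI run can fail fast instead of surfacing brand-new default checks.
PIN_RE = re.compile(r'^(?P<name>[A-Za-z0-9_.-]+)==(?P<version>[0-9][0-9.]*)$')

def check_pin(requirement_line, installed_version):
    """Return True iff installed_version exactly matches the pin."""
    m = PIN_RE.match(requirement_line.strip())
    if not m:
        raise ValueError("not an exact pin: %r" % requirement_line)
    return m.group('version') == installed_version
```

A Makefile target could capture `pep8 --version` and call a guard like this before running the check, so bumping the pin and fixing the newly flagged issues happen in the same commit.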
[18:31] i really, really hate that they arbitrarily change what they think is good.
[18:32] at one point (precise) ./tools/run-pep8 and ./tools/run-pylint produced no warnings
[18:32] there are ways to ignore checks you don't like, or opt into only certain ones
[18:32] yeah, but they change the defaults
[18:32] and add new crap
[18:32] and make it the default
[18:32] the issues I've seen all seem fairly reasonable so far, but that's all opinion
[18:33] but i'm open to anything that gets a static check across pretty much any distro
[18:33] i'd love that.
[18:33] So does that mean the answer is no you wouldn't or yes you would but you wouldn't recommend trying
[18:33] b/c I'm already about a third of the way through, haha
[18:33] yes, i'd take that, and very happily if you told me that it wasn't going to start having new things complain in 6 months
[18:33] just because
[18:34] (and i'm ok to ditch pylint too)
[18:34] it seems pyflakes is much more popular
[18:34] I'll make pep8 pass as it sits on my box, remove pylint from the Makefile and the hacking.rst, and that should be a reasonable start?
[18:34] sure. what is your pep8 version?
[18:35] i've 1.5.6
[18:35] $ pep8 --version
[18:35] 1.5.7
[18:35] freshly installed in my venv using pip
[18:35] that might actually be a way to get static results. lock the version of pep8
[18:35] that way when you bump the version, you also fix pep8 issues created by bumping the version (or make pep8 ignore the new ones)
[18:46] smoser:
[18:46] AssertionError: '/etc/rc.conf' not in {'/etc/resolv.conf': }
[18:46] that's what I got on that netconfig test
[18:48] JayF, yeah. i agree. pip and venv is a solution.
[18:56] smoser: https://code.launchpad.net/~jason-oldos/cloud-init/pep8-fixed/+merge/232289 how does that look? Pretty sure I bzr'd correctly :)
[19:00] smoser where did u see the double-read?
[19:00] in loading openstack metadata?
[19:02] harlowja, i think it basically does a full get in a stat
[19:02] for does_exist.
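The problem just described, an existence check issuing a full GET and throwing the body away, is what the re-use/caching change discussed below addresses. A minimal sketch of that idea; the wrapper class and the injected `fetch` interface are hypothetical illustrations, not cloud-init's actual reader API:

```python
class CachingReader(object):
    """Hypothetical sketch: wrap a fetch function (one GET per call)
    so that an existence check keeps the body around for a later
    read(), instead of fetching the same path twice."""

    def __init__(self, fetch):
        # fetch(path) -> bytes; raises IOError if the path is absent
        self._fetch = fetch
        self._cache = {}

    def read(self, path):
        # fetch at most once per path; later reads hit the cache
        if path not in self._cache:
            self._cache[path] = self._fetch(path)
        return self._cache[path]

    def exists(self, path):
        # the "stat" is a real GET, but its result is reused by read()
        try:
            self.read(path)
            return True
        except IOError:
            return False
```

On a metadata service where each GET costs ~0.5s, collapsing the exists+read pair into one request roughly halves the crawl time for every file that gets both checked and read.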
[19:02] let me find
[19:02] kk
[19:02] hmmm
[19:02] yeah, i'm pretty sure that _path_exists
[19:02] is doing a get
[19:02] and throwing away
[19:03] hmmm
[19:04] ya, i believe thats cause openstack doesn't support head requests for the metadata service, at least i think thats true from what i remember
[19:04] right. maybe we just need to cheat and re-use or something?
[19:04] honestly, i am only aware of this because something is bad wrong
[19:04] in openstack
[19:05] bad wrong in openstack
[19:05] haha
[19:05] on ec2, you crawl the whole datasource in 0.07 seconds
[19:05] smoser sure, i can add re-use in
[19:05] in openstack, each get costs you ~.5 or something
[19:05] lol
[19:05] :-/
[19:05] which is clearly stupid
[19:05] def
[19:05] clearly something
[19:05] lol
[19:06] brb
[19:09] alright smoser adding caching
[19:09] how can openstack metadata take 0.5 sec per request :-/
[19:29] JayF, i'll take that MP for sure. but i need you to sign the contributors agreement.
[19:29] smoser: just did
[19:29] literally about 60s ago finished it
[19:29] had the tab open since yesterday
[19:29] you put my name on contact ?
[19:29] 'smoser / cloud-init' is what I put in that field
[19:30] deal. thanks.
[19:30] because I wasn't exactly sure what it wanted :)
[19:30] good enough.
[19:31] so if you're OK with that, I'm going to do my other changes based on that branch to ensure I don't break pep8 again :)
[19:31] this will be the fixes for upgrading the configdrive version + parsing vendor_data.json properly
[19:42] smoser https://code.launchpad.net/~harlowja/cloud-init/os-caching/+merge/232303
[19:47] It worries me a little that I wrote https://code.launchpad.net/~jason-oldos/cloud-init/upgrade-configdrive and it didn't break tests
[19:47] heh
[19:49] deep breaths
[19:49] thou shall not worry
[19:50] https://code.launchpad.net/~jason-oldos/cloud-init/upgrade-configdrive/+merge/232308
[19:50] harlowja: Your name is /very/ familiar to me.
Have we worked together on anything before?
[19:50] unsure
[19:50] lol
[19:50] JayF where are u from :-P
[19:50] I'm thinking opscode/chef?
[19:50] negative
[19:51] I work on rackspace.com/onmetal / Openstack Ironic
[19:51] before that did devops for a slew of different Rackspace products
[19:51] I used to be JasonF on IRC
[19:51] kk, might have seen u before then, i was one of the yahoo hosts of the midcycle meetup at yahoo before this
[19:51] before the last meetup
[19:51] if u went to that one, u had to sign in with me, haha
[19:51] gatekeeper harlow
[19:51] lol
[19:52] JayF the code that mocks is @ http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/tests/unittests/test_datasource/test_openstack.py#L83 ; from what i remember we aren't simulating a fully version-compliant metadata service, but if u want to make that better, feel free
[19:52] Aha! Totally where I know you from then. That's the first meetup we were at, when 'teeth-agent' became 'ironic-python-agent' in Sunnyvale
[19:52] Well that's for metadata service
[19:52] I'm working on ConfigDrive
[19:52] same thing
[19:52] we don't run a metadata service (slow or otherwise)
[19:52] :-P
[19:52] hmm. I will look and see if I can make those tests better
[19:53] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/tests/unittests/test_datasource/test_configdrive.py#L67 then
[19:53] they both use the same code, just read from different places
[19:53] I wonder if there's a similar patch to mine for the metadata service
[19:53] one http:// the other file://
[19:54] to add support for the newer version of data and the vendor_data.json
[19:54] not afaik
[19:54] i haven't made one, ha
[19:54] well what I'm saying is, one might be needed
[19:54] I just don't have a great way to test it
[19:54] gotcha
[19:54] could be
[19:55] btw, i'm interested why doesn't the onmetal stuff use the metadata service?
[19:55] JayF, merging that soon.
i did a bunch more cleanup of pylint remnants
[19:55] isn't config-drive much harder to make work on baremetal ?
[19:55] if u can share....
[19:57] Metadata service in the real world is not easy for $network_reasons
[19:57] I don't understand them all specifically, but smart people assure me it's difficult
[19:57] right, sure
[19:58] for configdrive, we patched cloud-init a while back to not exclude partitions when looking for configdrive
[19:58] so we just write down an iso partition at the end of the disk that is properly labelled and contains a configdrive
[19:58] and that's been working golden
[19:58] ah
[19:58] is that gonna be the way that most people using ironic do it?
[19:58] or just onmetal folks
[19:58] It's the only way IPA supports right now
[19:58] and that's upstreamed
[19:58] well, wait
[19:59] I don't know if IPA + configdrive is upstreamed
[19:59] because lots of features blocked on getting the ironic-nova driver in
[19:59] we have a stack of things running in OnMetal that will land quickly when K opens
[19:59] configdrive, long-running-agents, decom
[19:59] cool
[19:59] what I'm working on with cloud-init is support for the new proposed nova-neutron JSON file with networking information
[20:00] gotcha, ya, sounds familiar
[20:00] we have a local patch (https://github.com/pquerna/cloud-init/pull/1) that we're using to build configs
[20:00] because bonding with vlans on top isn't supported in cloud-init yet...
[20:00] so my hope is that you guys will let us upstream this version, that uses network info opportunistically if it's in vendor_data.json
[20:01] and hopefully when K releases, and the 'real' JSON support lands, we'll port the stuff we're doing over to the proper upstream openstack format
[20:01] I'd strongly prefer us not have to hold patches, on cloud-init especially :)
[20:01] cool
[20:03] JayF https://review.openstack.org/#/c/85673 u saw that i hope?
[20:03] ^ was a way to make the openstack network info less ubuntu-like
[20:04] and other stuff
[20:04] not sure if that one got superseded or died or other
[20:05] Josh sits across from me :)
[20:05] kk
[20:05] he wrote our vendor_data.json patch that writes, basically that exact info, out to our vendor_data.json
[20:05] cool
[20:05] we're using it right now with any onmetal images other than debian
[20:05] and works really well
[20:06] nice so plan is to still have https://review.openstack.org/#/c/85673 someday
[20:06] in K
[20:06] by 2050
[20:06] ah
[20:06] kk
[20:06] less than 2050 then
[20:06] yeah, so I'd *really* like to get our patches integrated with you guys even before then
[20:06] just looking for that format under a key in vendor_data.json
[20:07] then when the 'real' support is finalized, we can integrate that into the configdrive bits and go use it
[20:07] I figure y'all will be quite glad to not have the /etc/network/interfaces file as a pseudo-api anymore?
[20:07] +1 from me, ha
[20:09] harmw, around ?
[20:09] can we fix your test failure ?
[20:11] harlowja, what do you think about that cache ?
[20:11] i can see caching being a pain at some point.
[20:11] if the files in /files/ are big
[20:12] I personally have seen people inject, using cloud-init, full rpms (of cloud-init, actually, it's pretty insane and recursive)
[20:12] gah i hope the network json spec gets merged before 2050
[20:12] wtf
[20:12] lol
[20:12] well, the goal is to be able to do such silly things.
[20:12] ok, i can reduce it to an existence cache only
[20:13] well, then we'd pull it again.
[20:13] which would be what i was annoyed by
[20:13] i'm just asking.. do you think its worth it.
[20:13] ah
[20:13] really we need to fix the MD
[20:13] to not suck
[20:13] ya, i'd rather be able to like use a HEAD request :-P
[20:15] as far as worth it, hmm
[20:17] smoser other option is to have the cache exist say in /var/lib/cloud/data/cache
[20:18] and when an item is read, write it there and first check there
[20:18] harlowja: -1
[20:18] k
[20:18] harlowja: what about diskless nodes?
[20:18] that may only have a small ramdisk for writable /
[20:18] harlowja, yeah, thats true.
[20:18] that would seem not insane.
[20:18] except for the i-only-have-a-floppy-as-a-disk node
[20:18] 1.44 mb ftw
[20:18] hm.. i dont know.
[20:19] or fix the os metadata service to actually respond to head requests
[20:19] Why not do the optimization where it belongs? File a bug in Openstack to get HEAD added to metadata service, and a bug about how freakin' slow it is, then let them get faster instead of you :)
[20:19] and then i can see if i can find the code that i had that used that before i found out it didn't exist, lol
[20:19] JayF, well, it should be reasonably quick
[20:20] should is a funny word :)
[20:20] it does cache and hold the cache
[20:20] https://bugs.launchpad.net/nova/+bug/1361357
[20:20] but something seriously regressed recently.
[20:20] kk, u opened one, thx smoser
[20:20] (my diagnosis in the summary is wrong, but on that same cloud before upgrade my small amount of logs still show 3 seconds and such to crawl the MD)
[20:21] i dont know that head is all that big of a deal really.
[20:21] harlowja, what are we using 'exists' for ?
[20:21] i think the MD is fairly well designed to list what is in it from the entry point
[20:21] right?
[20:22] so a couple things, checking version support
[20:22] and then checking files exist before reading (required or not)
[20:22] i can probably cut some of these out though
[20:23] let me give that a shot
[20:24] the version support is listed
[20:24] it takes a query
[20:24] but that list of versions should be returned
[20:27] k, let me see if i can optimize a few of these away
[20:33] https://code.launchpad.net/~jason-oldos/cloud-init/upgrade-configdrive/+merge/232312 should be good to go, I'll look at improving tests but I'll do that in another merge req since this one still passes tests :x
[20:35] JayF, that is wrong ..
[20:35] + # TODO(pquerna): this is so wrong.
[20:35] lets fix that a different way.
[20:35] heh :)
[20:36] How do you suggest?
[20:36] read it in as a string and keep the string.
[20:36] I left it be for now because I can assure you that vendor_data.json parsing works with it as written (primarily because we're using it, right now)
[20:37] hm. is there an equivalent to git commit --amend in bzr? or how to edit a commit?
[20:38] its just a different mentality.
[20:38] you dont.
[20:38] you fix it
[20:38] and commit
[20:38] you could if you wanted 'bzr uncommit'
[20:38] and bzr commit
[20:38] with the same message (and i do do that)
[20:38] you'd prefer keep the intermediate commit or uncommit/recommit?
[20:39] I don't care, you tell me :)
[20:43] smoser alright, https://code.launchpad.net/~harlowja/cloud-init/smart-read/+merge/232316
[20:43] check that out, bb, food
[20:52] JayF, i'm fine with either way really.
[20:52] (network back)
[20:52] bzr just more accepts that humans make mistakes and does not expect you to try to hide them.
[20:52] then, you merge in and have a merge commit and all that history stays around.
[20:53] smoser: Well, git workflows differ :) I squash now primarily because openstack/gerrit insist upon it
[20:53] for bzr and cloud-init I'll do w/e else
[20:53] yes, i know.
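One optimization discussed above is leaning on the metadata service's entry-point listing: a single GET of the version index replaces a probe GET per candidate version. A minimal sketch under assumed behavior; `fetch` is an injected transport and the flat newline-separated index is a simplification, not the exact OpenStack metadata tree:

```python
def supported_versions(fetch, base='openstack'):
    """Hypothetical sketch of 'list what is in it from the entry
    point': one GET of the index, parsed into version names, instead
    of an existence probe per version."""
    listing = fetch(base).decode('utf-8')
    return [line.strip() for line in listing.splitlines() if line.strip()]

def pick_version(fetch, wanted, base='openstack'):
    """Return the first version in `wanted` the service lists, else None."""
    available = set(supported_versions(fetch, base=base))
    for version in wanted:
        if version in available:
            return version
    return None
```

With the index cached, every later "does this version exist" check becomes a set lookup with no network round trip at all, which matters when each GET costs ~0.5s.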
[20:53] smoser: trying to grok what you meant by "read it in as a string and keep the string."
[20:53] smoser: you want me to parse self.vendordata_raw somewhere else?
[20:53] let me look
[20:54] iirc pq said when he did it before, it was assumed to be yaml and caused tracebacks
[20:54] but at this point I'm still mainly porting our patch so I know less about it than I should
[20:56] JayF, start here
[20:56] http://paste.ubuntu.com/8153171/
[20:56] then fix up the other code to not expect a dict there.
[20:57] hm..
[20:57] it is .json. so its clearly json
=== zz_gondoi is now known as gondoi
[20:57] yeah, I don't get why there's any value in parsing it later when it's explicitly a json file
[20:59] JayF, can we just do
[21:00] its always a dict, right?
[21:00] so at https://code.launchpad.net/~jason-oldos/cloud-init/upgrade-configdrive/+merge/232312
[21:00] line 13 is pointless check
[21:00] so then can we just do:
[21:00] if 'cloud-init' in vd:
[21:00] I don't know if it's always a dict or not
[21:00] self.vendordata_raw = vd['cloud-init']
[21:00] or let me correct
[21:01] I don't know the object results.get('vendor_data') is returning
[21:01] ah. ok. yeah.
[21:01] hm..
[21:01] well, its the json.load()
[21:01] right
[21:01] ?
[21:01] err.. load_json
[21:02] i'm sorry
[21:02] i have to run. i will be in tomorrow and we can sort this out more.
[21:02] I'll dig in and clean it up
=== harlowja is now known as harlowja_away
=== harlowja_away is now known as harlowja
[22:16] smoser: rfr -> https://code.launchpad.net/~jason-oldos/cloud-init/upgrade-configdrive/+merge/232312
=== praneshp_ is now known as praneshp
=== gondoi is now known as zz_gondoi
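The `if 'cloud-init' in vd:` shape smoser suggests above can be written as a small helper. This is a hypothetical illustration, not cloud-init's actual implementation: it keeps the 'cloud-init' payload as-is (per "read it in as a string and keep the string") and tolerates vendor_data not being a dict, since the discussion never settles whether it always is:

```python
import json

def cloudinit_vendordata(vd):
    """Hypothetical helper: given vendor_data.json content (raw JSON
    text or the already-parsed object), return the 'cloud-init' entry
    unmodified, or None if it isn't there or vd isn't a dict."""
    if isinstance(vd, (str, bytes)):
        vd = json.loads(vd)      # it is .json, so parse it as json
    if isinstance(vd, dict):
        return vd.get('cloud-init')
    return None                  # non-dict vendor_data: no payload, no traceback
```

Keeping the value raw means downstream code decides how to interpret it (cloud-config YAML, a script, etc.), avoiding the assumed-to-be-yaml tracebacks mentioned above.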