=== rangerpbzzzz is now known as rangerpb
[13:45] smoser, larsks mind peeking at https://code.launchpad.net/~bbaude/cloud-init/azure_dhcp ? it contains the rework we discussed yesterday.
[13:46] rangerpb, reading
[13:46] im updating the commit msg now too
[13:50] smoser, also wondering if you wouldnt mind peeking at https://code.launchpad.net/~bbaude/cloud-init/rh_sub_rm_first which is a trivial change too. <-- larsks fyi
[14:02] headed offline for a bit...back in about 90 minutes or so
[14:02] k. i'll have it for you when you return
=== rangerpb is now known as rangerpbzzzz
[14:40] harlowja, please ping when you're in.
[16:54] smoser ping a ling
[16:54] so mr.version
[16:54] ha
[16:57] hey
[16:57] yeah.
[16:57] so. we definitely can't pick up a runtime dependency for something so silly
[16:57] k
[16:57] that should be fine
[16:58] we should be able to just do
[16:58] and even build-time is a pain. i'm perfectly fine to take the convention of pbr
[16:58] >>> import pkg_resources
[16:58] >>> pkg_resources.get_distribution("pbr").version
[16:58] for example
[16:58] but only if you have that available in build
[16:59] right?
[16:59] (to get whatever version is installed)
[16:59] everyone has pkg_resources
[16:59] :-P
[16:59] replace 'pbr' there with 'cloud-init'
[16:59] ha
[16:59] i'm confused.
[16:59] so the runtime dependency would be for version.py?
[16:59] maybe i'm confused too
[16:59] ha
[17:02] so runtime, yeah.
[17:02] so can't pick up runtime
[17:02] but even the build time is a pita
[17:02] and i'd like to avoid it
[17:02] meh
[17:02] i think its more than just pkg_resources
[17:03] for build time, meh, what do u do
[17:03] i have to have pbr at some sufficient version
[17:03] u either create your own pbr thing
[17:03] or use pbr lol
[17:03] how hard is this though?
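The pkg_resources lookup pasted above can be wrapped into a small helper. A minimal sketch only: `installed_version` is an invented name, and it assumes the queried distribution was installed in a way setuptools' metadata can discover (pip, setup.py, a distro package with egg-info).

```python
# Sketch of the runtime version lookup quoted in the >>> lines above.
# installed_version() is a hypothetical helper name, not cloud-init API.
import pkg_resources


def installed_version(dist_name):
    """Return the version string of an installed distribution, or None."""
    try:
        return pkg_resources.get_distribution(dist_name).version
    except pkg_resources.DistributionNotFound:
        return None


# e.g. installed_version("cloud-init") on a host that has it installed
```

As discussed above, this would make cloud-init's version.py depend on pkg_resources at runtime, which is what smoser wants to avoid.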
[17:03] if u want to actually provide a useful deb and rpm version at build time
[17:03] i'm fine to follow their versioning scheme
[17:03] its sorta a pita that i don't want to recreate, lol
[17:03] how is it hard?
[17:04] its complicated, not hard
[17:04] it can't be that complicated.
[17:04] i'm happy with git-describe
[17:04] it gives you 3 tokens of information
[17:04] last_tag_on_branch, commits_since_tag, hash
[17:05] you can only put those in so many different orders
[17:05] ya, hash is useless in redhat
[17:05] in rpms
[17:05] its useful
[17:05] you just hide it behind something like commits_since_tag
[17:07] https://github.com/openstack-dev/pbr/blob/master/pbr/version.py#L353 has some details around why its such a pita
[17:07] i just don't have much desire to recreate any of that crap
[17:08] https://specs.openstack.org/openstack/oslo-specs/specs/juno/pbr-semver.html#proposed-change has more details
[17:08] if u really want to recreate all that, go ahead i guess, i just don't care much to, lol
[17:09] "Provide a new command (deb-version) to output Debian package version compatible version strings. This primarily involves translating PEP-440 precedence rules into Debian ~ and . component separators.
[17:09] Provide a new command (rpm-version) to output RPM version metadata for incorporation in RPM versions using the ENVRA format. As RPM lacks a ‘before’ operator (~) the primary method for translation is to treat pre-release and dev builds as release builds of next lowest version to drive the sort order above all actual releases of the version below. We assume that no version will ever have more than 9998 patch/minor releases. E.g. 1.2.0.dev5 is
[17:09] rendered as 1.1.9999.dev5. 1.0.0.dev5 would be rendered as 0.0.9999.dev5 and finally 0.0.0.dev5 would also be rendered as 0.0.0.dev5 to avoid negative version numbers.
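The three git-describe tokens smoser mentions (last_tag_on_branch, commits_since_tag, hash) can be pulled apart with a short parser. An illustrative sketch only: `describe_tokens` is a made-up name, and it assumes the `git describe --tags --long` output shape `TAG-COMMITS-gHASH`.

```python
# Hypothetical parser for the three tokens git-describe provides.
import re
import subprocess


def describe_tokens(describe=None):
    """Split 'git describe --tags --long' output such as
    '0.7.6-100-g1234abc' into (tag, commits_since_tag, short_hash)."""
    if describe is None:
        describe = subprocess.check_output(
            ["git", "describe", "--tags", "--long"]).decode().strip()
    m = re.match(r"^(?P<tag>.+)-(?P<commits>\d+)-g(?P<hash>[0-9a-f]+)$",
                 describe)
    if m is None:
        # a bare tag (e.g. plain 'git describe' output on a tagged commit)
        return describe, 0, None
    return m.group("tag"), int(m.group("commits")), m.group("hash")
```

As the conversation notes, the hard part is not extracting these tokens but ordering them so both dpkg and rpm sort the result sensibly.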
[17:09] "
[17:51] harlowja, you are correct that this is a pita
=== rangerpbzzzz is now known as rangerpb
[17:56] ya, i mean, ya, we can do something similar
[17:56] sure, (although i don't really want to code it/verify its correctness, ha)
[17:58] can pbr parse a string for me ?
[17:58] and tell me what it thinks debian and rpm versions should be ?
[17:59] hmmmm
[17:59] don't think so
[17:59] some other package might be able to
[18:03] this is crap harlow
[18:03] version.SemanticVersion(0, 7, 6, dev_count=100).debian_string()
[18:03] 0.7.6~dev100
[18:03] i'd have thought i was saying 0.7.6 + 100 commits
[18:03] and it gave me ~ which is < 0.7.6
[18:05] lol
[18:07] u might want to try https://github.com/openstack-dev/pbr/blob/master/pbr/version.py#L121 ?
[18:08] or seeing what https://github.com/openstack-dev/pbr/blob/master/pbr/packaging.py#L592 does
[18:08] or https://github.com/openstack-dev/pbr/blob/master/pbr/packaging.py#L691
[18:09] harlowja, it sure looks to me like my versioning scheme works for both debian and rh
[18:09] if it is a tag, then you use Major.Minor.Micro
[18:09] if not, then you use
[18:10] Major.Minor.Micro+Commits.gHASH
[18:10] so "+" is addition there?
[18:11] it just means greater
[18:11] i am pretty sure.
[18:11] ya, i don't think that works in rpms from what i remember
[18:11] but maybe larsks can comment
[18:12] harlowja: what am I commenting on?
[18:12] * larsks reads...
[18:12] I think we should be following https://fedoraproject.org/wiki/Packaging:Naming?rd=Packaging:NamingGuidelines w/r/t rpm versioning.
[18:13] I suspect you can use '+' in a release if you want (that is, I don't think it's prohibited), but it's not magic in any way.
[18:15] larsks, i'm not sure i follow
[18:16] I think you are asking if "Major.Minor.Micro+Commits.gHASH" is a good versioning scheme for packages, yes? Or did I misread? Apologies if so.
[18:16] i'm saying that for snapshots of upstream, calling that the "upstream version" is sufficient.
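The scheme smoser proposes above ("Major.Minor.Micro" for an exact tag, "Major.Minor.Micro+Commits.gHASH" for a snapshot) is straightforward to render once the describe tokens are in hand. A sketch under those assumptions (`upstream_version` is an invented helper name); the point of '+' here is that in dpkg comparisons it sorts a snapshot *after* the bare tag, whereas pbr's '~' (as the debian_string() example shows) sorts it *before*.

```python
# Hypothetical renderer for the snapshot scheme discussed above.
def upstream_version(tag, commits, ghash):
    """Render 'Major.Minor.Micro' for a tag, else
    'Major.Minor.Micro+Commits.gHASH' for a snapshot past the tag."""
    if commits == 0:
        return tag  # exact release: just the tag itself
    # '+' makes the snapshot compare greater than the plain tag in dpkg,
    # unlike '~' which compares less than everything (even the empty string)
    return "%s+%s.g%s" % (tag, commits, ghash)
```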
[18:17] if you want to suggest something else, i'm ok with that.
[18:17] I'm still not following, sorry. You are proposing that e.g., brpm should produce packages with this versioning scheme? Or something else? If it's just a documentation reference to the upstream version that seems lovely.
[18:18] so, yeah. if you run 'brpm' on trunk, you'd get a rpm that had a version like that.
[18:18] if you ran it with 0.7.7, you'd get '0.7.7'
[18:19] I guess if those rpms never really go anywhere other than a local development host it doesn't really matter.
[18:19] what versioning would you use for your packaging ?
[18:19] you'd probably just use a serial incrementer on the releases you'd done
[18:19] right?
[18:19] The doc I pointed at (the fedora packaging guidelines), which is pretty much what I implemented in my changes to brpm.
[18:20] So, --.git
[18:20] * if those rpms never really go anywhere other than a local development host it doesn't really matter.
[18:21] hmmm, but they will be going somewhere else, ha
[18:21] harlowja: are they?
[18:21] harlowja: none of the downstream rpm distributions use brpm for building packages.
[18:21] sucks to be them
[18:21] ?
[18:21] I dunno about that.
[18:21] But that is the current situation.
[18:21] :)
[18:22] reality is i don't plan on waiting around for downstream rpm distributions :-P
[18:22] harlowja: i am not sure I follow your comments.
[18:22] In what way would you need to wait on them?
[18:22] wait for a change to filter through the pipeline ---> downstream rpm distributions
[18:23] to images that i can them use
[18:23] *then use
[18:23] (in the godaddy cloud)
[18:23] Ah, I see.
[18:23] also need to add in a godaddy specific module to cloud-init, so i'll be repacking anyway, and cloud.cfg will require tweaks for that
[18:23] It seems like that would happen in any case, regardless of what they were using for the spec file, especially for enterprise distributions, right?
[18:24] There is almost always going to be delay between upstream release and downstream packaging.
[18:24] idk, i'm not a enterprise distributor :-P
[18:24] in the world of the future, no i don't like that delay and i think its messed up
[18:24] in the world of today, sure it probably is a delay
[18:26] Okay. Anyway, back to smoser's question, I would personally adopt the versioning scheme recommended in the fedora docs, but for our purposes here I don't think it matters one way or the other. As long as release is lexically greater than release , it should all be okay.
[18:29] is that in part cause of the lack of usage of brpm?
[18:29] (why it doesn't matter)
[18:31] http://paste.ubuntu.com/22207121/
[18:31] that compares version info like
[18:31] http://paste.ubuntu.com/22207177/
[18:31] woah, shell
[18:31] lol
[18:34] smoser: I was curious about whether something like 0.7.7-10-g1234567 will be > 0.7.7-1-g1234567. If not, one could zero-pad the commit count.
[18:35] you can't put a - in an upstream version in debian
[18:35] stupid
[18:35] wait
[18:35] not saying *you're* stupid
[18:35] stupid that debian issue
[18:36] larsks, the + does seem to work, and honestly makes sense.
[18:36] right now we're 0.7.6+some-commits
[18:36] Awesome. Have at it :)
[18:36] the read-version stuff would have to be updated as it gets 0.7.7 now.
[18:36] but that is ok.
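larsks's zero-padding suggestion addresses a real lexical-sort pitfall: as plain strings, '0.7.7-9-...' compares *greater* than '0.7.7-10-...'. A small illustration (the `zero_pad` helper is hypothetical, and assumes the 'TAG-COMMITS-gHASH' shape with no dashes inside the tag itself):

```python
# Hypothetical fix for lexical ordering of git-describe style versions.
def zero_pad(ver, width=4):
    """Pad the commit-count token of 'TAG-COMMITS-gHASH' so that plain
    string comparison agrees with numeric ordering."""
    tag, commits, rest = ver.split("-", 2)
    return "%s-%s-%s" % (tag, commits.zfill(width), rest)


# plain string comparison mis-orders 9 vs 10:
assert "0.7.7-9-g1234567" > "0.7.7-10-g1234567"
# zero-padding restores the intended order:
assert zero_pad("0.7.7-9-g1234567") < zero_pad("0.7.7-10-g1234567")
```

Proper dpkg/rpm comparison functions compare numeric segments numerically, so padding only matters where versions are sorted as raw strings.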
[18:44] and for a change of topic
[18:44] smoser any thoughts on https://issues.jenkins-ci.org/browse/JENKINS-30183
[18:45] https://github.com/jenkinsci/acceptance-test-harness/blob/master/src/test/resources/openstack_plugin/cloud-init is the cloud-init file that it uses
[18:45] i think in part its because runcmd probably shouldn't be used
[18:45] or the module ordering needs to be adjusted
[18:45] https://issues.jenkins-ci.org/browse/JENKINS-30183?focusedCommentId=266029&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-266029
[18:47] or someones package doesn't set the right ordering for sshd to start
[18:51] it almost seems like whats wanted is an init-run-cmds there
[18:51] or moving runcmd into the cloud_init_modules for this case
[18:51] so that sshd doesn't start until after thats done
[18:52] so they want to lock sshd starting until its done?
[18:52] ya
[18:53] ah. i see.
[18:53] jenkins ssh's in
[18:53] any other way besides adjusting the module ordering that u can think of?
[18:54] ya, they do https://github.com/jenkinsci/acceptance-test-harness/blob/master/src/test/resources/openstack_plugin/cloud-init#L16 and line 22 there
[18:54] to work around this
[18:54] does jenkins retry ?
[18:54] if it can't get in.
[18:54] i suspect yes
[18:54] ya, but the desire i think is to not allow sshd at all until user data scripts have all ran
[18:54] so that they know that user data scripts have finished before jenkins (the master) starts trying to connect in
[18:55] which it does via ssh
[18:55] (to establish the master <-> slave connection)
[18:55] well, a boothook could disable sshd (and then stop it to make sure)
[18:55] and then start it in runcmd
[18:55] right, that'd work too
[18:56] also, if they did not put the ssh-authorized-keys in under 'users'
[18:56] the weird part (or harder part) is that that plugin allows for user specified cloud userdata
[18:56] but wrote them when they're done
[18:56] so people can override it and not do that
[18:56] and screw themselves, ha
[18:57] bb
[18:57] i've wanted to be able to block ssh from starting until users are configured before.
[18:57] so that if you ever got denied you'd not need to try again. either connection closed or you're in
[18:57] but never wanted to not start sshd till later.
[18:58] ubuntu can disable sshd easily enough
[19:11] harlowja, 2 other options
[19:12] a.) rather than having jenkins just bang on the ssh port until it gets in, you could have phone_home tell it it is ready
[19:12] b.) you could have jenkins block until a /run/all-done is there
[19:12] (or cloud-init's /run/cloud-init/result.json)
[19:30] ah, ya, the '/run/all-done' might be a good idea
[20:29] hey folks, had a question on whether anything has changed in recent ubuntu images as far as initial mount and format of the ephemeral drive. we have started seeing this lately: Stderr: 'mke2fs 1.42.13 (17-May-2015)\n/dev/sdb1 is mounted; will not make a filesystem here!\n'
[20:35] we think this might be a race condition between mnt.mount and cloud-config.service, but it seems to be a new issue
[20:51] hans__, this is azure ?
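Option (b) above, blocking until cloud-init's result file exists, could look something like this from the Jenkins master's side. A sketch only: `wait_for_cloud_init` is an invented name; the path is the /run/cloud-init/result.json mentioned in the conversation, and the polling interval and timeout are arbitrary choices.

```python
# Hypothetical "block until cloud-init is done" helper (option b above).
import json
import os
import time


def wait_for_cloud_init(path="/run/cloud-init/result.json",
                        timeout=600, poll=2):
    """Poll until cloud-init writes its result file, then return the
    parsed JSON; raise if it does not appear within the timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            with open(path) as fp:
                return json.load(fp)
        time.sleep(poll)
    raise RuntimeError("cloud-init did not finish within %ds" % timeout)
```

In practice the check would run over ssh (or the instance would phone_home, option a), but the idea is the same: gate the master/slave connection on cloud-init having finished, rather than banging on the ssh port.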
[20:52] @smoser, yes
[20:54] @smoser: it looks like mnt.mount is doing a mount before cloud-init gets there sometimes, and we see 'Device /dev/sdb1 has Temporary Storage ntfs'
[20:55] @smoser, then mke2fs fails - is it expected that the drive is unmounted before cloud-init gets there?
[20:56] hans__, yeah, that's probably true.
[20:56] nothing is stopping the mounts from occurring
=== rangerpb is now known as rangerpbzzzz