[15:23] rharper: I think we are good on https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/322991 now. I was likely going to land this later today unless you have objections. smoser did the initial review there; I had a couple of follow-ups, and minor changes have come back per review comments
[15:24] blackboxsw: cool
[15:25] you know, gotta get content for our cloud-init weekly summary
[15:25] :)
[15:25] lol
[15:25] blackboxsw: let me know when you push, and I'll rebase my two branches (doc change and v2 network passthrough)
[15:36] waiting on tox
[15:39] rharper: pushed
[15:39] blackboxsw: cool, thanks
[15:41] TemporalBeing: ok, looked again at the code; the runcmd items get written out to a scripts file in the instance and then aren't executed until the scripts_user module runs, which happens very late (and after package install); that explains why we could do package installs and interact with them in the runcmd (like your ufw changes, and pip); what remains is the dictionary merging; I'm going to look at that next
[15:41] ok
[15:58] blackboxsw: ok, pushed my two branches
[16:00] nice rharper. ok, I'm seeing a 6 hour offset in my unit tests on the analyze branch (the only thing I think is blocking that branch from landing). The strange thing in test_dump is that I use static SAMPLE_LOGS, so I'm not quite sure how our jenkins environment is parsing that differently. digging a bit
[16:01] blackboxsw: so, I've not updated to your latest branch changes yet, but it looked like we might need to mock out the timestamp value
[16:01] locally, tox -e py3 was failing, maybe you've already fixed that and your offset is different
[16:01] blackboxsw: do you have a link to the failure on jenkins?
[16:02] rharper: https://jenkins.ubuntu.com/server/job/cloud-init-ci/144/console
[16:03] blackboxsw: yeah, that's what I see locally on my xenial dev box
[16:03] tox -e py3 shows it each time
[16:03] * rharper will take a closer look since he can reproduce
[16:03] * blackboxsw makes sure I've pushed latest
[16:04] I get no errors with xenial tox -e py3 on 2fdfab7..9e8fc85
[16:04] just pushed
[16:12] * rharper refreshes
[16:13] blackboxsw: why does that work?
[16:13] I think we should debug parse_timestamp with that value
[16:13] I don't know if that works, but I definitely feel the same way
[16:14] timeshifting -6 hours just sheds light on something busticated
[16:14] I only pushed that offset -6 hours to let jenkins run and see if it still fell over
[16:14] I didn't intend you to get the latest work-in-progress branch
[16:15] s/branch/commit
[16:15] good, the -6 hour timeshift didn't work; it's still broken by a 6 hour offset https://jenkins.ubuntu.com/server/job/cloud-init-ci/146/console
[16:15] reverting that last commit
[16:25] I think this is a UTC thingy; when I manually convert the value via date, I get what the unittest gets as well; so I'm wondering where you got the expected value?
[16:26] ah
[16:26] it's in MDT
[16:27] hahah
[16:27] % date +%s.%3N -d "2017-08-08 20:05:07,147 MDT"
[16:27] 1502244307.147
[16:27] yeah
[16:27] my bad
[16:27] well, not really
[16:27] I feel like we need something else in here re: TZ
[16:28] the timestamp value written to the record is including the TZ offset in the timestamp value
[16:28] that feels like a bug; it should be in UTC (or at least we need to record the TZ)
[16:28] right, an explicit TZ format
[16:29] well, I have a branch to force logging timestamps into UTC
[16:29] which should help here
[16:29] right, that would avoid the issue
[16:29] but for old logs...
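
The 6 hour offset above is the classic naive-datetime trap: a cloud-init.log timestamp carries no TZ info, so converting it to an epoch value picks up whatever the local timezone happens to be (MDT is UTC-6). A minimal Python sketch of the behavior under discussion; this is illustrative, not the actual parse_timestamp from the analyze branch:

    from datetime import datetime, timezone

    stamp = "2017-08-08 20:05:07,147"  # cloud-init.log format; no TZ recorded
    dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S,%f")

    # naive datetime: .timestamp() interprets it in the *local* TZ, so the
    # result shifts with the machine's $TZ (the jenkins-vs-dev-box mismatch)
    local_epoch = dt.timestamp()

    # pinning the naive value to UTC gives a TZ-independent result
    utc_epoch = dt.replace(tzinfo=timezone.utc).timestamp()

    # the two differ by 21600s (6h) when $TZ is America/Denver (MDT)
    print(local_epoch - utc_epoch)
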
[16:30] I think it's OK; if you're parsing a cloud-init.log file for events, you're "generating" events from the log file, and you get the +TZ offset in the records
[16:30] I don't think that's an issue per se
[16:31] that is, the delta between events is still correct independent of the TZ offset
[16:32] ok, I've got to grab some lunch, will think on this, bbl
[16:45] reset my tz to repro the failure.. ok working it
=== shardy is now known as shardy_afk
[18:11] dpb1, I'm curious if there is already a way of adding a user via cloud-init/cloud-config for Ubuntu Core?
[18:16] jhodapp: hey, not everything has landed, a couple of MPs are still in review for snappy. But it's kind of testable right now
[18:16] with effort
[18:16] rharper: if all that was available in uc16, would adding users work?
[18:16] the users configuration dictionary works
[18:16] dpb1, oh nice
[18:17] rharper, so that's a full-on uc16 system user right?
[18:17] if you want to add snappy users, then you use the snap_user format
[18:17] it's a normal user; if you want a snappy system user, with store privs, you need to supply the email or system_user assertion
[18:17] lemme get the link to docs
[18:19] http://cloudinit.readthedocs.io/en/latest/topics/modules.html#snap-config
[18:20] so snappy: {'email': <email>} will trigger 'snap create-user', which will do the user lookup via the store and import ssh keys from the associated account if present, and that will be a snap user which can install without sudo
[18:20] the other users: keys will work, and those users are added via the --extra-users path; they have sudo but are not authenticated to the store, so one would need to snap login, etc to access snap stuff with an account not created through snap create-user
[18:26] rharper, what if the device doesn't have internet access to do the lookup via the store?
[18:26] it fails
[18:26] that's always the case for snap store interactions
[18:26] if you think you cannot rely on the store
[18:26] then you need to use assertions
[18:26] https://docs.ubuntu.com/core/en/reference/assertions/system-user
[18:27] rharper, manually applied assertions or can it be done with cloud-init still?
[18:27] with cloud-init
[18:27] you just inline the assertions in the config
[18:27] snappy: {'assertions': ['assert1', 'assert2', 'assert3']}
[18:27] rharper, ah I see, I think that should work for the requirements I have in mind
[18:27] and then you can do email: <email> with known: true
[18:30] rharper, and that'll work no matter if we've disabled console-conf?
[18:30] no interaction directly with console-conf
[18:31] rharper, speaking of which, second question...can we disable console-conf via cloud-init?
[18:31] cloud-init runs 'snap ack <assertion-file>'
[18:31] ok
[18:31] and 'snap create-user --known <email>'
[18:31] so just a wrapper
[18:31] we don't have a config for disabling console-conf, but 1) if you create-user, snap disables console-conf for you
[18:31] 2) if you want to cheat, you can touch the files that the systemd units do to disable console-conf
[18:32] rharper, oh interesting
[18:32] yeah, we knew we could do that manually
[18:32] that'll work for us
[18:32] console-conf does 'snap create-user <email>'
[18:32] although you may consider adding that ability explicitly
[18:32] so, it's either interactive via console-conf, or via cloud-init
[18:32] I don't think that'll abide
[18:32] in general, we don't want to disable console-conf
[18:32] those that really do want to have a custom image anyhow
[18:33] rharper, that's not a safe assumption
[18:33] in which they can write out something to disable console-conf
[18:33] the use case for this project breaks that assumption
[18:33] and you're using the stock pc image? instead of customizing?
[18:33] rharper, yes
[18:34] to use system-user assertions you need to have your own model
[18:34] so you'll need a custom image anyhow
[18:35] rharper, you can't use the default one with assertions?
[18:35] you can't sign it since you're not canonical
[18:35] rharper, we are though :)
[18:36] from everything I'm aware of, these should really be custom images; but I'll leave that to you; you can always do a write_files to touch the console-conf disable, etc if you don't make one
[19:44] blackboxsw: I was thinking, we should grep for any other configobj + stringio
[19:44] might as well see if we can find any other ones like that
[19:44] robjo: good thought rharper we can fold that in with the cc_landscape change
[19:44] oops sorry robjo.... rharper I mean
[19:45] * rharper just keeps handing out the work
[19:45] 15 more mins on my existing unit tests, I'm trying to get coverage of failure paths up
[19:48] rharper: I'm seeing various approaches to using logs in cc modules. Some define them locally for the module, LOG = logging.getLogger(__name__); others use the log passed into the handle function (param #3). What's the preferred long-term approach?
[19:49] some cc modules use both
[19:49] I believe all new ones use LOG = logging.getLogger(__name__)
[19:49] ok, so we can be certain where the logs are coming from
[19:49] ack
[19:50] rharper: I'll tweak cc_landscape to use this approach then
[19:50] since I'm meddling
[19:50] maybe
[19:50] the log function isn't related to this bug; so my preference is not to touch it
[19:50] maybe a topic to discuss w/ smoser on friday
[19:50] yes
[19:50] +1
[19:50] we may want to file these as backlog bugs
[19:51] falls into the "style" category (like ' versus ")
[19:51] almost
[19:52] heh
[20:50] ok, cc_landscape done. now onto cc_puppet.
[20:51] which also has no unit tests
[20:51] and looks to suffer from the same problem
[20:51] the other modules using StringIO seem to have handled things for py3
[20:51] and have coverage
[21:00] unittesting++
[21:00] up to 61% coverage... slowly climbing out of the hole
[21:03] blackboxsw: nice
[22:33] found more bogus docs for cloud-config options in cc_puppet docstr
[22:33] ok, fixing that too. yay unit tests
[23:01] Hi all. I am trying to use write_file in order to write files onto an AWS instance and failing miserably. In two of the test cases the files themselves are being written but with no content, and in the third nothing is written at all. Any suggestions as to what I should be looking for?
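
To make the Ubuntu Core user discussion above concrete, here is a sketch of the two paths described, following the snap-config module docs linked at 18:19; the user name, ssh key, email, and assertion body are placeholders, not real values:

    #cloud-config
    users:
      # normal user, added via the --extra-users path: has sudo, but is not
      # authenticated to the store (would need 'snap login' for snap access)
      - name: admin
        ssh-authorized-keys:
          - ssh-rsa AAAA... user@host
    snappy:
      # inlined system-user assertions; cloud-init runs 'snap ack' on these,
      # so no store lookup is needed on a device without internet access
      assertions:
        - |
          type: system-user
          ...full signed assertion body...
      # with known: true, cloud-init runs 'snap create-user --known <email>',
      # creating the snap user from the acked assertions instead of the store
      email: user@example.com
      known: true
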
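And a small sketch of the two logging styles being compared at 19:48; the handle signature is the standard one for cloud-init config modules, while the body is purely illustrative:

    import logging

    # newer style: a module-level logger named after the module, so records
    # can be traced back to the cc module that emitted them
    LOG = logging.getLogger(__name__)

    def handle(name, cfg, cloud, log, args):
        LOG.debug('ran %s with config: %s', name, cfg)  # preferred
        # older style: 'log' is a logger passed in by the framework, so the
        # record's origin is less obvious from its name
        log.debug('ran %s', name)
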
[23:01] s/are being written/are being created/
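
For reference on that last question: the cloud-config key is write_files (plural), and an entry whose content key is misnamed or mis-indented still creates the file, just empty, which matches the symptom described. A minimal sketch of the documented form, with placeholder path and body:

    #cloud-config
    write_files:
      - path: /etc/example.conf   # placeholder path
        owner: root:root
        permissions: '0644'
        content: |
          # the file body must be nested under 'content:'; a wrong key name
          # or indentation level typically yields an empty file
          example-setting = true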