[00:01] harlowja, well, i fixed the ones i fixed [00:01] lol [00:02] the 2 i fixed with ip [00:02] thx :) [00:02] they are i'm pretty sure either [00:02] a.) you have no output of 'ip' [00:02] or (i think) [00:02] b.) you don't have 'ip' in your path [00:02] because redhat typically does not have /sbin in path [00:02] but either way [00:02] the next one i think is just that you have dns hijacking going on on that host [00:02] could be [00:03] see is_resolvable [00:03] it tries to work around that stuff [00:03] but apparently not enough there. [00:03] so... [00:03] help me get a cent6 / cent7 container [00:04] i only know VMs [00:04] lol [00:04] well, i have a container [00:04] VM [00:04] you just get it so things work [00:04] :) [00:04] agreed [00:04] will get that in a few, templating cloud.cfg [00:05] well https://public.etherpad-mozilla.org/p/cloud-init-centos-unittest [00:05] right now a bunch of stuff fails for me [00:05] kk [00:05] page still loading [00:06] lol [00:06] i have a node up 45.55.168.77 [00:06] that you can go to ubuntu@ [00:06] k [00:06] and lxc is there... [00:06] cubswin? [00:06] lol [00:06] ssh keys. [00:06] k [00:06] 101 [00:06] is cubs win total [00:06] kind of like lol [00:06] haha [00:06] close enough [00:08] do you use python-mock from the distro ? [00:08] i think it's too old. [00:09] i usually run tox which then installs all the things [00:09] so nothing from distro [00:13] hm. [00:13] well, i couldn't run tox [00:13] it didn't work either. [00:13] hmmm, did u install tox ? [00:13] how much did work? [00:13] from rpm ? [00:13] er.. yum [00:13] or from pip [00:15] http://paste.ubuntu.com/23253475/ [00:15] ah, yes, pip install setuptools --upgrade also [00:16] stupid old busted [00:16] likely also pip install virtualenv --upgrade too [00:16] i believe virtualenv has the bundled version of setuptools that tox then uses [00:16] but i forget [00:17] pip install setuptools tox virtualenv --upgrade [00:17] still issues [00:17] checking that.
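The missing-'ip' failure mode discussed above can be probed with a short sketch. `find_tool` is a hypothetical helper, not cloud-init code; it only assumes what the chat states, that RHEL-family shells often omit /sbin and /usr/sbin from PATH.

```python
# Hypothetical helper to tell apart "no output from 'ip'" vs
# "'ip' is not in PATH": look a tool up in PATH first, then retry
# against the sbin dirs that redhat typically leaves out of PATH.
import os
import shutil

def find_tool(name, extra_dirs=("/sbin", "/usr/sbin")):
    """Return a full path to `name`, searching PATH then sbin dirs."""
    found = shutil.which(name)
    if found:
        return found
    sbin_path = os.pathsep.join(extra_dirs)
    return shutil.which(name, path=sbin_path)
```

`find_tool('ip')` succeeding only via the sbin fallback would point at case (b) above.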
[00:17] k [00:17] and keep on saying to yourself 'old stuff is more stable' [00:17] lol [00:17] so essentially if this works, i'm basically running in tox [00:17] ya [00:18] which... [00:18] protects me from most things [00:18] *for testing [00:18] i'd like to run nosetests with the versions of things that are in the distro [00:19] probably need to cap mock version then [00:21] which is prob 1.0.1 on epel [00:21] *epel 7 [00:21] and 0.8 on epel 6 :-/ [00:21] don't forget keep on saying to yourself 'old stuff is more stable' [00:21] lol [00:22] well, i guess i don't care too much about the mock version [00:22] ya, as long as it works, which it should [00:23] right. i'm fine to have newer of that, but if we're using a library we want to use the version in the distro. [00:23] i like that: [00:23] yum install valid-package invalid-package valid-package2 [00:23] yum is sorta retarded... [00:23] goes and installs everything and then somewhere in the log it says 'invalid-package not available' [00:23] rather than just saying "um... can't do that" [00:23] ya, i think the exit code is also 0 for that case [00:23] from what i remember [00:23] lol [00:24] so u can't detect it in bash scripts [00:24] (at least not via exit codes) [00:26] hopefully dnf fixes that [00:27] FAILED (SKIP=28, errors=30, failures=1) [00:27] nice, more than with tox it seems, lol [00:27] so that's in my cent6 after pip install and then tox [00:27] some recently regressed. :-( [00:27] TypeError: decode() takes no keyword arguments [00:28] ah, ya, that one [00:28] didn't i fix that [00:28] i forget [00:28] lol [00:28] how do you fix that ? [00:28] cent6 and py26 are awesome [00:28] i think u can just not give a keyword argument :-P [00:28] and just do it positional argument only [00:29] ya, just positional should be fine [00:29] S.decode([encoding[,errors]]) -> object [00:29] prob could do encoding=encoding in py27 only or something [00:29] oh. right. [00:29] 'old stuff is more stable' [00:29] ok.
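The py26 decode() failure above comes from `str.decode` rejecting keyword arguments on python 2.6; passing encoding and errors positionally works from 2.6 through 3.x. A minimal sketch (`safe_decode` is an illustrative name, not the actual cloud-init function):

```python
def safe_decode(blob, encoding="utf-8", errors="replace"):
    # py2.6 raises "TypeError: decode() takes no keyword arguments" for
    # blob.decode(encoding=encoding, errors=errors); positional args work
    # everywhere, matching the signature S.decode([encoding[,errors]]).
    return blob.decode(encoding, errors)
```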
with that change, now [00:29] lol [00:29] FAILED (SKIP=28, errors=3) [00:30] cool [00:30] * powersj thinks we should probably get CI unit tests going again as they fail right now [00:30] what's the last 3 [00:30] 'old stuff is more stable' 'old stuff is more stable' 'old stuff is more stable' [00:30] lol [00:31] http://paste.ubuntu.com/23253529/ [00:31] k. first is easy enough (content) [00:33] ya, the rest are dict comprehensions [00:33] set comprehension on one of them :) [00:33] oh ya [00:33] ya, just turn those into set(iterable) and dict(iterable) [00:33] and that's all those become [00:34] so not so bad [00:34] http://paste.ubuntu.com/23253540/ [00:35] cool u can even do [00:35] set(m['uri'] for m in f['apt']['primary']) [00:35] if u really care [00:35] either will be fine [00:35] no need for intermediary list [00:36] oh. [00:36] how is that [00:36] oh. i didn't know you could [00:37] i like that [00:40] :-p [00:40] m['uri'] for m in f['apt']['primary'] is a generator [00:41] yeah, but only if wrapped in parens [00:41] $ python -c 'm for m in (1,2,3)' [00:41] File "<string>", line 1 [00:41] m for m in (1,2,3) [00:41] ^ [00:41] SyntaxError: invalid syntax [00:41] $ python -c '(m for m in (1,2,3))' [00:41] happy [00:41] right [00:41] so u put it in parens [00:41] ha [00:41] set( ) [00:41] lol [00:42] it is kinda weird like that [00:42] that the parens of the function call suffice [00:42] that kind of hurts my brain [00:43] ya, it's probably a weirdness in the python syntax that allows for it [00:45] how should i say "what version of centos am i on" [00:45] ie, i want to know 6 [00:45] or 7 [00:47] alright. good. [00:48] in /etc/redhat-release i think [00:48] or lsb_release i think has something [00:49] or python has some stuff u can use [00:49] https://public.etherpad-mozilla.org/p/cloud-init-centos-unittest [00:50] so...
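The set(iterable)/dict(iterable) rewrite discussed above, sketched with a made-up `mirrors` list standing in for f['apt']['primary'] from the paste:

```python
# py2.6 has no set/dict comprehension syntax, but set() and dict() happily
# consume a generator expression; as the sole argument of a call, the
# generator needs no extra parens (bare, it's a SyntaxError as shown above).
mirrors = [{"uri": "http://a"}, {"uri": "http://b"}, {"uri": "http://a"}]

# was a py2.7+ set comprehension: {m['uri'] for m in mirrors}
uris = set(m["uri"] for m in mirrors)

# was a py2.7+ dict comprehension: {m['uri']: m for m in mirrors}
by_uri = dict((m["uri"], m) for m in mirrors)
```

No intermediate list is built in either case; the generator is consumed lazily.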
given that patch above, i can run tox in cent6 or cent7 [00:50] that's good, but ideally we'd be able to run 'nosetests' against distro-installed versions (as would be found in a runtime) [00:50] ya, that may require a little more version tweaking [00:51] and the 'build' thing mostly worked... at least used to. [00:51] so, that's good. thanks harlowja [00:51] ya, there is a file or 2 that is missing for brpm [00:51] but i'm hoping with nrezinorn that brpm goes away [00:51] you see my comments https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+merge/305882 [00:51] i have to run [00:51] ya [00:51] thanks for your help jxharlow [00:51] who's that [00:51] lol [00:52] you changed your middle name when you moved to godaddy [00:52] JXMenHarlow [00:52] is that because harlowja.coolguy was taken? [00:52] but you could still get harlowjx [00:52] nah, i asked and they just said, meh that's what u got [00:52] i asked about harlowja, and it seemed like a lot of work [00:52] so i just gave up [00:52] lol [00:53] https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+ref/kill-brpm (where brpm goes away) [00:53] it seems to work on cent7 at least, ha [00:53] nrezinorn just wants a spec file, lol [00:53] but i like brpm [00:53] * harlowja runs away [00:53] lol [00:54] as long as some way we can make an rpm [00:54] ya [00:54] * smoser out [00:54] later === shardy is now known as shardy_lunch === shardy_lunch is now known as shardy === rangerpbzzzz is now known as rangerpb [15:39] harlowja, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/307333 [15:39] what do you think about that ? [15:54] Hi, I am using vmware with openstack [15:54] the Virtual CD-ROM which is used for the config drive is not removed after the Machine is provisioned [15:54] Leaving the administrative password and other data as clear text in ( DVD-DRIVE config-2 ) [15:54] what configuration is needed for cloud-init to remove the drive when the vm is provisioned?
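Circling back to the "what version of centos am i on" question above: a sketch that parses /etc/redhat-release as suggested. The helper name and sample release strings are illustrative, not from cloud-init.

```python
import re

def redhat_major_version(release_line):
    """Pull the major version out of an /etc/redhat-release line,
    e.g. 'CentOS release 6.8 (Final)' -> 6. Returns None on no match."""
    m = re.search(r"release\s+(\d+)", release_line)
    return int(m.group(1)) if m else None

# In practice you'd feed it the real file:
#   redhat_major_version(open('/etc/redhat-release').read())
```

`lsb_release -rs` or the stdlib `platform` module are the other routes mentioned in the chat.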
[15:55] can anyone please help me with my query? [16:03] avsh, do you have non-root users who can mount a cdrom ? [16:03] yes [16:04] to my knowledge, cloud-init couldn't on its own rid the system of that drive. you could 'eject /dev/cdrom' [16:04] and then it would not be there, but possibly an 'eject -t /dev/cdrom' would pull back in the tray and have it again [16:04] and if it is Read-only media, then cloud-init can't write to it to blank it [16:05] can you give the file that has the password in it ? it'll help me diagnose where to look to see what else could be done. [16:11] ec2/ [16:11] openstack/2012-08-10/ [16:11] openstack/2013-04-04/ [16:11] openstack/2013-10-17/ [16:11] openstack/content [16:11] openstack/latest/meta_data.json [16:11] openstack/latest/user_data [16:11] openstack/latest/vendor_data.json [16:11] This is the Folder Structure [16:11] and openstack/latest/meta_data.json has contents like { "admin_pass": password, "random_seed": "*******" } [16:20] oh. well, good for you that cloud-init doesn't care what is in there [16:20] smoser, let me know if you need any info [16:20] it ignores it [16:20] as on linux, ssh keys are preferred. [16:21] but users can see other sensitive information which is a security concern [16:21] like if i install a software on the vm, that software password is an example [16:25] avsh, do you have a reason to let users mount that disk ? [16:25] why not just remove users from the 'cdrom' group [16:26] we can do that. we don't see this issue with Openstack + KVM [16:26] only with Vmware + Openstack [16:27] trying to understand, is it configuration with cloud-init or the vmware nova driver [16:27] just because you're not looking in the right place :) [16:27] i suspect the same data is available in http://169.254.169.254/openstack/ [16:28] and any malicious user already knew that.
[16:28] ok [16:28] cloud-init can route off that particular address so that only root would be able to get at it [16:29] with [16:29] disable_ec2_metadata: true [16:30] ok, let me check on a kvm instance with the url you provided [16:40] smoser, you made my day [16:40] you are correct, I was not looking at the right place with kvm + openstack [16:40] I can access all the data with the above url [16:40] I can rule out the vmware openstack nova driver [16:41] avsh, alternatively you can do things in a different way. [16:42] you can use '#include-once' to include other cloud-config things... and make those expiring or one-time-read urls [16:42] the metadata services are not intended to be secure [17:00] smoser, I will check on the #include-once, thanks [17:45] rharper, around ? [17:45] i've 2 things for you... one. do you have a readthedocs.org account (or can you get one). [17:45] 2. http://paste.ubuntu.com/23256482/ [19:03] smoser: here [19:03] I have a rtd account [19:03] lemme get on (2) [19:04] what am I looking at with (2) ? [19:06] what is a rtd account ? [19:06] i will share access to the cloud-init project [19:06] ah, I see, a slow-ish boot; total time is 11 seconds though [19:06] read-the-docs [19:07] ah, right [19:07] rharper, yeah.... [19:07] um more interesting than that. [19:07] raharper [19:07] that was a 2+ minute boot [19:07] :) [19:07] 17:32:37 to 17:32:56 [19:07] it's not cloud-init log [19:08] yeah. [19:08] that's like 20 seconds wallclock [19:08] that's what is fun [19:08] so something else (look at systemd-analyze blame [19:08] can I haz ssh to ami ? [19:08] i think clock is moving backwards [19:08] oh, ntp! [19:08] you can... yeah [19:09] let me set access up for you through my bastion [19:09] i assume it's reproducible on serverstack [19:09] but [19:09] ok [19:10] dmi data /sys/class/dmi/id/product_name returned OpenStack Nova -- that log is not from AMI on EC2 ..
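For reference, the knob mentioned above corresponds to a documented cloud-init module (cc_disable_ec2_metadata); a minimal cloud-config sketch of its use, placed in user-data or /etc/cloud/cloud.cfg:

```yaml
#cloud-config
# Null-route the link-local metadata address after cloud-init has read
# what it needs, so unprivileged users can no longer fetch admin_pass
# and similar data from http://169.254.169.254/.
disable_ec2_metadata: true
```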
[19:11] right [19:16] if it's kernel related, it's possible that it could be reproduced in sstack; however, given that the virt layer is going to handle memory differently (booting xen on ec2, vs kvm on openstack) that may mean we won't reproduce the same amount of slowdown; the kernel bug that's referenced has to do with SLAB/SLUB config and other changes [19:22] rharper, ssh-via proxy-user@10.245.162.60 ubuntu@10.5.0.185 [19:22] first ip is my bastion. i set you up to jump through there. [19:22] second is the system. [19:22] ssh-via is http://smoser.brickies.net/git/?p=tildabin.git;a=blob;f=ssh-via; [19:23] blob_plain is what I want [19:27] smoser: in [19:29] Sep 30 17:31:33 ubuntu systemd[1]: Time has been changed [19:29] something *did* reset the clock [19:30] Sep 30 17:31:33 ubuntu systemd-timesyncd[556]: Synchronized to time server 91.189.89.198:123 (ntp.ubuntu.com). [19:30] Sep 30 17:30:42 - prior event [19:31] Sep 30 17:31:33 - time has changed [19:31] so, slow clock moved forward by just under a minute [19:32] and then, ntp syncs it and moves it another minute forward, Sep 30 17:32:23 [19:33] brb [19:37] rharper, it's weird though that cloud-init's logging didn't see that. [19:37] rsyslog must be doing it ? and it's keeping its own clock or something? [19:56] smoser: it's in journalctl [19:56] the time change happened async from cloud-init execution [19:56] not quite sure how journalctl keeps track of time vs. python logging/rsyslog [19:58] smoser: note the odd delta between the entry timestamp (Sep 30 17:32:37, vs the timestamp collected for the welcome message: at Fri, 30 Sep 2016 17:30:39 +0000) [19:59] yeah. it's weird. [19:59] and systemd is confused by this [19:59] as it *does* say cloud-init took 2 minutes to run [19:59] when i'm pretty sure watching a wall clock that is not the case.
[20:00] correct [20:00] I was going to do the relative time between events in ci and I'm positive it didn't take all that time (rather we've got a clock jump) [20:01] I suspect if you uncloud-init data , reboot [20:01] it won't be as long [20:01] I wonder why the VM clock is so far off though [20:01] 2 minute adjustment is pretty large === rangerpb is now known as rangerpbzzzz [20:06] you launched me at: Fri, 30 Sep 2016 17:29:59 +0000 [20:06] kernel booted : Fri, 30 Sep 2016 17:30:30 +0000 [20:06] smoser interesting if u haven't seen it [20:06] https://cloud.google.com/compute/docs/containers/vm-image/#using_cloud-init [20:08] rharper, yeah, i agree. [20:09] so, I don't know what to do unless timedatectl could tell us how much time was adjusted [20:09] it seems like it's a systemd bug too [20:09] really sucks [20:09] since analyze didn't update (acknowledge that ntp changed time) [20:09] if it knows the timedelta, it could apply that [20:09] in cloud-init we could read uptime [20:09] to provide true time [20:10] despite clock shift [20:10] there's a way to get that i think rather than /proc/uptime [20:10] (although /proc/uptime is mocked in a container for us, and a kernel interface probably isn't) [20:11] right [20:13] https://github.com/xmonader/linuxsysinfo/blob/master/sysinfo.py [20:15] yeah, the procfs is slightly slow in a container [20:15] the cloud-final message is always high on blame due to reading sysinfo [20:16] it's slow relative to other modules [20:19] rharper, well, we could read via sysinfo. [20:19] syscalls, yeah; but that'd be host data, right ?
[20:19] sorta like dmesg [20:19] probably could even read once from /proc/uptime [20:19] it's just not right [20:19] and then count that offset [20:19] :) [20:19] right, reading /proc/uptime once and caching that would be useful [20:20] and then from then on out ask the sysinfo [20:20] but the issue is the 4 invocations of cloud-init [20:20] well, 4 reads of /proc/uptime is probably "not that bad" in the grand scheme of all the bad things. [20:20] but from exec to exit, we could do sysinfo for time; but honestly; I think rdtsc is likely faster for absolute cycles [20:20] smoser: it's the hottest thing left on lxd reboots [20:20] well then. [20:21] it's not bad at all [20:21] :) [20:21] we're at 0.25 second reboot [20:21] but it's still roughly 30% of that in cloud-final message [20:21] so, if we could reduce that , then we'd see sub .2 second reboots [20:22] rdtsc would be faster than a call to proc due to fuse [20:28] hrm, the monotonic clock jumps too when time is set; I suppose that's expected... but that sounds wrong [20:37] smoser: so, can we see if /var/lib/systemd/clock exists in the cloud-image ? [20:38] ah, it doesn't (at least the lxd rootfs doesn't have it) [20:39] as in is it created at runtime you mean ? [20:39] versus already present? [20:39] right [20:39] timesyncd uses that to restore the clock as soon as it starts (if it exists) [20:40] oh. [20:40] that could be a jump but it would have been backwards quite a bit [20:40] restore from what ? [20:40] each sync with ntp, that file is updated with the last good stamp from ntp [20:40] https://lists.freedesktop.org/archives/systemd-devel/2015-May/031988.html [20:40] It [20:40] implements sNTP and will sync the last known time to disk every time [20:40] it gets an sNTP sync or the system is shut down. [20:40] At boot it uses [20:40] that time to reinitialize the clock, as early as possible, before [20:40] NTP is done. This will give you monotonic time which should solve [20:40] your problem.
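The read-once-and-cache-the-offset idea above could look roughly like this. `BootRelativeClock` is a hypothetical sketch, not cloud-init code; it assumes Linux's /proc/uptime and uses time.monotonic() as the reference.

```python
import time

class BootRelativeClock:
    """Read /proc/uptime a single time, then derive later uptimes from
    the monotonic clock, avoiding repeated (fuse-slow) procfs reads."""

    def __init__(self, uptime_path="/proc/uptime"):
        with open(uptime_path) as f:
            boot_uptime = float(f.read().split()[0])
        # cache the offset between system uptime and the monotonic clock
        self._offset = boot_uptime - time.monotonic()

    def uptime(self):
        # no procfs access after the first read
        return time.monotonic() + self._offset
```

Per the caveat raised in the chat: if CLOCK_MONOTONIC really is being nudged on the platform in question, CLOCK_MONOTONIC_RAW would be the safer reference.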
[20:41] so, the journal, and other loggers on the system use CLOCK_MONOTONIC, which is susceptible to NTP changes in clock (always forward) [20:41] so, AFAICT, this is just a really out of sync clock on the host [20:49] which *is* susceptible ? [20:49] or is not susceptible [20:50] is [20:50] CLOCK_MONOTONIC_RAW is not [20:51] hm. [20:51] harlowja, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/307363 [20:52] powersj, that gives us a way (via the gist linked there) to run unit tests in centos6 pretty easily [20:58] rharper, raharper is now a maintainer of CloudInit in readthedocs [20:58] and fyi, magicalChicken's doc fixes are there now! [20:59] (it had stopped building when we moved from bzr) [21:00] smoser: cool, i see it [21:01] * smoser has to run [21:02] harlowja, if you could look at that... you can merge it if you want [21:02] it seems nice. [21:02] kk [21:02] cools [21:02] later. [21:03] smoser, very cool [21:03] woah [21:03] nice nice [21:03] all the modules got filled in??? [21:03] sweet [21:04] that only took a couple years to finish, lol [21:04] thx guys! :) [21:10] magicalChicken, ^^ [21:14] harlowja, smoser: so in the gce link, it mentions that they've implemented setting UID. Any particular reason that hasn't been implemented in base cloud-init up to now besides priorities? [21:14] i forget [21:14] i thought i remember a patch for that [21:14] lol [21:14] bug for it. https://bugs.launchpad.net/cloud-init/+bug/1396362 [21:15] yeah, i remembered seeing it in the backlog [21:15] unsure about that one [21:15] why is it a diff file [21:15] lol [21:15] did they not sign the CLA [21:15] weird [21:15] when i saw it in the gce doc, reminded me that i'd seen it requested before [21:15] ya [21:22] smoser, I setup cloud-init to run unit tests 2x a day now across the architectures. It will git clone master; figured it is better than nothing.
[21:23] I can figure something out for the new centos one too when I get back home [21:24] and since it does not respond to merge requests, but just does master, it will email us on failures, hence the mail you probably just got
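Going back to the monotonic-clock exchange earlier: the clocks in question can be read side by side from Python on Linux (time.clock_gettime, available since 3.3). Per the discussion, CLOCK_MONOTONIC may be rate-adjusted (slewed) by NTP, while CLOCK_MONOTONIC_RAW never is.

```python
import time

# CLOCK_MONOTONIC: never steps backwards, but NTP may adjust its rate.
mono = time.clock_gettime(time.CLOCK_MONOTONIC)

# CLOCK_MONOTONIC_RAW (Linux-only): raw hardware-based time, untouched
# by NTP adjustments; the safer base for boot-time measurements here.
raw = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)

print(mono, raw)
```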