=== sambetts|afk is now known as sambetts
=== shardy is now known as shardy_afk
=== shardy_afk is now known as shardy
[12:09] Hi guys, I have an issue where the pubkey I specify for my ec2 instance doesn't get installed on the instance. The right key is available from the metadata, but doesn't get installed. I use cloud-init to provision my base AMI; might there be something I'm missing to "reset" for the keys to be installed?
[12:32] Also, can I create PRs for the docs? Not familiar with Launchpad
[12:47] telling, yes you can do "merge proposals" for the docs.
[12:47] follow http://cloudinit.readthedocs.io/en/latest/topics/hacking.html
[12:47] to build the docs you run 'tox -e doc'
[12:48] and, yes, the key should get pulled in.
[12:48] can you pastebin a /var/log/cloud-init.log from an instance? this only happens once per instance, though, not every boot.
[12:48] smoser: right, so the issue is I don't clean up the instances dir on my base AMI?
[12:51] This is my cloud-init log: https://ncry.pt/p/nxDn#FbcKAdDqbLBqKIqASdwzhw3PK6FtHibonWeX3zzIr0I
[12:52] As you can see it contains the base AMI's log and the newly created jenkins instance's log :)
[12:56] telling, you should not have to clean up anything on an AMI
[12:56] that's the goal at least.
[12:56] Right, I thought so initially
[12:56] Just can't figure this out :)
[12:58] http://paste.ubuntu.com/24737229/
[13:00] telling, hm..
[13:01] can you share the cloud-config that you're giving? it seems like you've at least modified the default user
[13:03] smoser: yes, it's here: https://ncry.pt/p/pxDn#MpKZXCM083Hu_imxWbRE8mrPj1gbPzNYqbNhJblMgNs
[13:06] telling, http://cloudinit.readthedocs.io/en/latest/topics/examples.html?highlight=users
[13:06] so what is happening...
[13:06] is that you are overriding the default 'users' list
[13:06] and defining one without a 'default' entry
[13:06] and the ssh keys from the metadata service go into whichever user is 'default'
[13:07] there may be a way for you to tag your jenkins user in the list
[13:07] let me check
[13:07] No
[13:07] That's not what I want, I want it to be the ubuntu user as normal
[13:07] I just want cloud-init to create a jenkins user, not for it to be the default
[13:11] Currently it seems no user gets the key. I feel like this can be a cause of trouble
[13:12] I can only access my instance because I have it provisioned with a master key in the base AMI (which I'm still debating with myself if I can justify or not)
[13:15] ah.
[13:15] just add an entry 'default'
[13:15] in your users array
[13:17] But would you have expected my jenkins user in this case to get the key? Or what's to be expected?
[13:22] But that did indeed fix it, thanks a bunch smoser.
[13:22] telling, well, the user that gets the key is the 'default' user (as described in system_info)
[13:23] but that user is not modified/created if it is not in the 'users' array
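A minimal cloud-config sketch of the fix smoser describes above (the jenkins fields are illustrative; keeping 'default' in the list preserves the distro default user, ubuntu here, as the account that receives the metadata ssh keys):

    #cloud-config
    users:
      # keep the distro default user (ubuntu); this is the account that
      # receives the ssh keys from the metadata service
      - default
      # extra user created in addition to, not instead of, the default
      - name: jenkins
        shell: /bin/bash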
=== rangerpbzzzz is now known as rangerpb
[14:43] rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324948 quickly ?
[14:45] or blackboxsw
[14:45] mornin'
[14:49] smoser: I don't see how that kwarg as specified will not break on old versions.
[14:50] smoser: nevermind, dumb
[14:50] reordered call args +1
[14:54] thanks
[15:03] Hey smoser, what's the proper procedure for formally objecting to the behavior of a module and discussing changing it?
[15:03] Something in a github issue or whatever?
[15:12] smoser: So have we talked to SL about identifying information before?
[15:13] I'm about to send them an email asking, and want to make sure I'm not repeating a question we've already asked.
[15:19] i'm sure you are :)
[15:19] :)
[15:55] blackboxsw, http://paste.ubuntu.com/24738693/
[15:55] those on top of your
[15:55] https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/324875
[15:55] and i'm good to pull
[15:55] (not that i'm working on that rather than other things i should be working on)
[15:56] smoser: +1 you need me to apply it?
[15:56] smoser: I know you're listening
[16:01] i'll just merge it
[16:01] you can ACK in that MP
[16:01] (i commented there too)
[16:02] smoser: pushed
[16:16] blackboxsw, fudge
[16:16] dmi_chassis_asset_tag_matches "${azure_chassis}" && return $DS_FOUND
[16:16] check_seed_dir azure ovf-env.xml && return ${DS_FOUND}
[16:17] my feeling is that a seed dir always indicates the datasource.
[16:17] if you put that stuff in, you are disabling all other checks. so i think we want/wanted those lines swapped
[16:18] smoser: don't we want ds-identify to exit on the first/cheapest check for a given datasource?
[16:19] bah.
[16:20] and i think in my squashing i duplicated a line too
[16:20] /o\
[16:22] hmm, I thought generally we wanted to claim success (DS_FOUND) as soon as possible for each datasource in ds-identify. Looking back over the script. I'm on the cloud-init hangout if you want to chat for faster turnaround
[16:22] blackboxsw, well, the idea is that if you write /var/lib/cloud/seed/<datasource>/ you're feeding information to cloud-init. it overrides any checks. you're telling it "USE THIS DATA!".
[16:22] sure
=== sambetts is now known as sambetts|afk
[17:07] can someone please help me
[17:08] am using debian 8 and cloud-init 0.7.7 and the root partition is not resizing
[17:08] https://docs.google.com/document/d/1nAiVAt0rIG6Fl-4vEtvOpHR2e2F5W3j5OEO3POYhcKA/edit?usp=sharing
[17:10] smoser: I'm going through the DataSourceAzure code, and it looks like our seed dir ovf file can contain a dscfg key, and the dscfg can override hostname_command, hostname_bounce.command, and agent_command with a custom command that we could have injected in the seed dir. Wouldn't that get us past the seed dir being broken? It seems like setting dscfg['agent_command'] = ["not__builtin__"] would let us fall through and pull
[17:10] ssh keys out of get_metadata_from_agent (instead of fabric)
[17:10] erick3k: I wonder if that's related to https://bugs.launchpad.net/cloud-init/+bug/1684869
[17:10] Ubuntu bug 1684869 in cloud-init "growing root partition does not always work with root=PARTUUID=" [Medium,Confirmed]
[17:10] checking your doc
[17:10] i see resize module not found and then found
[17:10] blackboxsw thank you
[17:17] hmm, so I'm seeing "Jun 1 08:49:53 newdebian8 [CLOUDINIT] util.py[DEBUG]: Running command ('resize2fs', '/dev/vda1') with allowed return codes [0] (shell=False, capture=True)
[17:17] Jun 1 08:49:53 newdebian8 [CLOUDINIT] util.py[DEBUG]: Resizing took 0.007 seconds
[17:17] Jun 1 08:49:53 newdebian8 [CLOUDINIT] cc_resizefs.py[DEBUG]: Resized root filesystem "
[17:18] hmm yeah then some module not found errors
[17:22] any way to fix that?
[17:25] blackboxsw, yeah.. i guess maybe.
[17:25] erick3k: I'm not sure why that log appears to be running the init modules so many times
[17:25] blackboxsw, it runs on every boot
[17:25] as you can shut down an instance, grow its disk, and it wants to make that magic
[17:26] erick3k, i suspect that you do not have growpart
[17:26] but i would have thought you'd have some log of an error
[17:27] i did just check
[17:27] smoser it is installed :( https://i.imgur.com/HdSDob4.png
[17:29] erick3k, you're not running the growpart module
[17:29] erick3k, and fyi, 'pastebinit' is installable in debian and is fabulous
[17:29] kool
[17:29] ie, you run 'pastebinit /var/log/cloud-init.log'
[17:29] how can i try and run the module?
[17:30] or 'dpkg-query --show | pastebinit'
[17:30] cool
[17:30] your /etc/cloud/cloud.cfg probably has no 'growpart'
[17:30] umm
[17:30] http://paste.ubuntu.com/24739577/
[17:30] it does have resizefs
[17:30] resizefs resizes the filesystem
[17:30] but growpart grows the partition to use any space at the end.
[17:31] you can probably run
[17:31] sudo cloud-init single --frequency=always --name=growpart
[17:31] https://0bin.net/paste/dd5jh1SY5RJtBCn-#v0WV8Is-//g0CwlmdqSEjT33ny325sdWqdqTVkjGeQ4
[17:31] that's my cloud.cfg
[17:33] run cloud-init single --frequency=always --name=growpart and reboot?
[17:33] nice
[17:33] that worked
[17:33] add 'growpart' before 'resizefs'
[17:33] in cloud_init_modules
[17:40] smoser that does not work
[17:40] why
[17:40] i increased the size, deleted /var/lib/instances/xxxx
[17:41] rebooted
[17:41] and still has the same size
[17:43] i'd hope that /var/log/cloud-init.log has a WARN message
[17:43] does
[17:43] checking
[17:43] can you pastebin: sudo growpart --dry-run --update=on /dev/vda 1
[17:43] (where 1 is the partition of the device that needs resizing)
[17:44] and /dev/vda is the device
[17:44] https://0bin.net/paste/2q9nEwhsy5wlg0-b#NBF6dIc76qJkDmnWhnv7K6fTrkqLVVwYkZ1IkszmPxp
[17:45] i don't see the module invoked in the log
[17:45] oh hold on
[17:47] nvm
[17:47] smoser it does work, looks like cloud.cfg with '- growpart' didn't save
[17:47] hehe
[17:47] thank you very much
[17:47] you are the man, owe you a beer
[17:48] you can pay me now and I'll buy smoser a beer
[17:48] :)
[17:52] xD
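A sketch of the cloud.cfg ordering smoser suggests here, with the surrounding modules omitted; growpart has to run before resizefs so the partition is grown before the filesystem is resized to fill it:

    # excerpt from /etc/cloud/cloud.cfg (other modules omitted)
    cloud_init_modules:
      - growpart   # grow the partition into free space at the end of the disk
      - resizefs   # then resize the filesystem to fill the grown partition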
[18:02] Hi, is there a way to prevent cloud-init from caching the user-data locally?
[18:08] My main issue is that if, for some reason, user-data is not received by a VM after a reboot, the user-data is loaded locally and some stuff that normally only gets executed on first boot gets run again.
[18:09] That results in the password getting changed and other fun stuff.
[18:38] Trying to get fs_setup and mounts to work. Getting "Failed to make '/mysql_data' config-mount". Any ideas?
[18:45] powersj: http://paste.ubuntu.com/24740067/
[18:45] end of my tox run
[18:46] Redcavalier: you don't have user-data but somehow it's cached?
[18:46] dgarstang: cloud-init.log and the boot syslog may be helpful (along with any user-data supplied)
[18:48] rharper: The file /var/log/cloud-init.log is empty.
[18:48] rharper, I do have user-data, it gets executed on first boot. However, sometimes after a reboot (1 in 100, roughly), the instance is unable to go and fetch that userdata and decides to use the local data. Then, for some reason, all the commands get executed again, even though they already ran on first boot.
[18:48] rharper: Which is weird because I haven't made any logging changes
[18:48] dgarstang: suggests that cloud-init didn't run or something cleared it.
[18:48] dgarstang: what about syslog? that also may have some cloud-init output
[18:49] rharper: It ran because it executed scripts and it's logged "Failed to make '/redis_data' config-mount" to /var/log/cloud-init-output.log
[18:49] Redcavalier: hrm, that sounds like a network metadata source; it may be going down a NoDataSource path
[18:50] rharper: Same stuff logged to /var/log/messages for cloud-init as /var/log/cloud-init-output.log
[18:50] rharper, oh right, if I can change the way the no-datasource case behaves, I might be able to fix this.
[18:50] So, basically, cloud-init isn't logging _anything_ except stdout
[18:50] Redcavalier: in general, cloud-init does some things every boot, per-instance, or always; most of the initialization items, like passwords, are run per-instance, so unless you've wiped out /var/lib/cloud/instance (symlink to the instance-id dir)
[18:51] it shouldn't re-run any of those per-instance configurations
[18:51] Does this look correct? https://gist.github.com/dgarstang/8352ad8c51d834fbf3f282eb300d83d7
[18:51] rharper, indeed, hence why the fact it re-runs them makes no sense to me.
[18:51] rharper: on what branch of yours? I don't recall seeing that happen before
[18:52] Redcavalier: your /var/log/cloud-init.log should help digest the logic, if you have it
[18:52] dgarstang: looking
[18:52] I wish I had a /var/log/cloud-init.log... :(
[18:52] * powersj goes to make lunch
[18:52] dgarstang: seems odd not to have one; the user data in your paste does not reference /mysql_data
[18:52] oh hm maybe xvdj0 should be xvdj
[18:52] it has redis_data
[18:53] I don't think so
[18:53] That was from an earlier example sorry
[18:53] does it come partitioned ?
[18:53] Now it's complaining about /redis_data
[18:53] rharper: Partitioned...? No, it's an external EBS disk
[18:53] Mounted tho at /dev/xvdj
[18:53] sorry, attached I mean
[18:53] ok, just trying to line up the device name to the device name in the mounts
[18:53] rharper, yes I have it, that's how I know it runs again when it shouldn't. Basically I get: util.py[WARNING]: Getting data from failed. It's then followed by messages telling me that the commands that are supposed to only run once per instance are being executed again.
[18:54] Well, yeah maybe it should be xvdj not xvdj0 in mounts
[18:54] But...
[18:54] lemme check something
[18:54] Redcavalier: so, if you fail to hit the openstack metadata service, I suspect that undefined bad things may happen; I'm not 100% sure where the instance-id comes from on openstack instance types
[18:54] Nope... so the disk /dev/xvdj isn't formatted. I can't mount it. So, it skipped the fs_setup step as well
[18:54] Redcavalier: you can also look in /var/lib/cloud/instances/*
[18:55] dgarstang: well, if something's not right with the formatting in fs_setup, then it won't have a device to mount, which would prevent the mounts section from being successful
[18:55] rharper: Sure. Getting it to format would be a nice first step
[18:56] dgarstang: it looks fine to me, but maybe double check the device name? and are the EBS volumes attached prior to boot (I don't know)
[18:56] powersj: master with some local changes
[18:56] The EBS disk is attached. It's not even logging the fail to format.
[18:56] GRRR
[18:56] rharper, logically, I'm betting it comes from /var/lib/cloud/instances/iid-datasource-none/user-data.txt
[18:56] Redcavalier: that's the fallback
[18:57] so since it failed to connect to the metadata service, it runs with the fallback instance-id, which forces a new instance-id (it doesn't know which one)
[18:57] however, I wonder why it couldn't use the cached metadata ... maybe smoser knows more about that datasource
[18:58] rharper, actually I'm wrong, it can't come from /var/lib/cloud/instances/iid-datasource-none/user-data.txt since that file is empty. After all, my original commands are getting re-run again.
[18:58] Redcavalier: right, it's related
[18:58] cloud-init couldn't find the instance-id, and the fallback one (none found) is a *new* instance id, which means cloud-init won't know it hasn't run any of those config modules before and re-runs them with defaults
[18:59] ie, it didn't know the difference between the image booting in a new instance (ie, I want you to regenerate my keys and etc) vs. this temporary failure to identify itself as the previous instance-id
[19:02] rharper, that makes sense. I do need to change the behaviour then. Could I put information in the fallback datasource so it doesn't use the original one again?
[19:03] an unstable metadata service seems like the critical fix
[19:03] I don't know without looking if cloud-init can know to use a previous instance-id in the case that the currently expected datasource is failing
[19:04] what if, for example, there were 3 different instance-ids? is it always right to use the one from the previous boot? or how does cloud-init know which one to use on failure?
[19:07] rharper, it's clearly copying the previous one though. For example, I have 2 IDs right now, 25603f8d-e602-4cde-8a6b-09e7387e1512 and i-00000165. 25603f8d-e602-4cde-8a6b-09e7387e1512 is my original instance ID with the good userdata. i-00000165 was created when the metadata failed. It contains the same user-data as 25603f8d-e602-4cde-8a6b-09e7387e1512, which in this case is bad as it gets executed again.
[19:08] are both the user-data.txt files empty ?
[19:08] rharper, nope, they both have my original data from when the VM was created. However, I suspect that something else might be happening.
[19:09] I don't believe there is any copying; the metadata service failed, so it picks an instance-id (not sure if that's hardcoded since it's a failure path), and then runs the normal boot sequence, but since the instance-id is *different*, it doesn't find a cached instance dir, and runs like first boot of an instance
[19:10] within that instance-id dir, it writes out the various files it normally would, including the sem dir which tracks when it last ran those config modules
[19:10] Openstack offers both its own metadata source and EC2-compatible data. If cloud-init queries openstack for EC2-like data, it will get a reply. I wonder if this is what is happening.
[19:11] the two datasources for OpenStack that I'm aware of are the metadata-service URL (network based) or a ConfigDrive (vm local)
[19:11] because if it really didn't get any data, my second instance ID's user-data would indeed be empty
[19:11] I believe we will prefer a ConfigDrive since it's local and we can detect that before we bring up networking
[19:12] rharper, yes, I've always been in favor of configdrives. However, my hand was forced by the higher-ups, but that's irrelevant here.
[19:12] but given your error message about the OpenStack datasource failing (and your log file may include URL timeouts) I don't think that'd be a conflicting datasource (config drive) path
[19:13] Redcavalier: I didn't mean to indicate a preferred solution, only that cloud-init detects config drives before it attempts to hit the network URL for OpenStack
[19:13] rharper, right, it didn't load from configdrive here though.
[19:15] "Failed to make '/redis_data' config-mount"... This is getting REALLY annoying
[19:16] Seriously, what am I supposed to do without debug output?
[19:16] Here's my latest attempt .. https://gist.github.com/dgarstang/a7d24e74c78d00a4f90a94019154dd44
[19:18] I don't need to create the dir, do I? I looked through the code and it looks like cloud-init does it. I did see something about selinux in there
[19:20] This error "Failed to make '/redis_data' config-mount" implies it can't even create a directory
[19:20] looking
[19:21] Why, when I google "cloud-init "Failed to make"", do I get nothing?
[19:21] (except source...)
[19:24] Actually, it HAS created /redis_data
[19:25] Running "mkfs -t ext4 /dev/xvdj" manually works fine
[19:25] dgarstang: you should be able to re-run the module like this: cloud-init --debug --force --file test.yaml single --name disk_setup --frequency always --report
[19:25] where test.yaml is your user-data you pasted
[19:25] checking
[19:25] then replace disk_setup with mounts
[19:26] to run the mounts section
[19:27] Ok, I ran it.. got a lot of data, nothing relevant. Not even a mention of 'redis'
[19:28] Maybe it's because I manually formatted the disk. I'll start fresh
[19:28] 38% on SRU bug templates smoser :). think that's worth a coffee break... back in a few
[19:28] Would be nice if that debug got logged somewhere on boot
[19:30] dgarstang: it does, /var/log/cloud-init.log; but you said it was empty; that's not normal
[19:30] rharper: It indeed is empty
[19:30] It's just an official CentOS 6 AMI
[19:31] hrm
[19:32] Fresh instance. Same issue. Same error. When I run the command above, it still doesn't work and ITS output doesn't go to cloud-init-output AT ALL
[19:33] the single command won't go to the file, it goes to stdout; but when the module runs during boot, all of cloud-init's logging should go to /var/log/cloud-init.log; if it's empty, that's possibly a logging config issue in /etc/cloud/cloud.cfg, but I would have thought that an official AMI would have cloud-init logged somewhere properly
[19:33] The scripts that I am running from runcmd have "exec >> /var/log/ec2-bootstrap.log ; exec 2>&1" at the top. Maybe that's confusing cloud-init's logging
[19:34] that's definitely going to grab the runcmd commands' outputs and redirect them
[19:34] Well sure, for my scripts, but nothing else.
[19:35] Cloud-init is still sending its output to /var/log/cloud-init-output.log, but /var/log/cloud-init.log is empty
[19:37] it's possible that the CentOS 6 AMI has it configured to send logs somewhere else, I would think syslog would be the other choice
[19:37] Checking with those exec lines removed
[19:58] back
[20:00] * blackboxsw checks runcmd tests on ubuntu w/ 2>&1 just to be sure I'm seeing logs where I think I should
[20:01] This is my latest attempt... https://gist.github.com/dgarstang/0fecc2dc7baaf1a2272a250bfe4da828... Output is going to cloud-init-output.log, and it's still logging "Failed to make '/redis_data' config-mount" ... still nothing going to cloud-init.log. This is so frustrating!
[20:01] I can't make the cloud-init user data any simpler
[20:03] "mount -t ext4 /dev/xvdj /redis_data" fails when I run it, so cloud-init did NOT format the disk
[20:03] dgarstang, it's empty on rhel before we changed the logging to go directly to a file
[20:04] smoser: Egads. How'd you do that?
[20:04] dgarstang, look in /etc/cloud/cloud.cfg.d/05_logging.cfg
[20:04] see the line about 'syslog' (log_syslog). just comment it out.
[20:04] smoser: yah... don't know how to read/update that file
[20:04] ah
[20:04] then logging goes right to the file, not to syslog
[20:04] which was busted on rhel
[20:04] Well it's not going to syslog either
[20:04] because cloud-init *thought* syslog was hooked up, but it wasn't
[20:04] anyway, that's the fix.
[20:04] All it's logging to syslog is the "Failed to make '/redis_data' config-mount" message
[20:05] It's not doing its regular debug output to syslog
[20:05] yea, that's kind of expected. syslog actually isn't all the way up. just make it go right to the file.
[20:05] that's the change that is in upstream now
[20:06] Good grief. Thanks. Well maybe the ability to format and mount disks is also broken?
[20:06] Actually I don't have that line in 05_logging
[20:07] I've got " - &log_syslog |" and " - [ *log_base, *log_syslog ]"
[20:07] I'd rather fix the inability to mount disks rather than the logging tho. Maybe I should try CentOS 7
[20:09] sorry
[20:09] you want to comment out a line like
[20:09] - [ *log_base, *log_syslog ]
[20:09] comment that out (#)
[20:09] then you'll get a log
[20:10] and the log should have a WARN message
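A sketch of the edit smoser describes to /etc/cloud/cloud.cfg.d/05_logging.cfg (the &log_* anchor definitions earlier in the file are unchanged and omitted here):

    log_cfgs:
      # per smoser above: with the syslog handler commented out, logging
      # goes right to /var/log/cloud-init.log instead of a syslog that
      # isn't fully up yet when cloud-init runs on rhel/centos
      # - [ *log_base, *log_syslog ]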
[20:10] Ok. Lemme try centos 7 first
[20:12] blackboxsw, 44%. and i haven't started my next one yet. (utlemming had done the DigitalOcean one)
[20:19] Looks like the logging issue is fixed in CentOS 7 .... but not its ability to mount disks! ARGH!
[20:19] dgarstang, can i see /var/log/ ?
[20:19] var/log/cloud-init.log
[20:19] sure, hang on
[20:20] Here ya go. https://gist.github.com/dgarstang/2d9c134b7f230c84a82ed64c34a82852 Not much useful stuff there
[20:21] Lines 110,111
[20:22] Corresponding user-data with cloud-init https://gist.github.com/dgarstang/2fa1ed8b630117bdf472744d537d7d28
[20:24] for as much as gists are used as a pastebin, you'd think they would have an interest in just offering a pastebin solution
[20:25] :-\
[20:25] I can't see how to make my config any simpler
[20:25] ... i suspect that selinux is in play
[20:26] I dunno. I read through the cloud-init code and saw something about that so I added it
[20:26] as
[20:26] Jun 1 20:17:35 ip-172-31-7-213 cloud-init: 2017-06-01 20:17:35,363 - util.py[WARNING]: Failed to make '/redis_data' config-mount
[20:26] ^ Yep
[20:26] is from util.ensure_dir() failing with that argument
[20:26] But, it does create the directory
[20:27] hm.. oh.
[20:27] Yep
[20:27] did rharper already go over this with you?
[20:27] apparently there is a bug in python-selinux that could be related.
[20:27] :-O
[20:27] (sorry if we're re-treading here)
[20:27] I'm the first person in history to use the cloud-init disk mount feature?
[20:28] So, maybe when I build my AMI I should have it disable selinux in /etc/sysconfig/selinux
[20:28] https://bugzilla.redhat.com/show_bug.cgi?id=1406520
[20:28] bugzilla.redhat.com bug 1406520 in libselinux "calling libselinux python restorecon fails on /var/lib/nfs/rpc_pipefs" [High,Verified]
[20:30] dgarstang, are you able to make a change easily and re-try ?
[20:30] i'd like to see the exception that is raised
[20:31] Well, someone earlier said I could rerun with "cloud-init --debug --force --file data.yml single --name disk_setup --frequency always --report"
[20:31] so, I can try that
[20:32] smoser: I didn't mention the selinux python issue yet
[20:32] yeah. i'm surprised you're not getting debug messages in that log
[20:32] this is also 0.7.5
[20:33] which... really old. and i know that it is what you have, but, really old
[20:33] smoser: re: selinux; there was a setenforce 0
[20:33] as a bootcmd
[20:33] however, someone else mentioned that maybe bootcmd didn't run "early" enough w.r.t. disk_setup stuff
[20:33] yeah, but can you even do that ?
[20:33] which still seems odd to me given my reading of the config module order
[20:33] but possibly things changed in trunk vs. 0.7.5
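For reference, a user-data sketch of the bootcmd approach rharper mentions; whether bootcmd runs early enough relative to disk_setup on 0.7.5 is exactly the open question above:

    #cloud-config
    bootcmd:
      # bootcmd runs during the cloud_init_modules stage, on every boot
      - setenforce 0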
[20:34] I've disabled selinux in /etc/selinux/config... Rebooting... Will try again after back up
[20:37] Well, I think I'm not getting the error now, but "cloud-init --debug --force --file data.yml single --name disk_setup --frequency always --report" isn't causing it to mount still
[20:37] So, maybe selinux needs to be disabled before boot
[20:40] UGH
[20:40] dgarstang, we have recently been working on making cloud-init much better on centos
[20:41] but as you're finding, it's not as thoroughly tested as Ubuntu.
[20:41] the goal is to improve it and get automated tests in place to keep the function there.
[20:46] Sadness
[20:54] dgarstang: the single command runs just one module, you will need to call it again with '--name mounts' to perform the mount section
[20:54] Is this progress? Using CentOS 7, with selinux disabled on boot, I'm getting only "util.py[WARNING]: Activating mounts via 'mount -a' failed" now. The other error has gone. it's still not formatting it though
[20:55] hm.
[20:56] I noticed earlier too that when I rebooted, I lost the custom hostname I had set. I presume it's picking that up from DHCP
[20:56] Wait wait. I am still seeing "Failed to make '/redis_data' config-mount" ... missed it earlier
[20:56] So, I can probably assume that the latest version of CentOS with cloud-init can't format and mount disks
[20:58] Sigh
[21:03] blackboxsw, i have to run. even 50% right now.
[21:04] smoser: make that 56%
[21:12] I'm screwed
[21:26] How could I get cloud-init 0.7.9 onto centos 7?
[21:28] Might 0.7.9 potentially fix my disk mount issue?
[21:35] dgarstang: we're still working on getting daily rpms built properly; that's coming soon. in the meantime, I've some hand-built ones I'm testing with which may help at least determine if there are other issues in play: http://people.canonical.com/~rharper/cloud-init/rpms/centos/7/cloud-init-0.7.9+123.g8ccf377-1.el7.centos.noarch.rpm
[21:35] I think you can yum install that URL; you'll have to accept the unsigned rpm
[21:36] that's trunk from a few weeks ago plus a few network-configuration-related fixes, but should be good enough to check that the disk_setup/mount stuff works (or fails in the same way), in which case we should file a bug against cloud-init with your test user-data so we can get that resolved
[21:39] Ok, thanks. Tried with that RPM... same issue
[21:40] "cloud-init --debug --force --file data.yml single --name mounts --frequency always --report" right?
[21:41] does that show errors, or do you just not get mounts ?
=== rangerpb is now known as rangerpbzzzz
[21:43] Hmmm I got this ... https://gist.github.com/dgarstang/4eb93fda29decf54762b2d9356d505dc
[21:44] hrm, doesn't appear to have run the mounts, lemme make sure I can get mounts to run via single
[21:44] fs_setup is supposed to format, right?
[21:44] yes
[21:44] Actually I removed mounts from data.yml. Was trying to simplify
[21:45] It's not formatting, however, as a manual mount command fails. "mount -t ext4 /dev/xvdj /redis_data"
[21:46] do we now get debug in /var/log/cloud-init.log ?
[21:46] checking
[21:46] Yes
[21:46] ok, then maybe let's run the fs_setup one and mounts, and gist the cloud-init.log
[21:46] I got debug since I went from centos 6.5 to centos 7
[21:46] and see what we can see
[21:46] and the updated rpm will have the logging fix
[21:47] heh, 'ignorming' typo in the output there
[21:47] "cloud-init --debug --force --file data.yml single --name fs_setup --frequency always --report" ?
[21:47] no, disk_setup
[21:48] jeez
[21:48] disk_setup is the module name, it reads fs_setup config as one of the different areas of disk setup
[21:50] hang on
[21:51] sure
[21:53] Well here's the whole thing https://gist.github.com/dgarstang/91122f453f7c512e4b527fe8c73aa41d
[21:54] hrm
[21:55] Does it matter that this is a t2.nano instance? It has an 8Gb EBS disk attached at /dev/xvdj
[21:55] No ephemeral
[21:56] no
[21:56] fs_setup needs to be a list
[21:56] fs_setup:
[21:56] list in the yaml?
[21:57] :-O
[21:57] like in your original user-data post: https://gist.github.com/dgarstang/8352ad8c51d834fbf3f282eb300d83d7
[21:57] how did that happen. :-\
[21:57] and it's *terrible* that we don't log something like, got an fs_setup but no list ...
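A sketch of the corrected user-data, per rharper's point that fs_setup must be a list; the device names match dgarstang's pastes and the mount options are illustrative:

    #cloud-config
    fs_setup:
      # each list entry describes one filesystem to create
      - device: /dev/xvdj
        filesystem: ext4
    mounts:
      # fstab-style entry: device, mount point, fstype, options, dump, pass
      - [xvdj, /redis_data, ext4, "defaults,nofail", "0", "2"]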
[21:58] That must have happened when I removed the label line to simplify
[21:58] Made it a list... same deal
[21:59] and can you look at /dev/disk/by-uuid and see if there is a symlink to the disk ?
[21:59] it's possible it was successful and just doesn't output much
[22:00] Well, that's 0a84de8e-5bfe-43e7-992b-5bfff8cdce43 -> ../../xvda1 ... but xvda is the root disk
[22:00] when the device isn't present, I get an error reported, Failed during disk check for /dev/xvdj
[22:03] and in the log, I can see cc_disk_setup debug output: http://paste.ubuntu.com/24741508/
[22:04] rharper: on my output?
[22:05] no, you said it was the same
[22:05] right... but I don't see anything like the output you pasted
[22:05] but I would expect to see cc_disk_setup output if the user-data for fs_setup was fixed; which is surprising
[22:07] here's my change to the yaml you pasted; that *should* show the 'setting up filesystems: ' message in /var/log/cloud-init.log; http://paste.ubuntu.com/24741550/
[22:11] one sec
[22:11] rharper: Didn't work
[22:12] wait wait
[22:13] nah. I dunno. :(
[22:20] "Unable to convert /dev/xvdj to a device"
[22:21] ok, it actually formatted it. Didn't mount it tho
[22:24] What command would I run to mount? Does disk_setup mount?
[22:26] dgarstang: that's progress; lemme look at the code
[22:27] if disk_setup didn't return OK, then you won't be able to mount it (unless for some reason it's already formatted);
[22:27] one sec
[22:27] I'm gonna roll a new AMI with the newer cloud-init for a start
[22:29] ok, I've gotta step out for a bit more; would you be able to paste: cat /proc/partitions ? I'll read the disk_setup code and see where that message is coming from, but I suspect that something's awry with the device
[22:30] rharper: kk, thanks
[23:15] Looks like using http://people.canonical.com/~rharper/cloud-init/rpms/centos/7/cloud-init-0.7.9+123.g8ccf377-1.el7.centos.noarch.rpm breaks system boot. Public ssh key doesn't get installed properly. Can't ssh in
[23:59] dgarstang: lemme see, it may have changed the default user where the keys are installed