#cloud-init 2014-05-27
<bracki> How can I prevent cloudinit from generating locales?
#cloud-init 2014-05-28
<smoser> bracki, you should be able to just give user-data cloud-config (or /etc/cloud/cloud.cfg.d/*.cfg) data that says:
<smoser>  locale: null
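A complete user-data file applying smoser's suggestion would look like this (a sketch; `locale: null` tells the cc_locale module not to generate anything):

```yaml
#cloud-config
# Disable locale generation by cloud-init, as suggested above.
locale: null
```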
<smoser> utlemming, did you ever make the images pre-generate the en_US.utf-8 ?
#cloud-init 2014-05-29
<loki97123> question for you all
<loki97123> ps hello
<loki97123> when passing in a #!/bin/bash through userdata
<loki97123> and calling a script.sh that calls a script2.sh
<loki97123> that launches chef-client
<loki97123> that in turn has an embedded shell out to do a rsync on /* ...
<loki97123> could i lose file globbing in all that mess?
<loki97123> anybody ?  ...
<loki97123> doing file globbing through a user-data instantiated script … failing?
<loki97123> help?
#cloud-init 2015-05-26
<Odd_Blok1> smoser: Thanks for merging the Azure walinuxagent replacement stuff. :)
<Odd_Bloke> smoser: We've realised that we're lacking a way of turning this on without using user-data; do you have any recommendations about how to make it configurable using something we can include in the image build?
<smoser> Odd_Bloke, the datasource config should suffice
<Odd_Bloke> smoser: How would we change that config at image build time?
<smoser>  /etc/cloud/cloud.cfg.d/azure.cfg
<smoser> Odd_Bloke, around ?
<smoser> wonder if you have some time to look at https://code.launchpad.net/~bbaude/cloud-init/rh_subscription/+merge/259159 with rangerpb . he's just getting his head around unit tests and mock, and the tests there used checking of the log, which seems less than ideal. wondered if you had some other suggestions.
<rangerpb> thanks smoser Odd_Bloke happy to hear what you would prefer, etc
<dbuechler> Hi All, I have two remaining issues with a CentOS 7 image running cloud-init 0.7.7.
<dbuechler> First, pre-networking fails with a python error "No Such File or Directory", but I can't figure out which file it's looking for.
<dbuechler> Second, cc-locale fails, but doesn't provide any more detail.
<dbuechler> Err, my bad - cc_locale.  As I doubt I'll ever use a VM that isn't in English, I'm not sure that one matters a whole lot, but while I'm here asking questions, I might as well ask about it.
<dbuechler> Sorry.  I had to take a phone call.  Here's a pastebin of the python error in pre-networking:  http://pastebin.com/sKBevg5k
<dbuechler> smoser: Rebuilding the CentOS 7 image without LVM worked like a charm.  The xfsprogs was, in fact, installed by default, even on a Minimal System configuration.  I appreciate your insights on the matter.
<smoser> dbuechler, not sure on the selinux...
<smoser> would have to poke more. 
<dbuechler> Rather, the xfsprogs *package* was installed by default...  I'm a native English speaker.  I promise.
<smoser> can you paste the whole cloud-init.log ?
<smoser> and cloud-init-output.log ?
<smoser> unless you're concerned about sensitive info there.
<dbuechler> Sure.  Curiously, cloud-init.log was empty.  Cloud-init-output, I'll paste.  I'll just redact anything critical.
<smoser> cloud-init.log being empty is not good. its probably related to you needing to configure logging.
<dbuechler> smoser: Uh-oh.  Looks like I have a problem with my cloud.  It appears one of my controller nodes may be down.  I'll get that information for you as soon as I can.  It may be a couple of hours.  I'll paste cloud-init-output, verify cloud-init.log is still empty and then probably paste my cloud.cfg file.
<smoser> k.
<Odd_Bloke> smoser: rangerpb: Around for a little while now, will take a quick look.
<rangerpb> thanks Odd_Bloke 
<Odd_Bloke> rangerpb: I assume GoodTests and BadTests are happy-path/bad-path?
<rangerpb> correct
<Odd_Bloke> smoser: rangerpb: I can't decide how I feel about the checking of log; it is a kind of contract that the unit under test has with the end user.
<Odd_Bloke> But it's also the sort of test that can end up being very brittle.
<rangerpb> yeah it is brittle due to changing of error messages == failed tests
<Odd_Bloke> rangerpb: I wonder if perhaps relaxing the checks to just "did we log something at info level" and "did we log something at warn level" might be a good compromise?
<rangerpb> the log is definitive which is why I put it in there.  im thinking either your suggestion or remove it completely and depend on counting calls to functions, etc
<rangerpb> i suppose i could set some test vars in the code which are bool and I could check those
<rangerpb> like finishing successfully or not, prefer that?
<Odd_Bloke> rangerpb: That doesn't sound quite as good; you could move the logging out to log_success() and log_failure() functions, and check that those are called?
<rangerpb> you mean the actual calls to the logger?
<Odd_Bloke> I mean something like:
<Odd_Bloke> def log_success():
<Odd_Bloke>     LOGGER.info('rh_subscription plugin completed successfully')
<rangerpb> yeah
<rangerpb> i could do that
<Odd_Bloke> And then we aren't testing the messages, just that the high-level communication to the user is happening.
<rangerpb> sure
<rangerpb> if you are comfy with that, i will do that
<Odd_Bloke> rangerpb: Thanks. :)
<Odd_Bloke> rangerpb: Just added a few more comments. :)
<rangerpb> cool thanks man
#cloud-init 2015-05-27
<smoser> Odd_Bloke, thanks for helping rangerpb 
<rangerpb> yeah
<rangerpb> i have some followups to smooth out with him, but implementing his comments right now
<dbuechler__> smoser: I have that pastebin you asked for. Sorry about the delay. I had another fire to put out yesterday.
<dbuechler__> http://pastebin.com/cij4M18Q
<dbuechler__> cloud-init.log was empty.  Pastebin contains cloud-init-output.log and cloud.cfg as requested.  There wasn't anything overly sensitive in there, so I didn't feel the need to redact.
#cloud-init 2015-05-28
<Odd_Bloke> smoser: Review of https://code.launchpad.net/~daniel-thewatkins/cloud-init/openstack-vendor-data-doc/+merge/260463 would be appreciated, because we want to point a partner at some vendor-data docs. :)
<smoser> Odd_Bloke, gracias.
<smoser> Odd_Bloke, does cloud-init handle vendor-data differently on openstack than other places ?
<Odd_Bloke> smoser: I don't really know how it's handled elsewhere. :p
<rangerpb> hey guys, fedora ppc64 is spinning cloud images these days and I am testing it.  I see the cloud-init data being consumed, the host name changes, but the cloud user (fedora) isn't being set up and i cannot log in as a result.
<rangerpb> the cloud-init data is known good, tested against fedora 22 x86_64 ... any tips on debugging the user-creation, etc?
<rangerpb> i mounted with guestfish and didnt see any cloud-init logs
#cloud-init 2015-05-29
<spandhe> hey smoser ! yt?
<spandhe> smoser: hey.. have a qn.. I remember there was an issue in cloud-init where it used to run network config again and again
<spandhe> any idea when that was fixed? which version
<spandhe> smoser: hey.. yt?
<rangerpb> Odd_Bloke, smoser we good now ?
<Odd_Bloke> rangerpb: We still have log_sucess rather than log_success.
<Odd_Bloke> Other than that, I'm happy.
<rangerpb> hmm
 * rangerpb looks
<rangerpb> Odd_Bloke, i think you are just pointing out a spelling error right?
<Odd_Bloke> rangerpb: Yep.
<Odd_Bloke> But an error nonetheless. ;)
<rangerpb> sure, i was trying to understand if I had done it right in some and not in others
<rangerpb> ok fixed Odd_Bloke thanks man
<harlowja> smoser congrats, http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-05-26-20.02.log.html#l-10
<harlowja> lifetime ticket to the summits, ha
<harlowja> smoser reminds me of http://floridamedicalmarijuanahelp.com/wp-content/uploads/2014/07/golden-ticket1.jpg lol
<harlowja> well thats a weird domain, haha
#cloud-init 2016-05-31
<harlowja> soooo smoser merge that stuff today right,lol
<mgagne> so I tried latest PPA (version available last Friday) and now I'm not even sure if cloud-init runs at all. hostname stays the same, cloud-config aren't injected, network is not configured.
<smoser> mgagne, hm..
<smoser> hm..
<smoser> http://pastebin.com/PCMX2RBP
<smoser> paste.ubuntu.com seems dead. but that is my current changes that i have locally.
<smoser> bzr push is failing too.
<harlowja> u guys broke it all
<mgagne> really, interfaces are going to be renamed? come on guys
<harlowja> hmmm, whats up with that
<mgagne> I thought we talked about it last week
#cloud-init 2016-06-01
<larsks> smoser: I think there is a bug in how growpart calls partx, and now partx has better error checking and blows up (see https://bugzilla.redhat.com/show_bug.cgi?id=1327337)
<larsks> I am going to submit a fix on launchpad.
<smoser> larsks, so partx changed interface ?
<smoser> oh. i see your last comment there.
<larsks> I don't think so.  I think the previous command invocation was actually incorrect, based on my reading of the man page (even for earlier versions).
<larsks> I think it worked because of luck.
<smoser> well, not complete luck
<smoser> definitely it would have paid attention to the '1' in that call
<larsks> The man page says either "partx disk" or "partx partition [disk]", neither of which is what growpart is doing.
<smoser> previously, growing a partition other than '1' would fail.
<larsks> The examples in the man page suggest either "partx --update /dev/vda1" or "partx --update --nr 1 /dev/vda" would be correct
<smoser> that is weird. i agree.
<smoser> but it definitely does work.
<smoser> for lots of things
<larsks> Oh yeah, totally :)
<larsks> Heck, now I am extra confused...
<larsks> ...because looking at the sources we appear to be calling $part $dev, which should be fine.  Ugh, maybe I misread some output somewhere?
 * larsks puts on his paying-attention glasses and checks again.
<larsks> smoser: okay, patch submitted in lp.
<smoser> link ?
<smoser> larsks, i'm *that* lazy
<larsks> https://bugs.launchpad.net/cloud-utils/+bug/1587971
<larsks> It's a...(counts)...5 character change.
<harlowja> smoser merge that stuff today righththtttttt
<harlowja> lol
<harlowja> (not joking, ha)
<smoser> harlowja, bzr+ssh://bazaar.launchpad.net/~smoser/cloud-init/trunk.fix-networking/
<smoser> er..
<smoser> https://code.launchpad.net/~smoser/cloud-init/trunk.fix-networking/
<harlowja> whats that
<smoser> that is what i'm working on
<smoser> that has to get in
<harlowja> u broke stuff
<smoser> that fixes (soon) openstack config drive
<harlowja> kk
<harlowja> will look at
<harlowja> whats up with the renaming code?
<harlowja> does that run every time cloud-init is run?
<harlowja> that's the only part that seems to concern me :-/
<smoser> harlowja, yeah, i know. its scary
<harlowja> didn't think that doing `ip set ` was a permanent change
<smoser> so yes. it runs every time.
<harlowja> :-/
<harlowja> scary
<smoser> it has to because of 2 things
<smoser> a.) initramfs is out of date with respect to /etc/systemd/network/*.link files
<smoser>   this is the case in any fresh instance as the initramfs that booted is either pristine or out of date
<smoser> b.) lxc container
<harlowja> doesn't everyone use initramfs, i didn't think say centos did and such
<smoser>   there are no udev events in a container, so the systemd .link files do not get applied
<harlowja> *does everyone
<smoser> i'd suspect most things use an initramfs.
<harlowja> ah, just the normal initramfs, nothing special to cloudinit
<smoser> its possible they do not in cloud images (where you could feasibly have a 'virtual' kernel that had block device drivers built in for all targeted root devices
<smoser> )
<smoser> but even then, that means you can't boot with UUID=
<smoser> or LABEL=
<smoser> as its the initramfs that figures that part out
<harlowja> gotcha
<smoser> the reason that initramfs-out-of-date is a problem
<smoser> is because systemd .link refuses to rename devices that have been renamed
<harlowja> lol
<smoser> and the initramfs will not have any rules, and will rename 'eth0' to 'en1p2' or whatever the default is
<smoser> its kind of obnoxious really.
<harlowja> ya, seems like it
<smoser> smoser: please rename the device 'a' to 'b'
<smoser> systemd: ok, that sounds great. device 'a' is now 'b'
<smoser> smoser: please rename the device 'b' to 'c'
<harlowja> no soup for u
<harlowja> lol
<smoser> systemd: sorry, smoser, you need to think ahead more or update your initramfs and reboot
<harlowja> ya, that's stupid
<harlowja> lol
<smoser> harlowja, it won't run though if networking is disabled in cloud-init
<smoser> so if user wants cloud-init out of the picture, they can disable networking (via cloud-config) and get that.
<smoser> on subsequent boots
<harlowja> ya, weird stuff, lol
<harlowja> i wonder if at some point we can just have the underlying system (sysconfig for example) have the right names in the first place
<harlowja> cause from my understanding sysconfig files have a name 'DEVICE=X'  and `HWADDR` fields
<harlowja> so whats the point of renaming stuff if those 2 are right (but maybe i not understand something here, ha)
<harlowja> so in sysconfig land, is the renaming needed(?)
<smoser> so the renaming is to support when the datasource declares the names
<smoser> mgagne pointed out that in openstack currently they don't really declare the nic name.  (the id actually is the host nic's name :)
<harlowja> ya, good point, its crap like 'tap-blahblah' or ...
<harlowja> not eth0 or ethX
<smoser> but on other systems such as smartos and possibly in the future on openstack, the datasource (or even user) would like to declare the names for their nics
<smoser> internal0
<smoser> or
<smoser> external0
<smoser> to mean obviously useful things.
<harlowja> ya
<harlowja> is anyone in openstack (nova) land fixing the names of these things?
 * harlowja not it
<harlowja> lol
<harlowja> to not have host nic junk names
<smoser> its not bad, they're just 'ids'. which is fine. they could be uuids
<smoser> but conceivably a user would quite possibly want to attach a nic to a system and provide the name that they'd like that thing to appear as.
<harlowja> sure, i guess, i like mine called eth0-pink-bunny
<harlowja> lol
<mgagne> I'm planning on opening a bug/change for that bad behavior
<harlowja> cool
<harlowja> smoser  only thing i can think of is to add more informational logging about what is about to be renamed (and not just the errors that may have resulted from the act of renaming/figuring out what to rename)
<smoser> harlowja, yeah, i'm working on that.
<harlowja> kk
<smoser> it is even more fun now in container
<smoser> than i thought
<harlowja> :-/
<smoser> https://github.com/lxc/lxd/issues/2063
<harlowja> is smartos going to use cloudinit now?
<smoser> you can't rename a nic that is up
<harlowja> not their own cloud-init
<smoser> ubuntu on smartos (guests) have cloud-init
<harlowja> k
<smoser> since you can't rename a nic that is up
<smoser> and nics in containers *start* as up
<harlowja> 'why is my network device 'up' when nothing in my init system has configured it so'
<harlowja> lol
<harlowja> well that shit be weird
<smoser> so now i'm re-working that to allow for downing interfaces
<harlowja> lol
<mgagne> https://bugs.launchpad.net/nova/+bug/1588017
<smoser> so it will be able to down it if it does not have any "real" addresses
<smoser> mgagne, interesting...
<harlowja> mgagne thx
<smoser> from the guest's perspective, it should be physical always
<mgagne> that's not what the spec mentions
<mgagne> either we respect spec or we propose an amendment and wait for it to merge before changing implementation details
<smoser> where do you see this ?
<mgagne> in the spec...
<smoser> i see only a single occurrence of the word 'virtual'
<smoser> at http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact
<mgagne> Example of VIF
<mgagne> otherwise I don't know what the purpose of vif would be
<harlowja> ya, only thing i can think of is for ironic, but idk
<harlowja> ironic though is different
<harlowja> smoser  i guess otherwise, now that i understand the renaming stuff, that fix looks ok
<harlowja> i can work it into the refactor i had when u merge it
<harlowja> smoser in '    def update_byname(bymac):
<harlowja>         return {data['name']: data for data in bymac.values()}'
<harlowja> that no worky on py26
<harlowja> just fyi
<harlowja> dict comprehension in 2.7+
<smoser> harlowja, oh yeah, i forgot i had to care about py226
<smoser> er.. even py26
<harlowja> :-P
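For reference, a Python 2.6-compatible rewrite of that snippet: dict comprehensions only arrived in Python 2.7, but `dict()` over a generator expression works on both.

```python
def update_byname(bymac):
    # Equivalent to {data['name']: data for data in bymac.values()},
    # but also valid on Python 2.6.
    return dict((data['name'], data) for data in bymac.values())
```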
#cloud-init 2016-06-02
<smoser> mgagne, a build of https://code.launchpad.net/~smoser/cloud-init/trunk.fix-networking/+merge/296272 should be landing at https://launchpad.net/~smoser/+archive/ubuntu/cloud-init-dev shortly
<smoser> (and this one should actually build :). the previous didn't build, which was why whatever you had there would have been garbage.
<smoser> whats there right now will use the 'id' as the nic name. i plan to look at changing that for openstack tomorrow.
<ubuntu__> smoser: hi, is https://bugs.launchpad.net/cloud-init/+bug/1355909 still on the radar?
<Odd_Bloke> Well, I've been sitting all by myself in a #cloud-init on a different IRC server for a few days. :p
<smoser> ubuntu__, i would like to have that functional, yeah.
<Takumo> Hi all, how would I set hostname based on an ec2 tag in cloud-config?
<smoser> Takumo, nothing really helpful for that in cloud-init.
<smoser> are tags available inside the instance, i forget
<smoser> i didnt think they were.
<Takumo> I think they're available from the metadata service
<Takumo> doesn't matter too much
<Takumo> was just looking for a way to assign a hostname or id to use within ansible scripts
<Takumo> thought I could set the fqdn based on a tag and keep my ansible stuff agnostic of ec2
<smoser> well you can assign the hostname via cloud-init
<smoser> just not via a tag
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L408
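Per that example file, user-data along these lines sets the name (the values here are made up for illustration):

```yaml
#cloud-config
hostname: web01
fqdn: web01.example.internal
```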
<GivenToCode> Hi I am using ubuntu 14.04.4 with cloud-init 0.7.5 on ec2 and am having issues with ephemeral drives on various instance families
<GivenToCode> essentially, for the r3 family, cloud-init runs mount -a and in fstab is an entry to mount xvdb to /mnt, however for r3s the ephemeral devices don't have file systems
<GivenToCode> the mechanisms cloud-init provide (bootcmd) run after mount -a runs...
<Odd_Bloke> smoser: I have a doc improvement MP open at https://code.launchpad.net/~daniel-thewatkins/cloud-init/merging-doc-clarification/+merge/295822.  If gaughen reviews and approves it, are you happy for me to merge it, or would you like to review?
<smoser> Odd_Bloke, that's fine.
<smoser> GivenToCode, you just want it to not do that ?
<GivenToCode> if it matters i am using this AMI ami-0f8bce65, which has a mount for xvdb to /mnt in fstab and fails on r3 instances
<smoser> GivenToCode, you should be able to provide user-data that says:
<GivenToCode> smoser, I'm not sure what I want it to do, but it is clear cloud-init has at least one faulty assumption on ec2
<smoser> mounts:
<smoser>  - [ephemeral0, null]
<smoser>  - [swap, null]
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L174
<smoser> and if you could, file a bug against cloud-images at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L174
<smoser> bah
<smoser> at https://bugs.launchpad.net/cloud-images/+filebug
<smoser> and Odd_Bloke and his team will see if they can't make it dtrt
<GivenToCode> smoser, but i still do in fact want xvdb to mount to /mnt, we depend on it
<smoser> ah. ok. then you can put a filesystem on it too.
<smoser> hold on
<GivenToCode> but i want it to happen after we mkfs, which we are trying to do in bootcmd but it happens after mounts it looks like
<smoser>  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config-disk-setup.txt#L4
<smoser> that should work.
<smoser> disk_setup:
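A sketch of what that user-data could look like for an ephemeral disk that arrives without a filesystem. The shape follows the cloud-config-disk-setup example doc linked above, but the specific values here are illustrative assumptions, not a tested configuration:

```yaml
#cloud-config
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: true
    overwrite: false
fs_setup:
  - label: ephemeral0
    filesystem: ext4
    device: ephemeral0
    partition: auto
mounts:
  - [ephemeral0, /mnt]
```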
<GivenToCode> smoser, so something like that is happening for c3s but not r3s
<smoser> right.
<smoser> so on amazon in most cases they provide you an ephemeral disk that already has a filesystem on it.
<smoser> and cloud-init just says "mount that"
<GivenToCode> ok, so if i put that in my own custom user data it'll happen for all instances no matter the family?
<smoser> but in this case (and on other clouds) that is not the case.
<smoser> this is new info to me, i've not tracked ec2 closely for quite a while.
<GivenToCode> ok, you've been a huge help
<smoser> GivenToCode, i'd like it to. it is possible there are bugs in it. but this is generally the path that azure takes, so something should be able to be made to work there.
<GivenToCode> cc_disk_setup.py[WARNING]: Query against device /ephmeral0 failed
<GivenToCode> util.py[WARNING]: Failed partitioning operation
<GivenToCode> ah typo on line 9
<Odd_Bloke> smoser: Will cloudinit.readthedocs.io eventually get updated, or is there another step that needs to happen?
<smoser> Odd_Bloke, i think so.
<Odd_Bloke> I guess we'll find out. ;)
<harlowja> i think its a manual refresh for that one
<harlowja> since afaik smoser and i are the people that can click rebuild that website
<Odd_Bloke> Ah, OK.
<smoser> harlowja, thank you
<smoser> :)
<harlowja> guess u want me to click rebuild :-P
<Odd_Bloke> I know some RTD things update on commit.
<Odd_Bloke> harlowja: Make it so, number one.
<harlowja> Odd_Bloke i don't think bzr stuff does update on commit :(
<harlowja> i think github stuff might
<harlowja> but bzr is manual afaik
<Odd_Bloke> Fair enough. :)
<Odd_Bloke> Anyway, /me --> long weekend
<harlowja> not allowed
<harlowja> lol
<GivenToCode> is 0.7.5 the latest version for ubuntu 14.04.4?
<smoser> GivenToCode, yes.
<GivenToCode> hmm, how hard is it to patch? I need the fix for this: https://bugs.launchpad.net/cloud-init/+bug/1311463
<smoser> GivenToCode, thats probably not too bad. and i'd help you learn the Ubuntu SRU process if you're really looking to  learn
<smoser> (that is me saying i'd like to help but have limited time and higher priorities at the moment... but if i can get you involved and then later possibly get you to contribute more, then i'd probably help you along :)
<GivenToCode> smoser, to be clear the bug has been fixed, i just need the new version (0.7.7) which doesnt seem to be easy to upgrade to
<GivenToCode> or, since it is a one liner I could manually patch my AMI but it doesn't look like the .py files are actually on disk
<smoser> well, you need an SRU (Stable Release Update) to get it into 14.04
<GivenToCode> I'd be happy to contribute, am an ASF committer, but there is some red tape with my current employer
<smoser> the .py files are on disk, and you *can* do that.
<smoser> dpkg -L cloud-init | grep .py
<GivenToCode> smoser, oh I see, my assumption of what SRU was was wrong
<smoser> https://wiki.ubuntu.com/StableReleaseUpdates
<smoser> rharper, so what happens if i do this:
<smoser>  http://paste.ubuntu.com/16928154/
<smoser> or this
<smoser>  http://paste.ubuntu.com/16928206/
<rharper> smoser: for the first, if we omit the name, I don't think we'll write out correct eni files, rather, we'd need to create our own name value if we don't use id
<smoser> for some reason i thought it might just use the current name if its not present.
<rharper> the latter patch makes more sense, however,  the absence of a 'name' key in the network_config will break the requirements of a type: physical dict
<rharper> smoser: that'd require more introspection at network_config parse-time
<rharper> but it could be done
<rharper> the name_by_mac
<rharper> we have for ip set link stuff would provide the values
<rharper> I need to think a bit more about whether we can construct reasonable configs for multi-layered things (like bonds, vlans and bridges)
<rharper> we have at-least the rackspace /ironic case which uses bonds and vlans in the network_data.json
<smoser> mgagne, i'm pretty sure my ppa should work for you now.
<smoser> i've tested on dreamhost and we've tested via config drive similar to what you gave
<smoser> rharper, so i think that https://code.launchpad.net/~smoser/cloud-init/trunk.fix-networking/+merge/296272 is ready ... except for the obnoxious naming in openstack
<smoser> which i really just dont feel comfortable putting in
<rharper> smoser: you've got merge conflicts
<rharper> in Changelog
<smoser> well, just that one. i was fixing it.
<rharper> w.r.t the id -> name , what do you suggest otherwise then?  Is your concern that users will be confused in the case that the id was tap-fasdfs23  ?
<smoser> well, i expect  a few things
<smoser> a.) some users saying WTH UBUNTU!
<smoser> b.) juju charms not expecting that name or being able to predict it in any way
<rharper> s/UBUNTU/$DISTRO_WITH_CLOUD_INIT
<smoser> rharper it doesn't make me feel better to say "WTH SMOSER!"
<smoser> :)
<rharper> smoser: right, cloud-init -> smoser
<rharper> understood
<rharper> so, totally understand a and b, so instead of cloud-init picking; I think we can just re-use what the nics were named unless 'name' exists in the network_data
<rharper> I think that's a completely reasonable alternative that allows folks to toggle the interface name with net.ifnames=0 or 1
<rharper> and cloud-init won't be in the way
<smoser> yeah. then we have to search, which we're not doing right now.
<rharper> it's the same lookup call for rename later
<rharper> at the time of config-drive conversion
<rharper> we fetch the list of nics and macs, and just inject name: <current name by mac>
<smoser> yeah. i think so.
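The "current name by mac" lookup being described can be sketched by walking sysfs. This is a simplified illustration, not the actual cloud-init helper (which lives in cloudinit/net/); the function name and `sys_net` parameter are made up here:

```python
import os

SYS_NET = "/sys/class/net"

def interfaces_by_mac(sys_net=SYS_NET):
    """Map MAC address -> current interface name (sketch of 'name by mac')."""
    by_mac = {}
    for name in sorted(os.listdir(sys_net)):
        addr_file = os.path.join(sys_net, name, "address")
        try:
            with open(addr_file) as fp:
                mac = fp.read().strip().lower()
        except (IOError, OSError):
            continue
        # Skip interfaces without a hardware address (e.g. 'lo' reads zeros).
        if mac and mac != "00:00:00:00:00:00":
            by_mac[mac] = name
    return by_mac
```

With a map like this in hand, the datasource conversion can inject `name: <current name by mac>` into each `type: physical` entry when the metadata does not supply one.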
<smoser> mgagne, so your input would be appreciated. really should be fixed.
<mgagne> will test right now
<smoser> ok i have to go afk, but will check in later, and maybe try to do the last change there for openstack nic names.
<harlowja> "WTH SMOSER!"
<harlowja> lol
<mgagne> it's not working. cloud-init thinks my interfaces are named after the link id (found in interfaces.d). After running ip link, they are still named ens3 and ens4.
<mgagne> hostname is still not set, user scripts not run, etc.
<mgagne> I had to reboot the instance in single mode to debug and read logs
#cloud-init 2016-06-03
<harlowja> :(
<harlowja> 	"WTH SMOSER!"
<harlowja> lol
<harlowja> ;)
<smoser> mgagne, this is same place that you had run and copied that config drive out ?
<smoser> mgagne, i'd really appreciate some help working this out tomorrow. i think you're the only thing i have left on https://code.launchpad.net/~smoser/cloud-init/trunk.fix-networking/+merge/296272 to get right. it seems working well other places.
<smoser> the latest commit there (available in the ppa - 1251) will name the devices whatever they happened to be named.
<smoser> but i've also tested in a container and modifying the config drive data to match my MAC addresses that it works.
<mgagne> smoser: I found the issue
<mgagne> there was interface config left in /etc/network/interfaces which conflicted with cloud-init.cfg. The interface found in /etc/network/interfaces looks to be the one before rename
<mgagne> but it still doesn't ping
<mgagne> gateway is not configured at all in 50-cloud-init.cfg
<smoser> mgagne, ok. great. so cloud-init will not write /etc/network/interfaces
<smoser> if you have an image with that in it, then i consider the image broken...
<mgagne> yea, I used to delete interfaces in there. I removed it in Xenial in hope it got fixed since but it didn't so I restored it
<mgagne> smoser: it is an image built from an iso
<mgagne> and I install cloud-init in there
<smoser> well, you should use our cloud images :)
<mgagne> it worked before =)
<mgagne> anyone building an image from iso has a broken installation and isn't compatible with cloud-init
<smoser> but basically ENI needs to be written like http://paste.ubuntu.com/16951086/
<mgagne> ok, I ended up with something similar, all but lo interface
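That paste has since expired; based on the surrounding discussion, the expectation on a Xenial image is that /etc/network/interfaces is reduced to roughly the fragment below, with cloud-init rendering everything else into /etc/network/interfaces.d/50-cloud-init.cfg (a sketch consistent with the conversation, not the paste's literal content):

```
# /etc/network/interfaces -- only loopback and the source line remain
auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*.cfg
```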
<smoser> mgagne, well you had static configuration in an image that was to be used in a dynamic (multiple instances) thing.
<smoser> that clearly can't work
<mgagne> smoser: original image is built with dhcp
<smoser> it used to work only because
<smoser> a.) 'eth0' was static
<smoser> b.) we only supported massively simple networking
<smoser> now with fancier networking supported, that can't work.
<mgagne> ok, will continue to clear interfaces like before
<smoser> i'd really rather not have cloud-init open that file and rip out stanzas
<mgagne> now for the gateway =)
<smoser> yeah. that is an issue.
<mgagne> smoser: right, fine with me regarding ENI clearing
<smoser> whats the network_json ?
<mgagne> one with default route in routes =)
<mgagne> let me setup network so I can scp that thing
<smoser> :)
<smoser> if only something could set up networking for you :)
<mgagne> grep WARN returns nothing
<mgagne> what I found weird though is that
<mgagne> if network fails for whatever reasons
<mgagne> EVERYTHING stops
<mgagne> no cloud-config, hostname, etc.
<mgagne> we could argue that
<mgagne> cloud-config might need network to work properly
<smoser> well, "everything stops"...
<mgagne> but that's a huge inconvenience when debugging network issue, have to reboot in single mode
<smoser> what happens is if cloud-init writes busted networking configuration (or busted networking configuration somehow comes into being)
<mgagne> well, my hostname is localhost and password doesn't work =)
<smoser> then boot will just never get to the point where networking is available
<smoser> so things that depend upon it (such as cloud-config and cloud-final) can't run.
<mgagne> hmm so boot will stall forever? (or close to eternity)
<smoser> i'm not sure exactly... systemd timeouts on the stuff.
<smoser> but since cloud-final and cloud-config "require" those things, they won't run
<smoser> because those were never there.
<mgagne> boot is too fast and I don't know systemd that much to debug what happened at boot time =)
<mgagne> I sure did glance at a "failed" at boot time
<mgagne> smoser: http://paste.openstack.org/show/Q3N1BWnP2ecC9yoYcHpk/
<mgagne> with actual IPs first 3 bytes changed
<mgagne> I do not own 1.1.1.0/24 =)
<smoser> k. thanks.
<mgagne> is there any log outputted when default gw is configured?
<mgagne> smoser: will need to rebuild image from latest cloud-init I guess, looks like you made a change 12h ago
<smoser> mgagne, yeah. that was to not name tap-ASDFASDF
<smoser> mgagne, i'll get that fixed.
<mgagne> right, will rebuild and see if it changes anything
<smoser> mgagne, can you just easily mount the image and dpkg -i it ?
<smoser> for quicker iteration
<mgagne> well, I guess that would be possible. Need to clean cloud-init state right?
<mgagne> still no routes but interface names are preserved now
<mgagne> in fact, I will test with additional routes and see if those are configured
<mgagne> ok, additional routes are not configured
<mgagne> so I feel like routes field isn't even parsed
<mgagne> is there a way I can test the network config parser manually?
<mgagne> I see a lot of print debugging and they aren't logged at boot time
<smoser> mgagne, give me a bit, i'm on a call. you can definitely test.
<smoser> mgagne, really appreciate your time, thank you
<mgagne> =)
<mgagne> thanks for help ;)
<smoser> mgagne, so to summarize, gateway and static routes ?
<smoser> is that what we're missing
<mgagne> yes
<smoser> k.
<smoser> i'll take a look (long call)
<mgagne> I switched to other tasks but I'm back now. is there a way I can test the network config/parsing? at first glance, I don't see any logic around routes parsing which looks for the "0.0.0.0" destination. "0.0.0.0" destination would be the default gateway. But I might have missed it.
<smoser> mgagne, i just took your paste and am trying to put together a test
<smoser> there is a test in tests/unittests/test_datasource/test_configdrive.py that does the convert
<mgagne> ok, I was more or less hoping for a way to run test after the fact against configdrive to see what cloud-init "thinks" about it
<mgagne> but I guess it's a feature that would be of limited use and for dev only
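For context, a hypothetical sketch (not cloud-init's actual code) of the conversion being debugged here: when translating an OpenStack network_data.json network into network config, a route whose destination is "0.0.0.0" with a "0.0.0.0" netmask is really the default gateway, and the remaining entries stay as static routes.

```python
def convert_routes(network):
    """Split a network_data.json 'routes' list into gateway + static routes."""
    subnet = {"type": "static", "address": network["ip_address"]}
    static_routes = []
    for route in network.get("routes", []):
        if route["network"] == "0.0.0.0" and route["netmask"] == "0.0.0.0":
            # the default route is promoted to the subnet's gateway field
            subnet["gateway"] = route["gateway"]
        else:
            static_routes.append(dict(route))
    if static_routes:
        subnet["routes"] = static_routes
    return subnet
```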
<smoser> rharper, http://paste.ubuntu.com/16956870/
<smoser> that is me taking mgagne's paste and putting it into a test to run conversion to network config on .
<smoser> and then we'll see
<rharper> y
<smoser> http://paste.ubuntu.com/16956919/
<smoser> that's what it prints out.
<mgagne> does this format expect the field gateway to be defined?
<smoser> rharper, http://paste.ubuntu.com/16957033/ <-- my current diff
<smoser> produces http://paste.ubuntu.com/16957018/
<smoser> so yeah, we dont get the stuff rendered at the moment.
<rharper> yeah, there's a gateway in the routes;  the render_routes() doesn't seem to be getting called on a per-subnet basis
<rharper> it should
<rharper> basically under the for subnet in subnets, there should be an if 'route' in subnet, etc... IIRC; so something's not right;
<rharper> something like this: http://paste.ubuntu.com/16957110/
<smoser> http://paste.ubuntu.com/16957213/
<rharper> almost, need to pass in an indent="    "
<rharper> smoser: lp:~raharper/cloud-init/trunk.fix-networking-subnet-routes
<smoser> rharper, right.
<smoser> http://paste.ubuntu.com/16957377/
<smoser> does that look right ..
<rharper> yes, the pre post
<smoser> seems sane
<smoser> how do you suggest we test this...
<smoser> comparing the rendered eni is a pita
<rharper> I'm just about to run through cloud-init-test
<rharper> VM booting and validating the values get set, ala vmtest
<rharper> we at least get feedback that route_n, ifconfig_a matches what's in eni that got rendered
<smoser> well, right. but we really need a way to unit test compare more easily
<smoser> i'll put a crappy test in
<rharper> we have a compare rendered eni test in place
<rharper> it's just inside the vmtest, it includes parsing ifconfig_a output and the eni and such;  you can lift that, and we just need to presupply ifconfig_a and route_n output
<rharper> that's expected
<rharper> which shouldn't be too difficult to construct; it could be templatized w.r.t specifics in the source network_data.json (like mac and ip and netmask and such)
<rharper> smoser: see curtin's tests/vmtests/test_network.py for the basics;  I have an updated version for cloud-init-test here: https://git.launchpad.net/~raharper/+git/cloud-init-test/tree/tests/vmtests/test_network.py
<smoser> rharper, http://paste.ubuntu.com/16957806/
<rharper> smoser: y
<rharper> smoser: and ideally you extend and add a second non-gateway route to the routes array, and check that we emit a non-'default' key
<rharper> and an ipv6 one, that'll fully exercise the render_routes() method
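The route rendering discussed above can be sketched roughly like this (illustrative only, not the merged renderer): each route becomes a post-up/pre-down pair of eni lines, emitted per subnet with an indent.

```python
def render_route(route, indent="    "):
    """Render one route as a post-up/pre-down pair of eni lines."""
    if route["network"] == "0.0.0.0" and route["netmask"] == "0.0.0.0":
        target = "default"
    else:
        target = "-net %s netmask %s" % (route["network"], route["netmask"])
    up = "%spost-up route add %s gw %s || true" % (indent, target, route["gateway"])
    down = "%spre-down route del %s gw %s || true" % (indent, target, route["gateway"])
    return up + "\n" + down
```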
<smoser> ok. rharper what am i missing from you ?
<smoser> you had one fix for me.
<smoser> right?
<rharper> lemme get you the branch
<rharper> http://paste.ubuntu.com/16958041/
<rharper> that'll let the LinuxBridge network_data.json work
<rharper> I sent you the configdrive data for that earlier today
<smoser> and there is a fix for mtu there ?
<smoser> rharper, ^
<smoser> yeah. ok . i see
<rharper> smoser: yes, it's a physical/link property, not subnet property
<rharper> that said, if we bring it in, then we likely need a bit more to convert it
<rharper> I have some data with mtu: null,
<smoser> committed and pushed
<rharper> so, it needs some care to drop the key, or if it's not null, apply int() on it
<smoser> ?
<rharper> {
<rharper>     "links": [
<rharper>         {
<rharper>             "ethernet_mac_address": "fa:16:3e:ed:9a:59",
<rharper>             "mtu": null,
<rharper> if that's not null, but a number, like 1500
<rharper> we want to convert to int
<smoser> http://paste.ubuntu.com/16958299/
<smoser> it does get rendered there.
<smoser> ie, it's safe to treat it as a string.
<rharper> ok, then maybe just handle the null case
<smoser> seems to work.
<smoser> http://paste.ubuntu.com/16958359/
<smoser> that showed it with None
<smoser> and also tested with it as not present
<smoser> and it just doesnt appear
<smoser> so good work :)
<rharper> smoser: yeah, working here on cloud-init test and latest fix-networking branch
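The mtu handling they settled on can be sketched as follows: a null mtu in a network_data.json link must be dropped before rendering; a real value can pass through as-is (it renders fine as a string), with int() coercion left as an optional hardening step. This is an illustration, not the committed code.

```python
def normalize_mtu(link):
    """Drop a null 'mtu' key from a network_data.json link entry."""
    if link.get("mtu") is None:
        link.pop("mtu", None)
    return link
```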
<smoser> mgagne, fwiw, why do you not use our images ?
<mgagne> smoser: it's not the flow we adopted a couple of years ago. The tool we use (Oz) doesn't deal with .qcow2 but iso
<mgagne> and the main thing we change in the image is the cloud-init package + config
<smoser> what do you change ?
<mgagne> there never was a time in our history where cloud-init worked as-is in our cloud
<smoser> (and the argument of "We use Oz" is like if i asked you 'why do you compile glibc' and you said "we use -O2")
<mgagne> + we need to backport cloud-init for older releases
<mgagne> smoser: we have a unified build system for all OS, including Windows
<mgagne> this is a compromise we made to make it easy for us to onboard new devs so they don't have to learn new tools for each OS
<smoser> again... you're arguing "we use compiler foo".  i'm saying "don't compile"
<mgagne> we aren't closed to the idea of switching workflow/tool, it just happened to not be our priority for now. priority being infra ops
<smoser> we give you something that is supposed to "just work"
<mgagne> well I don't think this discussion will be productive tbh
<smoser> (i understand your argument and don't mean to sound like i'm fighting... we just work to make our downloadable images "just work".)
<mgagne> they never worked
<smoser> the same way we work to make our compiled kernel or python or glibc "just work"
<mgagne> cloud-init used to not support cdrom configdrive. we had to patch it (or backport)
<smoser> definitely. there are bugs, and our sru process is slow.
<smoser> (i really *really* appreciate your help)
<mgagne> we inject password in admin_pass field, cloud-init doesn't support that feature https://bugs.launchpad.net/cloud-init/+bug/1236883
<smoser> we can work to make our images "just work" for you. which will make them "just work" for our customers who use RDO . which is a win for everyone.
<mgagne> smoser: tbh, this whole thing is *very* frustrating
<smoser> oh , well thats just silly.
<mgagne> it just happened that we don't feel there is any progress despite opening bugs
<smoser> use ssh keys :)
<mgagne> ...
<mgagne> we very well know that ssh keys exist
<mgagne> that's not the UX we are selling our users
<mgagne> and they expect
<mgagne> I respond to our users' expectations in a reasonable fashion. I sure won't install golang by default; if they need it, it's their problem.
<mgagne> I can come up with a list of "features" we provide and see where cloud-init lacks
<smoser> lets work to make this better. i agree there is lots of room for improvement.
<mgagne> smoser: I'm glad we can work this out.
<mgagne> smoser: the best feature that could come out of this is backporting of cloud-init to old releases.
<mgagne> this is really the thing that causes the most pain
<smoser> to old ubuntu ?
<mgagne> all those fixes you are doing to cloud-init (thanks again btw), trusty might never see the light of it due to SRU policies. So we end up needing to backport it ourselves and then rebuild the image ourselves.
<smoser> at this point trusty has issues also because of systemd
<smoser> err.. lack of.
<mgagne> I can understand that it won't be backported to 12.10. I agree with not caring about EOL releases.
<smoser> getting this fully functional again under upstart will require some work.
<mgagne> but trusty... it needs to work
<smoser> yeah.
<smoser> its ok for you to be blunt with me. i promise :)
<mgagne> smoser: it used to work, thanks to jayofdoom's patch. but we want to get rid of that, and so far we don't know how to make that happen without canonical's contribution.
<mgagne> a couple of walls have been punched lately tbh
<smoser> ah. the network_json for trusty ?
<mgagne> yes
<smoser> it will be non-trivial :-(
<mgagne> it used to work for 3rd party patch =)
<smoser> the merge we just did is quite non-trivial
<mgagne> so there is a way
<mgagne> I wouldn't mind contributing code. what I'm lacking is experience with bzr (and knowledge of cloud-init's inner workings). Unfortunately, not a lot of people in our team like it
<rharper> heh, now that it's merged, maybe smoser will push to +git on launchpad
<harlowja> git git gi
<harlowja> git
<harlowja> git
<harlowja> lol
<waldi> igit?
<harlowja> ugit
<harlowja> weallgit
<rharper> harlowja: nice
<harlowja> make git great again
<harlowja> lol
<ajorg> cloud-init is on github now, but I don't see any pull requests there. are contributions still to be directed at bzr / launchpad?
<ajorg> would patches to make cloud-init compatible with python 2.6 be welcomed?
<harlowja> which cloud-init btw?
<harlowja> just want to make sure the source repo u are working on is the right one
<ajorg> 0.7.x
<harlowja> ya, i fixed a bunch in a merge i have pending
<harlowja> prob similar to ones u fixed
<ajorg> ah, good
<ajorg> os.uname in cloudinit/distros/__init__.py is one of them
<ajorg> was not a property in 2.6
<harlowja> hmmmm
<harlowja> unsure if i fixed that one, doesn't sound familiar
<harlowja> do u have a set of patches with the ones u fixed
<harlowja> i can make sure i incorporate (or already might have)
<ajorg> I do, but I have a lot of patches and need to sort them out now that I'm able to contribute them.
<ajorg> many of them not for 2.6
<harlowja> ok, anyway u can figure out the 2.6 ones
<ajorg> I have been maintaining cloud-init for the Amazon Linux AMI, so I have a lot of RPM / Yum distro patches.
<ajorg> On monday I'll try to post a branch with my 2.6 related patches for you.
<ajorg> We're on 2.7 now, but the patches are still useful for CentOS / Red Hat 6
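Illustrative examples (not ajorg's actual patches) of the kind of change Python 2.6 compatibility typically requires: 2.6's str.format needs explicit field indices, and os.uname() results should be indexed rather than accessed by attribute, since named access is not available on older interpreters.

```python
import os

def describe_host():
    uname = os.uname()  # index access works on every Python version
    hostname, release = uname[1], uname[2]
    # "{0} {1}" instead of "{} {}": auto-numbered fields fail on 2.6
    return "kernel {1} on {0}".format(hostname, release)
```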
<ajorg> harlowja: what's the best time of day for me to find you and smoser?
<harlowja> so scott moser is on US Central time zone
<harlowja> i'm on US pacific
<harlowja> so working hours those time zones is usually when we around
<ajorg> i'm also on pacific, so that should work well for me.
<harlowja> cool
<ajorg> Some of our patches will likely require some discussion.
<harlowja> but i bounce around a lot, but sure let's discuss
<harlowja> i'm all over the place in cloud-init and ... openstack and ...
<harlowja> lol
<ajorg> I'm actually headed home soon, so I'll have to catch you on Monday.
<harlowja> k
<ajorg> I'll try to be prepared.
<harlowja> np
<ajorg> is it best to put each patch on a feature branch?
<ajorg> or can some of them that are closely related land together?
<harlowja> closely related land together i think is ok
<harlowja> (with-in reason, ha)
<ajorg> k
<ajorg> and several of our patches provide backward compatibility for our 0.5.x features (before any Yum / RPM support was added to cloud-init)
<ajorg> oops, there he went.
<ajorg> welcome back :-)
<harlowja> ha
<harlowja> ya, wireless -> wired
<ajorg> was just saying that several of our patches are for backward compatibility with our old 0.5 fork. are those likely to be accepted?
<harlowja> unsure
<harlowja> are u from amazon?? :-P
<ajorg> yes.
<harlowja> ah
<ajorg> I'm the cloud-init maintainer here.
<harlowja> lucky guess, i didn't know anyone else running 0.5 :-P
<ajorg> haha
<ajorg> we haven't been for a long time
<harlowja> ah, maybe guess was more about who has been running cloud-init for that long then
<ajorg> but we landed a bunch of yum / rpm features and our customers used them
<harlowja> in fact it was mostly just lucky, lol
<harlowja> how's seattle :-P
<ajorg> great actually
<ajorg> love it up here
<harlowja> cool
<harlowja> but ya, can't hurt to propose them imho
<ajorg> i've recently become able to contribute any of our current patches, of which we have a variety
<harlowja> ya, no doubt
<ajorg> 29 patches at the moment, some of them features, some bugfixes, etc.
<harlowja> i remember a cloud-init bug many years old that had an amazon person on it, trying to give a patch file and i think smoser couldn't touch it (legal reasons idk?)
<harlowja> i wonder if that bug is still there somewhere, ha
<ajorg> if it had something to do with region names that was probably me.
<ajorg> the regex was (is?) too strict.
<harlowja> unsure, lol
<ajorg> i'll be more open this year, i promise.
<harlowja> woot
<ajorg> anyway, i need to find a bus. i'll catch up on monday, hopefully
<harlowja> cool
#cloud-init 2017-05-30
<smoser> larsks, i'm sorry to bother you.
<smoser> https://bugs.launchpad.net/cloud-init/+bug/1692424
<ubot5> Ubuntu bug 1692424 in cloud-init "util.py[WARNING]: Failed to disable password for user centos" [Undecided,Incomplete]
<smoser> that has audit.log there now, and it does seem to say that 'passwd' is being denied. can you at least confirm that ?
<larsks> smoser: that looks like what it is saying, yes.  I'm not sure that's a cloud-init bug, though.  I wonder if someone modified the image and neglected to re-label the filesystem?
<larsks> I am pretty sure that cloud-init (both 0.7.5 and 0.7.9) are in use under EL7 without this problem cropping up.
<larsks> I wonder if Anil can share the image that is resulting in this issue?
<smoser> larsks, yeah. thanks.
<smoser> what is "neglected to re-label the filesystem"
<smoser> ?
<larsks> smoser: applying correct selinux labels to filesystem objects.  E.g., if you replace a file, the new file will probably not have the correct selinux labels.  There are various ways of addressing that...
<larsks> E.g., the 'virt-customize' command has a --selinux-relabel option.
<smoser>  thanks
<larsks> I'm going to ask Anil if we can get access to the image that is having these issues.
<smoser> good
<smoser> i was just going to ask if you wanted me to mess up your words (unintentionally) or if you wanted to ask yourself in the bug :)
<smoser> thanks!
<blackboxsw> smoser: what's the difference in actual behavior between dmi_product_name_is and dmi_product_name_matches in ds-identify? It looks shallowly that they are dupes but I bet I'm missing shellisms of case handling vs direct equality comparison
<blackboxsw> ahh n/m *_matches handles globbing or regex
<smoser> right.
<smoser> if you dont give it a shell glob, then they're equal
<blackboxsw> feels like we could drop dmi_product_name_is as we can pass that string w/out the glob as $1 to dmi_product_name_match to  get the same behavior as dmi_product_name_is
<blackboxsw> but maybe folks like the explicit semantic difference of the function name dmi_product_name_is versus dmi_product_name_matches
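The distinction, restated in Python terms (fnmatch standing in for the shell case statement): an exact comparison ("is") and a glob match ("matches") agree whenever the pattern contains no glob metacharacters. The function names mirror the ds-identify helpers, but this is an illustration, not the shell implementation.

```python
from fnmatch import fnmatchcase

def dmi_product_name_is(name, expected):
    """Exact equality, like the shell helper's direct comparison."""
    return name == expected

def dmi_product_name_matches(name, pattern):
    """Glob match, like a shell case pattern (e.g. "OpenStack*")."""
    return fnmatchcase(name, pattern)
```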
<smoser> blackboxsw, i'm trying to launch you an instance on azure
<smoser> having issues with login right now
<blackboxsw> thank you sir. that'll help a lot.
<blackboxsw> if it's too much work I can just setup an account
<blackboxsw> then you don't have to be remote hands for me  :)
<smoser> ok. that is sorted, launching now.
<smoser> blackboxsw, what is your preferred user name ?
<smoser> csmith ?
<smoser> blackboxsw, ssh csmith@chad-testing.cloudapp.net
<blackboxsw> csmith would be perfect thx
<blackboxsw> I'm in thanks smoser
<smoser> blackboxsw, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324625
<smoser> could you take a review of that ?
<blackboxsw> on it smoser
<blackboxsw> strange LP is showing no diff.  Pulling it into my local branch to check
<smoser> see my third comment there, blackboxsw
<smoser> This has no merge proposal shown because of bug 1693543
<ubot5> bug 1693543 in Launchpad itself "oops when generating diff in git merge proposal" [Critical,In progress] https://launchpad.net/bugs/1693543
<blackboxsw> ahh interesting
<smoser> blackboxsw, i'd just re-submit, but i dont know if it will hit it again.
<blackboxsw> it's no prob smoser . It's just 3 commits right, your 2 plus Junjie right?
<smoser> right.
<smoser> i'll squash them, so you can view the individual commits just as context
<blackboxsw> smoser: what's fastest for ds-identify, reading dmi content or metadata directory? I'd expect the former, dmi from sysfs
<blackboxsw> I'm wondering why ds-identify in that branch doesn't check sysfs first and then metadata 2nd
<blackboxsw> especially because the DMI product_name is already cached as a script var per collect_info
<smoser> blackboxsw, looking
<smoser> blackboxsw, http://paste.ubuntu.com/24716726/
<smoser> there, you're asking about line 202 ?
<smoser> you're probably right on the speed thing, since the dmi values are already set
<blackboxsw> smoser: yeah seed check should come after dmi check
<smoser> but the seed dir is really just a completely separate path. seed is really kind of just for testing.
<smoser> so i guess the justification for the order is that the datasource looks in the order (seed and then dmi)
<smoser> ie, if you have seeded that datasource it wont look at the dmi data (and thus wont actually even read the metadata service)
<smoser> random ... unrelated ..
<smoser>  i found out about https://repl.it/languages/python3 over the weekend.
<blackboxsw> ahhh
<smoser> it is pretty cool
<blackboxsw> so in the datasource itself, (per order of seed before sysfs) if that is an inefficient order, would we want to eventually change it in both places?
<blackboxsw> both places being ds-ident and the datasource itself
<smoser> well, it's not inefficient so much as a preference
<blackboxsw> ahh ok
<smoser> if you put seed data in /var/lib/cloud/ for almost any datasource, then you are basically disabling searching
<blackboxsw> ahh nice on the autocomplete IDE :)
<smoser> yeah, it even has really nice (seeming) vi keybindings
<blackboxsw> smoser: order matters in default datasource detection, right? Because AliYun comes before Ec2... because AliYun is a more specific case of Ec2, right?
<smoser> umm.
<smoser> so i *think* that i made it such that DataSourceEc2 will not recognize an aliyun and AliYun will not recognize Ec2
<smoser> ie, aliyun does not provide the dmi platform info that Ec2 does
<smoser> (if it did, and Aliyun was after Ec2 in the list, then that would break aliyun)
<blackboxsw> smoser: finally finished the review. I've approved pending some minor fixes and I'll watch for your comments on that branch just to clarify points and my understanding of datasource order.
<blackboxsw> smoser: that last comment you made was my understanding about the order and precedence of the datasources in the list "if it did, and Aliyun was after Ec2 in the list, then that would break aliyun".    Thank you for confirming.
<smoser> blackboxsw, i responded to that mp. i'll make most of the changes you suggested and ask for re-review.
<smoser> i think if aliyun changes their platform to look like they are amazon, then they'd kind of have to expect that guests would all of a sudden think they were on amazon.
<smoser> powersj, when cjwatson said to me...
<smoser> <cjwatson> smoser: don't have enough brain right now to work out whether it will hit it again, but you can try rescanning the repository using the API: https://launchpad.net/+apidoc/devel.html#git_repository-rescan
<smoser> have you looked at doing such a thing ?
<nacc> i've also found that resubmitting the MP works (re-propose?); it does supersede it, but you can keep the old comments
<smoser> ie, if it's easy and you've done it before, i'd like to kick it off
<smoser>  https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324625
<powersj> I have not, but I would like to as that MP has generated 300+ emails to me
<smoser> i guess i can just try re-submit
<powersj> please do :)
<nacc> it's a button in the top right, iirc
<powersj> not having a diff has made the CI job think it needs a review
<smoser> didnt fix it
<smoser> :-(
<powersj> ok I'll look at rescan while I eat lunch in a bit.
<smoser> i am pretty sure it's because i changed Junjie's name in a commit message.
<smoser> and i suspect it wont help if we kick it another way
<powersj> smoser: https://paste.ubuntu.com/24717938/ you may need to try that as I don't have uber-user privs on cloud-init
<powersj> actually I logged in as anon... that might be why
<powersj> effectively you would want to try this I think: https://paste.ubuntu.com/24717974/
<smoser> powersj, ran that with python2.
<powersj> smoser: no 401 auth error?
<smoser> it did run to completion
<smoser> don't know what that means.
<smoser> blackboxsw, i didn't realize your comment on the P_PRODUCT_NAME was from the ds-identify test rather than from the datasource test
<smoser> my response still kind of applies
<smoser> i dont have really strong feelings
<blackboxsw> same here smoser, just finished a reply on the MP. I don't really have strong feelings about it, I just like less duplication wherever possible
<blackboxsw> we've collectively spent way more time than needed given how on the fence I am.
<smoser> agreed.
<smoser> copying the variable into the unit tests ensures against future accidental breakage.
<smoser> at least more so.
<smoser> we have diff at https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324807 now
<smoser> blackboxsw, i'm going to leave tests/unittests/test_ds_identify.py unmodified
<smoser> in order to maintain consistency
<smoser> i'm not opposed to a cleanup though
<smoser> (other data there uses copied strings)
<blackboxsw> +1 smoser
<blackboxsw> woot visual diff
<smoser> now i'm going to see if i break it
<smoser> blackboxsw, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324807
<smoser> i think i've addressed all your feedback there
<smoser> (other than *adding* a copy of that string)
<blackboxsw> thanks checking
<blackboxsw> smoser: you're good on that branch thanks
<smoser> merged. thanks.
<smoser> oh fudge
<smoser> we missed one blackboxsw
<smoser> the settings test
<smoser> as in why did that still pass
<blackboxsw> ahh right the NonDefault
<blackboxsw> looking
<smoser> oh.
<smoser> the change was made to fix test_expected_default_network_sources_found
<smoser> we just now have no need for test_expected_nondefault_network_sources_found
<smoser> as it is essentially covered now in the former
<blackboxsw> smoser: agreed. Yeah it was only to cover non-default cloud datasources which now no longer exist.
<askb> smoser, ping
<askb> smoser, wondering if you got a chance to look into the update for 1692424
#cloud-init 2017-05-31
<smoser> askb, did larsks not respond?
<smoser> blackboxsw, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324274
<askb> smoser, yes ... thanks for the update on the bug!
<askb> smoser, so if this is an issue with selinux, then disabling selinux should allow cloud-init to proceed past the passwd lock issue, correct ?
<askb> smoser, I could do another run with selinux disabled
<smoser> askb, i'd expect that disabling selinux would fix that, yes, but the proper fix is to fix your image to have the correct filesystem settings.
<askb> smoser, we don't know what is overriding the /etc/shadow permissions, possibly nova-agent which brings up the service
<askb> smoser, any issue with the fs settings I may have missed out ?
<smoser> o/
<sauloaislan> Hi!!
<Odd_Bloke> smoser: rharper: How is (ds-identify|cloud-id) run in systemd?  (i.e. how can I work out when in boot it ran?)
<rharper> it runs at generator time
<rharper> which is prior to any units running (generators help determine if a unit should run or not)
<Odd_Bloke> Aha, OK.
<Odd_Bloke> So that would (necessarily?) be before some mounts, right?
<rharper> so, ds-identify (cloud_id) will write a .unit file in /run/systemd/targets/multiuser.target/wants/cloud-init.target or something like that to indicate that cloud-init units should run
<rharper> well, rootfs has to be mounted for systemd to run
<Odd_Bloke> It looks like on Softlayer it's running before the config drive mount is in-place, so it isn't detecting ConfigDrive as an option.
<rharper> we don't mount config drive
<rharper> ConfigDrive checks udev fs labels IIRC
<Odd_Bloke> It checks FS labels, but does also check the path if an FS label isn't found.
<Odd_Bloke> Because some clouds (e.g., of course, Softlayer) don't present a standard FS label.
<rharper> I'm not sure I understand what is different about softlayer's config drive
<rharper> cloud_id will check it in the same way we always have;  that said, the config drive device may be *slow*, or *not present* at the time;  which is one of the concerns that smoser raised in the past
<Odd_Bloke> So there are two parts to the issue.  Firstly, Softlayer only presents a config drive if metadata is specified for the instance.
<Odd_Bloke> So we have to do boot-time determination of the appropriate DS to use.
<Odd_Bloke> The second problem is that the config drive doesn't present in a standard way, so ds-identify doesn't recognise it.
<rharper> ds choice is between config drive and URL ?
<Odd_Bloke> ConfigDrive and NoCloud.
<rharper> a cloud using NoCloud
<rharper> I suppose this is more bare metally though IIRC
<Odd_Bloke> Yep and yep.
<rharper> nocloud is present in the filesystem they lay down; that's straightforward. I suppose if you have running instances with the configdrive, we can experiment with ds-identify on such a system
<rharper> and expand/update the configdrive detection, or possibly introduce a ds_check_softlayer with the differences contained therein
<Odd_Bloke> So the code in question is https://git.launchpad.net/cloud-init/tree/tools/ds-identify#n557
<Odd_Bloke> 561 onwards is intended to handle this case.
<Odd_Bloke> But I'm not sure it ever will anywhere, because of mount ordering.
<Odd_Bloke> (Well, except in cases where people are baking config drive data in to their images, but that is not an interesting case. :p)
<rharper> mount ordering
<rharper> you believe that the device won't be present ?
<Odd_Bloke> The device is present, but we check the actual paths.
<Odd_Bloke> I believe it won't be mounted.
<rharper> it doesn't have to be
<Odd_Bloke> 558-560 could be modified to include the Softlayer "METADATA" disk label.
<rharper> FS labels are populated via blkid
<rharper> https://git.launchpad.net/cloud-init/tree/tools/ds-identify#n183
<rharper> so, if the block device is present, and readable, and it has a filesystem label that blkid can detect, it should get listed
<Odd_Bloke> Right, agreed.
<Odd_Bloke> But that's the first three lines of the function.
<Odd_Bloke> I'm talking about the rest of the function. :p
<rharper> well, the mount part is not related then
<rharper> we find all of the filesystem labels present and check for known config drive labels;  if there is a *new* one for softlayer, then I suppose we should add that
<rharper> I see
<rharper> the expectation is that it is mounted to /config_drive .
<Odd_Bloke> Or /var/lib/cloud/seed/config_drive
<rharper> yes, I see; hrm;  I don't know without looking at the code what would mount it there
<rharper> in the datasource code, there are some test mount paths
<Odd_Bloke> But nothing will be mounted at ds-identify time so the specific location is irrelevant, no?
<Odd_Bloke> *nothing other than /
<rharper> well, the path that ds-identify checks matters; but if nothing is going to mount it then no it does not matter
<rharper> practically
<rharper> so
<rharper> here's what happens (or can happen): if the filesystem label is present, then we return that we've found a ConfigDrive, and that enables cloud-init to run
<rharper> when cloud-init runs, it has code that will do the mount and read the contents
<rharper> for config drives
<rharper> it's not clear to me why ds-identify also has the same content checks but I suspect there may be some that populate a directory (but do not have a fs label)
<rharper> smoser: may confirm
<rharper> so for softlayer, it should be sufficient that they attach a config drive with the 'config-2' label; and ds-identify will see that from blkid label parsing, and *enable* cloud-init to run later;  when cloud-init runs it will mount and consume the datasource
<rharper> if cloud-init isn't running when a config drive is present on softlayer, then we need to examine the filesystem labels that were found
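A sketch of the label check rharper describes: ds-identify gathers filesystem labels (via blkid) and enables ConfigDrive when a known label is present. KNOWN_LABELS and the SoftLayer "METADATA" value here are for illustration only; whether to accept "METADATA" at all is exactly the question being debated next.

```python
KNOWN_LABELS = {"config-2", "CONFIG-2"}

def has_config_drive(fs_labels, extra_labels=()):
    """Return True if any observed filesystem label is a config-drive label."""
    accepted = KNOWN_LABELS.union(extra_labels)
    return any(label in accepted for label in fs_labels)
```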
<smoser> "METADATA" is kind of garbage
<smoser> which is why i've avoided it. its not at all unreasonable for someone to have a block device with a filesystem label of 'METADATA'
<Odd_Bloke> rharper: Yeah, the problem is that Softlayer _don't_ present a "config-2" label, they present a "METADATA" label.
<rharper> =(
<Odd_Bloke> Sorry, I should have been more explicit about that previously.
<smoser> so... i think that extending the ConfigDrive to notice 'METADATA' as a valid filesystem label would probably work.
<smoser> but i really don't like that without some other piece of identification
<smoser> just because its such a generic term.
<smoser> if i had to do ConfigDrive again (the openstack implementation) i would have picked os-cfg-2 or something as the label to namespace it some as even 'config-2' is arguably generic
<smoser> even if we did extend config drive to support 'METADATA' then we'd still have a weird situation where there were 2 valid datasources.... which should be preferred ?
<Odd_Bloke> smoser: NoCloud is always put last, right?
<Odd_Bloke> (That's the appropriate ordering on SL.)
<smoser> well, in the default list...
<smoser> datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Ec2, CloudStack, None ]
<smoser> but i'd really rather not build any (more) dependency on order.
<smoser> we're *almost* to a point where these dont have to rely on order.
<smoser> Ec2 is the last big one... once it turns strict on, then order doesnt really matter.
<smoser> and if i had the option, i think i'd give preference to things in /var/lib/cloud/seed over any other source...
<smoser> just because "well, don't put stuff there if you dont want it there."
<smoser> it wasn't ever really meant to be a full scale "datasource", but more for testing and such.
<Odd_Bloke> If we can work out some way of identifying SL, we could extend check_configdrive_v2 to do 'if SL and METADATA FS label found: yes'.
<smoser> yes. i think so.
<smoser> thats essentially what we do for brightbox now
<smoser> (with ec2)
<smoser> i realize we want/need to support softlayer, but i dont want doing so to really screw me later.
<Odd_Bloke> Agreed.
<smoser> and i'm kind of feeling like we're not in a good situation because of some decisions that were made in the past, and i dont want to make it worse.
<Odd_Bloke> smoser: Are you saying that you're worried this path would screw you later, or you're saying that this is our only viable option?
<Odd_Bloke> (Or both? >.<)
<smoser> if we get a positive identification "running on softlayer", then we're in reasonable shape
<smoser> if we don't get one, then there are a whole lot of heuristics that amount to crossing our fingers and hoping for the best.
<Odd_Bloke> smoser: Am I right in thinking that you have a script that will harvest a bunch of data that I can pick through?
<smoser> well if you look at /run/cloud-init/ds-identify.log you'll see what it collects now
<smoser> sudo sh -c 'cd /sys/class/dmi/id && grep -r . *'
<smoser> i think they're xen
<Odd_Bloke> There isn't any DMI data.
<Odd_Bloke> ;.;
<smoser> so there might be some stuff in /sys/hypervisor
<smoser> but iirc i dont think there was
<Odd_Bloke> http://paste.ubuntu.com/24727582/ is what's in there at first-boot.
<smoser> disappointed in myself that i didn't grab that info when i had an instance up in bug 1689890
<ubot5> bug 1689890 in cloud-init "Unable to identify datasource in IBM Bluemix" [Medium,Confirmed] https://launchpad.net/bugs/1689890
<smoser> sudo sh -c 'cd /sys/hypervisor && grep -r . *'
<smoser> ^ Odd_Bloke run that
<Odd_Bloke> http://paste.ubuntu.com/24727590/
<Odd_Bloke> I have a meeting to get to.
<Odd_Bloke> So I'll have to pick this up again later.
<smoser> ec2 says that the 'uuid' there will start with 'ec2'
<smoser> i'd accept the crappy solution that 'uuid' there will start with '367fb048-'
<rharper> didn't realize it was a xen stack
<rharper> wonder if we should be doing xenstore-ls on xen platforms
<smoser> i'm more willing to take low-chance-of-false-positive solutions like collision on the first 8 chars of a uuid on xen than i am on kvm.
<smoser> due to the built in future proofing of xen
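The low-false-positive check being weighed above can be sketched as follows. This is illustrative only: the sysfs path is the one smoser's commands probe, and any prefix passed in is a tentative value from the discussion, not a documented platform guarantee.

```python
# Sketch of the uuid-prefix heuristic discussed above (illustrative:
# a real check would use whatever prefix the platform actually
# guarantees -- EC2, for instance, uses an 'ec2' prefix).

def uuid_has_prefix(prefix, path="/sys/hypervisor/uuid"):
    """Return True if the hypervisor uuid at `path` starts with `prefix`."""
    try:
        with open(path) as fp:
            return fp.read().strip().lower().startswith(prefix.lower())
    except OSError:
        # No uuid file at all (e.g. not running under Xen): no match.
        return False
```

On a SoftLayer guest this would be called as `uuid_has_prefix('367fb048-')`; the shorter the prefix, the higher the collision risk being discussed.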
<rharper> well, if they don't want to control their UUID space, they can inject an arbitrary key into the store
<smoser> yeah.
<rharper> it would be nice to see what's currently in there
<rharper> xenstore-ls
<rharper> on softlayer
<rharper> do we have a dmesg from one ?
<smoser> i dont have one.
<smoser> and xenstore-ls (package xenstore-utils) is not in an Ubuntu image by default.
<smoser> blackboxsw, https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/324640
<smoser> thoughts on my response ?
<blackboxsw> sure smoser so what would the main run? Maybe move the tools/cloudconfig-schema functionality into schema.py?
<blackboxsw> so argparsing etc?
<smoser> thats what i was saying, and then just a small entry point thing in tools/ that is something like:
<blackboxsw> I think that's what you meant. Yeah I'll hoist the main out of tools/cloudconfig-schema and into schema.py
<smoser>  http://paste.ubuntu.com/24728389/
<blackboxsw> opps too late
<smoser> yeah, so cloudconfig-schema then ends up being like ^
<blackboxsw> yeah makes sense. I might add another unit test or two to handle arg parse behavior etc (as I left tools without coverage as it wasn't officially part of the delivered cloudinit package/modules)
<blackboxsw> just put up the azure ds-id from chassis-asset-tag too. https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/324875   and was about to work on SRU templates
<blackboxsw> cleaning up the schema right now
<smoser> yeah, was reading that now.
<Odd_Bloke> rharper: http://paste.ubuntu.com/24728642/
<Odd_Bloke> ^ dmesg from SL
<Odd_Bloke> smoser: We do actually install xenstore-utils on our SL images.
<smoser> Odd_Bloke, doesn't really help. if ds-identify or cloud-init used it... then reasonably we'd have to at least Recommend it.
<smoser> fixing CPC provided Ubuntu images is only half a fix. Ubuntu should just work.
<smoser> blackboxsw, reviewed chassis-asset-tag
<rharper> Odd_Bloke: thanks, I see that xen procfs  ... not sure what that is, but maybe that has some info without using the tools
<smoser> Odd_Bloke, can you ssh-import-id ?
<smoser> if you have one up.
<Odd_Bloke> smoser: rharper: You're both on root@169.53.54.118
<rharper> Odd_Bloke: thanks
<rharper> so, xenstore has bios-string options, http://paste.ubuntu.com/24728907/
<rharper> it's very possible that they could set Softlayer specific stuff in there that can be read via xenstore-read
<rharper>  the whole set of  key/values with this:  xenstore-ls /local/domain/`xenstore-read domid`
<rharper> # xenstore-read /local/domain/`xenstore-read domid`/bios-strings/bios-vendor
<rharper> Xen
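rharper's xenstore probing could be wrapped along these lines. A sketch only: it shells out to `xenstore-read` (from xenstore-utils, which as noted above is not in an Ubuntu image by default), and the command runner is injectable so the lookup logic can be exercised off a Xen host.

```python
import subprocess

def xenstore_read(key, runner=None):
    """Read one xenstore key via xenstore-read; None on any failure."""
    if runner is None:
        runner = lambda args: subprocess.check_output(
            args, universal_newlines=True)
    try:
        return runner(["xenstore-read", key]).strip()
    except Exception:
        return None

def bios_vendor(runner=None):
    # Mirrors: xenstore-read /local/domain/$(xenstore-read domid)/bios-strings/bios-vendor
    domid = xenstore_read("domid", runner)
    if domid is None:
        return None
    return xenstore_read(
        "/local/domain/%s/bios-strings/bios-vendor" % domid, runner)
```

If a platform injected its own key into the store (as rharper suggests), the same wrapper would read it.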
<rharper>  
<smoser> we should write up a "Why we recommend you identify your cloud platform, and how best to do it."
<rharper> ack
<mak2> Hi !!
<mak2> smoser i have problems with cloud-init, this is my log http://paste.ubuntu.com/24729092/
<mak2> I can not log in on the vm
<mak2> My environment is openstack master; I'm trying to perform a deploy with the network interface flat and/or neutron (ironic.conf)
<blackboxsw> thanks smoser on the asset-tag review just pushed the updates to https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/324640
<blackboxsw> and additional unit tests
<mak2> smoser this is cloud-init-output.log http://paste.ubuntu.com/24729173/
<mak2> Can someone help me?
<smoser> mak2, this is 12.04 ?
<smoser> 14.04
<smoser> mak2, well, your image is configured to search only the Ec2 datasource, but you're running on openstack
<smoser> openstack provides a compatible ec2 metadata service at 169.254.169.254
<smoser> so that *should* be ok
<smoser> but when cloud-init is trying to read from that url, its getting
<smoser> 2017-05-31 18:06:59,187 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: bad status code [500]
<smoser> i suspect you can recreate that simply with
<smoser> curl 'http://169.254.169.254/2009-04-04/meta-data/instance-id'
<smoser> and i suspect your nova api server has stack traces
<mak2> is 14.04
<smoser> blackboxsw, i think i like it.
<smoser> biab
<mak2> smoser running this command in the VM: curl 'http://169.254.169.254/2009-04-04/meta-data/instance-id' returns 500 Internal Server Error
<mak2> smoser nova-placement-api-error.log http://paste.ubuntu.com/24729333/
<mak2> how do you know the image is configured to search only the Ec2 datasource?
<mak2> smoser ?
<smoser> i dont know where you got your image from
<smoser> but it has (i think) config in /etc/cloud/cloud.cfg that says datasource_list: ['Ec2', 'None']
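For reference, the kind of cloud.cfg.d drop-in that would widen that list (a sketch: the fragment filename and exact datasource ordering here are illustrative, not taken from any shipped image):

```python
# Illustrative: a cloud.cfg.d drop-in widening the datasource list so
# the image can use the native OpenStack datasource instead of only Ec2.
# cloud-init reads every *.cfg in the directory; the name is arbitrary.
import os

FRAGMENT = "datasource_list: [ OpenStack, Ec2, None ]\n"

def write_datasource_fragment(confdir="/etc/cloud/cloud.cfg.d",
                              name="95_datasources.cfg"):
    path = os.path.join(confdir, name)
    with open(path, "w") as fp:
        fp.write(FRAGMENT)
    return path
```

This is the same knob diskimage-builder exposes via DIB_CLOUD_INIT_DATASOURCES, as mak2 tries below.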
<mak2> smoser I created it with diskimage-builder :D
<smoser> in theory that should work, as i said, openstack does provide a ec2 api endpoint.
<smoser> but your cloud platform is broken and gives 500 errors when someone tries to access it
<smoser> so thats busted
<smoser> almost entirely unrelated, but i'd really suggest just using the ubuntu images (http://cloud-images.ubuntu.com/)
<smoser> while disk image builder could/should/may work, the cloud-images are official ubuntu output, *and* you don't have to bother building them.
<smoser> my feeling on building your own images is similar to my feelings on building your own glibc, gcc or kernel.
<smoser> sure, you can do it. its probably mostly supported, but ... unless you really know why you want to do such a thing, you probably dont want to do such a thing.
<smoser> all that make sense ?
<smoser> blackboxsw, https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=metal-amd64/433/console
<smoser> you understand that ?
<smoser> (use six.StringIO)
<mak2> ahhhhh great, I'll try using export DIB_CLOUD_INIT_DATASOURCES=OpenStack
<blackboxsw> smoser from six import StringIO py3 changed locations used to be io.StringIO in py2
<blackboxsw> smoser: I'll put up a trivial
<blackboxsw> my bad
<blackboxsw> thanks CI :)
<smoser> mak2, that said i really suspect that changing the image wont help in any way
<smoser> your openstack is busted.
<blackboxsw> we probably could/should change tests/unittests/helpers.py to avoid the try/ImportError dance on that too
<smoser> yeah
<mak2> smoser thx :D
<smoser> mak2, if you just want to see things fail faster you can use cirros
<smoser> i'm pretty sure it'd boot faster and show you this same error
<mak2> smoser ok I will try it! :D
<blackboxsw> smoser: here's the diff http://paste.ubuntu.com/24729567/   shall I put up another review for it or push to cc-ntp-schema-validation?
<blackboxsw> there were no other abuses. just what I had added for the with_logs stuff
<smoser> just push is fine.
<smoser> what is cStringIO versus StringIO
<smoser> >>> import cStringIO
<smoser> >>> cStringIO
<smoser> <module 'cStringIO' from '/usr/lib64/python2.6/lib-dynload/cStringIO.so'>
<smoser> >>> import StringIO
<smoser> >>> StringIO
<smoser> <module 'StringIO' from '/usr/lib64/python2.6/StringIO.pyc'>
<smoser> i guess just performant.
<blackboxsw> smoser: pushed
<smoser> blackboxsw, https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=metal-amd64/434/console
<smoser> :?
<smoser> that ran against af370e135e8b9873ac8182ab6250aca061b420d1
<blackboxsw> smoser:turns out you actually need to add the file
<blackboxsw> af370e1..f85cd6c should have it
<smoser> k
<blackboxsw> meh one more fix.
<Odd_Bloke> smoser: cStringIO is StringIO written in C, but with some limitations.
<Odd_Bloke> (Historically including problems with UTF-8.)
<Odd_Bloke> Which is to say, "it's Python". :p
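Concretely, on Python 3 the thread above boils down to one class: io.StringIO is the single in-memory text stream, and six.StringIO is just an alias for it (a minimal stdlib-only sketch):

```python
# Python 3: io.StringIO replaces both py2 StringIO.StringIO and
# cStringIO.StringIO (the faster C variant with unicode limitations);
# six.StringIO simply points at io.StringIO here.
from io import StringIO

buf = StringIO()
buf.write("hello ")
buf.write("world")
contents = buf.getvalue()  # "hello world"
```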
<blackboxsw> :) . smoser ci test runs agree I'm officially done w/ that branch
<smoser> \o/
<blackboxsw> also officially done w/ the azure  instance
<blackboxsw> minor review comments on https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324677
<blackboxsw> and I was seeing merge conflicts when trying to merge w/ trunk
<akaris> smoser hi! qq for you - do you know how ubuntu cloud-init handles the presence of multiple default routes? e.g. eth0 has a default route, eth1 has a default route. as the kernel will only accept one default route, do you guys configure PBR or something similar, or are you simply blindly implementing the multiple default routes in the configuration files and have the kernel reject all additional default routes?
<akaris> to be more precise:   if I create a neutron network with a router and default route, and a second network with a default gateway as well, and I attach both to an instance .. IMO there are 3 ways to handle this: implement PBR (for the inbound case) and for traffic from the instance outbound "roll the dice" ... reject multiple default routes with an error message ... accept multiple default routes but roll the dice for the one that wins (which is the first default route that gets configured)
#cloud-init 2017-06-01
<telling> Hi guys, I have an issue where the pubkey i specify for my ec2 instance doesnt get installed on the instance. The right key is available from the metadata, but doesnt get installed. I use cloud-init to provision my base AMI, might there be something I'm missing to "reset" for the keys to be installed?
<telling> Also can I create PRs for the docs? Not familiar with launchpad
<smoser> telling, yes you can do "merge proposals" for the docs.
<smoser> follow http://cloudinit.readthedocs.io/en/latest/topics/hacking.html
<smoser> to do doc you run 'tox -e doc'
<smoser> and, yes the key should get pulled in.
<smoser> can you pastebin a /var/log/cloud-init.log from an instance ? this only happens once per instance, though, not every boot.
<telling> smoser: right, so the issue is I dont clean up the instances dir on my base ami?
<telling> This is my cloud-init log: https://ncry.pt/p/nxDn#FbcKAdDqbLBqKIqASdwzhw3PK6FtHibonWeX3zzIr0I
<telling> As you can see it contains the base-ami log and the newly created jenkins instances log :)
<smoser> telling, you should not have to clean up anything on an ami
<smoser> thats the goal at least.
<telling> Right, i thought so initially
<telling> Just cant figure this out :)
<smoser> http://paste.ubuntu.com/24737229/
<smoser> telling, hm..
<smoser> can you share the cloud-config that you're giving ? it seems like you've at least modified the default user
<telling> smoser: yes, it's here: https://ncry.pt/p/pxDn#MpKZXCM083Hu_imxWbRE8mrPj1gbPzNYqbNhJblMgNs
<smoser> telling, http://cloudinit.readthedocs.io/en/latest/topics/examples.html?highlight=users
<smoser> so what is happening...
<smoser> is that you are overriding the default 'users' list
<smoser> and defining one without a 'default' entry
<smoser> and the ssh keys from the metadata service go into which ever user is 'default'
<smoser> there may be a way for you to tag your jenkins user in the list
<smoser> let me check
<telling> No
<telling> Thats not what i want, i want it to be the ubuntu user as normal
<telling> I just want cloud-init to create a jenkins user, not for it to be default
<telling> Currently it seems no user gets the key. I feel like this can be a cause of trouble
<telling> I can only access my instance because I have it provisioned with a master key in the base ami (Which im still debating with myself if I can justify or not)
<smoser> ah.
<smoser> just add an entry 'default'
<smoser> in your users array
<telling> But you would've expected my jenkins user in this case to get the key? Or whats to be expected?
<telling> But that did indeed fix it, thanks a bunch smoser.
<smoser> telling, well, the user that gets the key is the 'default' user (as described in the system_info)
<smoser> but that user is not modified/created if they are not in the 'users' array
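telling's fix spelled out as cloud-config: keep a literal 'default' entry so the distro default user is still created (and receives the SSH key from the metadata service), alongside the extra user. The jenkins entry's fields here are illustrative.

```python
# Illustrative cloud-config: 'default' keeps the distro default user
# (the one that receives ssh keys from the metadata service), and the
# extra user is created alongside it rather than replacing it.
USERS_CLOUD_CONFIG = """\
#cloud-config
users:
  - default
  - name: jenkins
    shell: /bin/bash
"""
```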
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/324948 quickly ?
<smoser> or blackboxsw
<blackboxsw> mornin'
<blackboxsw> smoser: I don't see how that kwarg specified will not break on old  versions.
<blackboxsw> smoser: nevermind, dumb
<blackboxsw> reordered call args +1
<smoser> thanks
<ragechin> Hey smoser, what's the proper procedure to formally objecting to the behavior of a module and discussing changing it?
<ragechin> Something in a github issue or whatever?
<Odd_Bloke> smoser: So have we talked to SL about identifying information before?
<Odd_Bloke> I'm about to send them an email asking, and want to make sure I'm not repeating a question we've already asked.
<smoser> i'm sure you are :)
<Odd_Bloke> :)
<smoser> blackboxsw, http://paste.ubuntu.com/24738693/
<smoser> those on top of your
<smoser> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/324875
<smoser> and i'm good to pull
<smoser> (not that i'm working on that rather than other things i should be working on)
<blackboxsw> smoser: +1 you need me to apply it?
<blackboxsw> smoser: I know you're listening
<smoser> i'll just merge it
<smoser> you can ACK in that MP
<smoser> (i commented there too)
<blackboxsw> Ipushed
<blackboxsw> smoser: sorry pushed
<smoser> blackboxsw, fudge
<smoser>     dmi_chassis_asset_tag_matches "${azure_chassis}" && return $DS_FOUND
<smoser>     check_seed_dir azure ovf-env.xml && return ${DS_FOUND}
<smoser> my feeling is that always seed dir indicates datasource.
<smoser> if you put that stuff in, you are disabling all other checks. so i think we want/wanted those lines swapped
<blackboxsw> smoser: don't we want to ds-identify to exit on first/cheapest check for a given datasource?
<smoser> bah.
<smoser> and i think in my squashing i duplicated a line too
<smoser>  /o\
<blackboxsw> hmm, I thought generally we wanted to claim success (DS_FOUND) as soon as possible for each datasource in ds-identify. Looking back over the script. I'm on the cloud-init hangout if you want to chat for faster turnaround
<smoser> blackboxsw, well, the idea is that if you write /var/lib/cloud/seed/<datasource>/ you're feeding information to cloud-init. it overrides any checks. you're telling it "USE THIS DATA!".
<smoser> sure
<erick3k> can someone please help me
<erick3k> am using debian 8 and cloud-init 0.7.7 and the root partition is not resizing
<erick3k> https://docs.google.com/document/d/1nAiVAt0rIG6Fl-4vEtvOpHR2e2F5W3j5OEO3POYhcKA/edit?usp=sharing
<blackboxsw> smoser: I'm going through the DataSourceAzure code, and it looks like our seed dir ovf file can contain a dscfg key, in which the dscfg can override hostname_command, hostname_bounce.command, and agent_command with a custom command that we could have injected in the seed dir. Wouldn't that get us past seed dir being broken? It seems like setting dscfg['agent_command'] = ["not__builtin__"]  would let us fall through and pull
<blackboxsw> ssh keys out of get_metadata_from_agent (instead of fabric)
<blackboxsw> erick3k: I wonder if that's related to https://bugs.launchpad.net/cloud-init/+bug/1684869
<ubot5> Ubuntu bug 1684869 in cloud-init "growing root partition does not always work with root=PARTUUID=" [Medium,Confirmed]
<blackboxsw> checking your doc
<erick3k> i see resize module not found and then found
<erick3k> blackboxsw thank you
<blackboxsw> hmm, so I'm seeing "Jun  1 08:49:53 newdebian8 [CLOUDINIT] util.py[DEBUG]: Running command ('resize2fs', '/dev/vda1') with allowed return codes [0] (shell=False, capture=True)
<blackboxsw> Jun  1 08:49:53 newdebian8 [CLOUDINIT] util.py[DEBUG]: Resizing took 0.007 seconds
<blackboxsw> Jun  1 08:49:53 newdebian8 [CLOUDINIT] cc_resizefs.py[DEBUG]: Resized root filesystem "
<blackboxsw> hmm yeah then some module not found errors
<erick3k> anyway to fix that?
<smoser> blackboxsw, yeah.. i gues maybe.
<blackboxsw> erick3k: I'm not sure why that log appears to be running the init modules so many times
<smoser> blackboxsw, it runs on every boot
<smoser> as you can shut down an instance, grow its disk, and it wants to make that magic happen
<smoser> erick3k, i suspect that you do not have growpart
<smoser> but i would have thought you'd have some log of an error
<erick3k> i did just check
<erick3k> smoser it is installed :( https://i.imgur.com/HdSDob4.png
<smoser> erick3k, you're not running the growpart module
<smoser> erick3k, and fyi, 'pastebinit' is installable in debian and is fabulous
<erick3k> kool
<smoser> ie, you run 'pastebinit /var/log/cloud-init.log'
<erick3k> how can i try and run the module?
<smoser> or 'dpkg-query --show | pastebinit'
<erick3k> cool
<smoser> your /etc/cloud/cloud.cfg probably has no 'growpart'
<erick3k> umm
<smoser> http://paste.ubuntu.com/24739577/
<erick3k> it does have resizefs
<smoser> resizefs resizes the filesystem
<smoser> but growpart grows the partition to use any space at the end.
<smoser> you can probably run
<smoser> sudo cloud-init single --frequency=always --name=growpart
<erick3k> https://0bin.net/paste/dd5jh1SY5RJtBCn-#v0WV8Is-//g0CwlmdqSEjT33ny325sdWqdqTVkjGeQ4
<erick3k> thats my cloud.cfg
<erick3k> run cloud-init single --frequency=always --name=growpart and reboot?
<erick3k> nice
<erick3k> that worked
<smoser> add 'growpart' before 'resizefs'
<smoser> in cloud_init_modules
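The ordering fix, shown against a typical cloud_init_modules list (excerpted and illustrative; the point is only that growpart must precede resizefs, so the partition is grown before the filesystem is resized into it):

```python
# Illustrative excerpt of cloud_init_modules with the fix applied:
# growpart grows the partition to fill the disk, then resizefs grows
# the filesystem to fill the partition. The surrounding module names
# follow a typical /etc/cloud/cloud.cfg but are not exhaustive.
CLOUD_INIT_MODULES = [
    "migrator",
    "bootcmd",
    "write-files",
    "growpart",   # must come before resizefs
    "resizefs",
    "set_hostname",
]

assert CLOUD_INIT_MODULES.index("growpart") < CLOUD_INIT_MODULES.index("resizefs")
```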
<erick3k> smoser that does not work
<smoser> why
<erick3k> i increased the size, deleted /var/lib/instances/xxxx
<erick3k> rebooted
<erick3k> and still has the same size
<smoser> i'd hope that /var/log/cloud-init.log has a WARN message
<smoser> does
<erick3k> checking
<smoser> can you pastebin:  sudo growpart --dry-run --update=on /dev/vda 1
<smoser> (where 1 is the partition of the device that needs resizing)
<smoser> and /dev/vda is the device
<erick3k> https://0bin.net/paste/2q9nEwhsy5wlg0-b#NBF6dIc76qJkDmnWhnv7K6fTrkqLVVwYkZ1IkszmPxp
<erick3k> i dont see the module invoked on the log
<erick3k> oh hold on
<erick3k> nvm
<erick3k> smoser it does work, looks like cloud.cfg with -growpart didnt save
<erick3k> hehe
<erick3k> thank you very much
<erick3k> you are the man, owe you a beer
<dpb1> you can pay me now and I'll buy smoser a beer
<dpb1> :)
<erick3k> xD
<Redcavalier> Hi, is there a way to prevent cloud-init from caching the user-data locally?
<Redcavalier> My main issue is that if, for some reason, user-data is not received by a VM after a reboot, the user-data is loaded locally and some stuff that normally only gets executed on first boot gets run again.
<Redcavalier> That results in password getting changed and other fun stuff.
<dgarstang> Trying to get fs_setup and mounts to work. Getting "Failed to make '/mysql_data' config-mount". Any ideas?
<rharper> powersj: http://paste.ubuntu.com/24740067/
<rharper> end of my tox run
<rharper> Redcavalier: you don't have user-data but somehow it's cached?
<rharper> dgarstang: cloud-init.log and the boot syslog may be helpful (along with any user-data supplied)
<dgarstang> rharper: The file /var/log/cloud-init.log is empty.
<Redcavalier> rharper, I do have user-data, it gets executed on first boot. However, sometimes after a reboot (1 in 100, roughly), the instance is unable to go and fetch that userdata and decides to use the local data. Now, for some reason, it re-runs all the commands, even though they already ran on first boot.
<dgarstang> rharper: Which is weird because I haven't made any logging changes
<rharper> dgarstang: suggests that cloud-init didn't run or something cleared it.
<rharper> dgarstang: what about syslog? that also may have some cloud-init output
<dgarstang> rharper: It ran because it executed scripts and it's logged "Failed to make '/redis_data' config-mount" to /var/log/cloud-init-output.log
<Redcavalier> all the commands, even though they ran on first boot already, get executed again*
<rharper> Redcavalier: hrm, that sounds like a network metadata source; it may be going down a NoDataSource path;
<dgarstang> rharper: Same stuff logged to /var/log/messages for cloud-init as /var/log/cloud-init-output.log
<Redcavalier> rharper, oh right, if I can change the way the no data source case behave, I might be able to fix this.
<dgarstang> So, basically, cloud-init isn't logging _anything_ except stdout
<rharper> Redcavalier: in general, cloud-init does some things every boot, per-instance, or always;  most of the initialization items like passwords, are run per-instance, so unless you've wiped out /var/lib/cloud/instance (symlink to the instance-id dir)
<rharper> it shouldn't re-run any of those per-instance configurations
<dgarstang> Does this look correct? https://gist.github.com/dgarstang/8352ad8c51d834fbf3f282eb300d83d7
<Redcavalier> rharper, indeed, hence why the fact it re-runs them makes no sense to me.
<powersj> rharper: on what branch of yours? I've don't recall seeing that happen before
<rharper> Redcavalier: your /var/log/cloud-init.log should help digest the logic, if you have it
<rharper> dgarstang: looking
<dgarstang> I wish I had a /var/log/cloud-init.log... :(
 * powersj goes to make lunch 
<rharper> dgarstang: seems odd not to have one;  the user data in your paste does not reference /mysql_data
<dgarstang> oh hm maybe xvdj0 should be xvdj
<rharper> it has redis_data
<rharper> I don't think so
<dgarstang> That was from an earlier example, sorry
<rharper> does it come partitioned ?
<dgarstang> Now it's complaining about /redis_data
<dgarstang> rharper: Partitioned...? No, it's an external EBS disk
<dgarstang> Mounted tho at /dev/xvdj
<dgarstang> sorry attached I mean
<rharper> ok, just trying to line up the device name to the device name in the mounts
<Redcavalier> rharper, yes I have it, that's how I know it runs again when it shouldn't. Basically I get : util.py[WARNING]: Getting data from <class 'cloudinit.sources.DataSourceOpenStack.DataSourceOpenStack'> failed . It's then followed by messages telling me that the commands that are supposed to only run once per instance are being executed again.
<dgarstang> Well, yeah maybe it should be xvdj not xvdj0 in mounts
<dgarstang> But...
<dgarstang> lemme check something
<rharper> Redcavalier: so, if you fail to hit the openstack metadata service, I suspect that undefined bad things may happen;  I'm not 100% sure where the instance-id comes from on openstack instance types
<dgarstang> Nope... so the disk /dev/xvdj isn't formatted. I can't mount it. So, it skipped the fs_setup step as well
<rharper> Redcavalier: you can also look in /var/lib/cloud/instances/*
<rharper> dgarstang: well, if something's not right with the formatting in fs_setup, then it won't have a device to mount, which would prevent the mounts section from being successful
<dgarstang> rharper: Sure. Getting it to format would be a nice first step
<rharper> dgarstang: it looks fine to me, but maybe double check the device name?  and are the EBS volumes attached prior to boot (I don't know)
<rharper> powersj: master with some local changes
<dgarstang> The EBS disk is attached. It's not even logging the fail to format. GRRR
<Redcavalier> rharper, logically, I'm betting it comes from /var/lib/cloud/instances/iid-datasource-none/user-data.txt
<rharper> Redcavalier: that's the fallback
<rharper> so since it failed to connect to the metadata service, it runs with fallback instance-id, which forces a new instance-id (it doesn't know which one)
<rharper> however, I wonder why it couldn't use the cached metadata ... maybe smoser knows more about that datasource
<Redcavalier> rharper, actually I'm wrong, it can't come from /var/lib/cloud/instances/iid-datasource-none/user-data.txt since that file is empty. After all, my original commands are getting re-run again.
<rharper> Redcavalier: right, it's related
<rharper> cloud-init couldn't find the instance-id, the fallback one (none found) is a *new* instance id, which means cloud-init won't know it hasn't run any of those config modules before and re-runs them with defaults
<rharper> ie, it didn't know the difference between image booting in a new instance (ie, I want you to regenerate my keys and etc) vs.  this temporary failure to identify itself as the previous instance-id
<Redcavalier> rharper, that makes sense. I do need to change the behaviour then. Could I put information in the fallback datasource so it doesn't use the original one again?
<rharper> unstable metadata service seems like the critical fix
<rharper> I don't know without looking if cloud-init can know to use a previous instance-id in the case that the currently expected datasource is failing
<rharper> what, if for example there were 3 different instance-id's ?  is it always right to use the one from the previous boot?  or how does cloud-init know which one to use on failure ?
<Redcavalier> rharper, it's clearly copying the previous one though. For example, I have 2 IDs right now, 25603f8d-e602-4cde-8a6b-09e7387e1512 and i-00000165. 25603f8d-e602-4cde-8a6b-09e7387e1512 is my original instance ID and the good userdata. i-00000165 was created when the metadata failed. It contains the same user-data as 25603f8d-e602-4cde-8a6b-09e7387e1512, which in this case is bad as it get executed again.
<rharper> are both the user-data.txt files empty ?
<Redcavalier> rharper, nope, they both have my original data from when the VM was created. However, I suspect that something else might be happening.
<rharper> I don't believe there is any copying, the metadata service failed, so it picks an instance-id (not sure if that's hardcoded since it's a failure path); and then runs the normal boot sequence, but since the instance-id is *different*, it doesn't find a cached instance dir, and runs like first boot of an instance
<rharper> within that instance-id dir, it writes out the various files it normally would, including sem dir which tracks when it last ran those config modules
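A sketch of the per-instance mechanics rharper describes: cloud-init keeps semaphore files under each instance dir, so a brand-new (fallback) instance-id has an empty sem dir and every per-instance module fires again. The helper below follows that layout but is illustrative, not cloud-init's actual code.

```python
import os

def should_run_per_instance(module, instance_id,
                            statedir="/var/lib/cloud/instances"):
    """Illustrative: a per-instance module runs only if no semaphore
    exists for this (module, instance_id) pair; running drops one."""
    sem = os.path.join(statedir, instance_id, "sem", "config_" + module)
    if os.path.exists(sem):
        return False  # already ran for this instance-id
    os.makedirs(os.path.dirname(sem), exist_ok=True)
    open(sem, "w").close()  # record that it ran
    return True
```

A fallback instance-id like iid-datasource-none has no sem files yet, which is exactly why Redcavalier's run-once commands re-fire.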
<Redcavalier> Openstack has compatibility to both offer its own metadata source, but also EC2-like data. If cloud-init queries openstack for EC2-like data, it will get a reply. I wonder if this is what is happening.
<rharper> the two datasources for OpenStack that I'm aware of are the metadata-service URL (network based) or a ConfigDrive (vm local)
<Redcavalier> because if it really didn't get any data, by second instance ID user-data would indeed be empty
<rharper> I believe we will prefer a ConfigDrive since it's local and we can detect that before we bring up networking ;
<Redcavalier> rharper, yes, I've always been in favor of configdrives. However, my hand was forced by the higher up, but that's irrelevant here.
<rharper> but given your error message about the OpenStack datasource failing (and your log file may include URL timeouts) I don't think that'd be a conflicting datasource (config drive) path
<rharper> Redcavalier: I didn't mean to indicate a preferred solution, only that cloud-init detects config drives before it attempts to hit the network URL for OpenStack
<Redcavalier> rharper, right, it didn't load from configdrive here though.
<dgarstang> "Failed to make '/redis_data' config-mount"... This is getting REALLY annoying
<dgarstang> Seriously, what am I supposed to do without debug output?
<dgarstang> Here's my latest attempt .. https://gist.github.com/dgarstang/a7d24e74c78d00a4f90a94019154dd44
<dgarstang> I don't need to create the dir do I? I looked through the code and it looks like cloud-init does it. I did see something about selinux in there
<dgarstang> This error "Failed to make '/redis_data' config-mount" implies it can't even create a directory
<rharper> looking
<dgarstang> Why, when I google "cloud-init "Failed to make"" do I get nothing?
<dgarstang> (except source...)
<dgarstang> Actually, it HAS created /redis_data
<dgarstang> Running "mkfs -t ext4 /dev/xvdj" manually works fine
<rharper> dgarstang: you should be able to re-run the module like this:  cloud-init --debug --force --file test.yaml single --name disk_setup --frequency always --report
<rharper> wher test.yaml is your user-data you pasted
<dgarstang> checking
<rharper> then replace disk_setup with mounts
<rharper> to run the mounts section
<dgarstang> Ok, I ran it.. got a lot of data, nothing relevant. Not even a mention of 'redis'
<dgarstang> Maybe it's because I manually formatted the disk. I'll start fresh
<blackboxsw> 38% on SRU bug templates smoser :). think that's worth a coffee break... back in a few
<dgarstang> Would be nice if that debug got logged somewhere on boot
<rharper> dgarstang: it does, /var/log/cloud-init.log; but you said it was empty; that's not normal
<dgarstang> rharper: It indeed is empty
<dgarstang> It's just an official CentOS 6 AMI
<rharper> hrm
<dgarstang> Fresh instance. Same issue. Same error. When I run the command above, it still doesn't work and its output doesn't go to cloud-init-output AT ALL
<rharper> the single command won't go to the file, it goes to stdout; but when the module runs during boot, all of cloud-init's logging should go to /var/log/cloud-init.log ;  if it's empty, that's possibly a logging config issue in /etc/cloud/cloud.cfg but I would have thought that an official AMI would have cloud-init logged somewhere properly
<dgarstang> The scripts that I am running from runcmd have "exec >> /var/log/ec2-bootstrap.log ; exec 2>&1" at the top. Maybe that's confusing cloud-init's logging
<rharper> that's definitely going to grab the runcmd commands' outputs and redirect them
<dgarstang> Well sure, for my scripts, but nothing else.
<dgarstang> Cloud-init is still sending its output to /var/log/cloud-init-output.log, but /var/log/cloud-init.log is empty
<rharper> it's possible that the CentOS 6 AMI has it configured to send logs somewhere else, I would think syslog would be the other choice
<dgarstang> Checking with those exec lines removed
<blackboxsw> back
 * blackboxsw checks runcmd tests on ubuntu w/ 2>&1 just to be sure I'm seeing logs where I think I should
<dgarstang> This is my latest attempt... https://gist.github.com/dgarstang/0fecc2dc7baaf1a2272a250bfe4da828... Output is going to cloud-init-output.log, and it's still logging "Failed to make '/redis_data' config-mount" ... still nothing going to cloud-init.log. This is so frustrating!
<dgarstang> I can't make the cloud-init user data any more simple
<dgarstang> "mount -t ext4 /dev/xvdj /redis_data" fails when I run it, so cloud-init did NOT format the disk
<smoser> dgarstang, it's empty on rhel before we changed the logging to go directly to a file
<dgarstang> smoser: Egads. How'd you do that?
<smoser> dgarstang, look in /etc/cloud/cloud.cfg.d/05_logging.cfg
<smoser> see the line about 'syslog' (log_syslog) . just comment it out.
<dgarstang> smoser: yah... don't know how to read/update that file
<dgarstang> ah
<smoser> then logging goes right to the file, not to syslog
<smoser> which was busted on rhel
<dgarstang> Well it's not going to syslog either
<smoser> because cloud-init *thought* syslog was hooked up, but it wasnt
<smoser> anyway, thats the fix.
<dgarstang> All it's logging to syslog is the "Failed to make '/redis_data' config-mount" message
<dgarstang> It's not doing its regular debug logging to syslog
<smoser> yea, thats kind of expected. syslog actually isnt all the way up. just make it go right to the file.
<smoser> thats the change that is in upstream now
<dgarstang> Good grief. Thanks. Well maybe the ability to format and mount disks is also broken?
<dgarstang> Actually I don't have that line in 05_logging
<dgarstang> I've got " - &log_syslog |" and " - [ *log_base, *log_syslog ]"
<dgarstang> I'd rather fix the inability to mount disks rather than the logging tho. Maybe I should try CentOS 7
<smoser> sorry
<smoser> you want to comment out a line like
<smoser> - [ *log_base, *log_syslog ]
<smoser> comment that out (#)
<smoser> then you'll get a log
<smoser> and the log should have a WARN message
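The edit smoser describes can be sketched as a small transform over /etc/cloud/cloud.cfg.d/05_logging.cfg: comment out any log_cfgs entry that references *log_syslog, leaving the direct-to-file entries alone (an illustrative helper, not part of cloud-init):

```python
def disable_syslog_logging(text):
    """Comment out any config line referencing *log_syslog, per the
    fix above; every other line is left untouched. Illustrative only:
    the real 05_logging.cfg is larger than the lines this targets."""
    out = []
    for line in text.splitlines(True):
        if "*log_syslog" in line and not line.lstrip().startswith("#"):
            line = "#" + line
        out.append(line)
    return "".join(out)
```

With the syslog entry commented out, cloud-init's logging goes straight to /var/log/cloud-init.log instead of the half-configured syslog path.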
<dgarstang> Ok. Lemme try centos 7 first
<smoser> blackboxsw, 44%. and i havent started my next one yet. (utlemming had done the DigitalOcean one)
<dgarstang> Looks like the logging issue is fixed in CentOS 7 .... but not its ability to mount disks! ARGH!
<smoser> dgarstang, can i see /var/log/ ?
<smoser> var/log/cloud-init.log
<dgarstang> sure, hang on
<dgarstang> Here ya go. https://gist.github.com/dgarstang/2d9c134b7f230c84a82ed64c34a82852 Not much useful stuff there
<dgarstang> Lines 110,111
<dgarstang> Corresponding user-data with cloud-init https://gist.github.com/dgarstang/2fa1ed8b630117bdf472744d537d7d28
<smoser> for as much as gists are used for a pastebin you'd think they would have an interest in just offering a pastebin solution
<dgarstang> :-\
<dgarstang> I can't see how to make my config any simpler
<smoser> ... i suspect that selinux is in play
<dgarstang> I dunno. I read through the cloud-init code and saw something about that so I added it
<smoser> as
<smoser> Jun 1 20:17:35 ip-172-31-7-213 cloud-init: 2017-06-01 20:17:35,363 - util.py[WARNING]: Failed to make '/redis_data' config-mount
<dgarstang> ^ Yep
<smoser> is from util.ensure_dir() failing with that argument
<dgarstang> But, it does create the directory
<smoser> hm.. oh.
<dgarstang> Yep
<smoser> did rharper already go over this with you?
<smoser> apparently there is a bug in python-selinux that couldbe related.
<dgarstang> :-O
<smoser> (sorry if we're re-treading here)
<dgarstang> I'm the first person in history to use the cloud-init disk mount feature?
<dgarstang> So, maybe when I build my AMI I should have it disable selinux in /etc/sysconfig/selinux
<smoser> https://bugzilla.redhat.com/show_bug.cgi?id=1406520
<ubot5> bugzilla.redhat.com bug 1406520 in libselinux "calling libselinux python restorecon fails on /var/lib/nfs/rpc_pipefs" [High,Verified]
<smoser> dgarstang, are you able to make a change easily and re-try ?
<smoser> i'd like to see the exception that is raised
<dgarstang> Well, someone earlier said I could rerun with "cloud-init --debug --force --file data.yml single --name disk_setup --frequency always --report"
<dgarstang> so, I can try that
<rharper> smoser: I didn't mention the selinux python issue yet
<smoser> yeah. i'm surprised you're not getting debug messages in that log
<smoser> this is also 0.7.5
<smoser> which... really old. and i know that it is what you have, but, really old
<rharper> smoser: re: selinux;  there was a set_enforce 0
<rharper> as a bootcmd
<rharper> however, someone else mentioned that maybe bootcmd didn't run "early" enough w.r.t disk_setup stuff
<smoser> yeah, but can you even do that ?
<rharper> which still seems odd to me given my reading of the config module order
<rharper> but possibly things changed in trunk vs. 0.7.5
<dgarstang> I've disabled selinux in /etc/selinux/config... Rebooting... Will try again after back up
<dgarstang> Well, I think I'm not getting the error now but "cloud-init --debug --force --file data.yml single --name disk_setup --frequency always --report" isn't causing it to mount still
<dgarstang> So, maybe selinux needs to be disabled before boot
<dgarstang> UGH
<smoser> dgarstang, we are working recently on getting cloud-init much better on centos
<smoser> but as you're finding, it's not as thoroughly tested as Ubuntu.
<smoser> the goal is to improve it and get automated tests in  place to keep the function there.
<dgarstang> Sadness
<rharper> dgarstang: the single runs just one module, you will need to call it again with '--name mounts' to perform the mount section
<dgarstang> Is this progress? Using CentOS 7, and selinux disabled on boot, I'm getting only "util.py[WARNING]: Activating mounts via 'mount -a' failed" now. The other error has gone. it's still not formatting it though
<smoser> hm.
<dgarstang> I noticed earlier too that when I rebooted, I lost the custom hostname I had set. I presume it's picking that from DHCP
<dgarstang> Wait wait. I am still seeing "Failed to make '/redis_data' config-mount" ... missed it earlier
<dgarstang> So, I can probably assume that the latest version of CentOS with cloud-init can't format and mount disks
<dgarstang> Sigh
<smoser> blackboxsw, i have to run. even 50% right now.
<blackboxsw> smoser: make that 56%
<dgarstang> I'm screwed
<dgarstang> How could I get cloud-init 0.7.9 onto centos 7?
<dgarstang> Might 0.7.9 potentially fix my disk mount issue?
<rharper> dgarstang: we're still working on getting daily rpms built properly; that's coming soon, in the meantime, I've some hand-built ones I'm testing with which may help at least determine if there are other issues in play:  http://people.canonical.com/~rharper/cloud-init/rpms/centos/7/cloud-init-0.7.9+123.g8ccf377-1.el7.centos.noarch.rpm
<rharper> I think you can yum install that URL;  you'll have to accept the unsigned rpm
<rharper> that's trunk from a few weeks ago plus a few network configuration related fixes, but should be good enough to check that the disk_setup/mount stuff works (or fails in the same way) in which case, we should file a bug against cloud-init with your test user-data so we can get that resolved
<dgarstang> Ok, thanks. Tried with that RPM... same issue
<dgarstang> "cloud-init --debug --force --file data.yml single --name mounts --frequency always --report" right?
<rharper> does that show errors, or you just don't get mounts ?
<dgarstang> Hmmm I got this ... https://gist.github.com/dgarstang/4eb93fda29decf54762b2d9356d505dc
<rharper> hrm, doesn't appear to have run the mounts, lemme make sure I can get mounts to run via single
<dgarstang> fs_setup is supposed to format right?
<rharper> yes
<dgarstang> Actually I removed mounts from data.yml. Was trying to simplify
<dgarstang> It's not formatting however, as a manual mount command fails. "mount -t ext4 /dev/xvdj /redis_data"
<rharper> do we now get debug in /var/log/cloud-init.log ?
<dgarstang> checking
<dgarstang> Yes
<rharper> ok, then maybe let's run the fs_setup one and mounts, and gist the cloud-init.log
<dgarstang> I got debug since I went from centos 6.5 to centos 7
<rharper> and see what we can see
<rharper> and the updated rpm will have the logging fix
<rharper> heh, 'ignorming' typo in the output there
<dgarstang> "cloud-init --debug --force --file data.yml single --name fs_setup --frequency always --report" ?
<rharper> no, disk_setup
<dgarstang> jeez
<rharper> disk_setup is the module name, it reads fs_setup config as one of the different areas of disk setup
<dgarstang> hang on
<rharper> sure
<dgarstang> Well here's the whole thing https://gist.github.com/dgarstang/91122f453f7c512e4b527fe8c73aa41d
<rharper> hrm
<dgarstang> Does it matter that this is a t2.nano instance? It has a 8Gb EBS disk attached at /dev/xvdj
<dgarstang> No ephemeral
<rharper> no
<rharper> fs_setup needs to be a list
<rharper> fs_setup:
<dgarstang> list in the yaml?
<dgarstang> :-O
<rharper> like in your original user-data post:  https://gist.github.com/dgarstang/8352ad8c51d834fbf3f282eb300d83d7
<dgarstang> how did that happen. :-\
<rharper> and it's *terrible* that we don't log something like, got an fs_setup but no list ...
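A sketch of what rharper is describing: `fs_setup` must be a list of mappings, while `disk_setup` is keyed by device. Device name, label, and mount options below are taken from the discussion; treat the exact values as illustrative:

```yaml
#cloud-config
disk_setup:
  /dev/xvdj:
    table_type: gpt
    layout: true
    overwrite: false
fs_setup:
  # note the leading "-": a list of mappings, not a bare mapping
  - label: redis_data
    filesystem: ext4
    device: /dev/xvdj
    partition: auto
mounts:
  - [ /dev/xvdj, /redis_data, ext4, "defaults,nofail" ]
```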
<dgarstang> That must have happened when I removed the label line to simplify
<dgarstang> Made it a list... same deal
<rharper> and can you look at /dev/disk/by-uuid and see if there is a symlink to the disk ?
<rharper> it's possible it was successful and just doesn't output much
<dgarstang> Well that's 0a84de8e-5bfe-43e7-992b-5bfff8cdce43 -> ../../xvda1 ... but xvda is the root disk
<rharper> when the device isn't present, I get an error reported, Failed during disk check for /dev/xvdj
<rharper> and in the log, I can see cc_disk_setup debug output:   http://paste.ubuntu.com/24741508/
<dgarstang> rharper: on my output?
<rharper> no, you said it was the same
<dgarstang> right... but I don't see anything like the output you pasted
<rharper> but I would expect to see cc_disk_setup output if the user-data for fs_setup was fixed; which is surprising
<rharper> here's my change to your yaml you pasted; that *should* show  the 'setting up filesystems: ' message in /var/log/cloud-init.log; http://paste.ubuntu.com/24741550/
<dgarstang> one sec
<dgarstang> rharper: Didn't work
<dgarstang> wait wait
<dgarstang> nah. I dunno. :(
<dgarstang> "Unable to convert /dev/xvdj to a device"
<dgarstang> ok, it actually formatted it. Didn't mount it tho
<dgarstang> What command would I run to mount? Does disk_setup mount?
<rharper> dgarstang: that's progress;lemme look at the code
<rharper> if disk_setup didn't return OK, then you won't be able to mount it (unless for some reason it's already formatted);
<rharper> one sec
<dgarstang> I'm gonna roll a new AMI with the newer cloud-init for a start
<rharper> ok, I've gotta step out for a bit more;   would you be able to paste: cat  /proc/partitions ?  I'll read the disk_setup code and see what that message is coming from, but I suspect that something's awry with the device
<dgarstang> rharper: kk, thanks
<dgarstang> Looks like using  http://people.canonical.com/~rharper/cloud-init/rpms/centos/7/cloud-init-0.7.9+123.g8ccf377-1.el7.centos.noarch.rpm breaks system boot. Public ssh key doesn't get installed properly. Can't ssh in
<rharper> dgarstang: lemme see, it may have changed the default user where the keys are installed
#cloud-init 2017-06-02
<rharper> it defaults to user fedora  (versus say centos); not sure what the default for the centos6 AMI was
<powersj> rharper: thx for review that is exactly what I was looking for :)
<rharper> cool
<smoser> rharper, ping
<smoser> would this be expected to render functional networking ?
<smoser>  http://paste.ubuntu.com/24748335/
<rharper> smoser: reading
<rharper> first glance looks sane; I generally run it through curtin apply_net or cloud-init net_convert and see what the eni ends up looking like
<rharper> smoser:  this is what curtin renders as eni with that:  http://paste.ubuntu.com/24748563/
<smoser> bah
<smoser> http://paste.ubuntu.com/24748651/
<smoser> i ended up figuring it out.
<smoser> wow.
<smoser> so that was a large time sink, but ended up verifying this actually works. so that's nice.
<rharper> oh, if I read that right;' that's nasty;  the vlan got the ens3 name because of the duplicate macs
<smoser> yeah
<dpb1> where did duplicate macs come from?  random?
<smoser> the .link file that i left around
<smoser> it renamed the newly created vlan device
<smoser> and then vlan was like WTH!
<smoser> rharper, i'm not sure why udev doesn't do this for us
<rharper> what should udev do for us ?
<smoser> ie, we say DRIVERS=="?*"
<smoser> why doesn't udev see the newly created vlan device
<smoser> and rename it
<rharper> the vlan isn't going to generate a udev event
<smoser> (that would be bad... but what prevents it)
<smoser> then why did systemd.link do it?
<rharper> at least, I don't think it would but I'm not sure
<rharper> we create a vlan dev *with* the name eth0.101
<rharper> there's not rename event to occur
<smoser> right
<rharper> s/not/no
<smoser> with a name eth0.101 and a mac that matches that of eth0
<smoser> maybe udev just fails to rename it
<smoser> what i'm asking is...
<rharper> well, I guess I'm saying, I wouldn't expect udev to rename unless it saw an event for a device;  and there's no need to do a rename because it's already at the name we expect
<smoser> a.) system boots, ethernet device is present, udev hotplug... that gets named 'eth0'
<smoser> per
<smoser>  SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="fa:16:3e:4b:63:49", NAME="eth0"
<smoser> b.) vlan device is created that has a name eth0.101
<smoser> but has the mac "fa:16:3e:4b:63:49"
<smoser> in my buggy scenario a .link file renamed eth0.101 to 'ens3'
<smoser> but why wouldn't udev *attempt* to rename eth0.101 to eth0 ?
<rharper> that's a good question w.r.t eth0.101 getting renamed by systemd;  I would have thought that the device type would have prevented it from getting renamed
<rharper> what was the rename value in sysfs ?
<smoser> what is the field ?
<smoser> i had collected grep -r . ens3/
<smoser> (see the paste)
<rharper> looking
<blackboxsw> morning
<rharper> ens3/name_assign_type:4
 * rharper googles 
<rharper> https://patchwork.kernel.org/patch/4526491/
<rharper> I think the link file we wrote is not strict enough; the vlan matched by mac only; we don't have the DRIVERS=?* equivalent which was meant to ignore "virtual" network devices
<rharper> now that we remove it, we don't get a second rename of devices with matching mac addresses
<rharper> alternatively we would need to ensure the .link files included something in-addition to the mac address to prevent it from matching on non-physical devices
<rharper> possibly reading the driver of the interface and including that in the [Match] section
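rharper's suggestion would look roughly like this. A sketch only: the path is hypothetical and the `Driver=` value and MAC are illustrative; the point is that matching on more than the MAC keeps virtual devices such as eth0.101 from being renamed:

```ini
# /etc/systemd/network/70-persistent-net.link (hypothetical path)
[Match]
MACAddress=fa:16:3e:4b:63:49
# Restrict the match to the physical NIC's driver so a vlan device
# carrying the same MAC is not renamed:
Driver=virtio_net

[Link]
Name=eth0
```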
<smoser> powersj, the other thing i have today... the daily builds from recipe started failing yesterday
<smoser> because some of the patches need updating
<smoser> (ubuntu/xenial and ubuntu/yakkety have patches ... to the azure ds and blackboxsw's changes there caused us to need to refresh them.. easy, just have to do it)
<rharper> smoser: you know, (see above for the .link trouble we had with vlans and bonds I suspect);  I wonder if much of that trouble xnox had was related to those .link files matching on the virtual devices as well
<smoser> thats possible.
<smoser> if he wasnt cleaning them.
<smoser> but cloud-init *would* clean them.
<dgarstang> Well it sucks I can't use cloud-init. :(
<smoser> (which ... unfortunately now it will not...so ones that it wrote before it wont clean on upgrade)
<smoser> dgarstang, we'll get you there.
<smoser> dgarstang, if you want... and you have an instance that i can ssh into, with trunk rpm... i could poke a bit
<dgarstang> smoser: Thanks, appreciate it, but for the time being I'll just have to do it with scripts executed from cloud-init
<rharper> dgarstang: did we file a bug yet with your issue?  I'd like to get one so we can track it;  we'll be doing more with centos and updated cloud-init and certainly want to get your issue resolved
<dgarstang> I don't really know how to categorise the issue exactly
<rharper> no need to, just say "disk setup and mounts not working on centos6 ami on ec2"
<rharper> dump in your user-data you gisted and that should be all we need
<smoser> including the ami id that you used would be good.
<smoser> or if it was a custom one...reproducing with a "official" of some sort
<dgarstang> Gonna retry it first with a fresh pair of eyes
<dgarstang> So, I just tried this again, I see this in the output ... "mkfs.ext2: option requires an argument -- 't'" and "Usage: mkfs.ext2 [-c|-l filename] [-b block-size] [-C cluster-size]"... I don't know why since I told it to use ext4
<dgarstang> scrap that. User error
<blackboxsw> smoser: I need to fix this right ? https://launchpadlibrarian.net/322019862/buildlog_ubuntu-zesty-amd64.cloud-init_0.7.9-1538-g0a448dd-0ubuntu1+1438~trunk~ubuntu17.04.1_BUILDING.txt.gz   As in we should skip unit tests around json schema if the dependency isn't present right?
<blackboxsw> or should we make sure the python-jsonschema dep is called out for the build env.
<blackboxsw> yep, it's also the same issue w/ powersj's https://bugs.launchpad.net/cloud-init/+bug/1695318
<ubot5> Ubuntu bug 1695318 in cloud-init "centos 6/7 schema unittests failing" [Undecided,In progress]
<blackboxsw> ok right, grabbed it.
<smoser> blackboxsw, yes we should skip i think
<smoser> blackboxsw, did you click the checkmark on 1692087 or did i
<smoser> oh. i see in the log that you did.
<blackboxsw> oops smoser, I just finished 2093. I think I fat-fingered 2087
<blackboxsw> was thinking of relying on a quick azure deployment test in 2093's case. /me is setting up my own account
<smoser> blackboxsw, for that bug, https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bugs/lp-1686514/disk-setup can help format a device to look like ntfs and/or gpt
<smoser> (i was launching ones in serverstack and using /dev/vdb)
<blackboxsw> smoser: like https://bugs.launchpad.net/cloud-init/+bug/1695318 maybe
<ubot5> Ubuntu bug 1695318 in cloud-init "centos 6/7 schema unittests failing" [Undecided,In progress]
<blackboxsw> oops I mean..
<blackboxsw> #1634678
<blackboxsw> smoser: like https://bugs.launchpad.net/cloud-init/+bug/1634678
<ubot5> Ubuntu bug 1634678 in cloud-init (Ubuntu Trusty) "fs_setup always creates new filesystem with partition 'auto'" [Low,Confirmed]
<smoser> blackboxsw, so we think we're done with sru templates ?
<blackboxsw> I'm checking again. you did the lion's share thx
<blackboxsw> smoser: yes looks done
<smoser> none on bug 1692087
<ubot5> bug 1692087 in cloud-init (Ubuntu Zesty) "check_partition_layout has false positives when partitioned with gpt" [Medium,Confirmed] https://launchpad.net/bugs/1692087
<smoser> i think
<smoser> i'll do that one.
<shaharmor> Hello, anyone around?
<smoser> hey
<shaharmor> Hey @smoser
<shaharmor> I'm having some trouble with cloud-init getting stuck before finishing the module:config part
<shaharmor> I opened this bug: https://bugs.launchpad.net/cloud-init/+bug/1694399
<ubot5> Ubuntu bug 1694399 in cloud-init "module:config isn't finishing, stuck after locale configuration" [Undecided,New]
<shaharmor> I'm able to replicate it every few instances I start
<shaharmor> I'm wondering if its something wrong specifically in my machine or a general thing
<shaharmor> because I believe cloud-init is something that is used extensively
<smoser> shaharmor, cloud-final.conf runs that
<smoser> and runs
<smoser> start on (stopped rc RUNLEVEL=[2345] and stopped cloud-config)
<smoser> (per /etc/init/cloud-final.conf)
<smoser> i suspect cloud-config has stopped , as it clearly ran. i doubt its hung there.
<shaharmor> yeah I'm not sure its hung
<smoser> but something on your system is preventing 'rc' from finishing
<shaharmor> But the thing is that its not always happening
<smoser> this is an official ubuntu image ?
<shaharmor> It was started from the official one (On AWS), and some stuff added on it
<shaharmor> then an AMI is created
<shaharmor> and when that AMI is used with "userdata", this happens occasionally
<smoser> its been a long time. i can tell you that its very unlikely that there is a general bug like you're describing
<smoser> shaharmor, can you run 'initctl list'
<shaharmor> That's what I thought
<shaharmor> Let me try to replicate it again because I already shut down that machine
<shaharmor> @smoser ok I managed to reproduce
<shaharmor> running it now
<shaharmor> output of 'initctl list': https://pastebin.com/7i3Wq3MQ
<shaharmor> this is the command that runs the init.d scripts? /bin/sh /etc/init.d/rc 2
<shaharmor> could it be that its still running other scripts in init rc2 and just waits for them to quit?
<shaharmor> ok I figured it out
<shaharmor> I had another script (not related to the userdata) that was set to run on rc2 that was stuck
<shaharmor> when I killed it, the module:final ran
<smoser> blackboxsw, you're working https://bugs.launchpad.net/cloud-init/+bug/1695318
<smoser> right ?
<ubot5> Ubuntu bug 1695318 in cloud-init "centos 6/7 schema unittests failing" [Undecided,In progress]
<blackboxsw> almost done smoser
<blackboxsw> yep
<smoser> and i'll need that to get the xenial, yakkety, zesty dailiy archive builds
<blackboxsw> was adding a card for that
<smoser> i think those will just start working again once we merge your fix to trunk
<smoser> ie:
<smoser>  https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-yakkety
<smoser>  https://launchpadlibrarian.net/322160649/buildlog_ubuntu-yakkety-amd64.cloud-init_0.7.9-1541-g1cd4323-0ubuntu1+1385~trunk~ubuntu16.10.1_BUILDING.txt.gz
<blackboxsw> pushing a branch for review
<blackboxsw> smoser: powersj https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/325024
<smoser> blackboxsw, love it
<blackboxsw> let's just hope that handles it. I'm trying to get my cent6 lxd up again
<blackboxsw> It should have covered all the attached unit test failures in the bug. Just not sure if there are more now.
<smoser> just queued a build at https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-xenial
<smoser> and https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-devel
<smoser> fingers crossed
<smoser> and i have to run
<blackboxsw> thx smoser
<blackboxsw> powersj: I'm extending https://git.launchpad.net/server-team-ci/ to allow passing in parameters like a WIP branch URL so that it can work with more than just cloud-init/master
<powersj> blackboxsw: I assume you are referring to the centos tests?
<powersj> I am re-writing those right now
<blackboxsw> https://git.launchpad.net/server-team-ci/scripts/centos/centos-run.sh rather
<blackboxsw> right, if you are touching it I'll be hands off
<powersj> https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/324982
<blackboxsw> reviewing
<powersj> I'm extending it so we can make it part of the pipeline
<blackboxsw> ++
<powersj> have all these nice tests, now time to run them on every merge proposal rather than daily :D
<blackboxsw> heh
<blackboxsw> man, I'd really love to use python instead of bash for any new scripts we write
<blackboxsw> but, that's just because my shell chops are el-stinko
<blackboxsw> and bash is hard to test.
<blackboxsw> was just referring to the newscript comment from smoser on your branch
<powersj> I hear you; I switch between both
<powersj> the proposed automation is in python so far, but rest will be shell
<blackboxsw> yeah so many external calls 'lxc ... ' it lends itself more easily to shell scripts. but, yeah.
<blackboxsw> yay green, thx powersj smoser https://jenkins.ubuntu.com/server/view/Cloud-init/job/cloud-init-centos-6/21/
<powersj> awesome
<blackboxsw> powersj: per the other comment you got on your branch about not dup'ing the dependencies, I'll get a make ci-deps Makefile target to you. I'm trying to test out something right now
<powersj> sweet
<blackboxsw> it won't be until Monday though, so it probably won't help your existing branch.
<blackboxsw> it won't be reviewed/landed
#cloud-init 2018-05-29
<rharper> hatifnatt: hrm, that looks like a bug, could you file one   https://bugs.launchpad.net/cloud-init/+filebug  with your config that you posted and log ?
<smoser> rharper: i think hatifnatt posted on bug 1746455 and i responded there.
<ubot5> bug 1746455 in cloud-init "cloud-init vSphere cloud provider DHCP unique hostname issue" [High,Fix released] https://launchpad.net/bugs/1746455
<dmbaturin> Hi everyone!
<smoser> hi dmbaturin
<dmbaturin> What determines which of these: https://github.com/cloud-init/cloud-init/tree/master/cloudinit/distros will be run?
<dmbaturin> Also, what's the policy on including new distros?
<blackboxsw> dmbaturin: /etc/cloud/cloud.cfg  system_info: distro
<blackboxsw> or any override of the same system_info:distro key present in yaml files in /etc/cloud/cloud.cfg.d/*
<dmbaturin> I see, thanks.
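blackboxsw's pointer as a concrete fragment. A sketch: the filename is arbitrary (any .cfg in that directory is merged), and the `vyos` value assumes a matching distro module exists under cloudinit/distros/:

```yaml
# /etc/cloud/cloud.cfg.d/90_distro.cfg (hypothetical filename)
system_info:
  distro: vyos
```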
<blackboxsw> dmbaturin: it'd be nice to have a merge proposal with a new distro module proposed against cloud init following the devel instructions @ http://cloudinit.readthedocs.io/en/latest/topics/hacking.html
<blackboxsw> then we can pull that into tip of cloud-init so that the new distro aligns with existing work.
<blackboxsw> we can help you get any branches in fairly quickly
<dmbaturin> I want to add support for cloud-init to VyOS (https://vyos.io), but since it's only loosely debian-based and the configuration system is completely different, we need to have cloud-init interact with its config system specifically, so I'm looking how I can make that integration.
<dmbaturin> Let me read the hacking guidelines.
<dmbaturin> What am I supposed to put in the "Please add the Canonical Project Manager or contact" field?
<dmbaturin> In the contributor agreement.
<powersj> smoser: ^
<dpb1> dmbaturin: are you signing for just you? or for a company?
<dpb1> dmbaturin: if just for an individual, writing David Britton is fine.
<dpb1> (that's me)
<dpb1> In the end, it's just for Canonical legal to have a contact point in case they aren't sure why you are signing.
<dmbaturin> As an individual. Can it later be converted to a company contact at some point?
<dpb1> dmbaturin: at that future point, just have your company rep do a new one
<dmbaturin> Well, if anyone is going to be a company rep, most likely that will be me again.
<dpb1> dmbaturin: ya, you can follow up later with that once you get confirmation from the right entity at your company
<dpb1> it shouldn't lock you out or anything
<dmbaturin> Ah, ok. Signing for myself for now then.
<dmbaturin> All contributions to cloud-init must be allowed to be dual-licensed under GPLv3 or Apache 2.0?
<dpb1> dmbaturin: yes, the current project is licensed gpl/apache2 https://github.com/cloud-init/cloud-init/blob/master/LICENSE
<smoser> dmbaturin: from a legal perspective, any thing you do that is licensed apache 2.0 is effectively allowed to be licensed GPLv3
<smoser> (per the FSF, thats not my legal opinion)
<dmbaturin> Yes, I know. The original internal libraries of VyOS are GPL, need to be careful not to link to them then.
<smoser> err... more clear, that is the FSF's legal opinion, and I do not disagree... I just wanted to be clear that I was not stating *my* legal opinion (which would have no value as IANAL)
<dmbaturin> We didn't choose it, our own new libs are either LGPL or MIT, but the ones that were there before the fork are GPL.
<dmbaturin> Oh, don't worry, even if you were a lawyer, from worldwide perspective it would be just as worthless as when you are not. ;)
<dpb1> hah
<dmbaturin> I mean, in quite a few jurisdictions GPL (or Apache) would technically be legally void just because it's not in their national language; and in an even greater number of jurisdictions no one really tried it in court.
<dmbaturin> A more practical question: would it be fine to introduce dependencies on something that will be in the target distro into those distro-specific modules?
<blackboxsw> ok looks like we're at the cloud-init status meeting time again (delayed 1 day for us holiday)
<blackboxsw> #startmeeting Cloud-init bi-weekly status meeting
<meetingology> Meeting started Tue May 29 16:05:51 2018 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<dpb1> dmbaturin: there will be a time for open questions in this meeting in just a few minutes. :)
<dpb1> so you will have the right people around
<blackboxsw> welcome folks to another cloud-init community status meeting,  today's meeting delayed by one day due to US holiday. Next meeting will be June 11th. same time
<blackboxsw> I've added an actions  topic  to this meeting so we can wrap up  or carry over any actions discussed last time
<blackboxsw> the topics will be Previous Actions, Recent Changes, In-progress Development,  and Office Hours
<dmbaturin> Oh, cool.
<blackboxsw> As always notes will be posted to the following site
<blackboxsw> #link https://cloud-init.github.io/
<blackboxsw> welcome dmbaturin good timing. :)
<blackboxsw> #topic Previous Actions
<dmbaturin> Yeah, I'm just in time it seems. ;)
<blackboxsw> 2 weeks ago we had a couple of followup items that needed some extra review:
<blackboxsw> * ACTION: blackboxsw review distro detection and empty modules list
<blackboxsw> * ACTION: robjo review existing chrony support in master per rharper's work
<blackboxsw> * ACTION: blackboxsw carryover network hotplug vs network maintenance on reboot-only
<blackboxsw> we did get through robjo's branches on distro *detection* and landed them
<blackboxsw> and I know our team also discussed a potential approach to network hotplug vs network maintenance to better enable SmartOs folks who want to handle network config across reboots only
<blackboxsw> I think we decided we needed to draw up a quick shared document on a proposal which would allow for maintenance on reboots only vs true hotplug.
<blackboxsw> I'll carry over that action to write up a doc on this and send it to list by the next meeting
<blackboxsw> ACTION blackboxsw write up short doc/branch on  hotplug versus network maintenance on reboot for comment
<blackboxsw> and I believe robjo from SuSE was able to get through rharper's chrony support branch with a couple comments too
<blackboxsw> so no other actions from last meeting
<blackboxsw> #topic Recent Changes
<blackboxsw> this following content landed in cloud init tip over the last two weeks
<blackboxsw> - Do not use the systemd_prefix macro, not available in this environment
<blackboxsw>       [Robert Schweikert]
<blackboxsw>     - doc: Add config info to ec2, openstack and cloudstack datasource docs
<blackboxsw>       [Chad Smith]
<blackboxsw>     - Enable SmartOS network metadata to work with netplan via per-subnet
<blackboxsw>       routes [Dan McDonald] (LP: #1763512)
<blackboxsw>     - openstack: Allow discovery in init-local using dhclient in a sandbox.
<ubot5> Launchpad bug 1763512 in cloud-init "DataSourceSmartOS ignores sdc:routes" [Medium,Fix committed] https://launchpad.net/bugs/1763512
<powersj> lol!
<powersj> welcome back
<blackboxsw> heh looks like I got kicked for the paste :)
<powersj> blackboxsw: your last message was - openstack: Allow discovery in init-local using dhclient in a sandbox.
<blackboxsw>     - tests: Avoid using https in httpretty, improve HttPretty test case.
<blackboxsw>       (LP: #1771659)
<blackboxsw>     - yaml_load/schema: Add invalid line and column nums to error message
<blackboxsw>       [Chad Smith]
<blackboxsw>     - Azure: Ignore NTFS mount errors when checking ephemeral drive
<blackboxsw>       [Paul Meyer]
<ubot5> Launchpad bug 1771659 in cloud-init "unittests fail in OpenSuSE 42.3 with httpretty issues" [Medium,Fix committed] https://launchpad.net/bugs/1771659
<blackboxsw>     - packages/brpm: Get proper dependencies for cmdline distro.
<blackboxsw>     - packages: Make rpm spec files patch in package version like in debs.
<blackboxsw>     - tools/run-container: replace tools/run-centos with more generic.
<blackboxsw>     - Update version.version_string to contain packaged version. (LP: #1770712)
<blackboxsw>     - cc_mounts: Do not add devices to fstab that are already present.
<blackboxsw>       [Lars Kellogg-Stedman]
<blackboxsw>     - ds-identify: ensure that we have certain tokens in PATH. (LP: #1771382)
<blackboxsw>     - tests: enable Ubuntu Cosmic in integration tests [Joshua Powers]
<blackboxsw>     - read_file_or_url: move to url_helper, fix bug in its FileResponse.
<blackboxsw>     - cloud_tests: help pylint [Ryan Harper]
<blackboxsw>     - flake8: fix flake8 errors in previous commit.
<ubot5> Launchpad bug 1770712 in cloud-init "It would be nice if cloud-init provides full version in logs" [Medium,Fix committed] https://launchpad.net/bugs/1770712
<blackboxsw>     - typos: Fix spelling mistakes in cc_mounts.py log messages [Stephen Ford]
<blackboxsw>     - tests: restructure SSH and initial connections [Joshua Powers]
<ubot5> Launchpad bug 1771382 in cloud-init "ds-identify: fails to recognize NoCloud datasource on boot cause it does not have /sbin in $PATH and thus does not find blkid" [Low,Fix committed] https://launchpad.net/bugs/1771382
<blackboxsw>     - ds-identify: recognize container-other as a container, test SmartOS.
<blackboxsw> ok hopefully we ended on ds-identify
<dmbaturin> Yes, we did.
<blackboxsw> excellent. sorry for the paste, I'll send this out to cloud-init@lists.canonical.com a day before the next meeting so we don't have to IRC flood here
<blackboxsw> make that cloud-init@lists.launchpad.net
<blackboxsw> also we finished our SRU (stable release update) of cloud-init 18.2.27 to Bionic.
<blackboxsw> Ubuntu Cosmic currently reflects near tip of master 18.2.59
<blackboxsw> ok that's all for Recent Changes
<blackboxsw> anything I'm missing powersj ?
<powersj> I think you are good
<blackboxsw> #topic In-progress Development
<blackboxsw> We track upstream's progress publicly in trello
<blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
<blackboxsw> any blue labeled cards are cloud-init core work
<blackboxsw> we have been fixing a couple of bugs raised by our CI infrastructure on newer series of Ubuntu . currently a minor issue with salt minion on Bionic or later, and a couple of unit and integration test race conditions
<blackboxsw> big ticket items for cloud-init in the near term are metadata standardization across clouds, so cloud-init scripts/cloud-config templates can source these cloud-provided values
<dmbaturin> Metadata standardization is something I really would like to see, if you need more hands for that, let me know.
<blackboxsw> the standardization of this instance-data will allow folks to script against any standard values provided to cloud-init in the same way on any cloud. Think hostname, fqdn, ip addrs, region name etc.
<dmbaturin> SSH keys too!
<blackboxsw> definitely dmbaturin I'll point you at a couple branches and what we're thinking
<blackboxsw> this content will show up in /run/cloud-init/instance-data.json
<blackboxsw> #link https://cloudinit.readthedocs.io/en/latest/topics/datasources.html?highlight=instance-data#instance-data
<blackboxsw> and will also be referenced via jinja template variables
<blackboxsw> and a cloud-init query CLI
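A sketch of what scripting against that standardized instance-data could look like. The sample payload below is illustrative only; the exact set of standardized keys was still being worked out at the time of this meeting.

```python
import json

# Illustrative sample shaped like /run/cloud-init/instance-data.json;
# these keys are placeholders for the standardized values discussed
# above (hostname, region, etc.), not a finalized schema.
sample = """
{
  "v1": {
    "cloud-name": "aws",
    "region": "us-east-1",
    "local-hostname": "ip-10-0-0-5"
  }
}
"""

data = json.loads(sample)
v1 = data["v1"]  # the standardized, cloud-agnostic namespace
print(v1["cloud-name"], v1["region"], v1["local-hostname"])
```

The same values would then be reachable from jinja template variables or the query CLI mentioned above, on any cloud.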
<dmbaturin> The data will be updated whenever a change in the environment is made?
<blackboxsw> Also powersj will be working toward a common library for cloud testing in the weeks to come, which cloud-init integration tests will leverage to drive lxd, ec2, openstack, azure etc.
<dmbaturin> Also, will it be possible to stop cloud-init from doing anything but writing that data and starting an external script to process it?
<blackboxsw> dmbaturin: some of that functionality will be handled in the hotplug work we are starting on. There will be operations that can be triggered by either a hotplug monitor on metadata or by cloud-init's CLI to say query from cache (the instance-data.json file) versus query fresh/update
<dmbaturin> I see.
<blackboxsw> dmbaturin: cloud-init's init-local or init-network stage is what calls "get_data" on the given datasources to collect and write that data to file. Spawning a script is generally done through runcmd which happens in cloud-init's 'final' stage. Trying to decouple them (and skipping the modules:config stage) is possible by altering /etc/cloud/cloud.cfg in a custom image to specify no modules in a given stage. Though it's not
<blackboxsw> really recommended as most of the modules only do a quick sanity check to see if they are specifically enabled before trying to do any real work
<blackboxsw> we try to keep boot time as fast as possible and cut out the fat where we can
<blackboxsw> if that's the concern you had
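The decoupling described above boils down to editing the per-stage module lists in /etc/cloud/cloud.cfg. A minimal sketch of a trimmed config for a custom image; the module names kept here are chosen for illustration, not a recommended set:

```yaml
# Sketch of a trimmed /etc/cloud/cloud.cfg in a custom image;
# module names are illustrative, not a recommended set.
cloud_init_modules:
  - write-files
  - users-groups
cloud_config_modules: []      # skip the modules:config stage entirely
cloud_final_modules:
  - scripts-user
```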
<blackboxsw> ok I think that's it for In-progress development, we can move to office hours for all additional discussion
<blackboxsw> #topic Office Hours (next ~30 mins)
<dmbaturin> No, boot time is not the primary concern here, my concern is how to ensure no module is trying to treat our system as if it was a normal Debian (which either doesn't work or can potentially get the system into an inconsistent state).
<dmbaturin> I guess if we are having a real meeting, it may be a good idea to formally introduce myself and the project. :)
<blackboxsw> All topics of interest to cloud-init development can be brought up and discussed here. If there are merge proposals that need attention or bugs that need work, just bring them up here; we should have a few sets of eyes on this channel to discuss and comment
<blackboxsw> sounds good dmbaturin introduce away :)
<blackboxsw> Chad Smith, Canonical, one of the maintainers of cloud-init. We have a few others here (some on vacation). powersj rharper smoser dpb1, all Canonical as well.
<dpb1> idk who blackboxsw is
<dpb1> he might be crazy
<blackboxsw> frequently we have other distribution developers and cloud devs here too (SuSE, RedHat, Microsoft Azure, SmartOS, VMWare )
<blackboxsw> heh, I'm just a bot
<dmbaturin> So, I'm one of the maintainers of the VyOS project (http://vyos.io). It's a distro for routers and firewalls whose primary goal is to be just like hardware routers, but not tied to any hardware, which includes a single config file and unified CLI with a commit/rollback model, versioning, and cross-checks (e.g. if you try to reference a non-existent NIC in DHCP configuration, commit fails).
<dpb1> nice to meet you dmbaturin
<blackboxsw> ahh makes sense. So a debian-based os kind of, which is why you'd want to lock down what modules run.
<dmbaturin> We support all major virtualization platforms now in the sense of including all required drivers and utilities, but autoconfiguration on cloud platforms is only supported for EC2 via a custom script, so we are looking at ways to support more clouds, ideally without redoing the work that is already done, or at least contributing those general things into something where more people can benefit from it, not just us.
<blackboxsw> also dmbaturin each config module claims what distro is supported in a distros property, so you could vet what modules you want to run, and only add VyOS to the list of compatible distros. Config modules all live in source at cloudinit/config/cc_*.py.
<blackboxsw> but we can discuss that config module support (or not) once you dig in to look at supporting VyOS
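The module-gating pattern blackboxsw describes looks roughly like this. This is a schematic sketch of the shape of a cc_*.py module, not actual cloud-init source; 'example_key' and the return values are made up for illustration.

```python
# Schematic sketch of a cloudinit/config/cc_*.py module's shape.
# Each config module declares which distros it supports; a distro
# not in this list (e.g. a hypothetical 'vyos' entry) would simply
# have the module skipped by the runner.
distros = ['ubuntu', 'debian', 'vyos']


def handle(name, cfg, cloud, log, args):
    # Typical pattern: a quick sanity check on the module's own
    # config key, returning early (cheaply) when there is nothing
    # to do, which keeps boot fast.
    if 'example_key' not in cfg:
        return None
    return cfg['example_key']


print(handle('cc_example', {}, None, None, None))
print(handle('cc_example', {'example_key': 'configured'}, None, None, None))
```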
<dmbaturin> Yes, I'm thinking how exactly it should be done.
<blackboxsw> dmbaturin: cloud-init's a pretty good choice for getting that cloud-support breadth for free
 * robjo sorry I'm late
<blackboxsw> robjo: sorry for the late change from yesterday's normal meet time
<dmbaturin> The least intrusive option would be to indeed improve the instance data format, so that we can simply pass it to our own script, which is why I'm all for contributing to it.
<robjo> blackboxsw: noLnxDistro branch has not yet been merged
<blackboxsw> bah robjo ahh you're right
<blackboxsw> ok looks like you handled all review comments. I'll get it landed today
<smoser> powersj: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347060
<blackboxsw> ACTION blackboxsw land https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/336794 today
<smoser> you moved that to 'approved' i guess ?
<smoser> which meant the bot didnt comment (sorry blackboxsw ... interupted)
<dmbaturin> blackboxsw: Could you point me to the branches where the work on instance data is going on?
<blackboxsw> dmbaturin: here's one stale one I need to get back to this week. for enabling the template reference of instance-data.json content
<powersj> smoser: ah sorry you are right
<smoser> powersj: i'm going to land it anyway
<robjo> also I think emptyStageOK branch should be ready to go
<powersj> ok
<blackboxsw> hrm digging on the metadata branch.
<blackboxsw> dmbaturin: the trello card I'll be tying branches to is this one
<blackboxsw> #link https://trello.com/c/5n5B8x23/802-cloud-init-query-standardized-json-information
<robjo> and https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/333904 should be on smoser plate
<robjo> or anyone else who wants to pick it up and get it merged, please
<blackboxsw> #link https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/335290
<blackboxsw> ^ dmbaturin initial template handling thoughts.... I have to create at least 2 branches to standardize datasource class apis to make the metadata content easier to generalize and I can add your launchpad username to the review as I put them up
<rharper> smoser: re: hostname, yes, that's right;  we probably could update the set_hostname docs to mention that detail w.r.t early hostname setting
<rharper> blackboxsw: sorry to interrupt
<blackboxsw> dmbaturin: what's your launchpad user name?  (mine's chad.smith)
<dmbaturin> blackboxsw: dmbaturin
<blackboxsw> heh.
<blackboxsw> thx
<dmbaturin> I'm too predictable. ;)
<blackboxsw> yeah, I lost a bet on blackboxsw :)
<blackboxsw> robjo: okay adding that branch too for review/landing
<blackboxsw> ACTION land  https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/333904
<blackboxsw> ACTION land https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/336794
<blackboxsw> alright, any other topics or discussions?
<dmbaturin> Trello is integrated with launchpad?
<blackboxsw> dmbaturin: nope, just easy to use for our agile workflow. And simple to cut-paste links, assign people, drag to different lanes as the work progresses
<blackboxsw> has a lot of github integration if you get the right plugins
<blackboxsw> we have some minimal tooling that can talk to launchpad and inject cards, but that's hand-written, not part of the trello product.
<dmbaturin> I mean, if you add my username there, will I get any notifications about card changes.
<blackboxsw> dmbaturin: I get emails from all trello card moves,changes. let's see
<blackboxsw> I can subscribe you to the card (you want the standardized json stuff?)
<dmbaturin> Yes.
<blackboxsw> hrm can't find your user
<blackboxsw> ahh
<blackboxsw> I think I invited you
<blackboxsw> and added your user to the card so you can watch it progress
<blackboxsw> ok I think that about wraps up our meeting for today
<blackboxsw> any parting shots?
<blackboxsw> I'll post these notes to our github project page
<blackboxsw> #link https://cloud-init.github.io/
<blackboxsw> thanks again all
<blackboxsw> #endmeeting
<meetingology> Meeting ended Tue May 29 17:05:59 2018 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2018/cloud-init.2018-05-29-16.05.moin.txt
<dmbaturin> blackboxsw: Since I'm not very familiar with the codebase as you can see, some tasks in form of "do this and that to this module" would be perfect.
<dpb1> blackboxsw: what was the bet?
<blackboxsw> dpb1:  During world cup I thought the US was going to lose and Dean bet otherwise, at stake was that I had to change my nick from csmith -> blackboxsw (my former side-job company name) because dean(aka, Beret) thought that nick was cooler than name-based nick
<dpb1> blackboxsw: us men?
<blackboxsw> US men. a couple years ago
 * blackboxsw secretly wanted to change my nick anyway
<dpb1> blackboxsw: seems like a safe bet on your part then, I guess it didn't work out
<dpb1> :)
<Beret> it's amazing how I remember that so differently
<blackboxsw> ... and yes I invoked Beret in #cloud-init +1 bbsw
<dpb1> I question beret's sanity if he bet on the US men to win something in the world cup
<blackboxsw> heh, I think it was a semi's game, but Beret can correct me, now that the source of truth is here. :)
<Beret> same here
<Beret> that's why I remember it differently
<blackboxsw> heh, maybe I bet they would win
<Beret> now US women would have been different
<blackboxsw> absolutely
<dpb1> hehe
<dpb1> the picture is now becoming clear
<blackboxsw> heh, good thing meetingbot didn't record this conversation, otherwise I'd have too many sources of truth refuting my lies ;)
<dmbaturin> I use a name-based nick after 70's hackers.
<jocha> rharper: re: "the change (agent_command using __builtin__) upstream to the AzureDS would break existing behavior in ubuntu xenial", is this by design or is there more work to be done to address this? :)
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 6/11 16:00 UTC | cloud-init 18.2 released (03/28/2018)
<rharper> jocha: by design, we don't want to change behavior on Xenial instances
<jocha> alrighty, thanks!
#cloud-init 2018-05-30
<rezroo> Hi
<rezroo> I have a ubuntu-16.04 on google cloud platform, and I give it a multi-mime userdata which I know works on aws. cloud-init configuration seems fine on the server and cloud-init version is 18.2. It executes the first portion which is #cloud-config format but doesn't execute the text/x-shellscript part. Any idea why? I've checked /var/log and I don't find any errors. I'm executing a simple 'touch somefile.txt' and there's no file on
<Odd_Bloke> powersj: I'm struggling to work out how to make this "get the SSH key on to the instance" change in an appropriate manner.
<Odd_Bloke> powersj: I was wondering if you might be able to implement a basic disable_root test (which would enable SSHing to the instance under test from the instance itself), and then I can expand on that once the test framework functionality is in place?
<powersj> Odd_Bloke: I'm not sure I see other tests re-using that functionality
<powersj> Odd_Bloke: can you show me what you have?
<Odd_Bloke> powersj: I haven't got anything, because I couldn't work out where to add it.
<Odd_Bloke> (AFAICT, I can't just do it for one test, because tests don't define any Python code that runs on the testing host, so the framework would need to change somehow.)
<blackboxsw> rezroo: stderr of scripts gets emitted to /var/log/cloud-init-output.log so there may be error messages hiding in there. If you think there is a bug (because you say it works on aws but not on gce) you could file a bug with "ubuntu-bug cloud-init" on the commandline with the details
<blackboxsw> rharper: I'll need to steal you for a little bit today on the netplan match question for Azure. when's a good time?
<cyphermox> blackboxsw: sorry, what's this about?
<blackboxsw> cyphermox: cloud-init uses netplan's match: macaddress as we expect that mac will be persistent and unique for a single device even despite any logical versus persistent network naming. this causes an issue for azure instances which end up dropping the original nic and attaching a new nic to the instance.  cloud-init doesn't regenerate netplan network config for this instance, because azure doesn't expose a new UUID
<blackboxsw> for the instance (which would trigger a netplan config regeneration), so cloud-init doesn't update network config containing the mac address of the new device
 * blackboxsw needs to understand azure's use-case for dropping existing nic and replacing with a new nic. As this feels like an infrequent use-case.
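For context, the rendered netplan config being described has roughly this shape (values illustrative): because the match pins the stanza to one MAC address, a replacement NIC with a new MAC no longer matches and receives no configuration.

```yaml
# Shape of the netplan config cloud-init renders (MAC illustrative).
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "00:0d:3a:04:75:98"
      dhcp4: true
      set-name: eth0
```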
<rharper> blackboxsw: I'm back now
<rharper> now is fine for chat
<blackboxsw> good, deal, grabbing a quick coffee
<blackboxsw> rharper: jumping into cloud-init hangout
<rharper> k
<rharper> blackboxsw: hangout or meet ?
<rezroo> blackboxsw: I filed a bug with google. I don't think it's ubuntu's fault. I think gce uses cloud-init in how it provisions ubuntu in its infrastructure in such a way that subsequent x-shellscripts from the user don't run. If I understand it correctly these are only run once, and if gce runs its own shellscript then there would be a marker in /var/lib/cloud/instance that would stop another script from running. I'm 
<mgerdts> I'm starting down the path of backports to trusty.  git clone ; git checkout -b trusty-joyent remotes/origin/ubuntu/trusty; make deb
<mgerdts> And it wants to use bzr.  Ugh.  Never done that before
<dpb1> mgerdts: :)
<mgerdts> I can't find any instructions that help me check out this branch from bzr.  README.rst in that branch has instructions that are too terse for a first time bzr user.  Any hints?
<mgerdts> Or should I realize that it is after 5 pm and go find a beer instead?  I'm really ok if that is the right answer tonight.  :)
<dpb1> mgerdts: which branch?
<dpb1> mgerdts: can you point me at the README.rst?
<mgerdts> ubuntu/trusty
<mgerdts> https://github.com/cloud-init/cloud-init/blob/ubuntu/trusty/HACKING.rst
<mgerdts> sorry, HACKING.rst
<mgerdts> It was missing the launchpad-login step, so did that, then got a key that works with launchpad, added that to my keyring
<mgerdts> https://hastebin.com/fiqulimuwi.rb
<dpb1> thanks that helps
<dpb1> ah, I see
<dpb1> we switched to git
<dpb1> so, your first step there is failing, you are trying to checkout in effect the 'trunk' branch of the repository, but that is now in git.
<dpb1> I know that for trusty, things are still in flux, as you'll notice there is no branch in the upstream git repo for 'ubuntu/trusty'.
<dpb1> like there is for 'ubuntu/xenial'
<dpb1> mgerdts: tl;dr, for trusty, I think you should get that beer and ask when folks are around tomorrow. :)
<mgerdts> Good plan.  :)
#cloud-init 2018-05-31
<sedition> huh. for some reason the user declarations that work for cent dont work on amazon linux. 'failed to create user'
<smoser> sedition: /var/log/cloud-init-output.log and /var/log/cloud-init.log probably have more info.
<sedition> its getting far enough in the process for the ec2-user account to be gone
<smoser> mgerdts: you still there?
<smoser> hm.. sedition well, if you have other access then go in that way. otherwise, i think you probably need to just make an image that you have another way into, or detach the ebs volume and re-attach
<smoser> :-( sorry thats a pain.
<mgerdts> smoser here now
<smoser> mgerdts: so what i recommend for you to do for trusty is to use 'git ubuntu'
<smoser> you can use 'snap install git-ubuntu'
<sedition> smoser: its all good
<smoser> or skip that and just use git without the porcelain that it provides
<sedition> im just debugging
<sedition> i wanna know why the same rendered template craps out on amazon linux but works on cent7 when its just adding users
<smoser> git clone -o pkg https://git.launchpad.net/~usd-import-team/ubuntu/+source/cloud-init ubuntu
<smoser> cd ubuntu
<sedition> and installing vim
<sedition> lol
<mgerdts> do that on trusty, xenial, bionic, or something else?
<smoser> mgerdts: so... focus on what you have there. you'll see branches for 'pkg/ubuntu/trusty-devel'
<smoser> that is what you want to base yourself off of.
<smoser> when you make changes in this way you have to actually put your changes into debian/patches/some-patch.diff
<smoser> mgerdts: i'm writing some more ... hopefully will get you on the right step.
<mgerdts> ok, thanks.
<mgerdts> I'm involved in a couple other conversations at the moment and won't pick this up until the morning.
<smoser> https://hackmd.io/7K5bl1lwSoOCY7A3uu7bnQ
<smoser> mgerdts: going there.
<smoser> mgerdts: i'm going afk. feel free to ping tomorrow.
<mgerdts> thanks
<mdt[m]> hi, i need some help investigating a problem. cloud-init does not work in an image i provided (debian) while another image (centos) works fine in the same environment. where can i start to dig? i found the logs and the only warning message is "Used fallback datasource" but no hint why.
<smoser> mdt[m]: there is probably something else, another WARN in /var/log/cloud-init-output.log
<smoser> and running cloud-init collect-logs is a start to collect what we would need
<smoser> philroche: that image has been booted before
<smoser> philroche: ping when you get a chance.
<smoser> i *think* what happened is that the first boot did not for some reason run cloud-init.service . only cloud-init-local.service run.
<smoser> we dont seem to have a journal from that failed run
<smoser> if we did i suspect it contains the words 'ordering cycle' and 'Breaking'
<smoser> we have a journal from the second run, and the second run seems happy
<philroche> smoser: I will mount the failed launch to get some more logs for you. I should probably create a bug for this
<smoser> philroche: i mounted the failed launch. there is no journal there. only the second boot you did.
<philroche> smoser: My bad. I meant to upload the clean image. I have done so now https://private-fileshare.canonical.com/~philroche/cosmic-dailies/20180530/
<mgerdts> smoser - the backport of my fixes to trusty is a bit of a mess because there's been a lot of change.  How distasteful would it be to backport at least '0b3e3f8 initial commit of rework' and perhaps all/most of these: https://hastebin.com/ivotoyirac.sql ?
<smoser> wow.
<smoser> hm..
<smoser> mgerdts: well, i'm fine with ou basically just having one patch that replaces the old file in SmartOS with what you want
<smoser> "sync-smartos-to-upstream.patch"
<mgerdts> I like the way you think.
<smoser> i like that you use hastebin :)
<smoser> and you just lucked in to .sql extension :)
<mgerdts> Are all of the interfaces it uses/exposes likely to be compatible?
<mgerdts> I avoided later changes that touched DataSourceSmartOS and other files because of feared incompatibility
<smoser> mgerdts... much of it the same... i do expect there are some significant differences though.
<sedition> anyone happen to know what's so custom about cloud-init for Amazon Linux?
<smoser> sedition: you can get their source... i've not compared them in some time. they do have some patches and fixes, but i believe they are largely getting those things upstream.
<sedition> so odd
<sedition> i still cant figure out why user declarations wont work
<hatifnatt> Is there any place where meta-data format is described? I can't find any good documentation anywhere.
<hatifnatt> What is the difference between OpenStack metadata and EC2 metadata? AWS has at least some docs about metadata https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html but I can't find anything helpful about OpenStack metadata.
<smoser> hatifnatt: https://docs.openstack.org/nova/latest/user/metadata-service.html
<hatifnatt> smoser: Thanks, I have read this already, but that is not a reference, it's a very superficial description of the metadata service. I mean something like a table with a detailed description like in the EC2 docs https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories
<hatifnatt> For example I have a line in the configdrive meta-data file '"network_config": { "content_path": "/content/0000" }' - where, except in the source code, can I find a reference for what this means?
<smoser> hatifnatt: well, what it means is that at the entry point of the metadata service
<smoser> under /content/0000 is the content of the network config.
<smoser> its basically a blob store location and reference to that.
<hatifnatt> smoser: so can any parameter be stored separately and referenced by 'content_path', or is it special to network_config? As far as I can tell from the source code https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/helpers/openstack.py#L298 "network_config" is a special one.
<smoser> hatifnatt: well, its not special to that.
<smoser> other things can too
<smoser> the reason is config drive.
<smoser> they duplicate the versioned/ data on the disk
<smoser> and they did not want to duplicate all the things that were the same
<smoser> and no symlinks or such in standard iso9660
<smoser> or vfat
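smoser's description of content_path as a blob-store reference can be sketched like this. The mount point layout and the helper name are assumptions for illustration, not cloud-init's actual implementation; the demo builds a throwaway config-drive-like directory tree.

```python
import os
import tempfile

def resolve_content(mountpoint, metadata):
    """Return the blob a content_path reference points at, if any.

    Illustrative helper, not a cloud-init API.
    """
    ref = metadata.get('network_config', {}).get('content_path')
    if ref is None:
        return None
    # A reference like "/content/0000" resolves under openstack/ on
    # the config drive, next to the versioned metadata copies.
    path = os.path.join(mountpoint, 'openstack', ref.lstrip('/'))
    with open(path) as f:
        return f.read()

# Demo against a throwaway config-drive-like layout.
mnt = tempfile.mkdtemp()
os.makedirs(os.path.join(mnt, 'openstack', 'content'))
with open(os.path.join(mnt, 'openstack', 'content', '0000'), 'w') as f:
    f.write('auto eth0\n')

md = {"network_config": {"content_path": "/content/0000"}}
print(resolve_content(mnt, md))
```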
#cloud-init 2018-06-01
<blackboxsw> smoser: rharper ok I really need eyes on this for a moment http://paste.ubuntu.com/p/Zxtt4ZTm33/ lines 10-20. I'm wondering if that's why networking-online isn't blocked.
<rharper> ok
<rharper> blackboxsw: what's the expected behavior you're looking for ?
<blackboxsw> I enabled debug logs.... I was expecting to see that my yaml files were observed/honored (by wait-for-online) but the log messages seem to indicate otherwise
<rharper> you want journalctl --unit systemd-networkd-wait-online.service
<rharper> it doesn't say much, but will display which interfaces it is waiting on
<rharper> networkd itself doesn't do the waiting
<rharper> wait-online itself does not look at any of the yaml files, it only talks to networkd itself over dbus to ask it, which interfaces are "managed"
<blackboxsw> ok May 31 22:13:12 SRU-worked systemd[1]: Starting Wait for Network to be Configured...
<blackboxsw> May 31 22:13:12 SRU-worked systemd-networkd-wait-online[819]: ignoring: lo
<blackboxsw> May 31 22:13:12 SRU-worked systemd-networkd-wait-online[819]: ignoring: lo
<blackboxsw> May 31 22:13:14 SRU-worked systemd-networkd-wait-online[819]: ignoring: lo
<blackboxsw> May 31 22:13:14 SRU-worked systemd-networkd-wait-online[819]: managing: eth0
<blackboxsw> May 31 22:13:14 SRU-worked systemd[1]: Started Wait for Network to be Configured.
<rharper> in our case, we should see that eth0 is managed by networkd, your debug log shows that it is, and you should see that wait online is waiting for eth0
<rharper> that looks right to me
<blackboxsw> ok, there's the specific 'ignoring' comment I was looking for (note the non-CamelCase 'ignoring'). I misread the output logs.
<rharper> ah
<rharper> yes, that's networkd talking about itself
<blackboxsw> the issue is that I did specify an eth1 match-by-name configuration as optional:false
<blackboxsw> I expected in this case that network online would have blocked here
<rharper> I'm confused, it blocked
<rharper> you should see something like network-oniline.target reached
<rharper> in journalctl before you see cloud-config.service run
<rharper> # journalctl -o short-precise | egrep "(Started Wait for Network|Reached target Network is Online|modules:config)"
<rharper> May 24 21:46:47.134424 b1 systemd[1]: Started Wait for Network to be Configured.
<rharper> May 24 21:46:47.869092 b1 systemd[1]: Reached target Network is Online.
<rharper> May 24 21:46:48.741229 b1 cloud-init[189]: Cloud-init v. 18.2 running 'modules:config' at Thu, 24 May 2018 21:46:48 +0000. Up 6.00 seconds.
<blackboxsw> Jun 01 17:42:05.526609 SRU-worked systemd[1]: Started Wait for Network to be Configured.
<blackboxsw> Jun 01 17:42:07.491945 SRU-worked systemd[1]: Reached target Network is Online.
<rharper> yes
<rharper> it's fast
<blackboxsw> on my system a 2 second wait, but I presented an eth1 optional:false device
<blackboxsw> I thought it'd wait for a timeout on eth1 which never came up
<rharper> can you ssh add me? lp:raharper ?
<blackboxsw> yeah
<rharper> I'm not sure what you're trying to test
<rharper> maybe hangout  too ?
<blackboxsw> yeah hangout and 52.179.192.25
<mgerdts> smoser: I finally have some success.  It's been a long road, and I feel like I maybe am on the wrong path.  I have two patches.  The first does a resync mostly by copy as you suggested.  After that I realized that all calls to apply_network() were done in each datasource and that SmartOS didn't have that at the point that I started from.  So then I needed to write an ENI converter.  See what I mean by wrong path?
<mgerdts> resync: https://github.com/mgerdts/ubuntu-cloud-init/commit/d9cdb9cce708ec011205f3ef92cd27480523375c
<mgerdts> convert eni, call apply_network(): https://github.com/mgerdts/ubuntu-cloud-init/commit/b901ce58764e7eece41da4a8d425e325a677c5cc
<smoser> mgerdts: fwiw you can push to launchpad
<mgerdts> Yeah, I tried, searched, tried, and decided that getting this in front of you before the end of your day was more important than sorting out launchpad.
<mgerdts> Being in a different repo than I already have, things are difficult.  I seem to have lost the magic that I had once.
<smoser> mgerdts: here is my ubuntu git repo fwiw
<smoser> $ git remote -v
<smoser> pkg https://git.launchpad.net/~usd-import-team/ubuntu/+source/cloud-init (fetch)
<smoser> pkg ssh://smoser@git.launchpad.net/~usd-import-team/ubuntu/+source/cloud-init (push)
<smoser> smoser https://git.launchpad.net/~smoser/ubuntu/+source/cloud-init (fetch)
<smoser> smoser ssh://smoser@git.launchpad.net/~smoser/ubuntu/+source/cloud-init (push)
<smoser> yours would not be ssh for the push to 'pkg', but ... s/smoser/mgerdts/ should work
<mgerdts> thanks.   Now they are the top two changesets at https://code.launchpad.net/~mgerdts/ubuntu/+source/cloud-init/+git/cloud-init/+ref/trusty-joyent
<smoser> mgerdts: yeah.. so quick glance that looks really good.
<smoser> and thansk for separating out the configure ENI one
<smoser> right.
<mgerdts> I see that resolv.conf is still not configured, and probably routes
<mgerdts> Is all of this stuff that I need to be handling in DataSourceSmartOS?  It seems as though it should be handled in generic code.  But maybe I'm spoiled by working on code that is 4 years newer.
<mgerdts> And the fact that it's changing the network configuration on get_data() is cringeworthy.  But that seems consistent with the other datasources of that vintage.
<LzrdKing> hi, how can i make cloud-init restart rsyslog after it sets the hostname?
<smoser> mgerdts: yeah.. the code there is just so old it doesnt have any of this
<LzrdKing> i have read http://cloudinit.readthedocs.io/en/latest/topics/modules.html#rsyslog and it says service_reload_command is auto by default but its not being restarted
<mgerdts> ok, i'll just add the needed bits.
<smoser> LzrdKing: well it does not do anything if the rsyslog top level key is not there... that module just gets out.
<LzrdKing> would just "rsyslog:" in the config file be enough to make it restart?
<smoser> it might be enough to just do:
<smoser>  rsyslog:
<smoser>   service_reload_command: "auto"
<LzrdKing> on a running instance, i could put that in /var/lib/cloud/instance/cloud-config.txt and then run "cloud-init single -n rsyslog" to test?
<smoser> LzrdKing: maybe not.
<smoser> but in /etc/cloud/cloud.cfg.d/LzrdKing.cfg
<LzrdKing> i don't rsyslogd getting restarted
<LzrdKing> don't see*
<smoser> http://paste.ubuntu.com/p/cBt8JncKD7/
<smoser> LzrdKing: ^ that works here. it also had logic that says "if no configs, then nothing to do".
<smoser> and actually you can probably ditch the 'service_reload_command'. just the configs entry is enough probably.
<smoser> that will write the comment to /etc/rsyslog.d/20-cloud-config.conf
<smoser> and will restart
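Putting smoser's pieces together, a cloud-config sketch along these lines; the configs entry content is illustrative, and per the discussion service_reload_command can likely be dropped since it defaults to auto.

```yaml
#cloud-config
# Sketch assembled from the discussion above; the configs entry is
# what makes the rsyslog module do anything at all.
rsyslog:
  configs:
    - "# managed by cloud-init"
  service_reload_command: auto
```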
<LzrdKing> could this be why its not running: "helpers.py[DEBUG]: config-rsyslog already ran (freq=once-per-instance)"
<LzrdKing> bingo
<LzrdKing> i ran "cloud-init single -n rsyslog --frequency always" and the proper hostname immediately showed up in the logs
<LzrdKing> now i wonder if we need the additional lines
<LzrdKing> we don't
<LzrdKing> and /var/lib/cloud/instance/cloud-config.txt seems to be the file it's reading
<LzrdKing> i need more than just "rsyslog:" though, otherwise i get "cloud-init[WARNING]: Ran rsyslog but it failed!"
<LzrdKing> syslog complains about "service_reload_command" being in /etc/rsyslog.d/20-cloud-config.conf though, not a big deal, much less annoying than the hostname being wrong
<mgerdts> smoser: looks like updating eni to configure routes and dns is not so bad.  I updated the eni patch: https://git.launchpad.net/~mgerdts/ubuntu/+source/cloud-init/commit/?id=d14e34e8496c2f8fc5b3e9bb1115a107b208d0cf
<smoser> mgerdts: i can try to take a more full look on monday.
<mgerdts> thanks
#cloud-init 2020-05-25
<paride> Odd_Bloke, I'm a bit stuck here and looking for smart ideas: https://github.com/canonical/cloud-init/pull/387
<paride> I'd like to modify .travis.yml, but I can't just commit to it as the change interferes with the quilt patch also touching it
<paride> and the change can't be handled with a quilt patch as the patching does not happen at the right time and location
<paride> one way out I see is to remove .travis.yml from the quilt patch, making the cherry-pick a partial lie, and carefully avoiding including it again in cherry-picked patches
<Odd_Bloke> paride: So we landed a change to avoid running the integration tests on ubuntu/* branches: https://github.com/canonical/cloud-init/commit/e9ab1235eb3944ede588188592a1aa1aaae02591#diff-354f30a63fb0907d4ad57269548329e3
<Odd_Bloke> paride: So we don't need bddeb to work on the packaging branches (I think build-package is the correct tool to use there as a developer?).
<Odd_Bloke> (Am I missing the point?)
<paride> Odd_Bloke, just partially. Even if we want to integrate the "skip ubuntu/*" change in ubuntu/focal we need to modify .travis.yml as above. However modifying it will break the package building process, as one patch in debian/patches touches .travis.yml too and dpkg-source will refuse to apply it
<paride> I think we've got one thing wrong: touching .travis.yml in cpick-986f37b0-cloudinit-move-to-pytest-for-running-tests-211
<Odd_Bloke> paride: Right, I see.  IIUC, that pytest patch should be integrated from master to the packaging branches during the next SRU process (which we're starting this week), and then I think this problem goes away?
<paride> Odd_Bloke, yes, it will go away with the next sru. I guess the right thing to do is just to ignore the bddeb failures in ubuntu/focal up to that point
<paride> and maybe we should try to modify .travis.yml in separate commits, so the changes won't end up in cherry picked patches
<paride> I'll close the PR
#cloud-init 2020-05-26
<sj_dk> join #cloudinit
<sj_dk> sorry
<ananke> this just popped up on showhn: https://news.ycombinator.com/item?id=23310063
<ijohnson> hi folks, is there a way to ask cloud-init what datasource was used? I see that with `cloud-init status --long` there is a "details" section with this information, but is that the right way to do it and is that in a format that can be depended on?
<Odd_Bloke> ijohnson: For programmatic use, I wouldn't depend on it.  It reads from /run/cloud-init/result.json, so that would be a better place to look.
<ijohnson> Odd_Bloke: so is /run/cloud-init/result.json a stable format that I can rely on for just reading the datasource ?
<Odd_Bloke> ijohnson: Yep, there's a v1 key which we'll keep stable.
<Odd_Bloke> ijohnson: I'd be interested to know a little more about your use case, too, for future reference.
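A minimal sketch of reading the datasource programmatically, assuming result.json keeps the {"v1": {"datasource": ...}} shape described above (the sample payload here is illustrative, not captured from a real instance):

```python
import json

def datasource_from_result(result_json: str) -> str:
    """Extract the datasource name from the contents of
    /run/cloud-init/result.json; only the stable "v1" key is read."""
    return json.loads(result_json)["v1"]["datasource"]

# Illustrative payload (shape as described above, not a real capture):
sample = '{"v1": {"datasource": "DataSourceEc2", "errors": []}}'
print(datasource_from_result(sample))  # prints DataSourceEc2
```

On a live instance you would pass the contents of /run/cloud-init/result.json instead of the sample string.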
<smoser> is this known?
<smoser>  http://paste.ubuntu.com/p/mDk8TNvgSc/
<smoser> launched a fresh azure instance of 18.04. the clock jumps during boot.
<smoser> 2020-05-17 17:02:58,229 - netlink.py[DEBUG]: Wait for media disconnect and reconnect to happen
<smoser> 2020-05-26 13:52:56,326 - netlink.py[DEBUG]: netlink socket ready for read
<Odd_Bloke> Wow.
<smoser> that seems a little "broken" to me. Why wouldn't you set your vm bios to the correct time to start?
<smoser> but ... ok, ntp does the right thing and fixes it.
<smoser> but what seems *more* broken is that uptime seems unaware of the jump
<smoser>  uptime
<smoser>  14:13:00 up 8 days, 21:10,  3 users,  load average: 0.00, 0.11, 0.15
<smoser> i guess it's possible that this was a "pre-provisioning"
<smoser> but i would not have expected that... I suspect on the order of 5 ubuntu vms have been launched on this account in a year
<Odd_Bloke> smoser: `journalctl -o short-monotonic` might give you a separate indication of how long the machine has actually been up?
<smoser> yeah. just ran that
<smoser> it shows the jump
<smoser> pasting
<smoser> http://paste.ubuntu.com/p/66SRvpJY3k/
<smoser> so the jump occurs right around the dhcp. so i'd certainly think it's ntp related.
<Odd_Bloke> smoser: Does `journalctl -u chrony.service` shed any light?
<smoser> -- Logs begin at Sun 2020-05-17 17:02:46 UTC, end at Tue 2020-05-26 14:23:17 UTC. --
<smoser> -- No entries --
<smoser> so.. no
<Odd_Bloke> smoser: Oh, I was looking at GCE where I think they requested chrony, so perhaps: `journalctl -u systemd-timesyncd.service`?
<smoser> /
<smoser> https://paste.ubuntu.com/p/fdMpW8BXNy/
<smoser> am i just wrong? is uptime supposed to jump like that?
<Odd_Bloke> I'm not really sure TBH.
<Odd_Bloke> The fact the monotonic timestamp also jumps has me wondering if it's actually just the NTP clock setting that's going on.
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1880704
<ubot5> Ubuntu bug 1880704 in cloud-init (Ubuntu) "jump in kernel clock during boot on azure" [Undecided,New]
<Odd_Bloke> AnhVoMSFT: Would you be able to take a look at ^ and let us know what you think, please?
<cpaelzer> smoser: some time daemons do an initial time warp if the delta is above some threshold
<cpaelzer> not sure what timesyncd does
<cpaelzer> is the final time after the sync the real time that makes sense?
<smoser> cpaelzer: right. that didn't surprise me.
<smoser> but i dont think the kernel clock is supposed to jump like that
<smoser> isn't that what "monotonic" means ?
 * smoser realizes monotonic might *only* mean not-ever-decreasing
<smoser> well, per metacpan "A clock source that only increments and never jumps"
<smoser> which is what I thought too
<smoser> and since a perl documentation site agrees with me, i'm clearly right.  or should i just be embarrassed.
<cpaelzer> exactly - monotonic is split in two ways
<cpaelzer> never decrease
<Hani> Hi All, we are trying to use cloud-init to initialize our images, but mostly our images have multiple disks (due to a XenServer limitation on max single disk size) and we need LVM spanning multiple disks to show as 1 disk for end users. I see that cloud-init plans to support mdadm raid in the future, so my question: do you know if
<Hani> there is any ETA for that? and if yes, do you think the use case of having raid1 just to have all disks shown as 1 will be supported?
<smoser> cpaelzer: did you miss "never jumps" above?
<rharper> Hani: hey,  advanced storage config is definitely on the roadmap, but it's not coming soon;    in your use-case, are you configuring the root disk or some other disk for end users?
<Hani> rharper most users prefer to have all the storage directly in root , so it should appear as 1 logical volume for the client
<cpaelzer> smoser: no I didn't miss it, it just never goes down
<cpaelzer> and strictly speaking there also are "strictly monotonic" clocks
<Hani> the max limit for vdi in xenserver is 2TB and we have some vms with 20TB storage , which will appear as 10x 2TB disks attached to the vm
<cpaelzer> smoser: which also guarantee to never report the same value twice
<cpaelzer> which can be hard if you need to hold that guarantee across all CPUs
<cpaelzer> not sure if Linux provides a strictly monotonic clock to userspace
<cpaelzer> some HW does provide one to the kernel
<smoser> cpaelzer: i'm still confused.
<smoser> google definitely says that a monotonic clock should not jump.
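The distinction being debated can be sketched in Python: the monotonic clock promises only that it never decreases, while wall-clock time can be stepped by NTP; whether a clock counts time spent suspended is a separate, platform-specific question (CLOCK_BOOTTIME vs CLOCK_MONOTONIC on Linux):

```python
import time

# Wall-clock time can be stepped (by NTP, or by the platform fixing a
# wrong BIOS clock), so deltas computed from it can jump either way.
wall = time.time()

# time.monotonic() promises only that it never goes backwards; it says
# nothing about never repeating a value ("strictly monotonic"), and on
# Linux CLOCK_MONOTONIC pauses across suspend while CLOCK_BOOTTIME
# keeps counting.
m1 = time.monotonic()
m2 = time.monotonic()
assert m2 >= m1  # non-decreasing, but m2 == m1 is allowed
```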
* blackboxsw changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting June 02 16:15 UTC | 20.1 (Feb 18) | 20.2 (Apr 28) | https://bugs.launchpad.net/cloud-init/+filebug
<smoser> there is clearly confusion on this. beyond just my head.
<Odd_Bloke> rharper: https://github.com/canonical/cloud-init/pull/391/
<Odd_Bloke> meena: ^ is the proper location for that PR I pinged you with y'day.
<rharper> Hani: assembling a root volume during boot isn't quite the storage plan for cloud-init; cloud-init does not run in the initramfs where one might be able to adjust a rootfs.   One can construct a two-stage approach here;  using cloud-init and curtin;   (https://curtin.readthedocs.io/en/latest/);  in curtin our testing harness does exactly this (boot an ephemeral environment with a configuration to assemble a rootfs from various disks);
<rharper> Hani: cloud-init's initial storage scope is targeting non-root volumes;
<Hani> rharper that what i was looking for , will check curtin Thanks
<rharper> Hani: https://curtin.readthedocs.io/en/latest/topics/integration-testing.html  and https://git.launchpad.net/curtin/tree/tests/vmtests   #curtin if you want to discuss further
<smoser> hey. i'm looking at http://paste.ubuntu.com/p/SY9n2SC6cW/ (66-azure-ephemeral.rules)
<smoser> where does ATTRS{device_id} come from ?
<rharper> smoser: I believe blkid/udev export those values
<smoser> sda on that system gets named 'fabric_root'.
<smoser>  https://paste.ubuntu.com/p/4QRHZTzCdt/
<smoser> that is udevadm info --query=all and it's not there.
<smoser> nor is it in /run/udev/data/b8:0
<rharper> I don't have an azure instance up just yet, but their scsi device exports those (or is supposed to export those values)
<smoser> rharper: smoser@51.143.89.81 if you want to look
<rharper> k
<smoser> there it is in find /sys | grep f8b3781
<rharper> what rule file ?
<rharper> ah, yeah, VMBUS
<smoser> there are actually 2 rules files. one cloud-init, one walinux-agent
<smoser>  lib/udev/rules.d/66-azure-ephemeral.rules /lib/udev/rules.d/66-azure-storage.rules
<rharper> $attr{file}, %s{file}
<rharper>        The value of a sysfs attribute found at the device, where all keys of
<rharper> that's what I was looking for
<rharper> one can directly read sysfs attrs
<rharper> in udev rules
<rharper> smoser: ok, I'm off, thanks
<smoser> annoying that those sysfs attrs aren't part of udev export though.
<rharper> I guess it's somewhat duplicate if you know the device path, which I think is knowable
<rharper>  /sys/bus/scsi/devices   and /sys/class/block/sda -> ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:07/VMBUS:01/00000000-0000-8899-0000-000000000000/host0/target0:0:0/0:0:0:0/block/sda
<smoser> yeah... obvious.
<meena> mhm, yeah. very.
<blackboxsw> Odd_Bloke: rharper: uss-tableflip functional change to add a new fix-daily-recipe script https://github.com/canonical/uss-tableflip/pull/50
<blackboxsw> I want to add docs around this release process. I'm adding notes there on testing:
<rharper> blackboxsw: cool
<blackboxsw> rharper: push testing instructions
<blackboxsw> pushed rather
<blackboxsw> I can fix daily focal builds with this and then we can iterate on the README docs (or I can add it to this PR)
<blackboxsw> I had wanted to get the 'fix' in to ensure daily build recipes are fixed for focal before a new-upstream-snapshot today
 * blackboxsw adds doc changes now. as it shouldn't be too much
<rharper> yeah
 * rharper will test 
<rharper> blackboxsw: https://paste.ubuntu.com/p/fnV3Jv9rTr/
<blackboxsw> rharper: sorry that should have been fix-daily-branch -s origin/ubuntu/focal -d ubuntu/daily/focal
<blackboxsw> missing origin
<rharper> ah, ok
<rharper> trying again
<blackboxsw> but we should sort that and provide an upstream
<rharper> +1
<rharper> that worked
<rharper> continuing with instructions
<blackboxsw> remote param could probably be used for both source_branch and the daily_branch params if provided
<blackboxsw> ok can tweak it to add the $remote if not already the prefix of source_branch or daily_branch
<rharper> https://paste.ubuntu.com/p/MpZnDnBHJD/
<rharper> does that look right? (it looks right to me)
<blackboxsw> schweet!
<blackboxsw> yeah
<blackboxsw> that should fix daily focal at the moment.
<blackboxsw> I'm scrubbing docs
<rharper> \o/
<rharper> and Odd_Bloke is +1 on us fixing focal "manually"  and we'll build/revise documents/tools ?
<blackboxsw> I think Odd_Bloke wanted us to review docs first and fix script later. they are both small so I'll push the doc changes to this branch so we can review them.
<blackboxsw> then we can land them as one
<blackboxsw> almost done w/ docs. probably 10 mins
<blackboxsw> rharper: docs pushed
<blackboxsw> rharper: one thing we could do is make sure scripts/cherry-pick emits the message (now run `fix-daily-branch -s ubuntu/<release> -d ubuntu/daily/<release>`)
<blackboxsw> so that the scripts document next step (just like new-upstream-snapshot does)
<rharper> that sounds nice
<Odd_Bloke> blackboxsw: Reviewed that PR; I'm +1 on using it locally in its current state to get focal fixed.
<blackboxsw> thanks Odd_Bloke !
<blackboxsw> Odd_Bloke: thanks, late , but I've pushed the changes to require <release> and <remote> positional parameters
<blackboxsw> good thoughts
<Odd_Bloke> blackboxsw: Nice, thanks!
<blackboxsw> Thanks Odd_Bloke got it.
<blackboxsw> rharper: I resolved Odd_Bloke's review comments on https://github.com/canonical/uss-tableflip/pull/50 if you give it a final thumbs up I can fix ubuntu/focal daily builds
<rharper> blackboxsw: yeah
<blackboxsw> thanks rharper reverted that line in the docs
<rharper> ok
<rharper> blackboxsw: where does gitcmd come from ? I don't see a source wrapper or anything
<rharper> just curious
<rharper> nm
<rharper> right in my face
<rharper> blackboxsw: ok, +1
<blackboxsw> heh rharper right gitcmd was your suggestion for me on a previous PR, to catch failures :)
<blackboxsw> I figured I'd apply it here too
<blackboxsw> and thx
 * blackboxsw fixes ubuntu/daily/focal now
<blackboxsw> ok daily focal recipe trying to build now. https://code.launchpad.net/~cloud-init-dev/+archive/ubuntu/daily/+recipebuild/2574992
<blackboxsw> d'oh forgot to sync github -> LP
<blackboxsw> ok daily build recipe  working on focal now
<blackboxsw> and daily build recipe for groovy was broken, it was building packages for focal ~20.04 instead of groovy 20.10 so there is a conflict on upload.
#cloud-init 2020-05-27
<AnhVoMSFT> @smoser @Odd_Bloke that clock jump is expected for VMs that have been pre-provisioned. cloud-init can block "waiting for media state change" for hours or days, or in some extreme cases weeks. Is there an undesirable behavior arising from the clock jump?
<AnhVoMSFT> looking at the log you provided in the ticket I could confirm from our data that the VM indeed was pre-provisioned. The prediction isn't based on the user's particular usage but rather based on the overall demand for the particular vm size/region/image...
<smoser> AnhVoMSFT: but it seems unlikely (to me at least) that that system was a pre-provision
<Odd_Bloke> smoser: Didn't Anh just confirm that it was pre-provisioned?
<Odd_Bloke> (And it sounds like the pre-provisioning doesn't happen per-user, but instead happens within a region and then the pre-provisioned instances are assigned to an account on "launch".  Given that, no user can infer whether to expect pre-provisioned VMs or not.)
<smoser> well, yes. but i didn't read that before typing. :)~
<AnhVoMSFT> yes, it was indeed a pre-provisioned VM (confirmed from cloud-init log itself, and from the backend, using the data available in cloud-init log)
 * smoser downloads the 'logs.tgz.gz' file which is in fact a gzipped gzipped tarball
<AnhVoMSFT> lol - i just clicked a couple times on 7z and finally got to the log. I did wonder why it was gzipped a couple times
<smoser> i suspect that 'ubuntu-bug' is doing something generically now that it wasn't originally doing
<smoser> or launchpad is magically gzipping files longer than X or something.
<smoser> it feels almost too perfect to me
<smoser> that the system did not generate *any* journal entries between May 17 at 17:03 and May 26 at 13:52
<Odd_Bloke> I believe pre-provisioned VMs are suspended?
<Odd_Bloke> (Maybe I'm inventing that as an explanation for this behaviour though. :p)
<smoser> well.. some time ago they told me that that was just not possible
<smoser> as i suggested that is how they should do it.
<AnhVoMSFT> whether it's suspended or not is an implementation detail that can change, I believe. I think today they're not suspended
<smoser> i suggested, as all the cloud-init magic would seem generally unnecessary if it got suspended.
<AnhVoMSFT> however they might be in the future as they figure out some way to do it (I'm not in that team but did hear some plan about it a while back)
<smoser> the platform could just say "oh, this instance id isn't ready yet - stop its cpu"
<Odd_Bloke> Yeah, I guess properly suspended wouldn't bump the timestamps the way we're seeing.
<smoser> well, it would
<AnhVoMSFT> the system is indeed eerily quiet during that time. I think we ran tcpdump and saw almost no incoming/outgoing packet during that time
<smoser> as your laptop counts uptime even when suspended
<smoser> (i think... although maybe i saw something about a kernel feature that did or did not do that)
<AnhVoMSFT> i think the cloud-init "magic" was meant to save on kernel initialization and python module loading overhead
<AnhVoMSFT> for smaller VM sizes a smaller number of IOPS is allocated to the VM and as such they would benefit more from pre-provisioning
<AnhVoMSFT> @Odd_Bloke did you guys cut an SRU yet?
<AnhVoMSFT> I saw a github release 6 days ago ubuntu/20.2-30-g8bcf1c06-0ubuntu1 - is this the SRU?
<blackboxsw> AnhVoMSFT: we just landed branches last night to fixup a daily build recipe. and yes AnhVoMSFT that 'version' of cloud-init will be what we kick off the SRU for.
<blackboxsw> AnhVoMSFT: I'll talk with the team today. I hope to queue the upload of the SRU this afternoon
<blackboxsw> and will post to this channel and the mailinglist when ready for our verification.
<Odd_Bloke> AnhVoMSFT: We expect to be doing another SRU in the next 3-6 weeks for https://github.com/canonical/cloud-init/pull/358
<falcojr> hrmph
<falcojr> looks like pyflakes doesn't provide a way to ignore a line
<falcojr> and it's complaining about my "from feature_overrides import *" line
<falcojr> can we ditch pyflakes and pycodestyle in favor of flake8?
<falcojr> doesn't make sense to use two different tools when one covers the same functionality
<falcojr> and lets you ignore lines :D
<falcojr> pyflakes github even says "If you like Pyflakes but also want stylistic checks, you want flake8, which combines Pyflakes with style checks against PEP 8 and adds per-project configuration ability."
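The per-line ignore falcojr wants looks like this under flake8; the stdlib wildcard import below stands in for the project's feature_overrides import:

```python
# flake8 honors per-line suppressions that bare pyflakes lacks.  F403 is
# flake8's code for "'from module import *' used"; math stands in here
# for the project's feature_overrides module.
from math import *  # noqa: F403

print(floor(3.7))  # star-imported names work as usual; prints 3
```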
<meena> 19:50 <falcojr> can we ditch pyflakes and pycodestyle in favor of flake8? … i thought i had recently seen a commit that got rid of flake8 xD
<powersj> falcojr, the feature flags need to import *?
<falcojr> yeah, as a means to provide a way to override them
<falcojr> discussion on it starts here
<falcojr> https://github.com/canonical/cloud-init/pull/367#issuecomment-630213500
 * powersj has some reading to do
<falcojr> huh, didn't know it was ever in the project, but it was removed here
<falcojr> https://github.com/canonical/cloud-init/commit/80dfb3b023a268d6d6204220665c2cf43eac66df
<falcojr> flake8 entails everything pyflakes and pycodestyle does
<meena> i know, i use it… elsewhere.
<rharper> Odd_Bloke: so meena 's https://github.com/canonical/cloud-init/pull/385  is good to go, except github shows it waiting on travis status, however, the two runs for the PR are green, https://travis-ci.org/github/canonical/cloud-init/builds/691054573
<rharper> meena: nice! close/open does the trick =)
<rharper> new job pending
<blackboxsw> paride: thanks for all the reviews on the ubuntu/*  branches dropping pyflakes. I've checked our daily build CI runs for focal and we are all green on lxd/kvm using the newly built debs from yesterday.   rharper Odd_Bloke paride I think we want to turn on our ec2-f CI integration test running and drop ec2-e https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-ec2-f/
<blackboxsw> ^ what do you folks think ^ on the jenkins running for ec2-f: on ec2-e: off?
<blackboxsw> or both on.
<blackboxsw> or even ec2-g ec2-f ec2-b would probably be more appropriate
<rharper> blackboxsw: e is not yet EOL, no ?
<rharper> so, +1 on ec2-g, f and b
<rharper> and I suspect we leave e on  until EoL
<blackboxsw> rharper: nope you're right
<blackboxsw> I'll enable them now. and put a PR up for server-jenkins-jobs to fix that (and the default )
<rharper> hrm, ubuntu-distro-info; I thought it used to show the EoL date (or days of support left etc)
<rharper> is that gone now ?
<rharper> or am I thinking of a different tool ?
<blackboxsw> I always looked at the raw /usr/share/distro-info/ubuntu.csv
<blackboxsw> now there are two eols :)
<blackboxsw> with esm around
<blackboxsw> well 3 actually: eol,eol-server,eol-esm
<Odd_Bloke> falcojr: Yep, I also want to switch back to flake8 for a task I have on my plate ATM.
<Odd_Bloke> falcojr: But given my current state (back pain, for those following along at home), you should feel free to go ahead and DIY.
<rharper> blackboxsw: heh
<rharper> blackboxsw: so close, 2020-07-17
<blackboxsw> ok so we'll turn it on and add a timebomb for 2020-07-17
<blackboxsw> like if date >= 2020-07-17 raise RuntimeError :)
<blackboxsw> RuntimeError('remember to turn this job off after 2020-07-17')
<rharper> hehe, no need for timebombs;  I don't think
<rharper> maybe a trello card
<Odd_Bloke> rharper: I think you were thinking of `ubuntu-distro-info --days=eol --series eoan`
<rharper> yes
<rharper> the no-input help message not the default --help message
<rharper> =/
 * rharper needs more breadcrumbs
<Odd_Bloke> rharper: I only know the minutiae of u-d-i because, well, lol: https://github.com/OddBloke/distro-info-rs
<rharper> nice!
<Odd_Bloke> I think that needs updating for ESM, actually.
<kamoser> Hi all, when I have a package list like "packages:
<kamoser> how can I ensure 2 and 3 are installed even if 1 fails?
<kamoser> Reason for this is to simplify installs across distros. So in case the first package works on Amazon Linux but fails on Ubuntu, I want the rest of the packages to install
<meena> kamoser: you could use a jinja2 template for your cloud-config
<kamoser> meena Thanks. I assume you mean write statements based on the distro like "install package if Ubuntu"
<kamoser> I will look into that, I have used Jinja with Salt previously
<meena> is everyone fleeing Salt rn?
<rharper> kamoser: what meena means is your #cloud-config file can *be* a jinja2 template directly, cloud-init itself can fill out the template with cloud-init provided values, like the distro name;   ##template: jinja\n#cloud-config\npackages:\n{{ jinja logic here}}
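Written out as a file, the template rharper sketches could look like the following; the package names are placeholders, and the bare distro variable assumes a cloud-init new enough to expose the standardized jinja keys:

```yaml
## template: jinja
#cloud-config
packages:
 - common-example-package
{% if distro == "ubuntu" %}
 - ubuntu-only-example-package
{% else %}
 - non-ubuntu-example-package
{% endif %}
```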
<rharper> kamoser: it's also a reasonable feature request,  if you'd like, file an issue at https://bugs.launchpad.net/cloud-init/+filebug
<kamoser> rharper Thanks! I will do that as well. It would be hard to guarantee the package list doesn't change in the future, and our AMIs build automatically. Appreciate the help from both of you.
<rharper> kamoser: so, you might look at using a #include URL to your list
<rharper> so you can independently modify the "packages" cloud-config from the AMI itself
<rharper> cloud-init will fetch cloud-config from a URL
<rharper> you can use object store as well, but only via http (not sure if you need it to be pulled via https);
<kamoser> rharper awesome options. thanks :)
<rharper> yw
<meena> rharper: iâ¦ disagree :P failures should failâ¦ although, installing packages does tend to be very transitivelyâ¦ <word that explains things>
<rharper> meena: I think we're on the same page ... I would not change the *default* behavior; rather, if folks wanted to ignore install failures, that seems like a reasonable option for users to control;
<rharper> meena: another thought would be to extend the config space, to a dictionary, which could specify package lists under distro keys;
<rharper> it's worth having a bug to discuss options
<meena> rharper: *nod *nod *nod
<blackboxsw> rharper: Odd_Bloke game of would you rather: our new-upstream-snapshot automatically updates to latest commit on master, and we released to groovy a few days ago so other commits have landed. Do we want to extend new-upstream-snapshot to take a commitish from which to 'snapshot' or do we want to propose a new groovy upload right now  and queue an upload for SRU with what has currently landed since 8bcf1c06
<kamoser> I changed my template to say "## template: jinja
<kamoser> #cloud-config". Should the header remain as Content-Type: text/cloud-config or should it be changed to text/jinja2? I don't see examples in the docs that clarify this point, but I am probably being dumb.
<blackboxsw> rharper: Odd_Bloke queuing a groovy upload should be 5 mins of work
<blackboxsw> +1 review time.
<blackboxsw> an extension to new-upstream-snapshot to take an optional commitish  could be helpful for future SRUs if we know that reviewers will trail authors when reviewing the upload PRs
<blackboxsw> here's the list of extra commits in tip of master at the moment https://paste.ubuntu.com/p/jScGYQWVFY/
<blackboxsw> one functional change for ubuntu, couple unit tests/CI, contributors signed up and bsd enablement for cfg mgmt (doesn't affect ubuntu-proper)
<rharper> blackboxsw: ok ...  I need to do a PR on new-upstream-snapshot too to help for the "first-time-sru" scenario; maybe you ran into it already. I currently used: new-upstream-snapshot --no-bugs --sru-bug XXXX which mostly DTRT (with the exception of not appending ~XX.YY.1 to the release version string)
<blackboxsw> right rharper https://trello.com/c/YZue6zO9/35-new-upstream-snapshot-needs-to-dtrt-and-append-proper-prefix-when-a-release-is-stable-on-first-sru
<rharper> blackboxsw: heheh
<blackboxsw> we have a coupe of onetime items that we need to fixup for SRU
<rharper> \o/
<blackboxsw> I added your mention to that item, as I was going to manually 'handle'  that in the ubuntu/focal SRU for cloud-init
<blackboxsw> I think it'd be better to sort the new-upstream-snapshot for that
<blackboxsw> then future us can forget all about that pain
<blackboxsw> ok I'll put up a minor new-upstream-snapshot to take an optional commitish param
<blackboxsw> rharper: want me to add functionality to inspect ubuntu.csv or are you already working that
<blackboxsw> hrm hold the phone I think new-upstream-snapshot takes a positional commitish
<rharper> blackboxsw: not working anything at the moment;  fixing up some curtin vmtest issues pre-sru
<blackboxsw> rharper: want to weigh in on this level of manual for ubuntu/xenial ?
<rharper> ?
<blackboxsw> rharper: https://github.com/canonical/cloud-init/pull/393
<rharper> looking
<blackboxsw> since we did a one-off new-upstream-snapshot that we didn't release, the tool doesn't handle consolidating multiple UNRELEASED changelog entries into one from two separate new-upstream-snapshots
<blackboxsw> so I had to manually move around a couple of entries
<blackboxsw> per steps 1-3
<rharper> ah
<rharper> yes, I've always worried about these
<rharper> I think instead of writing "steps", it'd be worth showing a diff between what the new-upstream-snapshot does; and what we actually want to commit ?
<blackboxsw> yeah I didn't really want to add specific debian/changelog content parsing logic in new-upstream-snapshot if we can avoid it
<blackboxsw> sure that works
<rharper> so do it once with no changes, then again with your manual steps and show the changes ...  folks can look at the delta and see what they need to change during changelog edit I think
<rharper> and I suspect, if we can automate that combination of un-released snapshots, it would be a good backlog item to do (it's not often but nice for tooling to handle this stuff)
<Odd_Bloke> blackboxsw: +1 on cutting a new groovy release (unless I'm too late and you've already committed to not doing that, then don't sweat it :p)
<blackboxsw> Odd_Bloke: cutting groovy is also easy and I'll push that now. since I have to rework ubuntu/xenial anyway a bit to show a manual diff let's do it
<blackboxsw> PR coming Odd_Bloke
<Odd_Bloke> falcojr: https://github.com/canonical/cloud-init/pull/392 <-- we're unblocked on the flake8 issue, thanks to powersj
<blackboxsw> Odd_Bloke: https://github.com/canonical/cloud-init/pull/394
<falcojr> lol, I was about to push basically the same thing
<Odd_Bloke> blackboxsw: Approved.
<blackboxsw> Odd_Bloke: thanks uploading
<blackboxsw> at least our frequency of groovy uploads is healthy
<blackboxsw> now to get SRUs healthy
<powersj> falcojr, sorry :P
<falcojr> haha, np
<blackboxsw> community notice: ubuntu/groovy-proposed] cloud-init 20.2-38-g8377897b-0ubuntu1 (Accepted)
<falcojr> over time we should fix some of those. E.g., whitespace around operators or the hanging indents
<blackboxsw> community notice: ubuntu Groovy 20.10 has an upload representing tip of master that will show up in cloudimages in the coming days
<powersj> agreed
<falcojr> flake8 is a little more picky than the other tools were
<falcojr> also, extend the max line length
 * falcojr runs away
<Odd_Bloke> I would like to keep to 80-character lines, but I would not object to figuring out a way to blacken the codebase over time.
<Odd_Bloke> Upstream have been a little reticent to engage in partial formatting.  It's a reasonable position to take, but it does make it hard for projects with a lot of history to move, because I don't really want `git blame` to be forever ruined by a formatting commit.
<Odd_Bloke> Well, I guess more than a little reticent: https://github.com/psf/black/issues/134
<blackboxsw> rharper: here's the ubuntu/xenial  SRU branch https://github.com/canonical/cloud-init/pull/395
<blackboxsw> if that approach makes sense I'll have something comparable for bionic eoan and focal I believe
<rharper> blackboxsw: lemme look
<kamoser> Some lines omitted but why doesn't this work on EC2 user data? "Content-Type: text/jinja2;
<kamoser> ## template: jinja
<kamoser> #cloud-config
<kamoser> packages:
<kamoser>  - {{ v1.distro }}"
<kamoser> Just says CI_MISSING_JINJA_VAR/distro
<rharper> kamoser: one sec
<rharper> kamoser: cloud-init --version ? what are you currently running there?  and Ubuntu image or Amazon Linux ?
<kamoser> ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20191002 (ami-04b9e92b5572fa0d1)
<kamoser> and /usr/bin/cloud-init 19.2-36-g059d049c-0ubuntu1~18.04.1
<kamoser> To test it, I'm running /usr/bin/cloud-init -d init then I'm looking at the file /var/lib/cloud/instance/cloud-config.txt which is where I see the statement CI_MISSING_JINJA_VAR/distro
<rharper> y
<kamoser> keeps kicking me. reason is, I am just trying to debug why my user data won't work on ec2.
<rharper> kamoser: ok, I can reproduce, that *seems* like  a bug,  you should be able to use v1.distro or distro, the original jinja commit mentions
<rharper> Additionally, any standardized instance-data.json keys scoped below a
<rharper>     '<v#>' key will be promoted as a top-level key for ease of reference in
<rharper>     templates. This means that '{{ local_hostname }}' is the same as using the
<rharper>     latest '{{ v#.local_hostname }}'.
<rharper> try with just {{ distro }}
<rharper> for now
<rharper> hrm, nope; thats broken too
<kamoser> Bummer. Should I just use a bash script as a workaround? Basically what I'm trying to achieve is: Ubuntu doesn't have package "A", so I was writing Jinja that said "if distro is not Ubuntu" to go with my cloud-config packages list.
<kamoser> I was hoping the package list would come out as e.g. RHEL has packages A, B but Ubuntu only has packages B. So I inserted a Jinja if statement into the middle of my cloud-config "packages" list.
<rharper> kamoser: well, we need to fix our bug, but that won't help you for now;  you can in your shell script use fetch the distro value via /run/cloud-init/instance-data.json  ;;; use jq on it or whatever else you need
<kamoser> No worries, will do.
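A sketch of the shell-script side rharper suggests, done in Python for clarity: parse /run/cloud-init/instance-data.json and read the standardized v1.distro key. The sample payload is trimmed and illustrative; the real file carries many more keys, and older releases may lack the v1 keys entirely:

```python
import json

def distro_from_instance_data(raw: str) -> str:
    """Roughly equivalent to:
    jq -r .v1.distro /run/cloud-init/instance-data.json"""
    return json.loads(raw)["v1"]["distro"]

# Trimmed, illustrative payload (not a real capture):
sample_instance_data = '{"v1": {"distro": "ubuntu", "distro_release": "focal"}}'
```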
<rharper> kamoser: ah, I think on bionic, we don't have the newer keys available;
<rharper> kamoser: this will work on a focal image, but not bionic *yet*;  blackboxsw is starting our SRU process where we bring back the newer features into the older releases, so in a few weeks this should work on bionic;
<rharper> kamoser: and you can see the current keys available with : cloud-init query --all , and test them out with cloud-init query -f "{{ variable }}"  ;  putting in "{{ distro }}" returns exactly what you saw on bionic;  here's it working on focal, for reference  https://paste.ubuntu.com/p/Db8djv3nNc/
<kamoser> ok awesome. great tip
<rharper> kamoser: also, cloud-init devel render --user-data </path to your template file>  to render the whole thing
<blackboxsw> yeah +1 kamoser SRU will get you that 'distro' jinja variable :/ sry 'bout that
<blackboxsw> a bit delayed
<kamoser> rharper excellent, now I can test a lot faster without startup/shutdown of the instance. No problem about the variable... only thing is, our customer specifically requested Ubuntu 18.04. But I am going to check if we can get the newer version
<rharper> kamoser: it should be available in a few weeks;  if you *cant* wait till the SRU is done, you can try using our daily-ppa for bionic which runs cloud-init master built for bionic,  https://launchpad.net/~cloud-init-dev/+archive/ubuntu/daily
<kamoser> Thanks!
<blackboxsw> and kamoser you could also test using ubuntu groovy 20.10 which already has the fix
<blackboxsw> if that ubuntu release is available for you :/
<blackboxsw> thanks for the review rharper I've pushed ubuntu/bionic in the same light https://github.com/canonical/cloud-init/pull/396
#cloud-init 2020-05-28
<paride> blackboxsw, ec2-f was supposed to be enabled: https://github.com/canonical/server-jenkins-jobs/pull/120
<blackboxsw> paride: hahah, oops was mapping that to minutes incorrectly :.
<blackboxsw> thanks approved
<blackboxsw> paride: and landed
<paride> great
<blackboxsw> paride: two more back at you https://github.com/canonical/server-jenkins-jobs/pull/121
<paride> merged
<blackboxsw> paride: last one from me: https://github.com/canonical/server-jenkins-jobs/pull/122
<paride> blackboxsw, merged, thanks!
<blackboxsw> paride: it's probably past your EOD; do you have time for me to put up another PR adding curtin-g CI jobs and cloud-init kvm|lxd-g jobs too?
<blackboxsw> we may load the CI machines to near breaking point
<paride> torkoal will explode :) blackboxsw I am going out, but I can review/merge tomorrow morning
<blackboxsw> ok one more cloud-init https://github.com/canonical/server-jenkins-jobs/pull/123  for lxd/kvm groovy
<paride> ok just in time
<blackboxsw> heh see ya paride tomorrow it is
<blackboxsw> :)
<paride> let me see
<blackboxsw> thanks again
<blackboxsw> if an issue, I'll blame the new guys :)
 * blackboxsw is looking at falcojr there :)
<blackboxsw> falcojr: I may have a couple of minor CI branches for you to look at today
<falcojr> Heh, sounds good
<paride> blackboxsw, merged, thanks
<blackboxsw> thanks paride have a good one
 * paride declares EOD for real :)
<blackboxsw> falcojr: on the hook from here on out ! :)
<blackboxsw> falcojr: as a note, our jenkins is behind a firewall at https://jenkins.ubuntu.com/server/
<blackboxsw> so you'll need VPN up to access it
<blackboxsw> and any job changes we make are auto-syncd to jenkins every 15 mins, but we can manually pull with the job: https://jenkins.ubuntu.com/server/job/admin-jobs-update/
<blackboxsw> so if we break something we can fix quickly if needed
<blackboxsw> falcojr: for you if you end up having time today https://github.com/canonical/server-jenkins-jobs/pull/124
<blackboxsw> if not, no biggie
<falcojr> I should be able to today
<blackboxsw> thank you sir.
<falcojr> what does proposed mean in the context of these jenkins jobs?
<blackboxsw> falcojr: sorry, missed this; proposed for the jenkins jobs means we run: pull-lp-source curtin "$RELEASE"-proposed
<blackboxsw> it means we are testing the debian source bits that have been uploaded for curtin to the xenial-proposed bionic-proposed etc. deb pocket
<falcojr> gotcha, thanks
<blackboxsw> so it is testing what we plan to SRU, as those bits get queued in the $release-proposed pocket for testing until SRU verification is complete
<blackboxsw> then when we pass verification the bits get published to $release-updates pocket
<blackboxsw> falcojr: https://people.canonical.com/~ubuntu-archive/pending-sru.html lists packages that are currently queued in the *-proposed pockets and haven't cleared SRU verification yet
<blackboxsw> generally there'll be links to a project bug  (or multiple bugs) related to that SRU verification that needs completion
<blackboxsw> another minor ci branch if anyone wants to grab it https://github.com/canonical/cloud-init/pull/400
<blackboxsw> our cloud_tests need to allow us to support groovy release
<blackboxsw> awesome falcojr https://github.com/canonical/server-jenkins-jobs/pull/124#discussion_r432033139  and thanks you are right
<blackboxsw> pushed fix
<rharper> blackboxsw: I reviewed the ubuntu/bionic;  I had a question in the review;  I wasn't sure if what you've pushed to ubuntu/bionic is the *final* form (it looks like my local branch following your steps has a diff from yours)
<blackboxsw> ahh rharper I held off on my manual patches, will apply them now and push
<rharper> blackboxsw: I see
<rharper> lemme diff to my previous commit
<blackboxsw> +1 rharper I'll wait to push
<rharper> \o/
<blackboxsw> ok awesome sauce. pushing manual changes in 2 mins
<rharper> ok, then I'll refresh
<rharper> though I think your commit message will need more than "test patches"
<rharper> for what you're pushing to ubuntu/$release right ?
<blackboxsw> rharper: force pushed
<blackboxsw> yeah rharper I manually did the following
<blackboxsw> git commit -m 'd/control: drop python3-nose and python3-unittest2' debian/control
<blackboxsw>  git commit -am 'changelog: add content for dropping python3-nose and python3-unittest2'
<rharper> +1
<rharper> OK, +1 on your update; we diff the same there as well
<blackboxsw> then I git rebase -i HEAD~5 and reordered the commits squashing the debian/changelog ones into 3703b789d3ad3a0495b204118a44a19ed8b31da1 the new-upstream-snapshot changelog entry
<blackboxsw> thanks rharper
<blackboxsw> eoan and focal are more straightforward
<rharper> blackboxsw: ok, ping me when you've got them up, I'll review
<blackboxsw> rharper: they're up
<blackboxsw> already
<blackboxsw> focal: https://github.com/canonical/cloud-init/pull/398
<blackboxsw> eoan: https://github.com/canonical/cloud-init/pull/397
<rharper> nice
<rharper> blackboxsw: in your eoan PR, you show applying the patch first, then new-upstream-snapshot ? any reason for that ?
<rharper> shouldn't that go after the snapshot ?
<blackboxsw> rharper: if we apply it first, then new-upstream-snapshot takes care of it in the right order without a rebase afterward
<rharper> hrm, ok
<blackboxsw> I end up manually rebasing after to make sure the commit is in the changelog before we tag it anyway
<rharper> why did we not do that for xenial ?
<blackboxsw> rharper: we could have and should have landed it first; I only thought about it after we got through bionic
<blackboxsw> but either way is reasonable. I can redo xenial and bionic if you like, but on xenial and bionic we have to rebase anyway, because we re-order the debian/changelog content lines after new-upstream-snapshot is 'done'
<rharper> blackboxsw: ok
<blackboxsw> so I'll still need to rebase -i and squash the manual re-order of debian/changelog commits
<blackboxsw> bionic and xenial only
<rharper> blackboxsw: ok, I'll review eoan and focal;  we can re-review xenial/bionic if we want to re-do
<blackboxsw> hrm rharper so on eoan I forgot to push my changes. done now
<blackboxsw> and focal push fixed https://github.com/canonical/cloud-init/pull/398
<blackboxsw> rharper: benefit of patching before new-upstream-snapshot is we can add the ~20.04.1 suffix so new-upstream-snapshot knows to redact bug #s and require an SRU_BUG_NUMBER
<blackboxsw> for focal specifically
<rharper> blackboxsw: ok, I'll re-review eoan/focal now
<blackboxsw> thanks powersj for jumping in on all the flake stuff
<powersj> blackboxsw, no worries wanted a couple easy ones :)
<rharper> blackboxsw: ok, eoan/focal good to go
<blackboxsw> thanks rharper
<rharper> yw
<blackboxsw> rharper: thanks I think I missed a review on ubuntu/xenial
<blackboxsw> but otherwise all others are uploaded
<blackboxsw> if you get a chance to double check that'd be great
#cloud-init 2020-05-29
<blackboxsw> I reworked ubuntu/xenial patch ordering so it can be done without the rebase
<rharper> blackboxsw: ubuntu/xenial branch approved
<blackboxsw> thanks rharper
<blackboxsw> rharper: I've pinged in ubuntu-release for both curtin and cloud-init uploads to xenial, bionic, [eoan] and focal
<rharper> blackboxsw: nice, thanks!
<blackboxsw> lucasmoura and falcojr: since uploads are queued for Ubuntu SRU testing to release into Xenial, Bionic, Eoan and Focal, we can start carving out verification tasks for next week for most of the major clouds.
<blackboxsw> basically we work from templates here https://github.com/cloud-init/ubuntu-sru
<blackboxsw> for each cloud to which we have credentials, we run a set of upgrade verification scripts manually, per the examples here: https://github.com/cloud-init/ubuntu-sru/tree/master/sru-templates/manual
<blackboxsw> if you both have a cloud preference for your verification run, put your avatar on the related trello card from the SRU cloud-init 20.2 trello board
<lucasmoura>  blackboxsw done, I will start with aws
<blackboxsw> excellent, I've uploaded a deb package for each series to https://launchpad.net/~cloud-init-dev/+archive/ubuntu/proposed
<blackboxsw> If the SRU team hasn't approved our pending and queued uploads into xenial|bionic|eoan|focal-proposed by Monday, we can work with the 20.2 debs in our proposed PPA
<blackboxsw> in a lot of our verification scripts in https://github.com/cloud-init/ubuntu-sru/tree/master/sru-templates you'll see the ability to use our -proposed PPA  instead of the official upstream *-proposed debian pocket
<blackboxsw> today I'm going to comb through the list of commits involved in this SRU upload to create a cheat sheet of each functional change that we should verify separately before we say that cloud-init SRU verification is complete
<faa> hello, I've added a second interface and regenerated cloud-init's seed.iso (NoCloud format v1), but after boot the second interface is down. How do I regenerate the guest config? cloud-init clean -r works, but that's a very hard way
<octoboar> Hi everyone! I'm not sure if cloud-init fits my use case, but here's what I'd like to do. I want to read an auth token from a filesystem, then use an HTTP datasource that requires auth (in-URL or with an 'Authorization' header). Is this possible to achieve without a custom datasource?
<rharper> blackboxsw: https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-groovy  ;  this recipe references ubuntu/groovy (which won't be there until series-h is released, it should be ubuntu/devel  ...
<blackboxsw> ahh right/correct thx rharper want me to fix it or you?
<rharper> octoboar: I think so; DataSourceMAAS does something similar: it provides an oauth token value and then hits the specified URL with the token
<rharper> blackboxsw: I can fix
<blackboxsw> rharper: also drop ubuntu/daily/groovy
<blackboxsw> as that also won't exist until Ubuntu h
<blackboxsw> because we have no need until we are cherry-picking
<rharper> blackboxsw: we should just delete it
<rharper> we already have daily-ubuntu-devel building
<blackboxsw> rharper: yep, there is no ubuntu/daily/groovy branch
<blackboxsw> ahh right
<blackboxsw> right
<blackboxsw> delete the job
<rharper> ok, I've left it at build-on-request
<blackboxsw> recipe rather
<blackboxsw> thanks
<rharper> we'll need in 5 months
<rharper> I'll send a note to paride
<blackboxsw> thanks
<octoboar> rharper: So I guess I could simulate DataSourceMAAS server. I'm going to check out this option.
<octoboar> Or could I use cloud.cfg: NoCloud: seedfrom: http://datasource/<token yanked from a file>
<octoboar> Generally I want to insert a token from a file into the datasource url at boot time, before cloud-init starts doing anything.
<rharper> hrm
<rharper> octoboar: another option would be to encode the token into a field in the smbios table; no file present, but you can control the endpoint dynamically (which sounds like what you want more than an oauth token)?
<rharper> like so, -smbios type=1,serial=ds=nocloud-net;s=http://10.10.0.1:8000/   -- see https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
<octoboar> rharper: My scenario is kinda weird. I want to create an image for 'home lab' bare metal servers. Users would 1) copy the image to a disk, 2) put a personal auth token into a config on a special partition, 3) and then boot from the disk.
<octoboar> Hmm, so whatever datasource I use, what I need is to copy the token from my user-accessible config into cloud.cfg. Going to try to do this with systemd before cloud-init runs.
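octoboar's plan could be sketched as below: a script run before cloud-init starts (e.g. from a oneshot systemd unit ordered `Before=cloud-init.service`) splices the user-supplied token into a NoCloud `seedfrom` URL. All paths and the URL are hypothetical; temp files stand in for the token partition and /etc/cloud/cloud.cfg.d so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch: inject a user-provided token into a NoCloud seedfrom URL
# before cloud-init reads its config. Stand-in paths:
token_file=$(mktemp)      # stands in for e.g. a file on the token partition
cfg_dir=$(mktemp -d)      # stands in for /etc/cloud/cloud.cfg.d
printf 'SECRET123\n' > "$token_file"

# Strip whitespace from the token and write a datasource config fragment.
token=$(tr -d '[:space:]' < "$token_file")
cat > "$cfg_dir/99-seed.cfg" <<EOF
datasource:
  NoCloud:
    seedfrom: "http://datasource.example/$token/"
EOF
cat "$cfg_dir/99-seed.cfg"
```

On a real image the script would write to /etc/cloud/cloud.cfg.d/ and the systemd ordering would guarantee the fragment exists before cloud-init's datasource detection runs.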
<blackboxsw> rharper: for monday, we need to address the SRU_BLOCKERS in the current cloud-init SRU xenial: https://github.com/canonical/cloud-init/pull/406
<blackboxsw> I added two quilt patches to properly disable the upstream behavior that we don't want to surface on xenial
<blackboxsw> if that looks good I'll quickly reflect those changes via git cherry-pick to bionic and eoan
<rharper> blackboxsw: hrm, ok;  xenial does not have netplan by default (but there are some exceptions, RAX I believe has netplan/systemd in their image, but I think it also has cloud-config to control the specific renderer; we can follow up)
<blackboxsw> rharper: yeah I just wanted to drop any prioritization overrides we were setting as that was focal ++
<blackboxsw> but wondering the best way for us to review such a PR
<blackboxsw> do you want manual steps to generate it? or should we review the output diffs and make sure quilt push|pull build-package etc work fine with expected end result
<rharper> blackboxsw: let's revisit this on monday; I want to review the commit we landed and the discussion
<blackboxsw> sounds good. I'll hold off on bionic and eoan branches
<blackboxsw> since we should discuss and bring in falcojr, lucasmoura and Odd_Bloke on that discussion
<rharper> yeah
#cloud-init 2020-05-30
<Kruge> hi
<Kruge> Apologies if this has been asked a million times before but, in a non-cloud environment, is it possible to embed a cloud-config into a VM template and have it run at first boot?
<Kruge> Or should I just stick to doing ugly hacks like bash scripts which delete themselves?
<tds> Kruge: look at nocloud
<tds> that said, do you need lots of identical VMs? that seems like an odd situation
<Kruge> That would definitely be an odd situation
<Kruge> I was thinking more for deploy-time customisation of templates
<tds> if you have varying config per VM, you can bake the cloud-config into a little cd image for nocloud, rather than the template itself
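tds's suggestion above can be sketched like this: write per-VM `user-data` and `meta-data` files and pack them into a small ISO that NoCloud will find by its `cidata` volume label. The hostname and instance-id are made up, and a temp dir stands in for a real working directory; the ISO step is guarded in case genisoimage isn't installed.

```shell
#!/bin/sh
# Sketch: build a NoCloud seed ISO for per-VM customisation.
work=$(mktemp -d)
cat > "$work/user-data" <<'EOF'
#cloud-config
hostname: vm01
EOF
cat > "$work/meta-data" <<'EOF'
instance-id: vm01
EOF
# The volume label must be "cidata" for the NoCloud datasource to find it.
if command -v genisoimage >/dev/null 2>&1; then
  genisoimage -quiet -output "$work/seed.iso" -volid cidata -joliet -rock \
    "$work/user-data" "$work/meta-data"
  echo "seed ISO written to $work/seed.iso"
else
  echo "genisoimage not installed; skipping ISO creation"
fi
```

Attach the resulting seed.iso as a CD drive on first boot. On Ubuntu-family hosts, `cloud-localds` (from cloud-image-utils) wraps the same steps in one command.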
#cloud-init 2020-05-31
<donthurtme> anybody here having luck deploying lxd containers with both ansible and cloud-init ?
<donthurtme> like im trying to apply a new profile to my brand new container
<donthurtme> I've tried setting up my
<donthurtme> config:
<donthurtme>         user.user-data:
<donthurtme> this new profile gets applied
<donthurtme> but its cloud-init config is never loaded
<donthurtme> here is what my cloud-init config looks like: https://pastebin.com/nHDvRrq7
<donthurtme> btw all my containers run on debian/buster/cloud
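For reference, an lxd profile carrying cloud-config the way donthurtme describes might look like the hypothetical fragment below. A common pitfall matching this symptom is the value shape: `user.user-data` takes a *string*, so the cloud-config must be a YAML block scalar (`|`) beginning with `#cloud-config`.

```yaml
# Hypothetical lxd profile fragment; the package name is made up.
config:
  user.user-data: |
    #cloud-config
    packages:
      - htop
```

Note that cloud-init only consumes user-data on an instance's first boot, so attaching such a profile to an already-initialized container generally has no effect; launching a fresh container from the profile (or clearing cloud-init's state inside it) is usually needed.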
<faa> hello, I've added a second interface and regenerated cloud-init's seed.iso (NoCloud format v1), but after boot the second interface is down. How do I regenerate the guest config? cloud-init clean -r works, but that's a very hard way
<meena> you'd think adding a new network interface would invalidate the node id
<faa> but this regenerates ssh keys and re-runs all the tasks :(
<faa> the file /var/lib/cloud/instances/instance-id/obj.pkl contains the new data, but the new interface is not up
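For the situation faa describes, a NoCloud seed's `network-config` file (version 1 format, alongside user-data and meta-data) declaring both interfaces might look like the sketch below; interface names and the DHCP choice are assumptions.

```yaml
# Hypothetical NoCloud network-config, v1 format, with both NICs declared.
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: dhcp
  - type: physical
    name: eth1
    subnets:
      - type: dhcp
```

Because cloud-init caches per-instance state, a changed seed is typically not re-applied unless the instance-id in meta-data changes or the cache is cleared, which is why `cloud-init clean -r` "works" but also re-runs first-boot modules such as ssh key generation.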
