#cloud-init 2014-06-02
<smoser> harlowja_, https://bugs.launchpad.net/cloud-init/+bug/1315501 is fun.
<harlowja_> damn, u guys changed to interfaces.d
<harlowja_> didn't know that existed (interfaces.d)
<smoser> yeah, its nice. but not so nice. all at the same time.
<harlowja_> :)
<harlowja_> smoser so that shouldn't be too hard to fix though, depending how u want to do this
<smoser> so i think the right thing to do is:
<smoser>  * read all interfaces.d files (in the expected order).
<harlowja_> did the format for those files change?
<smoser>  * identify each interface and the file it was originally found in
<smoser>  * update the file that it was found in. 
<smoser> alternatively, we could put a cloud-init file down that just was last in the order (and trumped the others)
<smoser> the format didn't really change, the cloud image build process just started taking advantage of the fact that /etc/network/interfaces can now do "#include interfaces.d" or whatever the syntax is there.
<harlowja_> ah
<harlowja_> u can probably then use my handy-dandy parser i suppose
<harlowja_> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/distros/net_util.py
<harlowja_> and then scan all files, identify ones that u want to write, and as u said rewrite ones that match
<harlowja_> *and leave the old way of doing this for pre-trusty?
<smoser> well, ideally, the new way would still work.
<smoser> right?
<smoser> because /etc/network/interfaces would not contain the 'source-directory interfaces.d'
<smoser> or if it did, we'd still order the stuff all correctly and know where to replace.
<smoser> i really dont know the best way to do it. between "update-the-right-stanza" (if there is even 1 stanza, they may be additive). and "write a 99_cloud-init" config file that just trumps at the end.
<harlowja_> ya, damn them for making this so complex :-P
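[Editor's note: the layout under discussion looks roughly like this; the exact directive and filenames vary between releases, so treat this as a sketch rather than what any particular image ships.]

```text
# /etc/network/interfaces (trusty-era cloud image)
auto lo
iface lo inet loopback
source /etc/network/interfaces.d/*.cfg   # or: source-directory interfaces.d

# /etc/network/interfaces.d/eth0.cfg
auto eth0
iface eth0 inet dhcp
```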
#cloud-init 2014-06-05
<Wulf> Hello
<Wulf> Where can I get a complete list of things a cloud-init installation will do?
<smoser> Wulf, not a good list, no. 
<Wulf> maybe I can start with a not-so-good list
<harmw> Wulf: http://cloudinit.readthedocs.org/en/latest/
<smoser> so many improvements to cloud-init to do.
<harmw> yea yea, and so damn little time
<harmw> I know the drill :>
<harmw> sounds like the stuff I keep telling people at work every day as well
<harlowja> anyone can help make http://cloudinit.readthedocs.org/en/latest/ better btw
<harlowja> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/rtd/ 
<harlowja> ^ this is where the above site comes from
#cloud-init 2014-06-06
<smoser> harmw, powerpc cirros almost working
<smoser>  http://paste.ubuntu.com/7598223/
<harlowja> i can finally run my G3 powermac
<harlowja> woot
<Wulf> Hello again..
<Wulf> I'm trying to add an ssh public key to my user but the ~/.ssh directory is never created
<Wulf> http://codepad.org/76MkwitT is the config. The user is created, "passwd -l" is run, sudoers entry is created
<Wulf> any help with that one please?
<Wulf> never mind, found the problem and wrote a bug report
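[Editor's note: the codepad paste is gone; here is a minimal sketch of the kind of cloud-config Wulf describes (user creation, locked password, sudo entry, ssh key), using standard cloud-init keys. The name and key material are placeholders.]

```yaml
#cloud-config
users:
  - name: wulf                     # placeholder
    lock_passwd: true              # the "passwd -l" Wulf mentions
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-rsa AAAAB3... wulf@example
```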
<harmw> smoser: nice!
<harmw> it's sad I've not had a chance to hook my cubietruck ARM device into openstack, for testing the almighty cirros
<JoeHazzers> how do i know in what order modules are run?
<smoser> Wulf, bug link ?
<smoser> JoeHazzers, its by the config
<smoser> see /etc/cloud/cloud.cfg .
<smoser> cloud_init_modules runs "as early as possible" (not guaranteed network access)
<smoser> cloud_config_modules runs "as soon as network is up"
<smoser> cloud_final_modules runs "rc.local time frame"
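[Editor's note: a sketch of the three module lists smoser is describing, as they appear in /etc/cloud/cloud.cfg; the module names shown are a small, illustrative subset and vary by distro.]

```yaml
cloud_init_modules:        # "as early as possible", network not guaranteed
 - bootcmd
 - write-files
 - users-groups
cloud_config_modules:      # "as soon as network is up"
 - ssh-import-id
 - runcmd
cloud_final_modules:       # "rc.local time frame"
 - scripts-user
 - final-message
```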
<ddosia> hello everybody
<ddosia> could someone tell me, is it possible to use #include and #cloud-config instructions at the same time?
<smoser> ddosia, multipart input
<smoser> or just 
<smoser> #include
<smoser> http://my.cloud.config.url
<smoser> http://some.other.thing
<smoser> you can do multipart input in one of 2 ways:
<smoser>  a.) cloud-config-archive
<smoser>  b.) mime-multipart
<ddosia> smoser: thanks
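[Editor's note: smoser's two options sketched out. The URLs are the placeholders from his example, and the archive entries are illustrative; a cloud-config-archive is a YAML list whose items can carry a MIME type and content.]

```text
# Option 1: an "#include" document; each part is fetched and processed in order
#include
http://my.cloud.config.url
http://some.other.thing

# Option 2: a cloud-config-archive combining part types in one document
#cloud-config-archive
- type: text/cloud-config
  content: |
    hostname: example
- type: text/x-shellscript
  content: |
    #!/bin/sh
    echo hello
```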
#cloud-init 2014-06-07
<Wulf> smoser: https://bugs.launchpad.net/cloud-init/+bug/1327065
#cloud-init 2014-06-08
<redhot1> hi
<redhot1> I've got some simple question
<redhot1> I want to read variable in cloud-init config
<redhot1> http://pastebin.com/racmcWwC
<redhot1> I want pass the value of 'hostname' and 'public-ipv4' (from meta-data) into shell script
<redhot1> is it possible to do?
<redhot1> i've seen page http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/ where guys use syntax like addr: $public_ipv4:4001
<redhot1> in my case it doesn't work, it just passes the string "$public_ipv4"
<redhot1> can someone help me? :)
#cloud-init 2015-06-01
<devicenull> can I configure cloud-init to not do anything if it can't reach some kind of metadata service?
<devicenull> I keep hitting issues where it'll fail to reach the metadata server, and fall back to the OS's default config
<strikov> devicenull: If you can modify the image you may want to overwrite datasource_list: [....] in the /etc/cloud/cloud.cfg. This is a list of datasources which cloud-init tries to use and you may leave just a single entry there (one that works).
<devicenull> yea, I think I figured it out
<devicenull> having ', None' at the end of that would cause it to ignore failures
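[Editor's note: a sketch of strikov's suggestion. Listing only the datasource you expect, and leaving out the trailing None (the catch-all fallback devicenull mentions), makes cloud-init fail rather than fall back to defaults; the filename and datasource name are illustrative.]

```yaml
# /etc/cloud/cloud.cfg.d/91_datasource.cfg (illustrative filename)
datasource_list: [ Ec2 ]   # no trailing "None": don't fall back on failure
```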
#cloud-init 2015-06-02
<larsks> smoser harlowja : when you're around, do you have an opinion on https://bugs.launchpad.net/cloud-init/+bug/1461201 ? This attempts to make the check for systemd less dependent on distro naming...
<smoser> larsks, seems reasonable. 
<larsks> Thanks for taking a look.  I want to carry this (or something similar) in our RHEL packages for 0.7.6 to ensure that they work on derivatives.
<harlowja> seems fair to me, want to make a branch and put that one up?
<larsks> harlowja: well, it's here: https://github.com/larsks/cloud-init/tree/bug/detect-systemd
<harlowja> ah, ok
<larsks> Can you use that?
<harlowja> wrong system :-P
<larsks> I'll see if I can figure out bzr this evening.
<harlowja> orrr http://stackoverflow.com/questions/20273368/launchpad-pull-request ;)
<harlowja> * http://stackoverflow.com/a/20273376 
<harlowja> yes i know, bzr is super-awesome
<harlowja> and we could really now go through the stackforge repo
<larsks> The neat thing is that I am using git locally with bzr:: remotes, so I can't tell :)
<harlowja> ah
<harlowja> maybe its easy then, never tried doing that 
<larsks> But in any case I will submit that as an actual PR later today. 
<harlowja> cools
#cloud-init 2015-06-03
<impaque> hey guys, quick question: how can i use the variables provided by the (cloudstack) metadata, such as local-hostname, in my scripts?
<impaque> does cloud-init expose them somehow somewhere?
<larsks> It does not, currently (wish it did!).  You can talk to the metadata service yourself using a simple HTTP api.
<larsks> There is a 'cloud-init query' command, but it appears to be unimplemented at this time.
<impaque> larsks: thanks!
<impaque> although, i believe they *are* exposed in cloud-config scripts
<impaque> from some coreOS page:
<impaque> #cloud-config  coreos:   fleet:       public-ip: $public_ipv4       metadata: region=us-west
<impaque> (merged to one line, but you get the idea)
<impaque> hmmmm
<larsks> Note that coreos has their own version of cloud-init.
<impaque> ah :/
<smoser> larsks, yeah, we need cloud-init query
<smoser> so... impaque we will fix that for you in 2.0... its in the plans.
<smoser> cloud-init will expose (via json or versioned python module/access) objects to get you that stuff.
<impaque> smoser: so basically, the only way to get that meta-data is by writing a module, if i get that correctly, via cloudinit.util?
<weaseal> does anyone know if it is possible to detect AWS resources such as RDS, ELB, etc by only feeding in a VPC ID?
<weaseal> using cloud-init
<smoser> "detect" ?
#cloud-init 2015-06-04
<larsks> harlowja: btw, submitted that pr yesterday.  And now it has tests, too.
<Odd_Bloke> smoser: I'm looking at fixing https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1411582 in cloud-init.
<Odd_Bloke> smoser: My suggested fix is to shift the udev rule that creates /dev/disk/azure/... in to cloud-init, and then teach cloud-init to dereference /dev/disk/... symlinks.
<Odd_Bloke> smoser: We could also look at making the directory created more generic (e.g. /dev/disk/cloud-init/...), so that we have a single place to look for stuff on different clouds.
<Odd_Bloke> (This would, for example, allow us to answer 'do I have an ephemeral disk?' by just looking at /dev/disk/cloud-init/ephemeral or whatever)
<Odd_Bloke> smoser: Thoughts?
<smoser> Odd_Bloke, well, looking forward, there is a plan for a block device usage language. that explicitly declares how disks should be used. this will support raid, lvm, bcache ... 
<smoser> that is happening for curtin usage.
<smoser> i'd like for cloud-init to be able to take that language and render it into being also.
<smoser> the added piece that cloud-init needs is kind of what you were mentioning. curtin will be fed disk config like http://paste.ubuntu.com/10939715/ .
<smoser> but ideally cloud-init can then allow substitution of things like 'ephemeral0' where it finds out what that is.
<harlowja> larsks awesome
<Odd_Bloke> smoser: Right, is that something that can happen now?  We've got pressure from Azure to get this fixed.
<Odd_Bloke> smoser: (Also, will we realistically ever take the curtin language in to 0.7.x?)
<smoser> Odd_Bloke, probably not
<smoser> Odd_Bloke, so i think i'm ok with the overall idea.
<smoser> then cloud-init would allow users to specify 'ephemeral0' as they can elsewhere.
<smoser> and on azure that would map to /dev/cloud-init/ephemeral0
<smoser> i think thats what you're saying
<Odd_Bloke> smoser: Yeah, it would be /dev/disk/cloud-init/ci-ephemeral or somesuch.
<Odd_Bloke> smoser: Ben proposed this udev config: http://paste.ubuntu.com/11566987/
<smoser> it doesn't seem unreasonable.
<smoser> but i dont know how this distinguishes between "running on azure" and "running on hyper-v"
<alexpilotti> smoser: this is for cirros?
<smoser> alexpilotti, that above was azure ubuntu
<alexpilotti> oh, k tx
<Odd_Bloke> smoser: I'm not sure either; udev is not my strong suit. Let me check with Ben.
<Odd_Bloke> smoser: Ben confirmed with Microsoft that that would only match in Azure environments.
<smoser> why would they only match in azure
<smoser> that is odd
<smoser> dont you think ?
<smoser> ID_VENDOR of Msft
<smoser> odd
<Odd_Bloke> smoser: I _think_ it's the UUIDs that are Azure-specific.
<smoser> ah. that does seem more reasonable.
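[Editor's note: the paste links are dead. An illustrative udev rule in the spirit of what is being discussed, matching the Azure-specific device_id GUIDs rather than just ID_VENDOR of Msft; this is a sketch, not Ben's actual proposal.]

```text
# Resource (ephemeral) disk; the 0001 GUID prefix is the Azure-specific part.
SUBSYSTEM=="block", ATTRS{device_id}=="?00000000-0001-*", \
    SYMLINK+="disk/cloud-init/ci-ephemeral"
```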
<alexpilotti> smoser: I'm hearing about ext4 for config drives
<smoser> alexpilotti, that is odd / roughly stupid
<smoser> not really here. but a link to info would be good. but really cloud-init doesnt care. 
<harlowja> windows dll fun scares me lol
#cloud-init 2015-06-05
<harlowja> claudiupopa https://review.openstack.org/#/c/170242/ updated :-P
<chrisgh> Howdi are there still static network bugs in cloud-init related to Ubuntu? https://bugs.launchpad.net/cloud-init/+bug/1225922
<chrisgh> never mind user error
<Odd_Bloke> claudiupopa: Do you know how the gate Jenkins jobs are configured?
<Odd_Bloke> claudiupopa: Specifically, I'm wondering if we can enable coverage testing.
<claudiupopa> Through openstack-infra/project-config.
<claudiupopa> See this for example https://review.openstack.org/#/c/169293/
<claudiupopa> Never tried it though, so I'm not sure.
<Odd_Bloke> claudiupopa: Cool, thanks for the pointer.
<Odd_Bloke> It looks like none of the coverage jobs in Jenkins have ever run, so I'm probably chasing up the wrong tree. :p
<claudiupopa> but nevertheless we should have coverage enabled. :P
<Odd_Bloke> claudiupopa: Yeah, see https://review.openstack.org/188739
<Odd_Bloke> I'm happy to bike-shed on the number (probably with input from smoser and harlowja).
<claudiupopa> nice.
<claudiupopa> 90% seems sufficient for the moment.
<Odd_Bloke> Yeah, for me it's more of a statement that we care about it.
<jengelman> Is it possible to merge user_data from multiple datasources?
<jengelman> I'm trying to create an AMI that has a static cloud-config in a file and I want that to be bundled with the normal EC2 datasource processing, but can't seem to figure out how to do it
<larsks> The standard behavior does merge /etc/cloud/cloud.cfg along with any cloud-config data provided via the metadata service.
<larsks> But user_data can only come from one source, I think.
<jengelman> so I can just specify a user-data section in /etc/cloud/cloud.cfg and it will be merged?
<larsks> I don't believe there is a way to put user-data into a cloud-config file, no.
<larsks> But you can do other things, like write files, run shell scripts, etc.
<Odd_Bloke> harlowja: Hmm, let me look at that doc thing.
<Odd_Bloke> harlowja: It Works On My Machine (TM). ;)
<Odd_Bloke> harlowja: Aha, got it; I was using environment variables in tox rather than doing it properly in setup.cfg; good catch.
<jengelman> Ah, so it's not clear from the documentation that you can put files in /etc/cloud/cloud.cfg.d and their format is just the same as what you would put in user-data
<jengelman> but that works
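[Editor's note: what jengelman landed on, sketched: a file dropped into /etc/cloud/cloud.cfg.d/ is plain cloud-config with the same keys as user-data, and is merged with whatever the EC2 datasource supplies. Filename and contents are illustrative.]

```yaml
# /etc/cloud/cloud.cfg.d/99-baked-in.cfg (illustrative filename)
write_files:
  - path: /etc/motd
    content: |
      baked into the AMI
runcmd:
  - [ sh, -c, "echo baked-in config ran" ]
```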
<Odd_Bloke> harlowja: http://docs-draft.openstack.org/75/188775/2/check/gate-cloud-init-docs/c600b34//doc/build/html/contents.html HAPPY NOW?
<Odd_Bloke> harlowja: So after you approve a change, do I still need another core dev to +2 it for it to get merged?
 * Odd_Bloke hasn't used the OpenStack CI stuff before.
<claudiupopa> isn't autodoc importing modules?
<Odd_Bloke> claudiupopa: It does; we can exclude modules from it though.
<claudiupopa> what will happen when it will include windows specific stuff?
<harlowja> Odd_Bloke cool, 90% ya, hmmm
<harlowja> i like 125%
<harlowja> lol
<harlowja> Odd_Bloke there is also some ci coverage job that afaik the openstack CI stuff runs
<harlowja> perhaps we should also just use that?
<Odd_Bloke> harlowja: https://jenkins.openstack.org/job/cloud-init-coverage/ ?
<harlowja> something like that ya, ha
 * harlowja never remembers what it runs
<Odd_Bloke> harlowja: If you can work out how to actually use that, be my guest. :p
<harlowja> Odd_Bloke how about jumping on the #openstack-infra channel and asking
<Odd_Bloke> So demanding.
<harlowja> those folks know, ha
<harlowja> i asked once upon a time, but i forgot, lol
<harlowja> because if u search for 'coverage' in https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml u'll see many
<harlowja> https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L5223 maybe just needs something to be added?
<harlowja> *the cloud-init section
<Odd_Bloke> You've done it now, harlowja, I'm having to clone project-config.
<harlowja> woot
<harlowja> ha
<Odd_Bloke> harlowja: So I'm not 100% sure that using that job makes sense; it will run a single tox environment, so we won't be able to easily get coverage for each Python version.
<harlowja> can that be fixed?
 * harlowja doesn't know the answer to that
<harlowja> fixed/changed/something
<Odd_Bloke> harlowja: My tox-fu is possibly lacking, but I don't think so.
<harlowja> would infra be willing to change something?
<harlowja> *would that help?
<harlowja> if not i guess we do what we have to do then
<Odd_Bloke> harlowja: So I _think_ -infra would need to move to a model similar to the different versions of Python (i.e. separate coverage27, coverage34 jobs).
<harlowja> is that possible?
<Odd_Bloke> Note the timestamps, I said that before fungi did. ;)
<harlowja> maybe some small change to https://github.com/openstack-infra/system-config/  or project-config will make this possible
<harlowja> and then everyone will be jolly and happy
<Odd_Bloke> harlowja: Perhaps, but for now I think the solution in that code review is the best we have.
<harlowja> sure, for now == forever ? ;)
<Odd_Bloke> If someone else implements the infra changes, I will happily take care of transitioning us to use them. ;)
<harlowja> we gotta be BFF with the infra folks, if we want automated cloud-init jobs (with real-images)
<harlowja> someone needs to be better BFF than i am, ha
<harlowja> aka, probably just needs some way to connect a coverage script into https://github.com/openstack-infra/project-config/tree/master/jenkins/scripts 
<harlowja> instead of just using tox -ecoverage or whatever
<Odd_Bloke> harlowja: Looking at how things are done elsewhere, I think it would definitely need to go through tox.
<Odd_Bloke> That's how projects get to customise things.
<Odd_Bloke> And that could be problematic.
<harlowja> right
<Odd_Bloke> Because tox knows what py27 and py34 mean, so sets things up properly before executing testenv.
<harlowja> but https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/run-tox.sh is already existing, maybe it can be used
<harlowja> afaik that thing is running all the other tox stuff already
<harlowja> https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/run-tox.sh#L122 ...
 * harlowja doesn't get why that couldn't be used to run different coverage venvs or something
<Odd_Bloke> harlowja: If you want to run coverage on 3.4, you'd need to do something like "tox -e coverage-py34".
<harlowja> right
<Odd_Bloke> harlowja: But tox doesn't know what that is, so you have to define it in your tox.ini.
<harlowja> sure, and then we need the infra team to have that job run that triggers that venv
<Odd_Bloke> But you'd _also_ have to define coverage-py27.
<Odd_Bloke> There isn't a way to get them to share their definition.
<harlowja> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L18 
<harlowja> add that job to cloud-init
<harlowja> then have envlist be
<Odd_Bloke> (Whereas py26, py27, py34 et al are defined to use testenv)
<harlowja> - coverage-py27
<harlowja> - coverage-py34
<harlowja> - coverage-py26
<harlowja> ?
<harlowja> afaik that is a custom tox env that gets used
<harlowja> and could just be one we use to
<Odd_Bloke> harlowja: Right, I'm saying that we'll have to define each of those separately in our tox.ini.
<harlowja> put that under https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L3457 and profit?
<harlowja> sure
<harlowja> but at least infra will run it for us
<Odd_Bloke> And all this really gains us is waiting for two extra Jenkins jobs. :p
<harlowja> ya, meh
<harlowja> they run in parallel
<harlowja> * https://review.openstack.org/#/c/187750/ (already used to waiting, ha)
<harlowja> we'll likely need more soon anyway (if we can get real cloud-init from the commit into some image job that also gets basic tests)
<harlowja> ^ therefore ensures the cloud-init from that commit functions to some level
<harlowja> * ie a windows image could be part of that (or linux, or freebsd...)
<Odd_Bloke> harlowja: It occurs to me that we also have this problem with pep8; we'll only be running that for either 27 or 34.
<harlowja> more tox envs
<harlowja> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L1545 (another project i'm a core of)
<harlowja> probably should just start adding ones we want into that...
<harlowja> *into something similar for cloud-init
<harlowja> Odd_Bloke want to submit those ;)
<Odd_Bloke> harlowja: Yeah, will do.
<Odd_Bloke> Seeing if I can do some tox magic to reduce repetition.
<harlowja> k
<Odd_Bloke> I really want "[testenv:coverage-{env}]" to DWIM.
<harlowja> https://github.com/openstack/taskflow/blob/master/tox.ini#L41 
<harlowja> u can refer to other envs from other envs
<harlowja> ^ from another project i'm a core on, lol
<harlowja> maybe something similar
<harlowja> but idk, thats all my tox-fu i got, ha
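[Editor's note: the shape of the tox.ini being discussed, sketched: each coverage env defined separately, since (as Odd_Bloke complains) there is no way to share their definition the way py27/py34 share testenv; the `coverage-{py27,py34}` generative envlist syntax mentioned later needs tox >= 2.0. Commands are illustrative.]

```ini
[tox]
envlist = py27,py34,pep8,coverage-py27,coverage-py34

[testenv]
commands = nosetests {posargs:tests}

[testenv:coverage-py27]
basepython = python2.7
commands = nosetests --with-coverage --cover-package=cloudinit {posargs:tests}

[testenv:coverage-py34]
basepython = python3.4
commands = nosetests --with-coverage --cover-package=cloudinit {posargs:tests}
```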
<harlowja> ok back to url_helper stuff
<harlowja> project-config stuff (and infra reviews in general) seem to take a while btw, like a week(ish)
<harlowja> just fyi
<Odd_Bloke> That's fine, I only have about a day a week on cloud-init 2.0 stuff. :p
<harlowja> thats 24hours
<harlowja> use it or lose it
<harlowja> lol
<harlowja> i expect 24 hours worth of work done
<harlowja> or else
<harlowja> chop chop
<Odd_Bloke> I didn't mean one of your Earth days.
<harlowja> :-/
<harlowja> where u at
<harlowja> lol
<Odd_Bloke> harlowja: Eris sounds about right: http://www.bobthealien.co.uk/table.htm :p
<harlowja> your messages sure do come quickly
<harlowja> how are u bypassing light-speed limit?
<Odd_Bloke> Erisians have precognition.
<harlowja> please tell me how so i can get nobel prize, lol
<Odd_Bloke> harlowja: You didn't answer my earlier question: do I need to bug another cloud-init core dev to review the things that you've +2'd?
<harlowja> likely, if we want to be 'official' about all this
<Odd_Bloke> Cool.
<Odd_Bloke> The green tick is confusing. :p
<harlowja> needs a +2 +a (approve)
<harlowja> so ya, 2 usually is how it goes
<harlowja> although meh, sometimes the officalness is a waste of life/time
<harlowja> and a +1 from jenkins
<awkwords> are there any man pages for cloud-init?
<Odd_Bloke> harlowja: With this tox config: http://paste.ubuntu.com/11593770/ and http://paste.ubuntu.com/11593770/, I think we'll be good.
<Odd_Bloke> harlowja: Thoughts?
<harlowja> seems ok to me, not sure how 'coverage-{py27,py34}' works in envlist
<harlowja> but if it works, okie dokie
<Odd_Bloke> harlowja: It works fine. :)
<harlowja> k
<harlowja> i've usually just seen them individually listed
<harlowja> but if it works, thats cool
<Odd_Bloke> harlowja: Bah, it apparently does not work fine.
<harlowja> :-P
<Odd_Bloke> harlowja: I suspect that I'm getting screwed by an old version of tox.
<harlowja> possibly, ha
<harlowja> just individually list them?
<Odd_Bloke> That isn't even the bit that errors. ;.;
<harlowja> ah
<harlowja> ha
<Odd_Bloke> And I get a different error if I use tox 1.6 (which is what's in trusty).
<harlowja> :-/
<Odd_Bloke> AHA
<harlowja> ok i updated https://review.openstack.org/#/c/170242/
<Odd_Bloke> They're using 2.0.
<Odd_Bloke> harlowja: I still don't like UrlResponse; but if we are going to keep it, could we change its names to be consistent with the requests object that it's a wrapper for?
<harlowja> RequestsResponse ?
<Odd_Bloke> harlowja: Like having OurNIHThing.contents wrap Response.content; that's actively developer hostile.
<harlowja> lol
<Odd_Bloke> And .code wrap .status_code.
<harlowja> k
<Odd_Bloke> Also having .ok be a method when requests.Response.ok is a bool.
<harlowja> thats cause ours is super-better
<harlowja> ha
 * Odd_Bloke grins and nods while backing away slowly. :p
<harlowja> ha
 * harlowja runs towards u
<harlowja> *fastly
<harlowja> let's see if we can get https://review.openstack.org/#/c/188901/ to work also
<harlowja> should work, not sure if that bot needs permissions or something on the channel
<harlowja> makes it somewhat easier to see what to review
<harlowja> Odd_Bloke ChangeLog is generated by the doc build?
<harlowja> i guess pbr is building it
<Odd_Bloke> harlowja: Yeah, pbr is building it.
<Odd_Bloke> harlowja: Happy to stop it doing that instead, if that's what we want.
<smoser> Odd_Bloke, would you say that cloud-init requires gdisk ?
<smoser> sgdisk
<smoser>  cloudinit/config/cc_disk_setup.py
<smoser> seems to use it.
<Odd_Bloke> smoser: Yeah, I think we need it for dealing with GPT partitioning.
<smoser> so cloud-init never depended on it before.
<smoser> and cloud-initramfs-utils dropped the dependency on it as it does not need it any more (it can just use sfdisk)
<Odd_Bloke> Ah.
<Odd_Bloke> Oops.
<smoser> noticed this on failure to deploy wily/ppc64el as we dont have sgdisk in the image any more but we are using it for uefi/gpt partitioning
<harlowja> operator granted, woot
<rEd_quEEn> ;)
<harlowja> smoser operatorship granted
<harlowja> this allows us to get a gerritbot in here for reviews
<Envigado> harlowja, please grant it to me too
 * harlowja will let smoser grant the rest, isn't really sure who should/shouldn't be, lol
<harlowja> i defer, ha
#cloud-init 2016-06-06
<cpaelzer> smoser: already around?
<cpaelzer> smoser: while the former two questions are solved by time, I would still have another if you are around
<smoser> cpaelzer, here now.
<smoser> what sup?
<cpaelzer> hiho
<cpaelzer> I'm testing more and more regarding the cloud-init issue you reported due to my merge
<cpaelzer> most things are done, I found a few on top though
<cpaelzer> working on these, but that was not the question
<cpaelzer> at some point I'll have to throw it into adt or a ppa to test if it works there as well
<cpaelzer> since packaging isn't in the repo itself I wondered what the recommended way is to do so
<cpaelzer> I mean there are many ways
<cpaelzer> pull-lp-source - merge a bzr tree in
<cpaelzer> bzr tree and merge a debian dir into that
<cpaelzer> or something totally different
<cpaelzer> I assume you don't do that manually each time, so I wanted to know which way I should go to test ppa/adt builds
<cpaelzer> smoser: ^^
<smoser> cpaelzer, right. need to test building in ppa.
<smoser>  ./packages/bddeb
<smoser> it has --help, but largely i'd run:
<smoser>  ./packages/bddeb -S
<smoser> then
<smoser>  sbuild ...
<cpaelzer> ./packages/bddeb -S gives a .dsc and an orig tarball then?
 * cpaelzer is reading bddeb ...
<smoser> yeah, it uses bzr export to get a .orig and then goes from there.
<cpaelzer> ok nice, then I can hit these stages once I fixed a corner case of the keyserver issues
<cpaelzer> smoser: btw I also made sure I port fixes I did this way between curtin/cloud-init as I work on the same code for both
<cpaelzer> so I hope to end the day with an update to the curtin MP and a new cloud-init MP
<smoser> ok. great.
<GivenToCode> hi i have a custom cfg in /etc/cloud/cloud.cfg.d that specifies an fs_setup to mkfs ext4 on ephemeral0 (ec2), however on c3 instance types we get an error in the log that /dev/xvdb is already mounted when trying to mkfs...
<GivenToCode> this is 0.7.5-0ubuntu1.18
<GivenToCode> I would expect fs_setup to run before any mounting, right?
<GivenToCode> there is an entry for xvdb in fstab, and I've tried to override that by specifying a mounts [ ephemeral0, null ]
<cpaelzer> GivenToCode: yes it should set up fs_setup things before the mount step of cloud-init
<cpaelzer> GivenToCode: but I'd wonder if something else could mount it even before
<cpaelzer> GivenToCode: depending on your influence on this environment you could add a command in your .cfg to check the status early on
<cpaelzer> GivenToCode: like early_commands: checkmounts: 'mount'
<cpaelzer> and then check that output if it really is mounted at the beginning
<cpaelzer> cpaelzer: sorry I'm mixing up things here - let me rethink
<cpaelzer> writing too much curtin / cloud-init code at the same time - that would have been the curtin answer
<cpaelzer> GivenToCode: doc/examples/cloud-config-boot-cmds.txt should be better
<cpaelzer> GivenToCode: if you prefer an url http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot
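[Editor's note: a sketch of the config GivenToCode describes, including the mounts entry meant to suppress the baked-in fstab line; values are illustrative.]

```yaml
#cloud-config
fs_setup:
  - label: ephemeral0
    filesystem: ext4
    device: ephemeral0
mounts:
  - [ ephemeral0, null ]   # drop the default ephemeral0 mount/fstab entry
```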
<cpaelzer> smoser: did the mirrorlist issues (the last one you added the skipif for) happen to you in sbuild as well, or only in the buildd's?
<smoser> cpaelzer, no. only in a buildd
<smoser> cpaelzer, but think about it working on centos
<cpaelzer> smoser: k, thx - setting up a victim ppa then
<smoser> we want unit tests to work there.
<cpaelzer> smoser: I already have
<smoser> ok.
<smoser> i meant to tell you this...
<smoser> there is a 'FilesystemMocking' class
<smoser> or something
<cpaelzer> smoser: I already covered that - at least I think so :-)
<smoser> that basically mocks all fs reads.
<smoser> to another directory that you can populate
<cpaelzer> smoser: for this in particular it doesn't need a fs mocking class
<smoser> ok. thats fine.
<cpaelzer> the feature of influencing apt environment is very very ubuntu/debian specific
<cpaelzer> so are the tests
<cpaelzer> but that can be nicely done with a skipIF to the apt binary
<cpaelzer> I covered this and more and you can take a look once it is mature enough for a merge proposal
<cpaelzer> for now I seem to uncover further issues faster than I like and EOD is coming closer ...
<cpaelzer> :-/
<smoser> cpaelzer, i dont know..
<smoser> it is in fact debian/ubuntu specific
<smoser> but that does not mean the unit tests should not run elsewhere.
<cpaelzer> for the start I'd like to disable things so they work properly
<cpaelzer> over time one can (or not) commit the time to enable this on more environments
<cpaelzer> that is what I'd prefer
<cpaelzer> I feel to postpone too much as I try to get this done - so I wouldn't add a centos env test to the task list right now
<cpaelzer> but you are right overall - you want a centos dev on cloudinit to see when he breaks things
<smoser> well, thats fine. but generally speaking the failure i saw was from either a file existing or not existing (or a directory or something)
<smoser> i'm not really sure
<smoser> and the fact that that exists or does not exist is something that would ideally be covered by the unit test
<smoser> which should not require the system its running on to be in a given state
<cpaelzer> ah that one - it is a file delivered by cloud-init itself, and we don't check if the file exists
<cpaelzer> we check if cloud-init "checks" if the file exists
<cpaelzer> because that is the code path that it should take and that is what the unit test does
<cpaelzer> the file doesn't exist on my system either
<cpaelzer> but the test works
<cpaelzer> what happens there is the issue in find_apt_mirror_info that I need to debug in the ppa environment, because I can't see locally what is breaking
<cpaelzer> an issue in that function causes the one you have seen as follow on error
<smoser> right. there is a build log from an existing one if you wanted.
<cpaelzer> I have checked the two logs you linked
<smoser> yah. ok.
<cpaelzer> but I need more debug data on it
<smoser> you're doing well,. sorry to sound like i'm giving you flack
<cpaelzer> for now I'm unhappy with the gpg key things - I wanted to fix them right
<smoser> you wrote a bunch of test cases for stuff that wasn't tested and fixed a bunch of stuff and there was fallout
<smoser> the first part is +100
<smoser> :)
<cpaelzer> I'm fine and we will make it good :-)
<cpaelzer> thanks
<smoser> the second part is admittedly *my* fault and cloud-init fault for accepting your MP before it built in a ppa
<smoser> :)
<cpaelzer> so I added a nice fallback in case network is unavailable - but it will use real network if it is available
<cpaelzer> unfortunately now that it "can" work it complains in sbuild envs about things that are not writable
<cpaelzer> will get it done ...
<smoser> please dont depend on presense of a keyserver
<cpaelzer> just gets a longer and longer task :-)
<smoser> those things really suck
<cpaelzer> smoser: that's what I meant - I don't DEPEND anymore, but IF I can reach it properly I do
<smoser> that will cause arbitrary failures
<cpaelzer> and if I can't reach it, it has a fallback that works
<cpaelzer> so the test is happy with and without keyserver
<cpaelzer> it even works with broken keyservers or random dns names
<cpaelzer> but IF it works it just is closer to the real thing
<cpaelzer> which isn't bad for a unit test
<smoser> i was honestly fairly OK with not unit testing that shell blob. and prefer that to cloud-init's unit tests mucking with my .gpg dir
<cpaelzer> hehe
<cpaelzer> it isn't a shell blob anymore
<cpaelzer> :-)
<smoser> ok. but i still would really rather it not ever modify my .gpg dir
<cpaelzer> that with the .gpg dir is what I have to tackle next
<cpaelzer> for sbuild env anyway
<cpaelzer> smoser: so you think it is ok to throw parts away and mock that side of things?
<cpaelzer> ok, I can do that - that will throw away some code that cost me some sweat today, but it was good for the learning still
<cpaelzer> at least we got the shell blob to become python as you wanted as well, thanks to a nice suggestion of rharper and some polishing I did
<rharper> cpaelzer: nice
<cpaelzer> I think with the plan to not mess around with .gpg at all in the tests I won't complete it today anyway
<cpaelzer> smoser: do you mind if it is tomorrow and not today that you get the next MP revision?
<rharper> yeah, I'd be very dubious of relying on a .gpg dir
<rharper> rather a tmpdir and export GPG var to point to it
<rharper> certainly GPG will check ENV for that location
<cpaelzer> rharper: it has an argument
<cpaelzer> rharper: just need to find a way to insert that into the subp's
<cpaelzer> I still want to test the original code and not mock it all
<rharper> sure, lemme see if I've example of setting env for subp
<smoser> you can just mock the output of gpg
<smoser> capturing that call really is i think the right way to do it.
<cpaelzer> I'd mock the gpg calls but then assert on them to make sure they were poked the right way
<smoser> honestly, mocking the call to that shell blob as i did, we have 2 lines of untested code out of that.
<smoser> and no mucking with your .gpg
<cpaelzer> mucking vs mocking :-P
<smoser> for code that hasn't changed and worked fine for 4 years (except for arguably invalid user input)
<smoser> basically, i dont want you to spend a bunch of time making sure we get test coverage on that.
<cpaelzer> ok
<cpaelzer> I'll try to mock it then and get it working tomorrow
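An aside for readers: the mock-and-assert approach smoser and cpaelzer settle on might look roughly like this. The function, key, and keyserver names are illustrative stand-ins, not cloud-init's actual gpg helpers:

```python
# Sketch: mock out the subprocess helper, then assert on how the code
# under test invoked it. `subp` mirrors cloud-init's helper name;
# `gpg_recv_key` is a made-up stand-in for the real gpg code path.
from unittest import mock


def gpg_recv_key(key, keyserver, subp):
    # stand-in for the real code that shells out to gpg
    subp(['gpg', '--keyserver', keyserver, '--recv-keys', key])


def test_gpg_recv_key_calls_gpg():
    fake_subp = mock.Mock()
    gpg_recv_key('ABCD1234', 'keyserver.ubuntu.com', subp=fake_subp)
    # assert the mock was poked the right way, as cpaelzer describes
    fake_subp.assert_called_once_with(
        ['gpg', '--keyserver', 'keyserver.ubuntu.com',
         '--recv-keys', 'ABCD1234'])


test_gpg_recv_key_calls_gpg()
```

No real gpg call happens, so the test is keyserver-independent and never touches anyone's .gpg dir.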
<rharper> cpaelzer: Popen takes a env= param; that's not exposed in subp IIRC, but one could; then myenv = os.environ.copy(), and myenv[VAR]=VAL, pass env=myenv
<cpaelzer> rharper: thanks for checking, but given the requirements - for now - I'd go with smoser and "just" mock some things
<rharper> yep
<smoser> subp takes env also.
<smoser> yeah.
<rharper> nice
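The env-passing pattern rharper describes, sketched out. The helper name is made up; it assumes a system where gpg honors the GNUPGHOME environment variable:

```python
# Sketch: run a command with GNUPGHOME pointed at a throwaway temp dir,
# via the env= argument, so the real ~/.gnupg is never touched.
import os
import subprocess
import tempfile


def run_with_tmp_gnupghome(cmd):
    tmpdir = tempfile.mkdtemp()
    env = os.environ.copy()      # copy, don't mutate the real environment
    env['GNUPGHOME'] = tmpdir    # gpg reads its home dir from this var
    return subprocess.check_output(cmd, env=env), tmpdir
```

Usage would be something like `run_with_tmp_gnupghome(['gpg', '--list-keys'])`; the temp dir should be cleaned up afterwards in real code.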
<GivenToCode> cpaelzer, yes something is mounting everything in fstab before fs_setup, we don't have anything custom running before cloud-init
<GivenToCode> ok, i think the fix is to remove it from fstab when we bake the image
<ajorg> smoser: Can I submit GitHub pull requests instead of bzr branches for contributions?
<smoser> ajorg, wont be github pull requests. but launchpad git pull requests should be shortly.
<smoser> ajorg, basically upstream code path will still be launchpad but revision control in git
<ajorg> okay, I'll try to learn bzr for today and maybe leave some of our more interesting patches for when git is available.
<ajorg> If I did a terrible job of preparing this, it would be good to correct me now as I have several more patches I'll be trying to submit this week: https://code.launchpad.net/~ajorgens/cloud-init/python26/+merge/296575
<smoser> ajorg, that looks fine. i'm really sorry about regressing 2.6 function
<ajorg> Thanks.
<ajorg> By way of introduction, I'm the cloud-init maintainer at AWS.
<ajorg> (for the Amazon Linux AMI)
<ajorg> We have a feature or two, several bugfixes, and a bunch of backward compatibility stuff for our 0.5 fork from before Yum / RPM were supported.
<ajorg> I'd like to upstream whatever parts are palatable.
<smoser> we need to get some c-i in place that can test on a 2.6.  the thing that makes it hard is that there is actually not even a python2.6 *available* in a supported ubuntu release (12.04 shipped 2.7)
<ajorg> Anything I can do to get that in place?
<ajorg> CentOS 6 testing would be prudent, and would cover 2.6
<smoser> harlowja_at_home, has mentioned some options with his remote_tox
<ajorg> cool
<smoser> but yeah, we need to get this stuff in place.
<harlowja_at_home> https://github.com/harlowja/remote_tox if u care
<smoser> dict comprehension is just so nice
<smoser> i think thats the right word
<ajorg> I know!
<smoser> {a:b for a, b in d.items()}
<harlowja_at_home> dict((a,b) for a, b in d.items()) ?
<harlowja_at_home> just use the word 'dict' lol
<smoser> harlowja_at_home, deal.
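For reference, the two spellings being compared are equivalent; the comprehension is just the nicer, Python 2.7+/3.x-native form:

```python
# dict comprehension vs. the dict() constructor form from the chat
d = {'a': 1, 'b': 2}
via_comprehension = {a: b for a, b in d.items()}
via_constructor = dict((a, b) for a, b in d.items())
assert via_comprehension == via_constructor == d
```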
<ajorg> Amazon Linux AMI has been on 2.7 for a few releases, so we won't be a fruitful source of 2.6 patches going forward.
<ajorg> hmm, I get to learn how to do author attribution in bzr for some of these
<smoser> ?
<smoser> author attribution ?
<ajorg> assuming it's possible, submitting a patch in someone else's name (other authors @amazon.com)
<ajorg> I know how to do it in git.
<smoser> ah. well you can commit with '--author='
<smoser> that's probably the thing you want.
<smoser> then you become 'committer' and author is author
<ajorg> perfect, very much like git
<ajorg> thank you
<smoser> larsks, fyi, i just uploaded a 0.29 tarball with your growpart fix https://launchpad.net/cloud-utils/trunk/0.29
<larsks> smoser: thanks for the heads up!
<harlowja> ok backs
<larsks> This is a dumb question...under ubuntu, "python3 setup.py install ..." wants to put cloud-init into site-packages, but that dir doesn't exist.  What's the right way to install it?
<larsks> Ah, --prefix=/usr/local seems to be the thing.
<smoser> yeah, need some --prefix.
<smoser> ./packages/bddeb though would probably be preferable in most cases
<larsks> This is just for testing, not packaging...I was just confused by the "site-packages" does not exist error.
<larsks> Apparently I haven't run setup.py on an ubuntu system before.
#cloud-init 2016-06-07
<harlowja> smoser  https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor/+merge/293957 should be up and running again
<harlowja> seems to pass unit tests (although a few new unittests fail)
<harlowja> aka one that runs 'Command: ['dpkg', '--print-architecture']'
<harlowja> or ''lsb_release', '-cs']'
<harlowja> stuff that's not being mocked afaik
<harlowja> https://gist.github.com/harlowja/d0a8d56ca460d0a7adbc99f1db3c8fe6
<ccard_> I am creating an OpenStack instance with an ephemeral disk, and cloud-init creates a filesystem and mounts it as /mnt - is there a way to stop cloud-init from doing this? I want to use the ephemeral disk to create a drbd device.
<GivenToCode> ccard_, I just had to do something similar, check this out https://castro.io/2015/01/24/preparing-ec2-instance-store-with-cloud-init.html
<GivenToCode> I had to override the mounts and also remove an existing entry in fstab for the mount
<ccard_> GivenToCode: thanks, that's helpful. Ideally I'd like there to be no filesystem at all created on the ephemeral disk, since I want to create a drbd partition on it.
<GivenToCode> ccard_, can't help you there, amazon puts a filesystem on most of its ephemeral disks automatically. Though you can replace it with whatever you like via fs_setup or bootcmd, etc
<smoser> harlowja, yeah, cpaelzer is working on those.
<cpaelzer> smoser: ?
<cpaelzer> smoser: out of my log since I sometimes disconnect over night (like tonight) due to vpn
<cpaelzer> smoser: was that a reference to the bug you opened with the unittests failing in some environments?
<smoser> harlowja said:
<smoser> smoser  https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor/+merge/293957 should be up and running again.  seems to pass unit tests (although a few new unittests fail). aka one that runs 'Command: ['dpkg', '--print-architecture']' or ''lsb_release', '-cs']'.  stuff that's not being mocked afaik https://gist.github.com/harlowja/d0a8d56ca460d0a7adbc99f1db3c8fe6
<cpaelzer> thanks smoser
<cpaelzer> yeah the dpdk print architecture fix is in my last MP
<cpaelzer> dpkg
<cpaelzer> I can't type it anymore :-/
<smoser> :)
<smoser> link ?
<cpaelzer> I thought it falls into your inbox, gimme a sec ...
<cpaelzer> smoser: https://code.launchpad.net/~paelzer/cloud-init/bug-1589174-fix-tests-in-adt-env/+merge/296643
<cpaelzer> smoser: you might argue about the (unrelated) bddeb change, since there might have been a way I just haven't seen
<cpaelzer> smoser: but other than that most changes should be straight forward
<cpaelzer> smoser: tested in Ubuntu, ppa, sbuild and a CentOS container
<smoser> cpaelzer, it does fall into my inbox with the slew of 300 other things :)
<cpaelzer> smoser: hehe
<cpaelzer> smoser: it takes most of my first 30 minutes each day to sort that out
<cpaelzer> smoser: don't tell me you have not all read in your inbox :-P
<GivenToCode> has anyone run in to issues in upstart with a service with start on stopped cloud-init running before cloud-init is finished?
<smoser> GivenToCode, not that i'm aware of, and that surely should not happen,.
<smoser> harlowja, what would you do if you wanted an Enum but did not want to depend on python2 package to provide python 3's builtin Enum ?
<smoser> cpaelzer, you have a conflict in merge at url above
<cpaelzer> smoser: ok, I'm gonna refresh later
<cpaelzer> I should always before MP'ing next time ...
<GivenToCode> smoser, I found this: http://www.madorn.com/cloud-init-stages.html#.V1bVB7orJMM which helped me see I need start on stopped cloud-final
<smoser> right. :)
<smoser> GivenToCode, i thought maybe you didn't mean cloud-init but cloud-init-final
<cpaelzer> 1227 wasn't out yet when I branched this morning
<cpaelzer> I merged and uploaded again
<cpaelzer> thou I'm not sure all in the merge worked correctly
<cpaelzer> the conflict was minimal
<cpaelzer> smoser: please let me know if there still is a conflict
<cpaelzer> did a pull into a trunk dir, found the conflicting rev 1227, did a merge into my dir, found the conflict, resolved and committed the merge
<cpaelzer> yet it seemed to have an empty diff
 * cpaelzer lacks one more piece of bzr magic
<harlowja> ok backs
<harlowja> sooo smoser Enum that u don't want to use the pkg from
<harlowja> i can make a crappyEnum that will work
<harlowja> lol
<harlowja> in nearly the same manner
<harlowja> smoser do u want that ?
<smoser> harlowja, my query was for stuff for curtin..
<smoser> there is http://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
<smoser> which kind of has a crappy enum for py2, but i just wanted to know what you would do there.
<harlowja> prob make enum (maybe less crappy) that works for u
<harlowja> and try to make it so that it works with py3 (using native stuff) and py2 (using your stuff)
<harlowja> smoser  i'll make some
<harlowja> thing
<smoser> the enum seems pretty nice. do people use that ?
<harlowja> i do
<harlowja> lol
<harlowja> i am all the people
<harlowja> lol
<smoser> my expectation was to use for basic constant like things
<harlowja> smoser ya, one sec, whipping up something for u
<harlowja> lol
<smoser> looks like we used this in curtin (and cloud-init/reporting) before
<smoser> http://paste.ubuntu.com/17095446/
<harlowja> hmmm
<harlowja> ya, that's pretty ghetto :-P
<harlowja> mine will be a little better, ha
<smoser> i suspected as much.
<harlowja> https://gist.github.com/harlowja/f60d91128a1fcd99d30d27715d5a9a30 smoser
<harlowja> have fun
<harlowja> ha
<harlowja> should use the native enum, and a close-enough equivalent on py2
<smoser> harlowja, if six.PY3 ?
<smoser> if enum is None
<smoser> ?
<harlowja> ya, do that, i was just doing that approach cause wanted to make sure my mac tested the other path
<harlowja> and the venv i run in has enum lib installed, ha
<smoser> magicalChicken, ^
<smoser> that is for enum stuff. harlowja thinks our namespace is for pathetic lusers
<harlowja> lol
<harlowja> it at least mirrors more closely the py3 stuff
<harlowja> so the basics should work across both
<magicalChicken> smoser: Yeah that makes sense haha. Using __getattr__ is kinda not good
<magicalChicken> I'll get that pulled into the storage-config branch real quick, thanks harlowja
<harlowja> np
<harlowja> do 'enum is not None' though
<harlowja> as smoser pointed out
<harlowja> it probably is missing some things, but should work for basic things
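The try-stdlib-then-fall-back pattern being discussed might be sketched like this. The fallback class below is an illustrative stand-in, not harlowja's actual gist:

```python
# Sketch: use the native py3 enum when available, else a minimal py2
# stand-in, per the 'if enum is None' suggestion above.
try:
    import enum
except ImportError:
    enum = None

if enum is not None:
    class Status(enum.Enum):
        OK = 'ok'
        FAIL = 'fail'
else:
    class Status(object):
        # minimal fallback: plain class attributes acting as constants;
        # no .value/.name niceties, just enough for basic use
        OK = 'ok'
        FAIL = 'fail'
```

On py3 `Status.OK.value == 'ok'`; on py2 without the enum backport, `Status.OK` is the bare string, so callers needing identical semantics would want a slightly richer fallback.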
<harlowja> soooo that gets me enough brownie points to get https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor merged right
<harlowja> lol
<harlowja> smoser in that i'm just for that merge not making the net/ folder any less copy/pasteable
<harlowja> think change big enough already
<harlowja> can do that later or something
<harlowja> or ummm, neterator
<harlowja> lol
<smoser> harlowja, it does indeed.
<harlowja> oh goodie
<harlowja> does it also get enough for https://gist.github.com/harlowja/d63a36de0b405d83be9bd3222a5454a7 after that to also merge
<harlowja> lol
<harlowja> actually i gotta change that due to the nova bug
<harlowja> (with naming things dumb)
<harlowja> but, bb, gotta do oslo ptl stuff for a little
#cloud-init 2016-06-08
<copumpkin> which cloud-init cc_X module is responsible for creating the centos user that I find in all the usual AMIs?
<copumpkin> ah, it looks like it isn't a module
<copumpkin> oh, users_groups, nevermind
<Toger> In cloud init 0.7.5 (included in centos 7.2) I'm trying to get a node to rerun cloud-init and the userdata scripts.  Currently I do rm -rf /var/lib/cloud/sem/* /var/lib/cloud/instance /var/lib/cloud/instances/* /var/lib/cloud/data/*; cloud-init --debug init; and cloud-init --debug modules -m final but the chef: block is not being rerun. How do I trigger that?
<smoser> Toger, kill the instances directory too
<smoser> mostly i just do rm -Rf /var/lib/cloud /var/log/cloud-init* /run/cloud-init
<smoser> fwiw, if you're just trying to debug things quickly.... lxd is really nice now
<Toger> hm, when i nuke that directory it no longer uses datasourceec2, falls back to DataSourceNone
<Toger> if i make it explicit in the cloud.cfg, I get Failed at attempted import of 'DataSourceEc2' due to: No module named DataSourceEc2
<Toger> which is preposterous as it worked on the same node 3 mins ago
<Toger> before removing the status file
<smoser> which directory are you removing ?
<Toger> I tried just the ones you mentioned
<Toger> And the root issue im having is that cc_chef fails in ensure_dir on /var/lib/chef. It calls selinux:restorecon on /var/lib and it fails with OSError: [Errno 2] No such file or directory. Of course /var/lib exists.  Also I set the boolean to make selinux run in permissive mode, and there are no AVC denieds in the logs for this.
<Toger> It is successful for /var/log
<Toger> and if i newrole into cloud_init_t I am able to see /var/lib and /var/lib/chef as well
<copumpkin> I'm writing my own cc_* module for cloud-init
<copumpkin> and am trying to find the most idiomatic way to determine the account ID I'm in
<copumpkin> 169.254.169.254/*/meta-data/network/interfaces/macs/*/owner-id seems like the simplest way if I were to query the metadata service directly
<copumpkin> but is that stored somewhere more structured already in cloud-init?
<copumpkin> oh, I see
<copumpkin> self.metadata is populated recursively by traversing that path
<copumpkin> I'm still confused though: NoCloud seems to talk about a key in metadata called network-interfaces, which doesn't correspond directly to anything I see in 169.254.169.254
<copumpkin> is there an easy way for me to get a python REPL that has that dictionary populated in the same way I would have in cloud-init?
<copumpkin> oh
<copumpkin> I just imported it and it worked :)
<copumpkin> for anyone following along: next(metadata['network']['interfaces']['macs'].itervalues())['owner-id']
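Note that `.itervalues()` is Python-2-only; a portable version of the same lookup, shown here against a hand-built stand-in dict shaped like the crawled EC2 metadata (real code would use cloud.datasource.metadata):

```python
# Same owner-id lookup, but works on both py2 and py3:
# next(iter(d.values())) instead of d.itervalues().
metadata = {
    'network': {
        'interfaces': {
            'macs': {
                '0a:0b:0c:0d:0e:0f': {'owner-id': '123456789012'},
            }
        }
    }
}

macs = metadata['network']['interfaces']['macs']
owner_id = next(iter(macs.values()))['owner-id']
```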
#cloud-init 2016-06-09
<copumpkin> okay, I'm seeing something super puzzling
<copumpkin> when my cc_copumpkin_custom_crap runs at instance startup, cloud.datasource.metadata only contains a subset of the stuff it contains when I run it interactively later after boot
<copumpkin> on the same machine
<copumpkin> oh!
<copumpkin> it's hardcoded to request an older version of the metadata :(
<copumpkin> ARGH, it's the version right before the one that has the feature I want
<copumpkin> 2011-01-01 introduced the network hierarchy, and cloud-init explicitly asks for 2009-04-04
<copumpkin> smoser: is there an easy way to override DataSourceEc2's api_ver from my cloud-init config?
<smoser> copumpkin, we can probably move to the newer version...
<copumpkin> yeah, they only ever seem to add to it
<copumpkin> my really elegant solution right now is to patch cloud-init in place before I snap the image ;)
<smoser> right. it should be just that one string, right ?
<smoser> i'm not opposed to something that checks and is able to use a newer version
<smoser> but if we bump the version we have to be able to fall back
<copumpkin> yep! super simple sed
<copumpkin> fall back how?
<smoser> as other clouds impersonate ec2 and likely don't have newer YYYY-MM-DD
<copumpkin> oh, hmm
<copumpkin> didn't know that
<smoser> its very common to impersonate ec2
<smoser> :)
<copumpkin> aha
<copumpkin> although if we just bump it up to the 2011 date
<copumpkin> that's still pretty ancient
<copumpkin> I wonder if there are impersonators who impersonate one particularly ancient version but not the one right afterwards
<copumpkin> "we have a policy to only replicate APIs that are 6 or more years old, and yours is only 5 years old"
<copumpkin> :P
<smoser> copumpkin, yeah, what i think i'd do is read the old one, and if its present then go looking for new ones.
<copumpkin> oh, that makes sense
<smoser> and i'd just jump from 2009 to whatever is most recent.
<smoser> there is an index too
<smoser> if you just get the metadata/ i think you get a list of versions
<smoser> but since we never got that before we'd then be adding some new expectation, which probably does work.
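smoser's read-the-index idea could be sketched as follows. It assumes the metadata service root lists one version token per line (dated versions plus "latest"); the function name and fallback constant are hypothetical:

```python
# Sketch: given the text returned by GET http://169.254.169.254/,
# pick the newest dated API version, falling back to the known-good
# 2009-04-04 when nothing newer can be parsed (e.g. ec2 impersonators).
import re

FALLBACK_VERSION = '2009-04-04'


def pick_api_version(index_text):
    dated = [line.strip() for line in index_text.splitlines()
             if re.match(r'^\d{4}-\d{2}-\d{2}$', line.strip())]
    # YYYY-MM-DD strings sort correctly lexicographically
    return max(dated) if dated else FALLBACK_VERSION
```

The fallback matters because, as noted above, clouds that impersonate ec2 may not serve the index or newer versions at all.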
<harlowja> smoser u review my stuff yet, shall i merge it
<harlowja> i will be around to fix shit if i broke it :-P
<harlowja> (which of course it does not)
<harlowja> (obviously)
<smoser> harlowja, reading...
<smoser> i will review today.
<konetzed> Hey, anyone know how to get cloud-init to set hostnames via reverse dns or will i need to write a custom module for that?
<rtheis> How do I request a revert of https://github.com/openvswitch/ovs/commit/b00bdc728e7a0ae697b4fc59a4f9958b688c6789 ?  networking-ovn gate is failing because ovs-bugtool.in fails flake8
<rtheis> sorry wrong channel
#cloud-init 2016-06-10
<waldi> okay, ephemeral disk setup is pretty broken on Azure. stuff from fstab is mounted way before cloud-init-config.service, which is needed to re-create the filesystem
<smoser> harlowja, https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-refactor/+merge/293957
<smoser> thats what i owe you right ?
<smoser> harlowja, why do you randomly change things sometimes. tox.ini at that mp
<harlowja> let me see, i think that smoser was also to fix the 26 stuff, there isn't much of a need to have a requirements.txt file and a separate list of requirements for 26
<harlowja> seeing as thats the point of a requirements file ?
<smoser> k
<smoser> we had listed the envs specifically as trusty (i think) does not support the newer tox
<smoser> that gleans the python version from the testenv name
<harlowja> testenv name still there
<harlowja> just not a special list of deps
<smoser> you yanked
<smoser> [testenv:py3]
<smoser> basepython = python3
<smoser> and you can't do that i don't think.
<harlowja> ok, that doesn't seem like it will hurt to put it back in, though that it was inferred
<harlowja> *thought
<smoser> only in newer tox
<harlowja> k
<harlowja> ok dokie smoser cleaned that up
<smoser> harlowja, how did you drop if PY26 in tests/unittests/helpers.py ?
<smoser> the TestCase define
<smoser> oh. i see . use unittest2
<harlowja> ya, unittest2 should handle all we need here
<harlowja> unless u want me to undo that?
<harlowja> but damn was that testcase nasty
<harlowja> lol
<harlowja> (custom testcase)
<harlowja> lol
<smoser> wonder who wrote that garbage
<harlowja> :-/
<harlowja> not it
<harlowja> lol
<harlowja> me of 2 years ago?
<smoser> it was probably 2014 harlow. man... *that* guy.
<harlowja> lol
<harlowja> ya, what a jerk
<harlowja> that old me
<harlowja> lol
<harlowja> the new jerk is much cooler
<harlowja> lol
<smoser> so to run test, we'll need python-unittest2
<smoser> right ?
<harlowja> right
<harlowja> should be everywhere, its not a new thing
<smoser> ./packages/bddeb is busted then. i'll fix that.
<smoser> i suspect that bdrpm is probably busted too
<smoser> do you actually need unittest2 on python3 ?
<smoser> i guess so.
<harlowja> no, prob not, but then we need a pip that can understand python version constraints :-P
<smoser> ?
<harlowja> there is a way to do unittest2 ; python_version < 2.6
<harlowja> but needs newer pip to understand that
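The environment-marker syntax harlowja is referring to looks like this in a requirements file (the exact version bound shown is illustrative; unittest2's backports are only needed on interpreters older than 2.7, and older pips don't understand markers):

```
unittest2 ; python_version < "2.7"
```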
<smoser> oh. that wasn't what i was asking i don't think
<smoser> is unittest2 what you get in python3's unittest ?
<smoser> and http://paste.ubuntu.com/17182223/ is what i have in my diff on your tree right now.
<smoser> the last 2 hunks just cause you'd done that in some tests too
<harlowja> smoser  yes, unittest2 is pretty much a backport lib
<harlowja> https://pypi.python.org/pypi/unittest2
<harlowja> but it should work on ' 2.6, 2.7, 3.2, 3.3, 3.4 and pypy.'
<harlowja> alot of these backport libs also internally just use the newer stuff on versions where they aren't needed
<harlowja> so it should be ok
<smoser> thats fine
<harlowja> http://paste.ubuntu.com/17182223/  looks ok, want me to add, or u?
<smoser> you can please
<harlowja> yes sir
<smoser> generally speaking i think i want to avoid 'six' if i can at all do that.
<smoser> in the net/ module
<smoser> as i want it to be exportable to curtin where we do not have a dependency on six
<harlowja> next change, next change :-P
<harlowja> six though should be ummm, pretty normal
<harlowja> i'd wonder why six is that big of a deal :-P
<harlowja> otherwise u just make mini-six
<harlowja> which i've seen alot, lol
<harlowja> (in other libs/...)
<smoser> well, heres the thing.
<smoser> curtin runs in an ubuntu cloud image back to precise (12.04)
<smoser> six in 12.04 was universe.
<smoser>  https://launchpad.net/ubuntu/+source/six
<smoser> this i think would be the first usage of universe package by curtin
<smoser> let me see for sure on that.. but even adding any dependency not in the image is less than ideal
<smoser> rharper, so the reason i bothered you in #curtin was usage of six in cloud-init/net
<smoser> https://server-team-jenkins.canonical.com/job/curtin-vmtest-venonat/266/artifact/output/PreciseHWETBcacheBasic/logs/install-serial.log
<smoser> that is a precise install
<rharper> y
<smoser> http://paste.ubuntu.com/17183262/
<smoser> that is the interesting piece for this conversation
<rharper> right. what's not in precise by default
<smoser> currently we are getting some things from universe
<smoser> gdisk and bcache-tools
<smoser> adding a dependency on six would be one more
<smoser> which is not really the end of the world.
<smoser> but precise is the only image supported that does not have python-six (or python3-six).
<rharper> curtin is six free, do we have a lot more use of six in cloud-init already ? I mean what's the impact ?
<smoser> well, if cloudinit/net adds six and cloudinit/net is used in curtin
<smoser> then curtin gains six dependency
<smoser> harlowja, ok. i think its sane
<smoser> i'm still not sold on six. as we're using it for a *very* minisix
<smoser> but sure
<harlowja> ya, its either that or for py2/py3 u make something minisix
<harlowja> either can be done, i've seen both, its not anything imposssible/that hard
<harlowja> just less crap to be written
<harlowja> lol
<harlowja> and i like less crap
<smoser> right
<smoser> harlowja, 'skip_first_boot'
<harlowja> ya
<harlowja> that's for testing
<harlowja> in that during testing and getting data from say a local file, don't mess with my networking, lol
<harlowja> although i can just change it to mock out 'on_first_boot' there
<smoser> i think i'd prefer that. it just looks odd as that function signature is different
<harlowja> kk
<smoser> i do understand the desire to not break your networking :)
<harlowja> :-P
<harlowja> ya, test starts messing with my networking == bad
<harlowja> lol
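Replacing the skip_first_boot flag with a mock, as agreed above, might look like this. The datasource class here is a made-up stand-in, not cloud-init's real one:

```python
# Sketch: instead of a skip_first_boot parameter in the signature,
# patch on_first_boot in the test so it never touches real networking.
from unittest import mock


class MyDataSource(object):
    # stand-in for a datasource whose first-boot hook reconfigures the net
    def on_first_boot(self):
        raise RuntimeError('would reconfigure networking!')

    def get_data(self):
        self.on_first_boot()
        return True


def test_get_data_without_network_side_effects():
    ds = MyDataSource()
    with mock.patch.object(ds, 'on_first_boot') as m:
        assert ds.get_data() is True   # no RuntimeError, no real side effects
    m.assert_called_once_with()


test_get_data_without_network_side_effects()
```

This keeps the production function signature clean while still letting the test run `get_data` end to end.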
<smoser> and grab http://paste.ubuntu.com/17185178/
<smoser> and i'm happy i think
<smoser> i hit approve, and commented. go ahead and do those things and then you can pull to trunk
<smoser> thank you josh.
<harlowja> no thank you
<harlowja> lol
<harlowja> smoser is the coolest person ever
<harlowja> lol
<smoser> e-ver
<rharper> e-vah!
<harlowja> amen
<harlowja> ?
<harlowja> lol
<harlowja> oh evar
<harlowja> got ya
<rharper> =)
<harlowja> also smoser another one u might be interested in (when running on a mac)
<harlowja> https://gist.github.com/harlowja/b7bbf4f18057b3668f16bbf49ccffe26
<harlowja> probably something small off there
<smoser> did i just regress that ?
<harlowja> unsure
<smoser> what is this /Users dir
<smoser> i dont have one of those
<harlowja> mac?
<smoser> nor a /var/folders
<smoser> i think your computer is broken
<smoser> maybe install Ubuntu
<harlowja> mac
<harlowja> lol
<harlowja> mac mac
<harlowja> mac
<harlowja> not that big of a deal, mac
<harlowja> lol
<harlowja> you'll be cool, mac
<smoser> i would be pretty cool if i had a mac and a handlebar mustache
<harlowja> lol
<smoser> Odd_Bloke, ^ would be nice if you could fix that test
<harlowja> not high priority (obviously)
<harlowja> damn macs
<harlowja> lol
<smoser> i'm not really sure what test_generate_certificate_uses_tmpdir is trying to prove
<harlowja> man  i keep on doing `git diff` in the cloudinit dir
<harlowja> smoser that's gonna be fixed over the weekend right?
<harlowja> lol
<harlowja> other ones that happen on centos i think are already known https://gist.github.com/harlowja/aa2b506c069fe874a51a774cd65a745c
<harlowja> smoser  ok, merging that refactor in
<harlowja> ok, https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-sysconfig/+merge/297115
<harlowja> eyes on that would be cool
<harlowja> or new tests
<harlowja> rharper  ^
#cloud-init 2016-06-11
<devster31> hi, I can't find this, if cloud-init failed the first boot does it get executed at the second boot?
<devster31> and if so, how can I prevent that?
#cloud-init 2017-06-05
<rharper> bug 1679817
<ubot5> bug 1679817 in cloud-init "dual stack IPv4/IPv6 configuration via config drive broken for RHEL7" [Medium,Fix committed] https://launchpad.net/bugs/1679817
<smoser> blackboxsw, https://gist.github.com/smoser/8904199bb8f00a90dd04
<smoser> powersj, https://git.launchpad.net/cloud-init
<tmbcsd> I'm having trouble finding documentation on how to reference meta-data variables in cloud-config scripts, or if that is definitively possible.  For example, refence the internal ip of the new instance from AWS meta-data.
<smoser> tmbcsd, its not really possible. it is a feature that would be good to have.
<smoser> on ec2... the easiest thing would probably be to
<smoser> ip=$(ec2metadata --local-ip)
<tmbcsd> @smoser thanks.  is that bash only, or will it work for #cloud-config scripts too?
<tmbcsd> i was curious about this as well, related to the metadata variable question: http://cloudinit.readthedocs.io/en/latest/topics/datasources.html
<smoser> tmbcsd, that is shell only.
<tmbcsd> hmm, cool I think i'll just write_files to some heredoc scripts then runcmd them
<tmbcsd> thanks again
<smoser> blackboxsw, http://paste.ubuntu.com/24788492/
<smoser> thoughts ?
#cloud-init 2017-06-06
<smoser> blackboxsw, https://code.launchpad.net/~smoser/cloud-init/add-gce-datasource/+merge/206070 because it's not a proposal to merge into master. that was me proposing a merge of a branch of mine into someone else's
<blackboxsw> smoser: yeah I can't do anything with that branch. makes sense why I wouldn't have perms. You'd have to drop it since it looks like it's already in.
<blackboxsw> https://code.launchpad.net/cloud-init/+activereviews
<blackboxsw> smoser: can you set Rejected on this one https://code.launchpad.net/~cloud-init-dev/cloud-init/trunk/+merge/296021
<smoser> blackboxsw, there is no more 'lp:' on https://code.launchpad.net/cloud-init/+activereviews
<blackboxsw> Thanks smoser.
<blackboxsw> so clean
<smoser> cpaelzer, https://code.launchpad.net/~paelzer/cloud-init/+git/cloud-init/+merge/314076
<smoser> can you set that to correct state and/or delete
<smoser> https://wiki.ubuntu.com/IRC/Bots
<rharper> smoser: http://paste.ubuntu.com/24796092/
<smoser> powersj, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/325192
<powersj> https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/build/562061/
<powersj> smoser: ^
#cloud-init 2017-06-07
<masber> good morning, is this the right place to talk about cloud-init? I have an issue I would like to discuss
<smoser> masber, it is the right place.
<smoser> but i'm about to go afk.
<masber> oh
<smoser> masber, i'm US/Eastern.
<masber> ok im Australia/Eastern
<smoser> file a bug and paste a link here https://bugs.launchpad.net/cloud-init/+filebug
<smoser> i'm going afk. good night.
<masber> ok good night
<dgarstang> Is there any way to pass the output of a script to the hostname to be set by cloud-init ?
<Hawson> Good $time_of_day everyone
<Hawson> I'm running version 0.7.5-10 (the version packaged with Centos 7), trying to provision some AWS instances.  The instances come from a custom AMI, but there are a few things that need to happen after the instance is launched.  Specifically, I'm trying to run some commands via "runcmd", but I can't find any evidence that they actually get run.
<powersj> rharper: https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/build/562061/
<Hawson> the /var/lib/cloud/instance/<instanceID>/scripts/runcmd file is properly created, and has the expected contents....
<rharper> thx
<Hawson> but it never seems to actually run.
<powersj> smoser: https://copr-be.cloud.fedoraproject.org/results/%40cloud-init/cloud-init/epel-7-x86_64/00562061-cloud-init/root.log.gz
<Hawson> grep -i runcmd /var/log/cloud*
<Hawson> also turns up nothing
<smoser> Hawson, hey.
<smoser> i suspect that the cloud-final systemd job is not running
 * Hawson tips his hat to smoser 
<Hawson> I checked that, actually
<rharper>  https://copr-be.cloud.fedoraproject.org/results/%40cloud-init/cloud-init/epel-7-x86_64/devel/repodata/repomd.xml
<Hawson> and it looks like it is...
<Hawson> systemd "says" it ran successfully....
<smoser> Hawson, i assume there isn't much in /var/log/cloud* right ?
<Hawson> correct, not much in the logs
<Hawson> Jun  6 21:55:08 localhost cloud-init: Cloud-init v. 0.7.5 running 'modules:config' at Tue, 06 Jun 2017 21:55:08 +0000. Up 9.64 seconds.
<Hawson> Jun  6 21:55:09 localhost cloud-init: Cloud-init v. 0.7.5 running 'modules:final' at Tue, 06 Jun 2017 21:55:09 +0000. Up 10.18 seconds.
<Hawson> Jun  6 21:55:09 localhost cloud-init: Cloud-init v. 0.7.5 finished at Tue, 06 Jun 2017 21:55:09 +0000. Datasource DataSourceEc2.  Up 10.39 seconds
<rharper> ImportError: No module named six
<rharper> https://copr-be.cloud.fedoraproject.org/results/%40cloud-init/cloud-init/epel-7-x86_64/00562061-cloud-init/build.log.gz
<Hawson> I should note that one of the scripts I want to run is expected to take many seconds (upto a minute or two) to complete
<Hawson> so for it to finish in about a quarter second is...suspicious. :)
<Hawson> bootcmd: operations *do* run, however
<powersj> smoser: rharper: this is the spec file I used to get those builds to go: https://git.launchpad.net/~powersj/cloud-init/tree/packages/redhat/cloud-init.spec?h=redhat-spec
<smoser> Hawson, /etc/cloud/cloud.cfg.d/05_logging.cfg
<smoser> in that file, can you comment out
<smoser> - [ *log_base, *log_syslog ]
<smoser> (just add a '#' at the beginning)
<smoser> then rm -Rf /var/lib/cloud* /var/log/cloud-init*
<smoser> and reboot
<Hawson> There's a stanza for "_log:"
<Hawson> do anything with that?
<smoser> at the bottom
<smoser> log_cfgs:
<smoser>  ...
<Hawson> _log: is at the top, actually.
<smoser> there should be a line like
<smoser> - [ *log_base, *log_syslog ]
<Hawson> the lines you are talking about are at the bottom
<Hawson> already commented out and bouncing
<smoser> Hawson, now i understand your confusion
<smoser> "at the beginning". i meant "at the beginning of the line"
<Hawson> I understand your confusion at my confusion. :)
<Hawson> Anyhow, host is up.  checking logs
<Hawson> interesting though--lots more output this time around.
 * Hawson thinks I should disable syslog in general...
<Hawson> not really much better.
<Hawson> better logs, but still not actually running the script.
<Hawson> a few lines about calling cc_runcmd, writing the actual file, but not actually running them
<Hawson> hmmm...
<Hawson> 2017-06-07 10:46:44,671 - importer.py[DEBUG]: Looking for modules ['cc_runcmd', 'cloudinit.config.cc_runcmd'] that have attributes ['handle']
<Hawson> 2017-06-07 10:46:44,671 - importer.py[DEBUG]: Failed at attempted import of 'cc_runcmd' due to: No module named cc_runcmd
<Hawson> 2017-06-07 10:46:44,672 - importer.py[DEBUG]: Found cc_runcmd with attributes ['handle'] in ['cloudinit.config.cc_runcmd']
<Hawson> is that....normal?
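(Those importer DEBUG lines are in fact normal: cloud-init tries each candidate dotted name in turn, logs the failures, and keeps whichever imports cleanly and exposes the needed attributes. A minimal illustrative stand-in for that lookup, not cloud-init's actual importer.py:)

```python
import importlib

def find_module(base_name, search_paths, required_attrs):
    """Try the bare name, then each search-path prefix; keep candidates
    that import cleanly and expose all required attributes."""
    candidates = [base_name] + ["%s.%s" % (p, base_name) for p in search_paths]
    found = []
    for name in candidates:
        try:
            mod = importlib.import_module(name)
        except ImportError:
            # e.g. the bare 'cc_runcmd' is not on sys.path -- logged, not fatal
            continue
        if all(hasattr(mod, attr) for attr in required_attrs):
            found.append(name)
    return found
```

So a failed bare import followed by a successful fully-qualified one is the expected pattern, not an error.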
<smoser> Hawson, there is a bug with systemd where cloud-init starts writing there before syslog is really "up", so it goes to /dev/null
<smoser> 'runcmd' is not the name of the module that runs that stuff.
<Hawson> systemd is stupidly broken?  I'm *SHOCKED*!  Shocked, I say.
<Hawson> that's just the module that writes out the script, it seems.... (looking at cc_runcmd.py now)
<Hawson> so that brings up the question:  what *does* actually execute the script?
<smoser> scripts-user
<Hawson> eh....okay.  I don't see any indication of scripts_user exec()ing scripts/runcmd
<Hawson> I _do_ see lines like:  "helpers.py[DEBUG]: Running config-scripts-per-once using lock...."
<Hawson> smoser: if scripts-user handles the actual execution, and runcmd merely generates a script, what am I missing to get one to talk to the other?
 * Hawson is somewhat confused about the execution flow here.
<Hawson> although...https://git.launchpad.net/cloud-init/tree/config/cloud.cfg  implies that runcmd happens in the 'config' stage, not final?
<smoser> Hawson..
<smoser> sorry.
<smoser> so the default config is that the 'runcmd' module runs earlier, but all it does is write files that are executed later by scripts-user
<Hawson> Okay.
<smoser> can you post /var/log/cloud-init.log and /var/log/cloud-init-output.log
<Hawson> So here's a dumb question.... :)
<Hawson> I have stanzas for cloud_init_modules:, cloud_config_modules: and cloud_final_modules:
<Hawson> none of them have runcmd in them....
 * Hawson inherited this setup from someone else
<smoser> Hawson, well, you need 'runcmd' to read your 'runcmd:' stanza and render that into /var/lib/cloud/instance/per-instance/... (i think thats the path)
<smoser> then you also need 'scripts-user'
<Hawson> yeah, that's the path
<smoser> or you wont *execute* what the other wrote
<Hawson> hmm.
<Hawson> this may be a timing issue then
<Hawson> https://pastebin.com/gBGMbEVL
<smoser> yep
<smoser> yeah, those are wrong order
<smoser> just move runcmd up
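(The ordering constraint being fixed here can be stated as a small check: 'runcmd', which renders the script, must appear in an earlier stage than 'scripts-user', which executes it. The stage names follow cloud.cfg; the helper itself is a toy sketch, not part of cloud-init:)

```python
# Stage lists in the order cloud-init runs them (as in cloud.cfg).
STAGES = ["cloud_init_modules", "cloud_config_modules", "cloud_final_modules"]

def runs_before(cfg, first, second):
    """True if module `first` runs strictly before module `second`."""
    def position(name):
        for stage_idx, stage in enumerate(STAGES):
            modules = cfg.get(stage, [])
            if name in modules:
                return (stage_idx, modules.index(name))
        return None
    p1, p2 = position(first), position(second)
    return p1 is not None and p2 is not None and p1 < p2
```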
<Hawson> Yep.
 * Hawson kicks off the AMI baking...
<Hawson> The *REALLY* annoying part of this is that what I'm trying to get running is a 100% useless waste of time. :-(
<Hawson> and I've spent a lot of time getting it working (not just with cloud-init)
<Hawson> ...we build an AMI with packer...that runs puppet...then calls cloud-init after boot.
 * Hawson looks around for more bailing wire and duct-tape
<smoser> Hawson, i'm sorry that 'runcmd' as the module doesn't actually do the running
<smoser> (that is very confusing...)
<Hawson> Heh, yeah.  That's kinda confusing.  :)
<Hawson> I do appreciate the help though.
<smoser> rharper,
<smoser> http://paste.ubuntu.com/24801194/
<powersj> https://copr-be.cloud.fedoraproject.org/results/%40cloud-init/cloud-init/epel-7-ppc64le/00562449-cloud-init/build.log.gz
<powersj> smoser: ^
<blackboxsw> %{?systemd_requires}
<blackboxsw> per https://fedoraproject.org/wiki/Packaging:Scriptlets
<Hawson> woohoo!
<Hawson> it's running this [CENSORED] script at boot.
<Hawson> awesome
<Hawson> kinda
<Hawson> the cloud-init bit is awesome.  The script is a steaming pile of [censored]
 * Hawson owes smoser a beer
<smoser> glad we could help.
<Hawson> I do suggest a bit of clarification in the docs though.
<rharper> https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/build/562456/
 * rharper crosses fingers
<rharper> smoser: http://paste.ubuntu.com/24802038/   specfile template fixes works for el6/el7
<powersj> rharper: https://docs.pagure.org/copr.copr/how_to_enable_repo.html#how-to-enable-repo
<rharper> https://copr.fedorainfracloud.org/coprs/alonid/yum-plugin-copr/
<powersj> yum copr enable @cloud-init/cloud-init
<rharper> fancy
<rharper> ok
<rharper> http://paste.ubuntu.com/24802167/
<rharper> that works in a fresh cent7 container
<Hawson> can cloud-init fire off a script "in the background"?
<Hawson> via runcmd?
<rharper> Hawson: I think typical bashism like disown would work (https://askubuntu.com/questions/611968/differences-between-command-disown-and-nohup-command-disown)
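(What disown/nohup accomplish can also be done from Python: start the child in its own session with output redirected, so it isn't killed when its parent -- here, the runcmd shell -- exits. A sketch, assuming a POSIX system:)

```python
import subprocess

def launch_detached(cmd, logfile):
    """Start cmd detached from the caller, roughly what
    `nohup cmd >log 2>&1 & disown` does in a runcmd entry."""
    with open(logfile, "ab") as log:
        return subprocess.Popen(
            cmd,
            stdout=log,
            stderr=log,
            stdin=subprocess.DEVNULL,
            start_new_session=True,  # new session: no controlling terminal
        )
```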
<Hawson> Hmm..."disown" is a new one to me
<smoser> rharper, https://code.launchpad.net/~akaris/cloud-init/+git/cloud-init/+merge/325186
<rharper> ack
<rharper> larsks: working on the redhat spec in trunk, and for rhel7 seeing this issue https://github.com/tony/tmuxp/issues/111  ;  it appears that setuptools injects 'argparse' in requires.txt but in rhel7 argparse is part of python2.7 already
<rharper> Is there a distro bug against setuptools? I've not found one yet; I'm thinking of sed'ing out argparse if we're el7 (or if python -c 'import argparse' returns 0)
<rharper> looks like it was still in our requirements; but we'll add it into the spec template only for el6 releases; then we can drop it from the requirements;  that should clean things up
<larsks> rharper: isn't argparse a part of python 2.7 everywhere?
<rharper> it isn
<rharper> sorry
<rharper> yes, it is
<rharper> we 've been dealing with 2.6
<rharper> we've got it worked out now;   no longer including it in requirements.txt, and then in specfile for systems using py26, to include the dep as a BuildReq and Req
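(The rule they settled on is simple enough to state in code: argparse entered the stdlib in Python 2.7, so the extra dependency is only declared for older interpreters. A toy version of that decision, not the actual spec template logic:)

```python
def extra_requires(python_version):
    """Return extra deps to declare for a given (major, minor) interpreter.
    argparse is stdlib from 2.7 on, so only py26 needs it spelled out."""
    extras = []
    if python_version < (2, 7):
        extras.append("argparse")
    return extras
```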
<larsks> Awesome.
<rharper> https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/
<rharper> smoser: should be merging the branch to trunk soon (fixing tox) and then we'll have daily trunk builds  up
<rharper> definitely look at the specfile template and push any other fixes or issues you see with it;
<rharper> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/325192
<larsks> I'll take a look!
<rharper> larsks: cool, thanks!
<rharper> smoser: https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/build/562649/
#cloud-init 2017-06-08
<smoser> blackboxsw, http://paste.ubuntu.com/24804339/
<smoser> rharper, where does the setuptools depends come from ?
<rharper> no where, we need to explicitly depend on it
<rharper> in el6, it gets pulled in via some other package dep, but since we BuildRequires it we should also just Requires it as well
<rharper> we had it fixed earlier today
<smoser> i dont know.. i dont think i ever had that in there.
<rharper> hrm
<smoser> i think that requirements.txt file is getting written with it in there.
<smoser> while we do not need it
<rharper> test-requirements.txt:setuptools
<rharper> not sure if that's getting picked up; in any case, we *do* need it since the loader requires it (/usr/bin/cloud-init calls it for pkg_resources)
<smoser> hm..
<smoser> test-requirements.txt says
<smoser> # Only really needed on older versions of python
<smoser> contextlib2
<smoser> setuptools
<rharper> I'm just grepping around, I don't know how we got setuptools into the previous build that worked, but it's not installed now via yum install cloud-init
<rharper> I thought we agreed it's a requirement as long as we're using setup.py
<smoser> no. i think that was pkg_resources
<smoser> i forget.
<smoser> what a pain
<rharper> pkg_resources comes from setuptools
<smoser> ok. i added that to spec
<smoser> and pushed
<smoser> rharper, ^
<rharper> k
<smoser> and with that... /me goes to bed.
<smoser> ci is happy with it now too.
<smoser> good night
<rharper> k
<rharper> https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init/build/562783/
<rharper> smoser: that's working on el6 and el7 =)
<smoser> rharper, i split into 2 merge proposals
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/325311
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/325192
<rharper> ok
<smoser> ie, cloud-config in one and redhat spec in another.
<rharper> ack
<smoser> one question... i dont know if we need the centos variant.
<smoser> do we?
 * smoser walks down
<rharper> yes, for the cloud-config
<rharper> we use the distro.variant to set the distro value as well as the default user name (and Gecos values)
<larsks> smoser: dumb question: should I expect cloud-init to handle network_data.json when booting with a network data source, rather than config drive?
<smoser> larsks, not yet
<smoser> really, really want to do that...
<smoser> but not at the moment :-(
<larsks> smoser: thanks, just wanted to make sure I wasn't crazy.
<smoser> larsks, the goal is to make the openstack datasource work like the digital ocean one does.
<smoser> we'd identify (via dmi data) that we are running on openstack.
<smoser> and then raise an interface with the link local address, and hit the metadata service
<smoser> get the network data
<smoser> take the nic down
<smoser> apply the network_data.json
<larsks> That makes sense.
<smoser> rharper,
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/325192
<smoser> addressed all your things there
<rharper> smoser: ok, trunk needs this http://paste.ubuntu.com/24809440/
<rharper> smoser: http://paste.ubuntu.com/24809461/  look ok ?
<smoser> powersj, http://paste.ubuntu.com/24809515/
<smoser> powersj, http://paste.ubuntu.com/24809682/
<powersj> rharper: https://paste.ubuntu.com/24810376/
<rharper> powersj: nice!
<smoser> larsks, we're trying to get spec file going in trunk. it builds happily now in copr and we get rpms out.
<smoser> hooray!
<smoser> but we would also like the things to actually *work*. :)
<smoser> and that is causing a problem in that right now we're not getting systemd stuff enabled in the spec file.
<larsks> smoser: "not getting enabled" == "you are running systemctl enable but service isn't starting"? Or something else?
<rharper> %define use_systemd (0%{?fedora} && 0%{?fedora} >= 18) || (0%{?rhel} && 0%{?rhel} >= 7) || (0%{?suse_version} && 0%{?suse_version} >=1210)
<rharper> so, in specfile, that sets a flag to use systemd based on the dist version
<larsks> Sure.
<rharper> I think we need that
<rharper> larsks: does that look sane  to add ?
<rharper> then we do an if %use_systemd and BuildRequires systemd
<smoser> http://paste.ubuntu.com/24811338/
<larsks> rharper: It seems like a reasonable idea.  Is it not working?
<smoser> spec ends up looking like ^
 * larsks looks
<rharper> larsks: using what smoser posted
<rharper> but I thought that rhel7 builder chroots would have systemd in it by default
<rharper> if we switch to my post, then we need to buildrequires systemd
<rharper> based on version, I think we'll need to do that
<powersj> COPR builds pushed by jenkins: https://jenkins.ubuntu.com/server/job/cloud-init-build-rpm/1/console I'll do the testing of the rpm's tomorrow.
<larsks> rharper: I can take a closer look later this evening; kids go to sleep by 9pm (us/eastern).
<rharper> larsks: sure, thanks
<rharper> powersj: \o/
<larsks> Will you be around then? And/or are you using a spec file different from what smoser posted?
<rharper> I'll be out though
<smoser> larsks, well ^ is in trunk now.
<smoser> you can 'make srpm'
<larsks> Cool.
<larsks> Will do.
<rharper> PYVER=python2 make srpm
<rharper> one of the two
<smoser> it seems odd to me that moving from cent6 to cent7 people would have to change every package that had systemd scripts
<smoser> i guess maybe over time in fedora that just happened.
<smoser> blackboxsw,
<smoser>     return json.dumps(data, indent=1, sort_keys=True,
<smoser>                       separators=(',', ': ')).encode('utf-8')
<smoser> format the json with that json.dumps
<blackboxsw> thanks
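(The snippet smoser pasted, wrapped in a function so its output can be checked; `render_json` is just a convenience name here:)

```python
import json

def render_json(data):
    # indent=1 plus explicit separators gives stable, sorted,
    # diff-friendly output, encoded as UTF-8 bytes
    return json.dumps(data, indent=1, sort_keys=True,
                      separators=(',', ': ')).encode('utf-8')
```

For example, `render_json({'b': 1, 'a': 2})` yields `b'{\n "a": 2,\n "b": 1\n}'`.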
<rharper> smoser: larsks: this looks to work http://paste.ubuntu.com/24811524/
<rharper> re: specfile updates
#cloud-init 2017-06-09
<larsks> rharper: that looks better, yes.
<larsks> rharper: although I note that I get KeyError: 'r' when trying to run 'make srpm'...
<larsks> ...because I needed PYVER=python2, right.
<larsks> Although: I would call it a bug if something declares "template: cheetah" and we continue without producing an error message...
<rharper> larsks: thanks for the feedback, we're looking to drop cheetah as a template (there is some code which does fallback if you don't have cheetah but those messages go to logging, which isn't configured for the tool)
<rharper> so you're right w.r.t the error; and we're looking to drop cheetah soon
<blackboxsw> hiya larsks, do you have any idea what meta package I'd need in centos/7 to set up repositories which would give me access to download the python3 system package for oauthlib? I've got epel-release installed, but yum search is coming up empty.
<blackboxsw> sorry larsks, I should introduce myself. I'm Chad Smith, a recent join to Canonical's cloud-init/curtin team.
<larsks> blackboxsw: hi there!
<larsks> I know that oauthlib is not available in RHEL.  It may not be in CentOS either.  Let me look.
<larsks> blackboxsw: yeah, there is no generally available oauthlib package.
<blackboxsw> ok thanks for the confirmation larsks.
<larsks> It is available in the openstack repositories (e.g., centos-release-openstack-newton), but you wouldn't normally expect those to be enabled where people are running cloud-init.
<rharper> http://oauthlib.readthedocs.io/en/latest/installation.html  says it has a python3 oauthlib
<rharper> for redhat
<rharper> but that doesn't seem right, I would expect python34 or something
<larsks> rharper: we actually include a patch in our cloud-init package specifically to fail gracefully if oauthlib is missing.
<rharper> ah, ok
<larsks> (we catch the importerror, and then raise an exception later on if something actually tries to call the oauth_headers method)
<rharper> we probably want that then
<larsks> http://chunk.io/f/c756394a7c91432f8a92fdbad2682fda
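(The pattern larsks describes: tolerate a missing optional dependency at import time and only fail when the feature is actually used. A generic sketch; the module name here is a placeholder, not a real package:)

```python
try:
    import some_optional_dep  # hypothetical optional dependency
except ImportError:
    some_optional_dep = None

def use_optional_feature():
    """Raise only when the feature backed by the optional dep is called."""
    if some_optional_dep is None:
        raise NotImplementedError("optional dependency is not installed")
    return some_optional_dep.do_thing()
```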
<rharper> cool
<blackboxsw> thanks larsks that works for me. I was just working on setting up a make ci-deps-centos target and I didn't want to rule out installing any dependencies I could get together if they are reasonably available for setting up our test environments.
<blackboxsw> so not having python3-oauthlib isn't a deal breaker for our CI
<larsks> fwiw, it's a good idea not to depend on anything not available in the stock centos repositories.  E.g., don't rely on EPEL.
<blackboxsw> +1
<rharper> https://bugzilla.redhat.com/show_bug.cgi?id=1417025
<ubot5> bugzilla.redhat.com bug 1417025 in cloud-init "cloud-init tries to run hostnamectl before dbus is up" [Unspecified,Closed: rawhide]
<rharper> that's going to be "fun"
<larsks> rharper: looks like that was fixed.
<rharper> larsks: ok;  I was looking at ubuntu and our dbus.service doesn't depend on After=sysinit.target ;  was going to test with that
<larsks> rharper: I finally threw up my hands and completely abandoned the systemd units in the source tree.
<rharper> =/  ok, I can look at the changes;  we may need to template the units as well
<larsks> There were too many weird edge cases where ubuntu and redhat had different dependencies.
<rharper> yeah
<larsks> Well, I had another suggestion:
<larsks> Using systemd drop-ins for the distribution specific stuff.
<rharper> ah, yes
<rharper> that is a good idea
<rharper> I need to play with those some more
<larsks> The model would be: cloud-init units only have dependencies on other cloud-init units, and dependencies on anything else would be configured in drop-ins.
<rharper> yes, that makes sense
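(A drop-in is just an extra unit-file fragment under `<unit>.d/`. A purely illustrative helper showing the shape a distro-specific fragment might take, e.g. written to a hypothetical `/etc/systemd/system/cloud-init.service.d/10-distro-deps.conf`:)

```python
def render_dropin(after_units):
    """Render a systemd drop-in adding After= ordering dependencies,
    leaving the shipped unit file untouched."""
    lines = ["[Unit]"] + ["After=%s" % unit for unit in after_units]
    return "\n".join(lines) + "\n"
```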
#cloud-init 2018-06-04
<smoser> blackboxsw: if you had a minute
<smoser>   - bug 1770712 fixes for ubuntu package branches.
<ubot5> bug 1770712 in cloud-init (Ubuntu Cosmic) "It would be nice if cloud-init provides full version in logs" [Medium,Confirmed] https://launchpad.net/bugs/1770712
<smoser>     devel https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347380
<smoser>     bionic https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347381
<smoser>     artful https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347382
<smoser>     xenial https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347384
<smoser> that'd be good to land into cosmic today... just get those all in and an upload would be good.
<blackboxsw> +1
<blackboxsw> will do
<blackboxsw> smoser: we're good on the devel portion of those branches, because PACKAGED_VERSION exists in cloudinit/version.py. in bionic and older branches we haven't yet pulled back 5446c788160412189200c6cc688b14c9f9071943
<blackboxsw> shouldn't we pull that back too ?
<blackboxsw> I realize the packaging change doesn't break currently, but it also doesn't do anything yet
<smoser> blackboxsw: well, the next new-upstream-snapshot will get it
<smoser> so at this point it is just "staged".
<smoser> you're correct though in that it basically adds dead code.
<smoser> (the daily builds *would* have it )
<blackboxsw> ok just wanted to make sure this was intended; they are decoupled from each other kind of, so I wanted to confirm that we are staging it and know that we are not yet expecting to report the full pkg version number in <= Bionic. ok I'm good
<blackboxsw> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347380 with nit on UNRELEASED -> cosmic
<blackboxsw> going through the rest now
<blackboxsw> to approve
<smoser> blackboxsw: if i uploaded to cosmic i think i'd just do a new-upstream-snapshot
<smoser> which would then dtrt
<blackboxsw> +1 good deal
<smoser> so same there... we're just "staging" a change basically. i think that is generally the flow we'd have on all changes to the packaging branches.
<blackboxsw> smoser: want me to queue a new-upstream snapshot then for cosmic.
<blackboxsw> ?
<blackboxsw> and you can merge in your existing branches?
<smoser> you can if you'd like. or i can just do it.
<smoser> i'll pull existing.
<smoser> merged devel
<blackboxsw> ok putting up MP
<smoser> i'll grab the others too
<smoser> ok. all packaging branches have it now.
<blackboxsw> smoser: testing this now https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347396
<blackboxsw> I didn't push the tag
<blackboxsw> ok package built and proper _package_version is showing up
<smoser> blackboxsw: doing 'build and push' so that will land in cosmic-proposed shortly
<blackboxsw> rharper: have 5 mins to chat about https://trello.com/c/Uk7OA71K/798-cloud-provided-network-configuration-openstack-azure-aws ?
<blackboxsw> specifically the azure portion
<rharper> blackboxsw: here now;  still need to chat?
<blackboxsw> cool rharper yeah just for a couple mins
<rharper> ok, lemme get setup
<blackboxsw> headphone trouble here. coming
<blackboxsw> rharper: lost you at 'wouldn't really'
<vila> hi there,
<vila> I'm encountering an issue that is hard to debug :-/
<vila> A few months ago I did install cloud-init on scaleway images (17.1 was the cloud-init version) then.
<vila> Booting from these images worked fine.
<blackboxsw> https://bugs.launchpad.net/cloud-init/+bug/1775074 filed.
<ubot5> Ubuntu bug 1775074 in cloud-init "collect logs: grab /var/lib/cloud data files" [Medium,Confirmed]
<blackboxsw> hi vila, yeah just explain the prob as best you can, maybe someone can help
<vila> I'm now using the exact same scripts to install cloud-init (18.2) on more recent images and things break:
<vila> 2018-06-04 19:37:35,378 - stages.py[DEBUG]: cache invalid in datasource: DataSourceScaleway
<vila> 2018-06-04 19:37:35,378 - handlers.py[DEBUG]: finish: init-local/check-cache: SUCCESS: cache invalid in datasource: DataSourceScaleway
<vila> on top of that, the run finished by creating /var/lib/cloud/instance/boot-finished at a time when /var/lib/cloud/instance does not exist (i.e. the '/var/lib/cloud/instance' symlink), so a dir is created instead
<vila> further runs of cloud-init then fail because they can't delete the dir (a symlink is expected)
<vila> any hints on how such issues can be debugged highly appreciated
<blackboxsw> vila: hrm looking. That specific log message on "cache invalid" means that the datasource will attempt to re-run metadata collection because it appeared that the instance cache was invalid (and needed a refresh).
<vila> blackboxsw: Where and how is the cache said to be invalid ?
<vila> blackboxsw: I was able to unpickle it from python
<blackboxsw> specifically in /usr/lib/python3/dist-packages/cloudinit/stages.py
<blackboxsw> it checks datasource.check_instance_id
<vila> yeah, opened already
<blackboxsw> in Scaleway it looks like that always returns False *I think*
<blackboxsw> which means always re-run get-data
<vila> when I unpickled it, I check hasattr(ds, 'check_instance_id')
<vila> but I was unclear about inferring self.cfg
<vila> I guessed it was the instance-id and as far as I could see it was the same instance (I did a reboot)
<vila> blackboxsw: I have a vague feeling it may be related to /var/lib/cloud/instance being deleted at that point but I don't know where to look for that
<blackboxsw> from the base class cloudinit/sources/__init__.py:DataSource.check_instance_id  is just a dummy function returning False and I don't see that Scaleway is overriding that
<vila> nope indeed
<vila> and it used to work
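(A toy model of the cache check being discussed: the base DataSource's `check_instance_id` returns False, so the cache is never trusted, while a datasource that can read a stable id can override it. The class names and placeholder id below are illustrative only:)

```python
class DataSource:
    def check_instance_id(self, sys_cfg):
        # base-class dummy: cache always treated as invalid, re-run get-data
        return False

class StableIdDataSource(DataSource):
    def __init__(self, cached_instance_id):
        self.metadata = {"instance-id": cached_instance_id}

    def read_platform_instance_id(self):
        # stand-in for however the platform exposes its current id
        # (DMI data, a metadata service, ...)
        return "i-1234"

    def check_instance_id(self, sys_cfg):
        # cache is valid only if the platform's id matches the pickled one
        return self.read_platform_instance_id() == self.metadata["instance-id"]
```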
<vila> let's rewind a bit maybe
<vila> starting from a booted image, I run apt-get install cloud-init --no-install-recommends
<blackboxsw> please do.
<vila> anything I need to do for c-i to behave at next boot (same instance or different one)
<blackboxsw> A way to test a clean boot scenario from cloud-init would be 'sudo cloud-init clean --logs --reboot', which performs a 'greenfield' install as if the system had never run cloud-init before
<vila> \o/
<blackboxsw> it's what we use for upgrade testing and fresh boot validation
<blackboxsw> that will blow away /var/log/cloud* /var/lib/cloud/* with the exception of a /var/lib/cloud/seed  subdir if applicable (as that seeds some metadata on some clouds)
<vila> blackboxsw: done
<vila> cloud-init analyze show
<blackboxsw> http://cloudinit.readthedocs.io/en/latest/topics/capabilities.html#cloud-init-collect-logs for more details
<blackboxsw> yeah analyze show is good for quick inspection of what cloud-init performed.
<vila> says no cache found but fails to find the scaleway ds
<vila> and cloud-init.log shows the scaleway datasource is not seen nor used
<blackboxsw> cloud-init status --long?
<vila> detail:
<vila> ('ssh-authkey-fingerprints', KeyError('getpwnam(): name not found: ubuntu',))
<vila> but that's a fallout from not using the ds and not finding the user-data
<blackboxsw> hrm, ok so we have a couple errors looks like
<blackboxsw> right
<blackboxsw> hmm
<vila> so, red herring
<vila> why is the datasouce missed ?
<vila> I have datasource_list: [ Scaleway, None]
<vila> and disable_root: false
<vila> (right, forgot to mention that I added the latter because scaleway's default login is root)
<vila> which was a first hint that things behave differently
<blackboxsw> that's good at least. mind doing a 'sudo cloud-init collect-logs' and sending an email to chad.smith@canonical.com. I can glance at it quickly here
<blackboxsw> collect-logs will dump cloud-init.tar.gz in your cwd
<blackboxsw> it'll contain all logs and potentially your user-data
<vila> no worries, nothing secret there
<blackboxsw> good deal
<vila> sent
<blackboxsw> checking thanks
<vila> I've tried various workflows giving different results; for example, after installing, running 'systemctl start cloud-final && cloud-init status --wait' finds the datasource and processes properly, but the next boot fails
<vila> right now, for the logs I sent, I have a broken /var/lib/cloud/instance (a dir rather than a symlink)
<blackboxsw> vila: hrm, normally in cloud-init logs I am accustomed to seeing init-local stage, then init, then modules:config, but your logs skip the 'init' stage
<blackboxsw> can you 'cloud-init analyze show | pastebinit'
<vila> https://paste.ubuntu.com/p/ZwG4BnnJRd/
<blackboxsw> normally I'd see a Starting stage: init-network after init-local and before modules-config
<blackboxsw> hrm
<blackboxsw> can you cat /etc/cloud/cloud.cfg | pastebinit
<vila> https://paste.ubuntu.com/p/Gb6Mzxms9q/ <- that one worked
<vila> like ~2 hours ago
<vila> blackboxsw: /etc/cloud/cloud.cfg is untouched, but I add:
<vila> cat <<EOC > /etc/cloud/cloud.cfg.d/99_byov.cfg
<vila> # Generated by byov at $(date)
<vila> datasource_list: [ Scaleway, None]
<vila> apt_preserve_sources_list: true
<vila> disable_root: false
<vila> EOC
<vila> https://paste.ubuntu.com/p/zVQDMtmG65/ <-  cat /etc/cloud/cloud.cfg
<vila> but yeah, it's the missing init-network that I'm tracking indeed
<blackboxsw> interesting. so your cloud-init.log mentions 2018-06-04 20:48:32,448 - __init__.py[DEBUG]: Searching for local data source in: []
<blackboxsw> that list should have represented Scaleway in it
<blackboxsw> something is modifying that datasource list
<vila> exactly, sometimes it's there sometimes it's not
<blackboxsw> any other files in /etc/cloud/cloud.cfg.d ?
<vila> and it seems the invalid cache somehow marks the datasource entirely wrong
<blackboxsw> like /etc/cloud/cloud.cfg.d/90_dpkg.cfg ?
<vila> blackboxsw: nope, I used to have datasource_list: [ NoCloud, OpenStack, Scaleway, None] when that was working
<vila> right, that one is overridden... oh, let me check
<vila> nope, standard content:
<vila> # to update this file, run dpkg-reconfigure cloud-init
<vila> datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, None ]
<vila> and scaleway is there
<blackboxsw> ok that's good. yeah and your ds-itentify.log in the cloud-init.tar.gz also shows ds-identify properly detected Scaleway as an option
<vila> Also, I could find when ds-identify is run, but I noticed it's run more than once in some scenarios
<blackboxsw> ds-identify rather
<vila> Also, I could NOT find when ds-identify is run, but I noticed it's run more than once in some scenarios
<vila> so, I keep 'cloud-init clean --logs --reboot' in my notes for the future, but it failed here
<vila> which reproduces my issue so it's still a good recipe, but it doesn't give the result you expected I think
<blackboxsw> vila: on the failed case, 'systemctl list-dependencies | grep cloud' this is what I see http://paste.ubuntu.com/p/TQM8RWbwkP/
<blackboxsw> I'd expect a cloud-init.service job/unit listed in systemd. it's what runs 'cloud-init init', which is the network stage that we are missing in your failed case
<blackboxsw> not sure I'm going down a rat hole there
<blackboxsw> not sure *if*
<vila> blackboxsw: right, I can rebuild the instance without installing cloud-init and restart from there may be ?
<vila> blackboxsw: once cloud-init is installed, I'm running https://paste.ubuntu.com/p/xtGNYpfzZT/
<blackboxsw> sounds like a good plan, installing the new cloud-init deb in your environment after the fact should take care of creating the right systemd generators to queue cloud-init stages during boot (if something got mangled across the upgrade path)
<blackboxsw> vila: running cloud-init init-local; and cloud-init init 'naked' outside of the standard boot process on an instance is not exactly what we intended (and could be rife with some error conditions)
<vila> blackboxsw: I used to run nothing and all was rosy ;-)
<blackboxsw> yeah nothing is what we hope is always rosy (and intended).    Just booting normally should take care of ordering all cloud-init stages appropriately (including module configuration etc).
<vila> blackboxsw: what *is* the intended workflow ? install, save image, boot ?
<blackboxsw> yes vila, boot clean image, install cloud-init, power off, copy clean image, let cloud-init boot in user-configured environment to collect and config based on metadata/user-data
<blackboxsw> trying to look more at your latest paste
<vila> damn it, that's what I did, and I thought maybe I missed a step
<blackboxsw> vila, yeah something smells funky (I don't have a scaleway acct unfortunately). I'll try to bisect the diffs on the Scaleway datasource from 17.1 -> 18.2; I didn't think we had anything significant in that upgrade path other than some exception handling changes on url retries in that space
<vila> blackboxsw: yup, went there, saw that, couldn't find a link either (but I'm not the expert ;-)
<blackboxsw> I'd like to see a /var/log/cloud-init.log in the case where cloud-init was upgraded and only a reboot run. (not manual run of cloud-init init --local and 'cloud-init init').
<vila> just got the instance without cloud-init installed
<vila> so, apt-get cloud-init --no-install-recommends
<vila> *install
<vila> reboot
<vila> https://paste.ubuntu.com/p/HDhH8hWpmd/
<vila> yet https://paste.ubuntu.com/p/z6wXtF6Jtx/
<vila> blackboxsw: you said "in the case where cloud-init was upgraded" s/upgraded/installed/ otherwise, nothing from my script
<blackboxsw> ok so Scaleway is ordered before Ec2, Ec2 considered maybe; that shouldn't break anything specifically for Scaleway's datasource. reading your cloud-init.log
<blackboxsw> 2018-06-04 21:31:31,651 - stages.py[DEBUG]: no cache found    === fresh boot, no cruft from previous around
<vila> right, so the cache itself is not the root cause, well done
<blackboxsw> line 74 of your first paste is showing us we are properly attempting to discover Scaleway (and many other datasources) in python (instead of ds-identify, which is just a shell script, for speed)
<vila> and https://paste.ubuntu.com/p/PJ2C5tQsSJ/ should cover all the datasource inputs
<blackboxsw> line 78 rather
<blackboxsw> ohh wait
<vila> right
<blackboxsw> no Scaleway in line 78
<vila> while still there in line 77
 * vila thinks
<vila> could it be that /var/run/scaleway is created too late (aka race ?)
<blackboxsw> I had thought Scaleway datasource was defined as FILESYSTEM only. checking the DataSourceScaleway.py again
<blackboxsw> my bad
<blackboxsw>     (DataSourceScaleway, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
<blackboxsw> that means the Scaleway datasource is init-network stage detected only ... ok so we expect it to be filtered out of the init-local stage
<blackboxsw> ok so we're still good in init-local stage (not detecting scaleway)
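(The filtering just walked through can be sketched as a set check: each datasource declares the deps it needs, and a stage only offers sources whose deps are all available there. A toy version with just two sources listed:)

```python
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"

# (name, required deps) pairs, mirroring the datasource_list tuples
DATASOURCES = [
    ("DataSourceNoCloud", (DEP_FILESYSTEM,)),
    ("DataSourceScaleway", (DEP_FILESYSTEM, DEP_NETWORK)),
]

def list_sources(available_deps):
    """Sources usable in a stage offering exactly `available_deps`."""
    return [name for name, deps in DATASOURCES
            if set(deps).issubset(available_deps)]
```

So init-local (filesystem only) correctly filters Scaleway out, and init-network (filesystem + network) is where it should appear.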
<blackboxsw> but init-network (otherwise invoked via the CLI as 'cloud-init init') should not be skipped,
<blackboxsw> that's what should have detected scaleway....
<blackboxsw> reading down past init-local in your cloud-init log now. sorry for the noise
<vila> no no ! very helpful
<vila> (and entertaining ;)
<blackboxsw> 2018-06-04 21:31:32,137 - handlers.py[DEBUG]: finish: init-local: SUCCESS: searching for local datasources
<blackboxsw> 2018-06-04 21:31:34,280 - util.py[DEBUG]: Cloud-init v. 18.2 running 'modules:config' at Mon, 04 Jun 2018 21:31:34 +0000. Up 16.05 seconds.
<blackboxsw> heh
<vila> (don't get derailed but line 136 : 2018-06-04 21:31:31,963 - util.py[DEBUG]: dmi data /sys/class/dmi/id/sys_vendor returned Scaleway)
<vila> There is a comment that dmi is not implemented IIRC...
<vila> * check DMI data: not yet implemented by Scaleway, but the check is made to
<vila>       be future-proof.
<blackboxsw> vila: what's systemctl list-dependencies | grep cloud tell you?
<vila> https://paste.ubuntu.com/p/7SSDVB7ZsG/
<blackboxsw> that's ok on the dmi read, as it was something cloud-init did to determine that it's not running on DigitalOcean.
<vila> ha
<blackboxsw> meh. something is causing cloud-init to skip init-network stage in that environment. (like a systemd job falling over maybe?) I see no tracebacks indicating why that is skipped. lemme see if I can dig up the format of the systemd job
<blackboxsw> do you have a /lib/systemd/system/cloud-init.service  ?
<vila> yes
<vila> https://paste.ubuntu.com/p/JgDkpMCy5v/
<blackboxsw> bah. ok I think we need a bug here. I'll have to get a scaleway account setup to check it out. nothing should have changed w.r.t. 17.1->18.2 and the systemd startup jobs/units. but skipping init-network stage is broken and that's why things are falling over.  I'll have to get a scaleway acct setup to triage more
<blackboxsw> what ubuntu release was this instance?
<vila> xenial
<blackboxsw> bionic? xenial?
<blackboxsw> ok
<vila> blackboxsw:
<blackboxsw> would you kindly 'ubuntu-bug cloud-init' vila and file a bug per instructions?
<blackboxsw> it'll dump your collect-logs output into a bug attachment
<vila> blackboxsw: from inside the instance ?
<vila> -bash: ubuntu-bug: command not found, installing
<blackboxsw> vila: yes please (if it has outbound connectivity). otherwise  you could just file a bug at https://bugs.launchpad.net/cloud-init/+filebug and attach the cloud-init.tar.gz from your latest run to the bug
<blackboxsw> all ubuntu-bug does is ask a question or two about your cloud platform and collate that data when filing output from 'sudo cloud-init collect-logs'
 * vila installs apport
 * vila thinks about giving access... should be a matter of adding an ssh key on my account ?
<blackboxsw> yeah in the nearterm your sudo cloud-init init --local; sudo cloud-init init;  sudo cloud-init modules --mode config,  sudo cloud-init modules --mode final    I *think* should get you 90% of the way there
<blackboxsw> vila: right you could run 'ssh-import-id chad.smith' on the instance, then I'd be able to login as whatever user you ran that under
<vila> blackboxsw: root ! what else ? :-D
<blackboxsw> hah! but that said, I'm going to have to disappear shortly so I may not get to it until tomorrow morn my time
<blackboxsw> <--- and file your bank acct and social security number here ;)
<blackboxsw> it may be good to have a reference bug so the others on the team can peek at the triage/response too
<vila> hehe
<vila> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1775086
<ubot5> Ubuntu bug 1775086 in cloud-init (Ubuntu) "cloud-init fails to recognize scaleway" [Undecided,New]
<blackboxsw> thanks again. vila I have to bail for a while. will check it out
<vila> blackboxsw: thanks to you, at least I'm not mad and something is going on that is worth fixing ;-)
<vila> blackboxsw: if only for /var/lib/cloud/instance being a dir...
<blackboxsw> thx vila. on the instance-being-a-dir issue I'll track a separate bug on 'cloud-init collect-logs' cmd being more resilient to failure cases
<blackboxsw> added that content to https://bugs.launchpad.net/cloud-init/+bug/1775074
<ubot5> Ubuntu bug 1775074 in cloud-init "collect logs: grab /var/lib/cloud data files" [Medium,Confirmed]
<blackboxsw> will try to kill 2 birds with 1 stone there
<blackboxsw> vila: comment for you on your bug. ok systemd has removed the cloud-init.service job for some reason and I need to dig into why
<vila> haaaa
#cloud-init 2018-06-05
<blackboxsw> rharper: looks like you can add public ips without restarting instances in azure, so there's a gap with a cold-plug only solution
<blackboxsw> ... and they don't let you assign multiple public ip addresses to a single nic
<blackboxsw> examples https://hackmd.io/aODzXfa_TOikNtYBLt8erA
<blackboxsw> but, having said that, the public ip isn't represented anyway via dhcp responses on azure, it's all private anyway
<smoser> blackboxsw: yeah, so new solution there is as good as old ifconfig udev solution
<smoser> err.. ifupdown
<smoser> vila: were you able to reproduce that /var/lib/cloud/instance/ being a dir ?
<smoser> i have seen it but looked at code and did not ever understand how it got that way
<smoser> (other than the fact that it was probably created by ensure_dir)
<smoser> write_file -> ensure_dir
<vila> smoser: blackboxsw is to blame for the scenario understanding ;-)
<vila> https://bugs.launchpad.net/cloud-init/+bug/1775074 explains some bits
<ubot5> Ubuntu bug 1775074 in cloud-init "collect logs: grab /var/lib/cloud data files" [Medium,Confirmed]
<vila> err, wrong one
<vila> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1775086
<ubot5> Ubuntu bug 1775086 in cloud-init (Ubuntu) "cloud-init fails to recognize scaleway" [Undecided,Invalid]
<vila> smoser: with cloud-init.service removed by systemd, instance is not created (where exactly I don't know)
<vila> smoser: that's still a bit vague but I would never have figured it out alone ;-)
<vila> smoser: so basically can be reproduced by removing/disabling cloud-init.service ?
<smoser> vila: yeah, i guess that could do it. thanks.
<rharper> smoser: standup ?
<smoser> oh. sure.
<blackboxsw> cloud-init pulling data from azure's IMDS service https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/341662
<blackboxsw> oops bad paste cloud-init pulling data from azure's IMDS http://paste.ubuntu.com/p/dxQmhV5Fch/
<blackboxsw> now cobbling up network v2 for the datasource
#cloud-init 2018-06-06
<smoser> interesting network config openstack change
<smoser>  https://review.openstack.org/#/c/312626/
<blackboxsw> nice reference smoser
<blackboxsw> some interesting cases in Azure IMDS, the metadata service provides no way to distinguish between static IPs and dynamic IPs. so cloud-init can't really know what to set as dhcp versus static.
<blackboxsw> examples in my hackmd doc from earlier
<blackboxsw> and running dhcp on eth1 gets only one private addr.
<smoser> hm..
<dgautam> I am facing a util.py[DEBUG]: "failed stage init-local" error. it is failing on the bond-master keyword. any pointers ?
<dgautam>   File "/usr/lib/python2.7/site-packages/cloudinit/net/sysconfig.py", line 455, in _render_bond_interfaces
<dgautam>     iface_master_name = iface['bond-master']
<dgautam> KeyError: 'bond-master'
<smoser> dgautam, where is this ?
<smoser> i'm guessing this is centos based on 2.7
<smoser> and likely fixed in upstream
<smoser> you could check with a copr repo
<dgautam> cloudinit.log , I launched a centos 7.5 baremetal image
<dgautam> I am using cloud-init 0.7.9
<smoser> https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/
<dgautam> smoser: Thanks . I'll try the patch or upgrade cloud-init package to latest
<smoser> dgautam, that is a daily build... so it will move.
<smoser> i suspect https://copr.fedorainfracloud.org/coprs/g/cloud-init/el-stable/ will be good enough for you.
<smoser> and won't move without us doing some reasonable level of testing (which... means that it is quite old)
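For context on the traceback pasted above: the bond renderer does `iface['bond-master']` on every interface in the bond path, but a bond *master* stanza has no such key. A hedged sketch of the kind of guard that avoids the KeyError (this is not the actual upstream patch, just an illustration with a hypothetical helper name):

```python
# Hedged sketch, not cloud-init's real sysconfig renderer: guard the
# 'bond-master' lookup with dict.get so a master stanza (which lacks
# the key) doesn't raise KeyError during rendering.
def render_bond_slave(iface):
    master = iface.get('bond-master')
    if master is None:
        return None  # this iface is not a bond slave; nothing to do
    return 'MASTER=%s\nSLAVE=yes' % master

# A master stanza is skipped; a slave stanza renders its MASTER line.
assert render_bond_slave({'name': 'bond0'}) is None
print(render_bond_slave({'name': 'eth0', 'bond-master': 'bond0'}))
```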
<blkadder> Hi all... May have nothing to do with cloud-init but thought I'd ask: I am calling hostnamectl set-hostname which works when instantiating hosts and it survives a single reboot (Ubuntu 16.04/AWS). I've noticed though that it often loses the hostname on subsequent reboots and reverts to ip-whatever as the host name. Any suggestions on where to look?
<rharper> blkadder: what does /etc/hostname say  ?
<rharper> before and after each step ?
<blkadder> Well, when it is working the correct host name.
<blkadder> When it is rebooted ip-xxx-xxx-xxx-xxx
<blkadder> So the file itself is changing.
<blkadder> Don't know if this is cloud-init, systemd weirdness or something else...
<blkadder> Heck could be AWS for all I know...
<blkadder> The weird thing is that with the vast majority of my cloud-init stuff, I do a reboot which works fine (keeps host name). It's subsequent reboots where it loses the name.
<rharper> well, cloud-init will set the hostname to the hostname value in the instance metadata;
<blkadder> Which it does.
<blkadder> Let me dig around there for a bit.
<blkadder> Thanks.
<smoser1> cloud-init is not supposed to set it if you have changed it.
<blackboxsw> if cloud-init were changing hostname you will see the following in the cloud-init.logs timestamped after your reboot: " Setting the hostname to <someamazingname>"
<blackboxsw> the log I referenced is /var/log/cloud-init.log
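For reference on the hostname thread above: if the log does show cloud-init resetting the hostname each boot, the documented knob to stop that is `preserve_hostname` in cloud-config (the snippet filename below is arbitrary):

```yaml
# /etc/cloud/cloud.cfg.d/99-preserve-hostname.cfg  (filename arbitrary)
# Tell cloud-init never to overwrite a locally-set hostname.
preserve_hostname: true
```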
#cloud-init 2018-06-07
<blackboxsw>  https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1671951
<ubot5> Ubuntu bug 1671951 in systemd (Ubuntu) "networkd should allow configuring IPV6 MTU" [Medium,Confirmed]
<blackboxsw> rharper: per your review comments on my mtu branch (in _extract_addresses) what's our desired behavior if config represents one mtu key at the physical device level and one in an ipv4 subnet?
<blackboxsw> as it is currently, the subnet mtu would be overwritten by the top-level mtu
<blackboxsw> I could change your review comment to only honor device-level mtu setting if an mtu(ipv4-specific subnet setting) doesn't already exist
<Guest91333> Hi, I am trying to write a part-handler, version2, but even a simple import os will make the part-handler stop working. No printout in the logfiles from code after the import os. Any pointers what might be wrong ?
<Guest91333> I am using cloud-init 0.7.5
<smoser1> Guest91333: 0.7.5 is very old. i'd suggest first seeing if you can get something working with more recent.
<smoser1> either way though, a fix to 0.7.5 is not really l ikely
<rharper> blackboxsw: hrm, not sure; there can be only one v4 mtu (aka, device level) it's not clear to me which one we should respect;  I think I'd like to  log that we found conflicting values and pick one
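A hedged sketch of the conflicting config under discussion, in network-config v1 terms (device names and values are illustrative; per rharper's comment, cloud-init should log the conflict and pick one value):

```yaml
# Illustrative network-config v1 with an mtu at both levels.
version: 1
config:
  - type: physical
    name: eth0
    mtu: 9000            # device-level mtu
    subnets:
      - type: static
        address: 192.168.1.10/24
        mtu: 1500        # ipv4 subnet-level mtu, conflicts with above
```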
<Guest91333> upgrading the cloud-init version is difficult, as the image is given to us. Anyone has an example except the one in the docs ?
<blackboxsw> if on redhat/centos :    https://copr.fedorainfracloud.org/coprs/g/cloud-init/el-testing/builds/ has our builds
<blackboxsw> hrm Guest91333, if you are adding a part-handler to an image you don't control content on, I'm having a hard time seeing how that part-handler will be executed as cloud-init will only run on initial clean boot in that environment.
<blackboxsw> unless you are running a script after the fact I guess
<Guest91333> mime type part-handler - oh, you call more than one thing a part-handler ? Sorry forthe confusion then !
<Guest91333> my code is based on this https://github.com/cloud-init/cloud-init/blob/master/doc/examples/part-handler-v2.txt
<blackboxsw> ohh gotcha, ok so you are generating a text/part-handler mime part. ok. now I'm with you, yet I haven't walked through this before. And yes, there have been significant changes since 0.7.5 that may have impacted the mime handling. :/
<blackboxsw> If there was some issue importing modules in your mime part-handler part I'd expect to see some error log along the lines of "Failed at registering python file:"  with the mime filename you supplied.  Also there could be stderr over in /var/log/cloud-init-output.log that could help
<blackboxsw> "Failed at registering python file .* \(part handler\).*" is what it looks like. Anyway that's about as deep as I go on that issue at the moment
<blackboxsw> :/
<Guest91333> I didn't see any errors in /var/log/cloud-init-output.log - just missing output from printouts after the e.g. import os. Also other logs in /var/log didn't show anything suspicious. Checking if we can update our image now.
<blackboxsw> powersj: ci fixes for pylxd ... they need to be serially installed before installing pylxd, as the latest urllib3 that came out 2 days ago results in a contextual dependency conflict when installing pylxd's sub-deps
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347637
<blackboxsw> this broke all ci runs on cloud-init with the following: https://jenkins.ubuntu.com/server/job/cloud-init-ci/53/console
 * blackboxsw shoots to get rid of at least one red status ball  on jenkins today
<powersj> ugh
<blackboxsw> powersj: I just reviewed https://github.com/lxc/pylxd/pull/309#pullrequestreview-126924422 I'm not sure that tip of pylxd has the fix right. though I'm not positive what the fix should be
<blackboxsw> thx for the review powersj , it should fix a couple of jenkins jobs (like cloud-init-integration-nocloud-kvm-x) too
<blackboxsw> just kicked it off to confirm
<powersj> blackboxsw: thanks! yeah we have a few more red dots to take care of ;)
<Guest91018> got disconnected - sorry if this is posted again. Short update from the mime-type part-handler. Seems it was a problem with the logging. The example uses print, which seems to be swallowed after so many bytes. Switching to loggers helped. Would be nice if the example in https://github.com/cloud-init/cloud-init/blob/master/doc/examples/part-handler-v2.txt could be updated to use loggers. Thanks !
<smoser1> Guest91018: well merge proposals are welcome for doc for sure
<Guest32630> another update from mime-type part-handler: it wasn't the logging, it was me testing with os.getlogin() which threw an exception. print works fine. Sorry for the noise. After reading explanation on stackoverflow, all makes sense now
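The part-handler thread above resolved itself, but a minimal version-2 part-handler is worth pinning down. This is a hedged sketch along the lines of doc/examples/part-handler-v2.txt (the mime type is hypothetical); the lesson from the thread is that exceptions inside `handle_part`, like the `os.getlogin()` failure, are easy to miss unless you log them yourself:

```python
import logging

# Hedged sketch of a version-2 part-handler; the mime type below is
# a made-up example, and the handle_part signature follows the v2
# example in cloud-init's doc/examples/part-handler-v2.txt.
def list_types():
    return ["text/my-part-handler"]  # hypothetical mime type

def handle_part(data, ctype, filename, payload, frequency):
    log = logging.getLogger(__name__)
    if ctype in ("__begin__", "__end__"):
        return  # bracketing calls; nothing to do
    try:
        log.info("handling %s (%s)", filename, ctype)
        # ... act on payload here ...
    except Exception:
        # Log failures explicitly; swallowed tracebacks are exactly
        # what made the problem above hard to diagnose.
        log.exception("part-handler failed on %s", filename)
```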
#cloud-init 2018-06-08
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347559
<blackboxsw> https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-lxd-a/373/console has a set of ntp/warning errors I hadn't looked at yet
<smoser1> powersj: jenkins seems not to be working ?
<smoser1> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347698
<smoser1> oh wait. yes it is.
<powersj> smoser: I believe the issue there is master was broken yesterday due to another pylxd issue with imports
<powersj> blackboxsw put in a merge https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347637
<blackboxsw> yeah, powersj looks like pylxd just posted an update to tip which may or may not have resolved that problem so we might be able to revert my changes now
<smoser> powersj: yeah, i was wrong, sorry. i thought it hadn't run.
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347702 has the upstream change.
<blackboxsw> good thx smoser will test and land it
<smoser> blackboxsw: i've got a land in progress now for the subp change i had. so just make sure not to fight on that.
<blackboxsw> smoser: I'm awaiting a jenkins CI approve vote on your tox/lxd branch first. so should be about 15 mins until I kick something off. will check before I do
<smoser> right
<blackboxsw> ... and review-mps should react appropriately now by stopping the land attempt :)
<blackboxsw> or failing locally rather, instead of updating the *branch status in LP* before trying to git push
<smoser> rharper: you can take a quick look at https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347698 ; your comment is addressed there.
<rharper> y
<blackboxsw> rharper: done with your mtu review comments. sysconfig fixed (it never honored device-level mtu in the first place, only subnet-level). Added unit test validation of the warning message emitted on sysconfig/eni/netplan rendering.
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347559 should be clear
<rharper> blackboxsw: ok; yes, sysconfig render path typically involves openstack, whose network_data.json always uses subnet config, but thanks for adding
<blackboxsw> rharper: done; only warn on differing mtu @ device-level vs subnet-level
<rharper> cool
<blackboxsw> I'll watch the branch for the approve or otherwise.
<rharper> blackboxsw: did you add a test where we skip the warn (device and subnet mtu, both are the same value) ?
<blackboxsw> ahh nope, will add a separate test
<smoser> man
<blackboxsw> was that a part-handler utf-8 man smoser?
<smoser> yeah. i am lost
<blackboxsw> yeah isn't it crazy/weird. hard to get pdb's and log traces on it :/
<smoser> i used to be able to do this.
<smoser> to get a pdb from cloud-init
<blackboxsw> it's 'string' type in py3 both with and without utf-8 content... so I was unable to see simple type differences and log won't print out the utf-8 content/and traceback is swallowed there :/
<blackboxsw> anyway, /me is adding a --preserve-instance option to ci tests so we avoid the teardown (to grab salt && chrony issues)
<smoser> ah. k
<blackboxsw> powersj: smoser rharper https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/347716
<blackboxsw> minor tweaks that help me get ahold of an integration instance under test using our ci
<smoser> ok.
<smoser> so the one thing i've found out here.
<smoser> well, two things.
<smoser> a.) man, what a mess.
<smoser> b.) _CLOUD_INIT_SAVE_STDOUT=1 _CLOUD_INIT_SAVE_STDIN=1 cloud-init --debug init
<blackboxsw> leaves a message like:  2018-06-08 20:20:20,632 - tests.cloud_tests - INFO - Preserving test instance cloud-test-ubuntu-artful-modules-ntp-chrony-a6v27keuq27i794lru2
<smoser> that will let pdb (or ipdb break in)
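Why those environment variables matter: cloud-init normally redirects stdout/stderr into its log very early, which leaves an interactive debugger with no terminal. A hedged sketch of that guard (not cloud-init's actual code, just the shape of it):

```python
import os
import sys

# Hedged sketch (illustrative, not cloud-init's real implementation):
# early in boot, output is redirected into the log unless a
# _CLOUD_INIT_SAVE_STDOUT-style variable tells us to keep the tty,
# which is what lets pdb.set_trace() interact with the user.
def maybe_redirect_output(logfile):
    if os.environ.get("_CLOUD_INIT_SAVE_STDOUT"):
        return False  # keep the tty; debugger-friendly
    fp = open(logfile, "a")
    sys.stdout = fp
    sys.stderr = fp
    return True

os.environ["_CLOUD_INIT_SAVE_STDOUT"] = "1"
print(maybe_redirect_output("/tmp/cloud-init-demo.log"))  # False
```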
<blackboxsw> wow smoser, ok maybe we add that to hacking docs on rtd, or maybe just cloud-init docs in ubuntu-sru repo?
<smoser> :)
<blackboxsw> yeah I've frequently tried/failed pdb using cloud-init's cli
<blackboxsw> didn't spend enough time to unravel it
<blackboxsw> that is a good tip that I'd like to reference in the future... .checking it now to hopefully commit it to muscle-memory (bashhistory memory)
<blackboxsw> or a gist :)
<blackboxsw> hrm I think our integration tests are seeing https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/1589780
<ubot5> Ubuntu bug 1589780 in chrony (Ubuntu) "chrony.service doesn't start on LXD container" [High,Fix released]
<blackboxsw> on artful
<blackboxsw> digging a bit more
<blackboxsw> but generally chrony isn't starting up with something like chronyd[190]: Fatal error : adjtimex(0x8001) failed : Operation not permitted
<blackboxsw> will have to talk to cpaelzer_ about this on Monday
<smoser> that is expected
<smoser> blackboxsw: well that bug is not marked as fixed in artful
<blackboxsw> as a result maybe we just shouldn't be testing chrony cloud-config on artful right?
<blackboxsw> because it results in errors which we expect (chrony support bionic++ right?)
<smoser> well, it isnt expected to work
<blackboxsw> yeah, I'm going to disable that ntp_chrony test on artful then.
<blackboxsw> wanted to air that thought for objections
<smoser> well, in a container.
<smoser> i guess.
<smoser> it could/should run on ec2 ?
<smoser> or non-lxd
<blackboxsw> will verify. yeah I  think it's just a container where it fails
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 6/18 16:00 UTC | cloud-init 18.2 released (03/28/2018)
<smoser> blackboxsw: wel...
<smoser> we lose the data when we call 'convert_string'
<smoser> in UserDataProcessor:process
<blackboxsw> email sent moving cloud-init status meeting one week
<blackboxsw> oooh :(
<smoser> ok... i have to leave for the night :-(
<smoser> blackboxsw: so what happens is that when we're storing this to disk in the mime format
<smoser> we call
<smoser> convert_string(raw_data)
<smoser> which raw_data is a bytes
<smoser> then util.decode_binary(blob) returns a string that includes utf-8
<smoser> then we end up doing
<smoser> email.message_from_string(that_string)
<smoser> so..
<smoser> msg = email.message_from_string(b"echo hi \xc3\x84\n".decode('utf-8'))
<smoser> msg.as_string()
<smoser> works, but
<smoser> msg.as_bytes()
<smoser> UnicodeEncodeError: 'ascii' codec can't encode character '\xc4' in position 8: ordinal not in range(128)
<smoser> and then we read that with fully_decoded_payload
<smoser> its somewhere in there.
<smoser> ... i'm sorry . i have to run for now.
<smoser> i figured i could have solved this in the afternoon, which is why i took it.
<smoser> :-(
<blackboxsw> yeah it's nasty the # of conversions
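The round-trip failure smoser pasted above is reproducible standalone: parsing utf-8 user-data into an email Message works as a str, but serializing it back to bytes trips the ascii codec.

```python
import email

# Runnable version of the failure described above: the parsed message
# round-trips fine as a str, but as_bytes() hits the ascii codec.
msg = email.message_from_string(b"echo hi \xc3\x84\n".decode("utf-8"))
msg.as_string()  # fine: stays in the str domain
try:
    msg.as_bytes()
except UnicodeEncodeError as e:
    print("as_bytes failed:", e)
```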
#cloud-init 2018-06-09
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/347727
<smoser> that is not perfect, but it is a much better place to start for you on monday, and it shows some progress on that bug.
<smoser> and /me out now.
#cloud-init 2020-06-01
<meena> Good point
<meena> i can't tell from the documentation whether a new network interface is considered
<meena> faa: i'd have to look at the code, and chances are, before i'm able to do that, someone more experienced with the project will have woken up
<faa> meena: thank, good news :)
<meena> my hour of laptop time has been wasted, and now i have to go and wake my daughter from her nap
<faa> this is not an urgent problem, but I will be glad to any advice, because as far as I can see, now cloud-init does not have the ability to change the network configuration
<cjmm6> Hi, I need help with the latest cloud-init version. On machines where cloud-init is not needed or does not work, I use a systemd service (I cannot use boot parameters) that runs Before=cloud-init-local.service and, based on some conditions, writes the file /etc/cloud/cloud-init.disabled to disable cloud-init (or removes this file if cloud-init should run)
<cjmm6> This worked in previous versions of cloud-init
<cjmm6> But in the latest version it doesn't work, because there is now an earlier step called "cloud-init-generator", and I don't know how to make my systemd unit run before that step
<cjmm6> My systemd unit is:
<cjmm6> [Unit]
<cjmm6> Description=Control cloud-init to enable or disable cloud-init dynamically at boot time
<cjmm6> DefaultDependencies=no
<cjmm6> After=systemd-remount-fs.service
<cjmm6> Before=NetworkManager.service network.service network-pre.target
<cjmm6> Before=shutdown.target sysinit.target
<cjmm6> Before=cloud-config.service cloud-final.service cloud-init-local.service cloud-init.service cloud-config.target
<cjmm6> Conflicts=shutdown.target
<cjmm6> [Service]
<cjmm6> Type=oneshot
<cjmm6> ExecStart=/usr/local/bin/control-cloud-init.sh
<cjmm6> RemainAfterExit=yes
<cjmm6> TimeoutSec=0
<cjmm6> # Output needs to appear in instance console output
<smoser> cjmm6: you wont really be able to reliably run before a generator
<smoser> at least not with a systemd unit
<smoser> you can disable the generator and then you'll get cloud-init behavior like before.
<smoser> unfortunately "fully disabled" isn't really possible without the generator.  cloud-init will still run through all its stages and use the 'none' datasource.
<Odd_Bloke> The generator doesn't do much to the system, either, so it probably isn't the end of the world if it runs and then your custom unit disables all the cloud-init units that it just added to the boot graph.
<cjmm6> disable is as follows?
<cjmm6> ln -sf /dev/null /etc/systemd/system-generators/cloud-init-generator
<cjmm6> because early systemd cannot use systemctl
<cjmm6> and how to reenable generator without systemctl?
<Odd_Bloke> The generator runs before any units run, so you won't be able to disable the generator using a unit.
<Odd_Bloke> And I was going to say that you should be able to disable the units in your own unit, but that may be too late for systemd to pick up the change.
<Odd_Bloke> Hmm.
<cjmm6> I cannot try to disable all cloud-init units
<cjmm6> error!
<Odd_Bloke> cjmm6: How do you get the systemd unit onto these systems?
<cjmm6> i haven't tried disabling the unit systemd of cloud-init, i will test it if it works
<Odd_Bloke> cjmm6: Sorry, I meant how do you get the systemd service which was working with older versions of cloud-init onto the system?
<cjmm6> It's because I use one VM image compatible with multiple environments.
<Odd_Bloke> Right, so the conditions in the unit are used to determine enable/disable based on where you're booting?
<cjmm6> Yes! Dynamically
<cjmm6> Then, I use one unique VM image for multiple systems
<cjmm6> https://github.com/cesarjorgemartinez/automate-virtual-machine-linux-images/tree/master/CentOS7Minimal
<cjmm6> Until this challenge (Ubuntu):
<cjmm6> https://github.com/cesarjorgemartinez/automate-virtual-machine-linux-images/tree/master/Ubuntu20Minimal
<smoser> https://github.com/cesarjorgemartinez/automate-virtual-machine-linux-images/blob/master/Ubuntu20Minimal/files/control-cloud-init.sh
<smoser> is that what is controlling cloud-init ?
<smoser> it looks to me that you enable cloud-init on kvm and aws. and disable elsewhere.
<Odd_Bloke> cjmm6: OK, so one thing you could potentially do is configure cloud-init to only consider the data sources that will be present on KVM and EC2.  The generator should then disable cloud-init when it fails to find those datasources.
<Odd_Bloke> s/find/detect the applicability of/
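Odd_Bloke's suggestion maps to the `datasource_list` setting. A hedged sketch of such a snippet (the filename is illustrative; `None` at the end is cloud-init's explicit fallback datasource):

```yaml
# /etc/cloud/cloud.cfg.d/90-datasources.cfg  (filename illustrative)
# Only probe the datasources expected on KVM and EC2; elsewhere,
# ds-identify finds nothing applicable and cloud-init stays disabled.
datasource_list: [ NoCloud, Ec2, None ]
```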
<smoser> but in reality... cloud-init really should do all the right things.
<smoser> i'm surprised if it ever enables itself in a scenario where you think it should not run
<smoser> or even disables itself where you think it should
<cjmm6> Not ...  cloud-init really should do all the right things... In many systems it is even harmful
<smoser> like ?
<cjmm6> Then, if you can test: Centos 7 and 8 work correctly, because they do not use the latest cloud-init
<smoser> the intent of the generator is to enable cloud-init when there is a "datasource". and do nothing otherwise.
<smoser> i'm sorry, i didn't follow.
<cjmm6> Centos 7 => cloud-init-18.5-3.el7.centos.x86_64
<smoser> i agree that cloud-init was very annoying in the past.
<smoser> without the generator, when it ran and chose the "None" datasource, it would have undesirable side effects.
<cjmm6> ubuntu 20 => 20.1
<smoser> but the generator was intended to *fix* those.
<smoser> can you actually provide examples when it ran and you think it should not have ran ?
<cjmm6> ad-hoc environments, or cloud-init not supported
<cjmm6> + using compatible VMs between environments, as is the purpose of this project
<cjmm6> "boot one image in multiple envs"
<smoser> can you provide a 'cloud-init collect-logs' from any of those environments ?
<cjmm6> without changes
<smoser> ubuntu cloud images are "one image" that boots in multiple envs
<smoser> and all "just work" (and if they do not, then feel free to file bugs)
<smoser> i'm not trying to be rude.  we want to make this work, but we've intended to solve the problem that you're having with the generator
<smoser> so we want to fix the generator if it is not behaving correctly
<smoser> rather than make users like yourself have to write scripts to do the right thing.
<smoser> cjmm6: ^
<smoser> Odd_Bloke's suggestion of configuring the datasource_list should also work.  but the goal you're after of "one image" is already a design goal of the project.
<cjmm6> but I need one thing to disable it, always... It only works in my test if, when the VM is booted, I ssh in and create /etc/cloud/cloud-init.disabled, and then reboot a second time
<smoser> i dont understand.  "but need one thing to disable, always"
<smoser> in some scenarios you want to stop cloud-init from running ?
<smoser> what are those scenarios?
<cjmm6> yes, in ad-hoc envs, when nothing needs to change. New envs without cloud-init support yet, etc.
<cjmm6> Situations where there is a big bug with a difficult solution, or the operating system does not update cloud-init for its own reasons...
<smoser> so how do you determine that this is such an environment?
<smoser> as you seem to know, you can disable cloud-init always by touching /etc/cloud/cloud-init.disabled.
<rharper> cjmm6: a couple of comments; first, the image build process (https://github.com/cesarjorgemartinez/automate-virtual-machine-linux-images/blob/master/Ubuntu20Minimal/postscripts/post-stage-1.sh) does a lot of work cleaning up the image because the image gets booted in the first place via packer; you may want to look at using something like mount-image-callback (in Ubuntu's cloud-image-utils package), which lets you mount the image without booting so you
<rharper> can add your various changes (injecting scripts and programs and updating defaults in the image); second, for Centos 7/8 cloud-init, RedHat merges features/fixes from master, so while 18.5 is "old" it has the core capabilities around dynamically enabling/disabling cloud-init (via ds-identify); third, I suspect that the cloud-init control script is not needed; the cloud-init generator uses the ds-identify code (/usr/lib/cloud-init/ds-identify, which writes
<rharper> output to /run/cloud-init/*); one can examine that to see why cloud-init gets enabled or not. If you have a scenario where cloud-init ran when you did not want it to, or the reverse, capturing /run/cloud-init/* contents into a bug will let us help sort out whether it's expected behavior or a bug
<Odd_Bloke> smoser: Can you still disable cloud-init by touching that file after the generator has run?
<smoser> Odd_Bloke: well it wont next time
<smoser> but the ship has almost certainly sailed when you touch it otherwise.
<meena> anyone here seen / answered faa's problem of adding another network interface?
<rharper> meena: faa: by default, adding a nic won't trigger any update to existing network configuration rendered;  so that's expected behavior.    the instance-id is set by the meta-data in the NoCloud datasource;  changing the instance-id will boot like a new instance;
<rharper>  If you've added a nic, and you know what an updated network-config (v1) format should look like, you can use 'cloud-init devel net-convert' to render a specific distro/renderer output, and then copy that to where you need it;  Long term, as we land support in datasources for refreshing metadata and handling hotplug (https://github.com/canonical/cloud-init/pull/47), I could see NoCloud (configured with an HTTP end-point instead of local files)
<rharper> refreshing (fetching metadata) on hotplug of a nic.
<Odd_Bloke> blackboxsw: rharper: Could one of you look over my proposed changes to https://github.com/canonical/cloud-init/pull/358, please?  They're minor, but as mruffell is out this week, I'd like at least one other person to check them before I land them.
<rharper> Odd_Bloke: sure
<Odd_Bloke> Thanks!
<meena> rharper: your pr has gone stale…
<rharper> yes, it's still valid, just needs a refresh and push
<smoser> rcj or maybe Odd_Bloke , or maybe one of the microsoft people here...
<smoser> it was my impression in the past that when Ubuntu images were uploaded, that the upload was done with sparseness intact
<smoser> but then that the copy from region to region did not maintain that sparseness.
<smoser> it seems that a feature went into some tools https://github.com/Azure/azure-cli/issues/11509
<smoser> late in 2019 that may make my knowledge obsolete.
<Odd_Bloke> I believe we addressed that a while ago; I remember pairing with SteveZ (of Azure) to fix it at a F2F a few years back.
<smoser> My question then... Do infrastructure (user->user or region->region) copies inside azure now retain sparseness ?
<smoser> my current employer is in a similar situation where we publish sparse images, but then use of those images was believed to entail non-sparse copies.
<smoser> i'd love to have some link to doc that says otherwise, or link to a nice manual that I could RTFM.
<Odd_Bloke> The image copy/replication happens on the backend for Ubuntu image publication (kicked off via private APIs), so I don't know if knowledge from Ubuntu publication will transfer over to a third-party image publication flow.
<Odd_Bloke> (We are categorically different from images uploaded by users.)
<smoser> hm... yeah.
<smoser> so what i'm after is
<smoser> a.) we publish an image (we do this i think as well as we can, using sparseness)
<smoser> b.) user launches that image or imports it into their account
<smoser> i'm not actually sure how they do that, but if its "click click" or otherwise somehow underlying infrastructure, then i'd certainly hope it was sparse.  and if it is *not* click click, then i'd like to at least know how to tell someone that they can copy it the right way.
<smoser> thanks Odd_Bloke
<smoser> i'm disappointed that cjmm6 disappeared. i was honestly interested in his/her use case.
<smoser> well, if someone MS-knowledgeable could reach out, i'd love to chat some.
<Odd_Bloke> rharper: I've updated https://github.com/canonical/cloud-init/pull/358, could you take another quick look pls?
<rharper> Odd_Bloke: yes
<Odd_Bloke> Thanks!
<rharper> Odd_Bloke: one more comment, I just left for you
<blackboxsw> falcojr: lucasmoura if either of you get a chance. I've put up a basic SRU doc  from which we'll work to validate this cloudinit SRU https://github.com/cloud-init/ubuntu-sru/pull/97/files
<blackboxsw> it basically creates a bunch of broken links which we will fill in as we verify each cloud or manual work it
<blackboxsw> work *item*
<Odd_Bloke> rharper: And replied.
<rharper> k
<blackboxsw> lucasmoura: falcojr rharper the much anticipated quilt patches for xenial https://github.com/canonical/cloud-init/pull/406
<blackboxsw> and procedure
<lucasmoura> blackboxsw, ack
<blackboxsw> I'll do basically the same for bionic and eoan now.
<blackboxsw> without the procedural steps
<rharper> ok
<blackboxsw> rharper: falcojr lucasmoura https://github.com/canonical/cloud-init/pull/407 bionic
<rharper> thx
<Odd_Bloke> rharper: Updated, thanks again for the review!
<blackboxsw> and eoan https://github.com/canonical/cloud-init/pull/409
<Odd_Bloke> blackboxsw: rharper: https://github.com/canonical/cloud-init/pull/358 just landed, so we can SRU away now, I think.
<blackboxsw> nice Odd_Bloke I don't mind doing another new-upstream-snapshot round
<blackboxsw> might as well
#cloud-init 2020-06-02
<knaccc> hey y'all, is it correct that I should not be editing 50-cloud-init.yaml to put in a custom nameserver list or search domains, and should instead create a /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg and then...? any help greatly appreciated
<knaccc> i'm very confused that when i set up the custom DNS servers (8.8.8.8 etc), "systemd-resolve --status" shows that it's recognizing that for "Link 2 (enp3s0f0)", but not for "Global". and i think therefore for some reason it's ignoring my DNS settings and search domain
<knaccc> the server has 2 eth interfaces, and only one is configured,
<Odd_Bloke> knaccc: cloud-init will regenerate /etc/netplan/50-cloud-init.yaml on each boot, so yes, you don't want to modify that.  If you're still having the issue, please pastebin all config you have in /etc/netplan and we can try to help out. :)
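For reference, a netplan stanza of the sort under discussion puts the nameservers and search domains under the link itself; the interface name, addresses, and domains below are placeholders, and hand-edits to this file only stick if cloud-init's network rendering has been disabled first (e.g. with a drop-in containing `network: {config: disabled}`):

```yaml
# Placeholder example of /etc/netplan/50-cloud-init.yaml with per-link DNS;
# cloud-init rewrites this file each boot unless its network config is disabled.
network:
  version: 2
  ethernets:
    enp3s0f0:
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
        search: [example.internal]
```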
<blackboxsw> rharper: lucasmoura Odd_Bloke falcojr this would be the additional set of commits if we ran new-upstream-snapshot https://pastebin.ubuntu.com/p/qVMwsrKHKH/
<Odd_Bloke> That LGTM, would definitely be nice to get that Chef change in.
<rharper> blackboxsw: yeah, +1
<blackboxsw> rharper: Odd_Bloke falcojr lucasmoura https://github.com/canonical/cloud-init/pull/411  upload of new-upstream-release for groovy and SRU afterward
<Odd_Bloke> blackboxsw: Reviewing now.
<blackboxsw> thanks Odd_Bloke focal right after it https://github.com/canonical/cloud-init/pull/412
<blackboxsw> same changeset basically
<blackboxsw> I have to change the eoan bionic and xenial a bit more
<blackboxsw> aaand cloud-init status tim
<blackboxsw> aaand cloud-init status time
<blackboxsw> #endmeeting
<Odd_Bloke> blackboxsw: Cool; one note: we have a lot of stale Build-Depends now: unittest2 isn't required, six isn't required, and we've stopped using pyflakes and pep8 directly (and my local build didn't fail due to the absence of flake8, so I don't think we need to replace it).
<Odd_Bloke> (That build was on focal, I'm just bootstrapping a groovy build env now, to test the actual correct release.)
<blackboxsw> #startmeeting cloud-init status meeting
<meetingology> Meeting started Tue Jun  2 16:21:15 2020 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<blackboxsw> hi folks, time for another cloud-init upstream status meeting.
<blackboxsw> we use this meeting to provide a venue for any cloud-init interested parties to keep up to date on current development, release-related info and expedite distributed development where possible.
<blackboxsw> this meeting is a welcome place for interruptions, questions, requests and unrelated discussions at any point. so don't be shy :)
<blackboxsw> #chair Odd_Bloke smoser rharper
<meetingology> Current chairs: Odd_Bloke blackboxsw rharper smoser
<blackboxsw> The topics we generally cover in this meeting are the following: Previous Actions, Recent Changes, In-progress Development, Community Charter, Office Hours (~30 mins).
<blackboxsw> previous meeting minutes live here (and I just saw I forgot to publish last minutes so I pushed them now)
<blackboxsw> #link https://cloud-init.github.io/
<blackboxsw> #topic Previous Actions
<blackboxsw> nothing actionable brought up in last meeting on 05/19
<blackboxsw> Odd_Bloke: ahh we should fix devel with those pkg drops on next upload
<blackboxsw> we did drop that for Xenial, Bionic, Eoan and maybe focal too?
<blackboxsw> so an oversight for groovy
<blackboxsw> next topic
<blackboxsw> #topic Recent Changes
<blackboxsw> the following are commits landed in tip of master found via git log --since 05/19/2020 : https://paste.ubuntu.com/p/QFvgWhjXY9/
<Odd_Bloke> blackboxsw: When you say "next upload" are you referring to the upload you're about to do, or the one after that?
<blackboxsw> Odd_Bloke: if you'd like we can adjust the current upload so that devel, focal, bionic xenial eoan all drop those stale deps
<blackboxsw> I think X, B E have all dropped them
<blackboxsw> so maybe I re-do ubuntu/devel PR Odd_Bloke ?
<blackboxsw> probably good/better/correct to keep all releases on the same footing.
<Odd_Bloke> blackboxsw: I think it's worth doing, we've uploaded without fixing it a few times before, and we've remembered this time around.
<blackboxsw> yeah sounds good Odd_Bloke I'll re-do that devel PR (and make sure focal drops it too)
<blackboxsw> if needed
<Odd_Bloke> And it should just be a case of pushing a new commit to your existing branch.
<Odd_Bloke> Thanks!
<blackboxsw> +1
<blackboxsw> things of note in the recent commits landed.  https://github.com/canonical/cloud-init/pull/358  Matthew Ruffell improved cc_grub_dpkg to be more dynamic in matching disks instead of a hardcoded device list
<blackboxsw> thanks Matthew
<blackboxsw> and chef_license support https://github.com/canonical/cloud-init/commit/0919bd46bbd1b12158c369569ec1298bb000dd8a
<blackboxsw> thanks bipinbachhao  for the config extension there
<blackboxsw> #topic  In-progress Development
<blackboxsw> a couple of new notables in flight at the moment:
<blackboxsw> - falcojr: introduction of feature flags for upstream cloud-init to give us a toggle to retain original behavior of #include failures on stable downstream releases. https://github.com/canonical/cloud-init/pull/367 . Upstream cloud-init will now fail loudly and raise an exception if someone tries to #include a url which fails. this differs from original cloud-init behavior, which was to try our best to get a system up
<blackboxsw> and running, even amid non-critical failures
<blackboxsw> per the above, if downstreams (distributions) would like to retain a more permissive warn-only behavior on #include user-data issues, a cloudinit/feature_overrides.py file would need to be introduced in the downstream
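A minimal sketch of the override pattern described above; the module name, dict layout, and flag name here are illustrative assumptions, not cloud-init's actual code:

```python
"""Toy sketch of a downstream feature-flag override; names are assumptions."""
import importlib

# Upstream defaults: strict behavior, fail loudly on #include fetch errors.
FEATURES = {"error_on_user_data_failure": True}


def load_overrides(module_name="feature_overrides"):
    """Merge downstream overrides into FEATURES if such a module is shipped."""
    try:
        overrides = importlib.import_module(module_name)
    except ImportError:
        return FEATURES  # no downstream overrides present
    FEATURES.update(getattr(overrides, "FEATURES", {}))
    return FEATURES


flags = load_overrides()
# With no feature_overrides module on the path, the strict default wins;
# a distro dropping in feature_overrides.py flips behavior without patching.
```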
<blackboxsw> - Also meena and Odd_Bloke and others have been working toward a refactor of cloudinit.net modules. Dan added a doc PR to capture this approach https://github.com/canonical/cloud-init/pull/391
<blackboxsw> beyond that, there are a number of PRs up from lucas on json schema additions for cloudinit/config/cc_* modules to get better validation of #cloud-config user-data
<blackboxsw> For ubuntu proper, we have started the StableReleaseUpdate process for cloud-init to publish master into ubuntu/xenial, bionic, eoan and focal releases
<blackboxsw> some of these changes will add the opportunity to enable 'new' features on platforms like Azure
<blackboxsw> and AWS
<blackboxsw> Azure (xenial) will be dropping walinuxagent support
<blackboxsw> AWS will now surface a datasource config option apply_full_imds_network_config boolean
<blackboxsw> if set true in an Ec2 (AWS) image, network configuration from cloud-init can come completely from IMDS for every connected NIC. That config will include all secondary IPv4/IPv6 addresses configured for the machine
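Assuming the standard datasource-config mechanism, opting in would look something like this drop-in (file name is arbitrary):

```yaml
# /etc/cloud/cloud.cfg.d/99-ec2-imds-network.cfg (illustrative file name)
datasource:
  Ec2:
    apply_full_imds_network_config: true
```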
<blackboxsw> Upstream has  started the Ubuntu SRU process (which generally takes around 10-14 days). We plan to include every commit that has landed in tip of master as of commitish 5f7825e22241423322dbe628de1b00289cf34114
<blackboxsw> the bug related to this SRU work is here
<blackboxsw> #link https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1881018
<ubot5> Ubuntu bug 1881018 in cloud-init (Ubuntu Focal) "sru cloud-init (19.4.33 to 20.2-30) Xenial, Bionic, Eoan and Focal" [Undecided,New]
<blackboxsw> #topic Community Charter
<blackboxsw> upstream has signed up to get as much of the json schema coverage as we can for cloudinit/config/cc*py modules, since invalid #cloud-config user-data formats tend to have one of the highest incidences of errors (because writing YAML is something humans shouldn't have to do :) )
<blackboxsw> so we are chopping away at defining JSON schema for as many cloud config modules as possible . there are still plenty to choose from. Anyone can feel free to grab a JSON schema bug and help us with bettering cloud-init
<blackboxsw> bugs are filed for each config module which needs schema definition:
<blackboxsw> #link  https://bugs.launchpad.net/cloud-init/?field.tag=bitezise
<blackboxsw> a big thanks to lucasmoura for starting to grab a number of these
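As a toy illustration of what that schema coverage buys (the schema and validator below are invented for this sketch, not the real cc_* module schemas), even a minimal check catches a misspelled enum value before it reaches a booting system:

```python
"""Toy #cloud-config validation sketch; schema contents are invented."""

# Map of section -> key -> allowed values (illustrative, not cc_chef's schema).
SCHEMA = {
    "chef": {
        "chef_license": {"accept", "accept-silent", "accept-no-persist"},
    },
}


def validate(cfg):
    """Return a list of human-readable errors for known keys in cfg."""
    errors = []
    for section, keys in SCHEMA.items():
        for key, allowed in keys.items():
            value = cfg.get(section, {}).get(key)
            if value is not None and value not in allowed:
                errors.append(
                    "%s.%s: %r is not one of %s" % (section, key, value, sorted(allowed))
                )
    return errors
```

A typo like `"acccept"` then produces an actionable error instead of a silently misconfigured instance.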
<blackboxsw> #topic Office Hours (next ~30 mins)
<blackboxsw> This 'section' of the meeting is a time where a couple of upstream devs will be available in channel for any discussions, questions, bug work or PR reviews.
<blackboxsw> In the absence of discussions/topics here we scrub the review queue.
<blackboxsw> since we are mid-stream on Ubuntu SRU at the moment, I'll be addressing review comments on some of the functional 'upload' branches we've put together
<blackboxsw> and, let's update the topic for next IRC meeting too while we are at it
* blackboxsw changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting June 16 16:15 UTC | 20.1 (Feb 18) | 20.2 (Apr 28) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> Odd_Bloke: just pushed ubuntu/devel dropping python3-six|unittest2|nose
<blackboxsw> and just re-pushed ubuntu/focal to drop python3-six
<blackboxsw> oops and missed you others. reworking
<blackboxsw> ok re-pushed. focal and devel PRs in shape
<blackboxsw> dropped the following build-deps: python3-six, python3-unittest2, python3-pep8, python3-nose, python3-pyflakes
<Odd_Bloke> blackboxsw: +1 on the ubuntu/devel upload.
<blackboxsw> whew, think we got all of the dropped deps between the two of us... thanks!
<blackboxsw> Odd_Bloke: thanks focal looks good and sbuilds
<blackboxsw> just finished eoan and building now to test
<meena> what? me??
<blackboxsw> well yes indeedy meena, just trying to keep you highlighted as participating in the cloud-init status meeting :) you've thankfully reviewed, pushed and prodded us to talk about cloudinit.net refactor and how best to address it I think :) credit due ;)
<blackboxsw> community notice: upload to Ubuntu groovy of cloud-init master accepted [ubuntu/groovy-proposed] cloud-init 20.2-45-g5f7825e2-0ubuntu1 (Accepted)
<Odd_Bloke> blackboxsw: One issue with https://github.com/canonical/cloud-init/pull/412
<meena> blackboxsw: i'm just waiting for Odd_Bloke to provide the basic infrastructure so i can start moving code… without that, i have to bug other projects in my … 2 hours of free time per day.
<meena> blackboxsw: yesterday, i tried to build an android app on my laptop and gave up after an hour.
<blackboxsw> nice review again Odd_Bloke, will reflect that patch to each series, as every other ubuntu/* is missing enabling various cloud datasources beyond just Rbx
<blackboxsw> Odd_Bloke: rharper so Xenial is interesting for datasource config via dpkg
<blackboxsw> We are missing: Hetzner, IBMCloud, Oracle, and  RbxCloud
<blackboxsw> one was an oversight on previous SRUs
<blackboxsw> but Oracle and IBMCloud, I'm trying to recall if there is a reason we didn't want to surface either of those datasources as configurable on Xenial
<blackboxsw> a little warning bell is going off in my head
<blackboxsw> Hetzner I thought was 'ok'
<blackboxsw> Oracle currently gets detected as OpenStack on Xenial.
<rharper> IBMCloud and Oracle are sensitive
<rharper> not sure about Hetzner or RbxCloud though
<blackboxsw> upstream Oracle datasource is 'good', but I wasn't sure if there was extra baggage associated with *not* backporting that functionality
<rharper> blackboxsw: I think you might want to check with CPC on those
<meena> Hetzner is also detected as OpenStack on FreeBSD… but… only thru cloud-init itself, not thru ds-identify
<meena> (i'm not sure how much of that is my fault having helped a lot with Hetzner and FreeBSD and ds-identify myself)
<knaccc> Odd_Bloke thanks for your reply. I managed to fix things in the end, but kinda by cheating. Now my /etc/netplan/50-cloud-init.yaml only contains the IP addresses configuration, and I make the nameservers and search domain apply in the "Global" scope (as reported by systemd-resolve --status) by simply modifying the /etc/resolv.conf file. All configuration survives reboot just fine, and I am no longer
<knaccc> scared that resolv.conf will be overwritten because I found a web page that said that "Note: The mode of operation of systemd-resolved is detected automatically, depending on whether /etc/resolv.conf is a symlink to the local stub DNS resolver file or contains server names." Although you said in your message that "cloud-init will regenerate /etc/netplan/50-cloud-init.yaml on each boot, so yes, you don't
<knaccc> want to modify that", the OVH instructions directly contradict that and tell me to edit it to add all IP addresses to my interface (see Ubuntu 18.04 section here: https://docs.ovh.com/gb/en/vps/network-ipaliasing-vps/). I'm therefore very confused about why OVH seem to contradict the instructions that are in that config file, and confused as to what other location I should be editing/creating instead
<ddstreet> knaccc why do you want to change resolved 'Global' section?
<blackboxsw> heh meena not at fault :) . Just need to make sure we move cloud-platforms to a better way of detecting the right datasource when we can.
<knaccc> ddstreet if I put the nameservers and search domain into the /etc/netplan/50-cloud-init.yaml file, it gets ignored completely (i.e. although those configurations show up in systemd-resolve --status against that specific "link", the "Global" nameservers and lack of any search domain in that Global section are taking precedence). Therefore I had to configure nameservers and search domain at the resolv.conf
<knaccc> level so that it appeared in the Global section, and then suddenly everything worked for the first time
<blackboxsw> I should tie off our cloud-init status meeting. Thanks folks for all who've attended
<blackboxsw> #endmeeting
<meetingology> Meeting ended Tue Jun  2 18:08:56 2020 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2020/cloud-init.2020-06-02-16.21.moin.txt
<knaccc> oh oops, sorry i didn't realise the meeting was still in progress when I interjected
<blackboxsw> knaccc: the meeting is a welcome place for any and all discussions
<blackboxsw> all good
<blackboxsw> I was just supposed to end it about ~20 mins ago to capture logs
<blackboxsw> we were in the "open discussion office hours"  part of the meeting
<ddstreet> knaccc i don't know what your exact config is, but you don't need nameservers in the global section for dns to work
<knaccc> ddstreet since the OVH config already specified a resolv.conf file, that seemed to be overriding my settings in the 50-cloud-init.yaml file. I should have thought to try deleting the resolv.conf file to see if that would stop the "Global" section from overriding the link settings
<ddstreet> yeah with systemd-resolved you don't want to modify the /etc/resolv.conf file
<knaccc> ddstreet do you disagree with the web page I found that says "Note: The mode of operation of systemd-resolved is detected automatically, depending on whether /etc/resolv.conf is a symlink to the local stub DNS resolver file or contains server names"?
<knaccc> (that's a quote from here https://wiki.archlinux.org/index.php/Systemd-resolved )
<ddstreet> knaccc no that's correct, but if you maintain your /etc/resolv.conf yourself, you need to 100% manage it
<knaccc> ddstreet ah yes, that's the impression i got. since this is a dedicated server, and that resolv.conf configuration will probably never change, that should be OK, right?
<knaccc> all resolv.conf contains is the cloudflare and google dns servers, and my search domain. so those will probably never change, ever
<ddstreet> well that depends on what exactly you did, i.e. remove the symlink and create the file yourself, and remove the 127.0.0.53 from it
<ddstreet> i.e. you want to bypass resolved entirely
<ddstreet> when you do that, it doesn't matter what systemd-resolve --status says, because you aren't using it
<knaccc> ah that makes sense. yes the resolv.conf file was already put there by OVH on a fresh install. so i'm guessing they did it that way to make things easier for people who were used to just editing resolv.conf
<knaccc> i'll try deleting resolv.conf and then seeing if 50-cloud-init.yaml nameservers and search domain suddenly get picked up
<ddstreet> well you need to recreate the symlink to /run/systemd/resolve/stub-resolv.conf
<ddstreet> and make sure something has told resolved about your actual nameservers, i.e. networkd in most cases
<knaccc> ddstreet ah thanks, yes i see, i'm supposed to do: ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
<knaccc> i'll try that, when i'll be reinstalling another server in a few hours
<knaccc> although it seems like since i don't have some kind of liquid situation with frequently changing nameservers, maybe i should just stick to simplicity and just do things in resolv.conf
<knaccc> ddstreet if i'm not supposed to be editing the 50-cloud-init.yaml file, could you point me towards what I should be editing please?
<knaccc> i wonder if OVH are using cloud-init in a kind of "one-shot" mode, where it's useful for them to just config a dedicated server the first time using cloud-init, but then i can then just edit the 50-cloud-init.yaml file myself from then on, because neither OVH nor any kind of system thing will ever overwrite 50-cloud-init.yaml
<ddstreet> i didn't say you shouldn't edit that file
<ddstreet> but in general, no you shouldn't, it's created from the cloud data
<ddstreet> you should edit your instance config from the cloud provider's controls
<blackboxsw> so rharper/Odd_Bloke for this SRU, I'm thinking we hold on IBMCloud, Hetzner and Oracle datasource enablement and tackle that separately once we circle the wagons with regard to Xenial expected behavior
<blackboxsw> I'm good with adding RbxCloud as that datasource was just recently added and detecting it falls at the end of the list for ds-identify
<rharper> blackboxsw: that sounds good to me
<blackboxsw> this is all for xenial and bionic, only adding RbxCloud, xenial and bionic will still lack Oracle and IBMCloud support. Xenial will also lack Hetzner
<knaccc> ddstreet thanks for your advice. is cloud-init attempting to connect to OVH on every reboot to see if there are new settings available? as long as i'm confident that this dedicated server will never need an automatic network setting update or any other type of updates from OVH via cloud-init, do you see any problem with me just disabling cloud-init by doing "touch /etc/cloud/cloud-init.disabled"? is that
<knaccc> the best way to disable it? it is a bit creepy to me that OVH could just use cloud-init to mess with my dedicated server unexpectedly
<rharper> knaccc: cloud-init generally only operates on first boot; subsequent boots runs cloud-init to determine if it is on the same instance or if it has been captured and booted on a different platform; in which case it clears data and runs like first boot again
<rharper> knaccc: I'm not sure which Datasource OVH uses ( I think OpenStack, but not 100%s) most datasources do not generate network config on *every* boot; some platforms do (Azure for example);  if you are only worried about network config changes, you can configure cloud-init to not bother configuring networking at all;
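The "not bother configuring networking at all" option rharper mentions is a documented one-line drop-in:

```yaml
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}
```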
<Odd_Bloke> blackboxsw: To be clear, the issue is not that we're missing DSes, it's that our two lists are inconsistent.
<blackboxsw> Odd_Bloke: sorry I figured that out later. but we also have missing datasources on Bionic and Xenial
<rharper> Odd_Bloke: do you think we miss these due to image builds for certain platforms which might include the DS automatically ?
<lucasmoura> blackboxsw, I am trying to write one of our manual tests for the next SRU. I am starting with focal, but I just want to verify something. Is it right that the cloud-init package to be tested is not on focal-proposed yet ?
<rharper> it seems like some of them would scream if cloud-init didn't work at all on certain releases
<rharper> lucasmoura: yeah, it's not uploaded to the pocket release yet
<lucasmoura> rharper, ack
<rharper> lucasmoura: instead you can: add-apt-repository -y ppa:~cloud-init-dev/proposed
<rharper> then apt update && apt install cloud-init
<lucasmoura> Oh right, now I remember that we commented on that issue in the daily. The lxc-proposed-snapshot uses the pocket by default, right?
<Odd_Bloke> rharper: blackboxsw: Given the uncertainty we have over it, and the lack of bug reports, I think we should proceed with the current set of DSes.
<blackboxsw> Odd_Bloke: I've specifically added RbxCloud (because it's newly added to tip) to X, B, E and F
<Odd_Bloke> That seems fine to me too. :)
<blackboxsw> ok, the others let's leave untouched
<knaccc> rharper yes OVH is openstack, but that's for their public cloud. I think they're just suddenly using cloud-init on their private dedicated servers too since it makes it easy for them. Just to clarify, are you saying that cloud-init is probably connecting to OVH on every reboot to ask OVH if any changes need to be made, and OVH is saying "no" each time? or is it my server that it is itself deciding not to
<knaccc> contact OVH on subsequent boots?
<blackboxsw> and RbxCloud is last detected datasource anyway, so no impact to other datasources unless nothing else matches
<rharper> knaccc: no; I can't say what they're doing without seeing some logs;  If they are using it on dedicated servers; it may use "offline" datasources like NoCloud which reads from a file, or ConfigDrive which reads from an iso
<rharper> knaccc: you can see which datasource was detected  looking at /run/cloud-init/cloud.cfg , would show something like  datasource_list: [ NoCloud, None ]
<rharper> knaccc: on platforms where there is a remote datasource (like OpenStack), cloud-init does not reconnect to the metadata service on subsequent boots by default; in some datasources, the only way to determine an instance's unique id is to fetch the values from the platform's metadata server; for those platforms, cloud-init does fetch the metadata on each boot; if the instance-id matches, no further work is done.
<Odd_Bloke> blackboxsw: +1 on focal.
<blackboxsw> lucasmoura: here's what we typically push to a vm under test https://github.com/cloud-init/ubuntu-sru/blob/master/sru-templates/manual/ec2-sru#L35-L40
<blackboxsw> which does the setup cloud-init/proposed  PPA
<blackboxsw> falcojr: too ^   so it's a good guideline on what to test until we ping the ubuntu SRU team once my current work-in-progress ubuntu/* branches are uploaded
<blackboxsw> thanks Odd_Bloke build-and-pushing ubuntu/focal then
<blackboxsw> just eoan bionic and xenial to go if lucasmoura falcojr wanted to look at those too I've captured the same type of new-upstream-snapshot changesets for those that I just pushed into ubuntu/focal  (though I haven't updated the PR doc/context for the drop of debian/control build-deps and the cloud-init.templates RbxCloud addition
<blackboxsw> eoan https://github.com/canonical/cloud-init/pull/409
<blackboxsw> bionic https://github.com/canonical/cloud-init/pull/407
<knaccc> rharper this is my /etc/cloud/cloud.cfg: https://pastebin.com/raw/YwTa8Cpb I think that since there are no datasources listed in that, that means there are no updates being checked for on every reboot?
<blackboxsw> xenial https://github.com/canonical/cloud-init/pull/406
<blackboxsw> knaccc: you can /should also check /etc/cloud/cloud.cfg.d/90_dpkg.cfg
<blackboxsw> that is typically where the cloud image creator defaults that datasource_list
<lucasmoura> blackboxsw, thanks I will use this approach then
<knaccc> blackboxsw that has just this line, and I don't know what it means: datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, RbxCloud, None ]
<lucasmoura> And I can help with reviews, no problem. I will start with eoan
<blackboxsw> knaccc: so that list means on initial boot, the ds-identify script will try to detect each of those datasources in that specific order. so if cloud-init hadn't already detected the datasource for your image, it'd try to go through that list to find which one matches your platform. sorry I may have muddied the water with your previous discussion, I was just responding to your latest comment
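Relatedly, once the platform is known, an image or admin can pin that search by overriding datasource_list in a higher-numbered drop-in (file name below is illustrative):

```yaml
# /etc/cloud/cloud.cfg.d/99-pin-datasource.cfg
datasource_list: [ ConfigDrive, None ]
```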
<blackboxsw> I'm reading your backlog knaccc to respond more appropriately to your underlying concern (it was about whether cloud-init would try to redetect/reconfigure your instance across boots right ?)
<knaccc> blackboxsw yes that's right
<knaccc> blackboxsw i just want to make sure that now that OVH has used cloud-init once to auto-configure the freshly reinstalled private dedicated server, that cloud-init will never mess with my config again on reboot
<lucasmoura> blackboxsw, I know we can easily extract that from the changelog, but on the steps to reproduce the package, it is missing the 2.header part
<knaccc> blackboxsw in which case, do you agree that the best way to stop cloud-init from messing with my system is to just do "touch /etc/cloud/cloud-init.disabled"?
<Odd_Bloke> I'm looking at eoan now.
<blackboxsw> thanks Odd_Bloke, I'm uploading ubuntu/focal
<Odd_Bloke> blackboxsw: Still working through the patches, but I've already posted a Q, FYI.
<rharper> knaccc: no, you need to look at /run/cloud-init/cloud.cfg ;  /etc/cloud/cloud.cfg is the default settings and it enables all of the supported datasources;  at boot time cloud-init uses a systemd generator to detect which platform (or which datasource) is configured. NoCloud looks for files in /var/lib/cloud/seed/nocloud-net/* or an iso/filesystem label of 'cidata', for example, and then if we find a datasource or platform, we enable
<rharper> cloud-init to run (otherwise we don't run at all)
<rharper> knaccc: w.r.t disabling cloud-init, touching that file will disable cloud-init
<knaccc> rharper aha, /run/cloud-init/cloud.cfg says just: datasource_list: [ ConfigDrive, None ]
<blackboxsw> and knaccc cloud-init status --long will show you ultimately what cloud-init proper detected as the valid datasource.
<blackboxsw> so I'd expect cloud-init status --long to show ConfigDrive in the output
<knaccc> yes, that --long command only shows: DataSourceConfigDrive [net,ver=2][source=/dev/nvme0n1p4]
<blackboxsw> ok so the configdrive path source in that case was from /dev/nvme0n1p4
<knaccc> blackboxsw ok, now i'm really confused. my two SSD drives (as reported by parted) are /dev/nvme0n1 and /dev/nvme1n1, and i have no idea what /dev/nvme0n1p4 is
<blackboxsw>  /dev/nvme0n1p4  is just partition 4 on  /dev/nvme0n1
<blackboxsw> I think
<knaccc> i guess it could be a temporary partition that existed for a while during machine setup
<knaccc> and it gets wiped out when the disk is configured
<knaccc> ok, i think i'm starting to get comfortable understanding this enough to just disable cloud-init now and not worry about the consequences, you've both been a great help in enabling me to get some kind of mental model of what cloud-init is doing and where, so thank you blackboxsw rharper
<knaccc> i was terrified that cloud-init would suddenly overwrite things on reboot and brick the machine, but i'm feeling much better about how to handle this now
<knaccc> (by brick the machine, i mean suddenly revert the network config or something else unexpectedly, and cause things to break)
<rharper> knaccc: sure, glad we could help
<Odd_Bloke> blackboxsw: Did you see my comments on eoan?
<Odd_Bloke> Just realised I didn't ping you with them, now I'm done.
<Odd_Bloke> Haha, I see in my inbox right now that you have. :p
<powersj> FYI cloud-init + azure whitepaper: https://pages.ubuntu.com/rs/066-EOV-335/images/Cloud-init_on_Azure.pdf
<powersj> ^ AnhVoMSFT thanks for your help on that
<blackboxsw> wow excellent powersj and AnhVoMSFT !
<blackboxsw> Odd_Bloke: ok, thanks I see the issue with the quilt patch renderer-do-not-prefer-netplan.patch
<blackboxsw> lucasmoura: it'll be the same issue on bionic too. you can see Odd_Bloke's failure by running: quilt push -a; tox -e py3 tests/unittests/test_render_cloudcfg.py
<blackboxsw> just so we know best approach for verifying future  upload checks. # apply all quilt patches and test the world  with quilt push -a; tox -p auto
<lucasmoura> blackboxsw, okay
<blackboxsw> sorry gents, I need about 20 mins to fix this on the 3 branches
<blackboxsw> Xenial/bionic/eoan
<Odd_Bloke> blackboxsw: I caught this by performing a build locally, should we include that as part of the process before you submit the PR?
<blackboxsw> Odd_Bloke: you think new-upstream-snapshot should run sbuild?
<blackboxsw> or is it PR review requirement
<Odd_Bloke> I think the submitter (and soon-to-be uploader) should run it, I don't think that automation should Just Do It (because different people will have different package building setups).
<blackboxsw> Odd_Bloke: agreed. I'll add a PR to ubuntu-sru to mention that pre-requisite prior to PR.
<Odd_Bloke> Yeah, new-upstream-snapshot could just mention it in that block of commands it suggests running?
<blackboxsw> yes that works
<tgm4883> Trying to use cloud-init on a centos 7 image in vmware, feels like I'm close, but cloud-init throws a warning that it's unable to get data from the vmware source. When I use the vmware rpctool I can see and decode the base64 data so the VM at least knows about it.
<tgm4883> But nothing that I've told it to do actually happens, any pointers on where to look next? The logs only tell me that the data isn't found, but I can see the data when I query it
<rharper> tgm4883:   maybe test out a newer cloud-init version?  https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/  ;   vmware issues have been a challenge for the community to work with.
<rharper> tgm4883: there's also some use of a non-upstream datasource from vmware,  https://github.com/vmware/cloud-init-vmware-guestinfo/blob/master/DataSourceVMwareGuestInfo.py   which we don't directly support; I've seen users attempt to use this version (it works for some and not for others)
<tgm4883> Will do
<tgm4883> yea that's the one I'm using
<tgm4883> My *very limited* understanding of how this all works is that these datasources are essentially plugins for cloud-init correct? So maybe if I understood how the data gets from that plugin to cloud-init (or rather, if I could somehow run and debug the plugin directly) it might help
<tgm4883> I do see cloud-init trying to use it, but just says that getting data from that class failed
<rharper> tgm4883: you can rerun cloud-init manually if you can access the image;  typically we modify the image with a root password and ssh keys, etc when debugging;  once on a system, you can manually run 'cloud-init init --local' and 'cloud-init init'  (those are the first two stages which exercise the datasource);
<blackboxsw> eoan finally fixed for tomorrow, or any late birds still around https://github.com/canonical/cloud-init/pull/409
<tgm4883> rharper: thanks for the help. I didn't get a chance to try the new version of cloud-init yet, but I did notice that the datasource that is installed in vmware's RPM is older than the one installed via the install script, and the one in the install script at least returns data when running it by itself, so that gives me some more to test with
<rharper> tgm4883: ah, ok
<rharper> tgm4883: sure,  glad we can help
#cloud-init 2020-06-03
<lucasmoura> Hey everyone, do we have a script called "sbuild-it" ? I am reviewing https://github.com/canonical/cloud-init/pull/407 and one of the steps is to run:
<lucasmoura>  sbuild-it ../out/cloud-init_20.2-45-g5f7825e2-0ubuntu1~18.04.1.dsc
<lucasmoura> But I can't find this script in either uss-tableflip or qa-scripts
<Odd_Bloke> I'm not familiar with it.
<lucasmoura> Also, a question for a different issue, if the bug that I am doing the manual testing is not related to a launchpad bug, is there a convention to name that file ?
<rharper> lucasmoura: I think blackboxsw might have that;
<lucasmoura> rharper, ack
<rharper> lucasmoura: there's no convention yet; I suspect git log line <commitish> replacing spaces with -
<lucasmoura> rharper, okay
<rharper> or, we can start using gh-<pr-number>
<rharper> Odd_Bloke: blackboxsw ^ ?   for fixes without lp bugs to match in ubuntu-sru test-cases ?
<falcojr> is "cloud-init devel schema" supposed to work with jinja templates?
<falcojr> I get a 'Cloud config schema errors: format-l1.c1: File jinja.yaml needs to begin with "#cloud-config"' but our docs say the first line should be the jinja comment
<Odd_Bloke> That sounds like a gap we'll need to address as part of the schema work this cycle.
<rharper>  falcojr you need both ## template: jinja\n#cloud-config\nruncmd: ['echo', 'Hi']
<rharper> falcojr: oh I see; you want to schema check a template ...
<falcojr> thanks. I had another issue with my config so wanted to verify it with that before fully launching it
<falcojr> yeah
<rharper> that seems like a new use-case; as it's not *yet* fully formed
<rharper> until you render it
<rharper> the schema won't know what to do with {{ foo }} etc
<falcojr> right, makes sense
<rharper> let's file a bug/new-feature for that use-case;
<falcojr> I can do that (hopefully ;) )
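A toy sketch of the header handling discussed here (a hypothetical helper, not cloud-init's implementation): a jinja user-data template leads with the template header and carries #cloud-config on the next line, so the schema can only be meaningfully checked after rendering.

```python
# Hypothetical classifier for raw user-data, based on the headers above.
# Canonical spelling of the template header is assumed to be
# "## template: jinja"; spacing variants are tolerated here.
JINJA_HEADER = "## template: jinja"
CLOUD_CONFIG_HEADER = "#cloud-config"

def classify_user_data(text: str) -> str:
    """Return 'jinja-template', 'cloud-config', or 'unknown'."""
    lines = text.splitlines()
    if not lines:
        return "unknown"
    first = lines[0].strip()
    if first.replace(" ", "") == JINJA_HEADER.replace(" ", ""):
        # A template: #cloud-config must be the *second* line, and schema
        # validation only makes sense on the rendered output.
        if len(lines) > 1 and lines[1].strip() == CLOUD_CONFIG_HEADER:
            return "jinja-template"
        return "unknown"
    if first == CLOUD_CONFIG_HEADER:
        return "cloud-config"
    return "unknown"

print(classify_user_data("## template: jinja\n#cloud-config\nruncmd: ['echo', 'Hi']"))
# jinja-template
```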
<Odd_Bloke> rharper: falcojr: As for naming, I think we've used git-<abbreviated hash>.txt before, but I don't have a strong preference.
<falcojr> Odd_Bloke: sorry, for naming what?
<falcojr> bug here: https://bugs.launchpad.net/cloud-init/+bug/1881925
<ubot5> Ubuntu bug 1881925 in cloud-init "Schema validation fails on jinja template" [Undecided,New]
<Odd_Bloke> Oh sorry, that was lucasmoura not you.
<lucasmoura> Odd_Bloke, I have opened a PR with rharper naming suggestion. But I can rename it if the proposed convention should not be followed
<lucasmoura> Also, if my manual test references two distinct PRs, should I include both of them in the file name or just one ? In my PR I included just one, but I don't know if that is right
<rharper> cool
<blackboxsw> lucasmoura: thanks for the ping, I've added a PR which adds the sbuild-it script I use for package build  and validation. it's smoser's script originally, but I think it makes sense in our uss-tableflip repo
<blackboxsw> lucasmoura: https://github.com/canonical/uss-tableflip/pull/52
<blackboxsw> feel free to land and merge that into uss-tableflip if it looks good.
<Odd_Bloke> falcojr: I feel like we haven't reached consensus on my comment in https://github.com/canonical/cloud-init/pull/367/files#r434689535, gimme a shout if you want to jump on a call to get on the same page.
<falcojr> Odd_Bloke: now works for me. Standup room?
<Odd_Bloke> Sure, omw.
<blackboxsw> falcojr: nice schema devel gap/bug. the CLI tool wasn't fully wired up to the jinja part handling. so it's rudimentary. a fix would be helpful
<blackboxsw> landed https://github.com/canonical/uss-tableflip/pull/52
<blackboxsw> so an update of uss-tableflip will expose scripts/sbuild-it for package upload verification
<blackboxsw> also eoan, xenial and bionic upload PRs are all updated for review (so we can officially kickoff the SRU process with the 'final' bits). eoan: https://github.com/canonical/cloud-init/pull/409 bionic: https://github.com/canonical/cloud-init/pull/407 xenial: https://github.com/canonical/cloud-init/pull/406
<blackboxsw> I think lucasmoura is on #407 already and dan on #409
<blackboxsw> lucasmoura: sorry on the sbuild-it script, that should now be landed via the following merged pull request:  https://github.com/canonical/uss-tableflip/pull/52
<lucasmoura> blackboxsw, no worries. I have just used it to build the bionic package and now I was able to build the package
<lucasmoura> Also, I just finished reviewing the bionic PR
<Odd_Bloke> blackboxsw: One typo in the eoan changelog, then I think it's good. Thanks!
<blackboxsw> thanks Odd_Bloke, just pushed the changelog fix
<Odd_Bloke> Thanks, looking now.
<Odd_Bloke> blackboxsw: What does sbuild-it do for us other than save you from substituting the correct dist in?
<blackboxsw> hrm, travis seems to have queued and  not run the ubuntu/<series> branches for a while.
<blackboxsw> Odd_Bloke: sbuild-it just limits the # of params you need to pass, making it a simple sbuild-it *.dsc
<blackboxsw> just helpful to cut down on the sbuild params. not really required  or necessary
<blackboxsw> maybe unnecessary  'sugar' as this could nearly be solved with a simple bash alias
<blackboxsw> momousta: in landing your branch can I amend your squashed commit message to the following: https://pastebin.ubuntu.com/p/7zRn9WpMqs/ ?
<blackboxsw> wanted to note that 410 is now being handled
<momousta> Sure, please go ahead.
<blackboxsw> excellent, and done
<blackboxsw> thanks again
<momousta> Thanks
<blackboxsw> it'll be in our next upload to ubuntu groovy
<Odd_Bloke> blackboxsw: +1 on eoan
<Odd_Bloke> falcojr: So your PR is ready to merge, but I'm done for the day and don't have the time to write a good merge commit message (which I think is particularly important for this change).  Would you mind putting one together for me to copy/paste first thing tomorrow (or perhaps by blackboxsw later)?
<falcojr> Odd_Bloke: Sure...I can do the squashing too
<blackboxsw> thanks Odd_Bloke uploaded that. and thanks lucasmoura uploaded bionic
<blackboxsw> just xenial remains for SRU upload, then I can ping the vanguard in #ubuntu-devel to accept the queued uploads
<blackboxsw> xenial is here https://github.com/canonical/cloud-init/pull/406
<blackboxsw> and I just uploaded 20.2-45 B, E and F to https://launchpad.net/~cloud-init-dev/+archive/ubuntu/proposed
<blackboxsw> so we should be able to test those bits directly in SRU validation
<lucasmoura> blackboxsw, I will review the xenial release right now
<lucasmoura> blackboxsw, we have a flake8 issue related to one of the existing patches in xenial
<lucasmoura> I have commented the issue on the PR
<blackboxsw> ugh thanks lucasmoura for both reviews
<blackboxsw> lucasmoura: for tomorrow I fixed and pushed the flake error
<blackboxsw> https://github.com/canonical/cloud-init/pull/406
#cloud-init 2020-06-04
<lucasmoura> hey everyone, I am setting up the ec2 manual test for the new SRU, but there is one bug that I am not sure how I can best verify was solved
<lucasmoura> This is the bug: https://bugs.launchpad.net/cloud-init/+bug/1876312
<ubot5> Ubuntu bug 1876312 in cloud-init "route metric on multihomed ec2 instances is based on mac address instead of device-number" [High,In progress]
<lucasmoura> Does anyone have any idea for a good approach to verify this on an ec2 instance ?
<rharper> lucasmoura: blackboxsw should have some steps, I think blackboxsw did the original work on ec2
<lucasmoura> rharper, ack. Thanks rharper
<rharper> lucasmoura: but effectively, you need to create an ec2 instance and attach multiple nics and multiple ips
<rharper> lucasmoura: I  think it's worth looking at the ec2 api on how to do that;  the cloud_test framework for ec2 shows some of the bits you  need (subnets, security groups etc)
<lucasmoura> rharper, Great, I will take a look at both. Thanks again rharper
<rharper> sure
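The behavior under verification in bug 1876312 can be sketched roughly like this (a hypothetical illustration of the route-metric idea, not cloud-init's actual code): metrics should follow EC2 metadata's device-number ordering, so the primary NIC (device-number 0) gets the lowest metric regardless of MAC sort order.

```python
# Hypothetical sketch: assign route metrics by EC2 device-number ordering
# rather than by MAC address ordering. Base metric value is arbitrary.
def route_metrics(nics, base=100):
    """Map MAC -> route metric, ordered by the NIC's device-number."""
    ordered = sorted(nics.items(), key=lambda item: item[1]["device-number"])
    return {mac: base + i for i, (mac, _) in enumerate(ordered)}

# MAC order and device-number order deliberately disagree here:
nics = {
    "0a:e1:22:33:44:55": {"device-number": 1},  # secondary NIC, lower MAC
    "0e:9f:22:33:44:55": {"device-number": 0},  # primary NIC, higher MAC
}
metrics = route_metrics(nics)
print(metrics["0e:9f:22:33:44:55"], metrics["0a:e1:22:33:44:55"])  # 100 101
```

A MAC-sorted implementation would have given the secondary NIC the lower metric, which is the misbehavior the bug describes.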
<Odd_Bloke> rharper: meena: https://github.com/canonical/cloud-init/pull/391/files <-- now with an initial implementation!
<Odd_Bloke> rharper: (Also: https://media1.tenor.com/images/c9c3cfc8a9f1ca0e25deed0741a184a1/tenor.gif?itemid=5690077 ;)
<rharper> Odd_Bloke: =)
<meena> Odd_Bloke: neat!
#cloud-init 2020-06-05
<smoser1> hey. if someone other than me wanted to look at
<smoser1>  https://github.com/canonical/cloud-init/pull/413#pullrequestreview-425327276
<smoser1> i'd appreciate it. i don't like giving people the runaround. if someone else is happy with the existing PR then i'm fine too.
<AnhVoMSFT> ugh I hate the github UI when posting comments - @Odd_Bloke I was trying to respond to your comment and somehow did cancel review by mistake and all my comments wiped
<Odd_Bloke> AnhVoMSFT: UGH, bummer.
<AnhVoMSFT> i posted my comment, happy to discuss further here if you would like
<AnhVoMSFT> PR: https://github.com/canonical/cloud-init/pull/369
<AnhVoMSFT> @Odd_Bloke What we have been seeing from some of our investigation into the latency issues during boot is slow DHCP responses from the DHCP server. The output of dhclient's stderr was what helped nail it. In particular, we could see multiple DHCPREQUESTs from dhclient (due to the DHCP server being overloaded and not responding in time to the first DHCPREQUEST). In addition, we also
<AnhVoMSFT> saw cases where a DHCPOFFER was posted, but then the DHCPACK arrived too late, which triggered dhclient to re-send DHCPREQUEST, etc.
<AnhVoMSFT> So yes, even in the successful case, you could tell how many attempts it took for dhclient to actually negotiate the DHCP lease successfully. Ideally we would want to see a timestamp PER message from dhclient, but this is only available through journalctl (unless we stream the stderr pipe to cloud-init.log, which is not going to be trivial given how the current code is set up)
<AnhVoMSFT> I do agree we should log both success and failed path - I left a comment there about it as well.
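Counting attempts from captured dhclient stderr, as described above, might look like this (a sketch; the sample lines only approximate ISC dhclient's output format):

```python
import re

# Sketch: count DHCPREQUESTs in captured dhclient stderr to see how many
# attempts the lease negotiation took. Line format is an approximation.
def dhcp_attempts(stderr_text: str) -> int:
    return len(re.findall(r"^DHCPREQUEST\b", stderr_text, flags=re.MULTILINE))

sample = """\
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3
DHCPREQUEST of 10.0.0.5 on eth0 to 255.255.255.255 port 67
DHCPREQUEST of 10.0.0.5 on eth0 to 255.255.255.255 port 67
DHCPOFFER of 10.0.0.5 from 10.0.0.1
DHCPACK of 10.0.0.5 from 10.0.0.1
"""
print(dhcp_attempts(sample))  # 2
```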
<smoser> Odd_Bloke: thanks for comment on #413
<smoser> we really ought to make subp return a named tuple
<smoser> with exit_code, out, err
<smoser> or otherwise at least return exit code. because if you pass rcs= then it gets lost
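smoser's suggestion could look something like this (subp here is a stand-in sketch, not cloud-init's actual util.subp):

```python
import subprocess
import sys
from typing import NamedTuple

# Sketch: return a named tuple so the exit code is never lost, even when
# the caller passes rcs= to accept non-zero exit codes.
class SubpResult(NamedTuple):
    exit_code: int
    out: str
    err: str

def subp(args, rcs=(0,)) -> SubpResult:
    proc = subprocess.run(args, capture_output=True, text=True)
    if proc.returncode not in rcs:
        raise RuntimeError(
            "%r exited %d: %r" % (args, proc.returncode, proc.stderr))
    return SubpResult(proc.returncode, proc.stdout, proc.stderr)

result = subp([sys.executable, "-c", "print('hi')"])
print(result.exit_code, result.out.strip())  # 0 hi
```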
<falcojr> do we have something like "lxc-proposed-snapshot" but that lets you pass in an arbitrary deb? I'm wanting to run cloud-init in a VM based off a particular commit
<falcojr> s/VM/container
<smoser> it feels like i've done that before...
<smoser> for sure you can point it at a ppa and it will do the right thing
<smoser> slower for sure
<smoser> falcojr: https://git.launchpad.net/ubuntu/+source/open-iscsi/tree/debian/tests/patch-image
<smoser> thats where i've done it before...
<smoser> you should add that to patch-image (which lxc-proposed-snapshot calls). and then it'd be functional in either lxc-proposed-snapshot or get-proposed-cloudimg
<smoser> sorry. above, s/patch-image/update-root/
<falcojr> interesting, I'll have to play with that
<falcojr> thanks
<smoser> yeah.. so it looks like you'd have to add the "copy debs" stuff in lxc-proposed-snapshot and get-proposed-cloudimg
<smoser> and then add some '--install-debs' or '--install-debs-dir' to update-root
<smoser> thanks for your work on cloud-init. from this point of view it looks like you're doing great.
<pleusmann> Hi. I am trying to use cloud-init with CentOS 7. I need to use it for network-config: I need to get the IP for an interface by DHCP, but I need to manually set the nameservers.
<pleusmann> I added a network-config in /etc/cloud/cloud.cfg.d/ but it seems to get ignored. In the logfile I see it being written to the ifcfg-script but there are no nameservers set :(
<pleusmann> Sorry, I cannot paste the exact config easily as I am on a vmware console. I used network v2, with ethernets, dhcp: true and addresses for two nameservers
<pleusmann> Is there anything I have to care about?
<pleusmann> It would also be fine for me to not have managed network by cloud-init, but disable-networking also doesn't have a noticeable effect.
<Odd_Bloke> rharper: Updated https://github.com/canonical/cloud-init/pull/391/ to address your comments _except_ for the pass to see if we can use any existing libraries instead of our own homegrown code, I'll do that now.
<rharper> Odd_Bloke: ok, cool
<rharper> pleusmann: can you share your v2 config you'd like to deploy with (or something similar) ?
<pleusmann> Odd_Bloke: I have to share a screenshot: https://ibb.co/SXDpnSS
<rharper> pleusmann: looking
<rharper> good enough
<lucasmoura> blackboxsw, I am working on the nic manual testing for ec2 instances and I thought that having at least 3 NICs would make the test better. However, it seems that the t2.micro does not support more than 2 network interfaces
<powersj> yaml and spaces...
<lucasmoura> An error occurred (AttachmentLimitExceeded) when calling the AttachNetworkInterface operation: Interface count 3 exceeds the limit for t2.micro
<lucasmoura> Should we choose another machine or testing with only 2 network interfaces is enough ?
<powersj> lucasmoura, fwiw here are the limits: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
<powersj> rharper, does that yaml look valid? or should ens160 be one more space in?
<rharper> powersj: I think it's valid;
<blackboxsw> lucasmoura: for testing if you'd like to test 3 nics I think you can do t2.small for your test
<lucasmoura> powersj, thanks for the info =)
<lucasmoura> blackboxsw, ack
<blackboxsw> and lucasmoura bringing up a bigger instance temporarily to validate specific tests is ok as far as costs, we just don't want to leave bigger instances running long
<rharper> pleusmann: ah, that's right; since you have DHCP configured, cloud-init does not render DNS entries; the expectation is that DNS info will be provided by the DHCP response
<pleusmann> rharper: It provides the wrong ones :(
<rharper> can you use static ip instead ?
<pleusmann> nope
<rharper> ok, so netplan does allow for dhcp-overrides section , I don't think cloud-init net rendering checks for dhcp-overrides when rendering different outputs though;
<rharper> you would want: to add a dhcp4-overrides: {'use-dns': 'false'}
<rharper> but cloud-init isn't yet looking at dhcp overrides so that it would properly emit the dns entries that are being ignored
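rharper's dhcp4-overrides point, as a toy sketch (a hypothetical helper, not cloud-init's renderer): with use-dns false, DHCP-supplied DNS should be ignored and the statically configured nameservers rendered instead.

```python
# Hypothetical decision helper mirroring the netplan v2 keys discussed
# above; the key names follow netplan's dhcp4-overrides schema.
def should_emit_dns(iface_cfg: dict) -> bool:
    """True if static nameservers should be rendered for this interface."""
    if not iface_cfg.get("dhcp4"):
        return True  # static config: always render configured nameservers
    overrides = iface_cfg.get("dhcp4-overrides", {})
    # With use-dns false, DHCP-supplied DNS is ignored, so render ours.
    return overrides.get("use-dns", True) is False

ens160 = {
    "dhcp4": True,
    "dhcp4-overrides": {"use-dns": False},
    "nameservers": {"addresses": ["10.0.0.2", "10.0.0.3"]},
}
print(should_emit_dns(ens160))  # True
```

Without the override block, a dhcp4 interface would return False here, which matches the behavior pleusmann is seeing.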
<pleusmann> Ok, I understand I am on an edge case and hopefully will get that dhcp-server fixed. Nevertheless I guess the use case is valid. Hope you can put it on the backlog
<rharper> pleusmann: please do file a bug if you have time
<pleusmann> rharper: Where is your bug database?
<meena> pleusmann: in the topic
<rharper> https://bugs.launchpad.net/cloud-init/+filebug
<pleusmann> found it. thx
<blackboxsw> lucasmoura: review on your chef schema for monday https://github.com/canonical/cloud-init/pull/375#pullrequestreview-425558648
<lucasmoura> blackboxsw, ack
 * blackboxsw goes down the active review queue for cloud-init the rest of the day (checking SRU PRs too)
<lucasmoura> blackboxsw, is it on xenial that we should check the network config at a different path than /etc/netplan/50-cloud-init.yaml ?
<blackboxsw> falcojr: did you see that rharper already had a pr up for this https://trello.com/c/Jk4enIi0/43-09fea85fhttps-gitlaunchpadnet-cloud-init-commit-id09fea85f-1870421http-padlv-1870421-net-ignore-renderer-key-in-netplan-config-3
<blackboxsw> was that what you referred to in standup?
<blackboxsw> I just landed PR https://github.com/cloud-init/ubuntu-sru/pull/100
<blackboxsw> which I think mapped to that test
<falcojr> yeah, that's what I mentioned this morning
<blackboxsw> lucasmoura: yes xenial should be /etc/network/interfaces.d/50-cloud-init.cfg
<blackboxsw> falcojr: ok I'll put that rharper guy on that card and mark it done.
<blackboxsw> :/
<blackboxsw> :)
<rharper> blackboxsw: =)
<rharper> I need to grab another one,  but I can't see the board
<blackboxsw> rharper: yeah I mentioned that in standup, we might want to open up that board and make it public given that we have an amazing cloud-init community helping with SRUs ;)
<rharper> blackboxsw: hehe
<blackboxsw> not sure how to go about that at the moment.
<rharper> yeah, no worries about the board;  if you tag me in here with the bug, I can work through a few at a time
<blackboxsw> will take it up w/ rick_h and team on Monday  to see what folks think. in the meantime, some that look good for rharper:
<blackboxsw> sru verification requests: https://git.launchpad.net/cloud-init/commit/?id=46cf23c2 and https://git.launchpad.net/cloud-init/commit/?id=723e2bc1
<blackboxsw> figured storagey would be good ?
<rharper> ack
<pleusmann> rharper: https://bugs.launchpad.net/cloud-init/+bug/1882300
<ubot5> Ubuntu bug 1882300 in cloud-init "not possible to override dns servers when using dhcp" [Undecided,New]
<rharper> pleusmann: cool, thanks
<pleusmann> will cloud.cfg somehow be rewritten on first boot after installation of cloud-init? I added disable-network-config directives to the end of the file, but they are gone after first boot. on subsequent boot they seem to persist
<pleusmann> "network-config=disabled" kernel parameter doesn't seem to be respected, BTW
<rharper> pleusmann: cloud.cfg files are not removed/overwritten; it's a permanent config;
<rharper> pleusmann: typically, network-config is provided by platforms on a per-instance basis;  so on OpenStack or Ec2 or Azure, cloud-init queries the metadata service for a network-config that describes the config for this instance
<rharper> more static datasources, like NoCloud or ConfigDrive, include a network-config in their meta-data;  but statically; it has to be available locally (as a file, part of the datasource attached to the instance)
<rharper> pleusmann:  there was a recent bug w.r.t the kernel cmdline disable, you have to base64 encode it on older versions of cloud-init (not sure what you're running right now)
<rharper> if you're on centos7, you might try out our daily rpm builds to see if things are working better  there, https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/
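The base64-encoded kernel cmdline form rharper mentions can be illustrated like this (a minimal sketch; the YAML stanza is cloud-init's standard network-disable config, and the encoding requirement applies to older cloud-init versions):

```python
import base64

# Sketch: older cloud-init versions expect the kernel cmdline value of
# network-config= to be base64-encoded YAML rather than the bare word
# "disabled".
disabled_yaml = "network: {config: disabled}"
encoded = base64.b64encode(disabled_yaml.encode()).decode()
cmdline_arg = "network-config=" + encoded
print(cmdline_arg)

# Round-trip check: decoding recovers the original YAML.
assert base64.b64decode(encoded).decode() == disabled_yaml
```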
<pleusmann> I am preparing a vsphere template. later they are running "nocloud", but I experience this behavior when running the initial container. no non-local datasources attached
<rharper> well, nothing in cloud-init itself deletes files in /etc
<rharper> so not sure what you're seeing
<rharper> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1862702
<ubot5> Ubuntu bug 1862702 in cloud-init (Ubuntu) "cannot disable cloud-init networking despite trying hard" [Medium,Fix released]
<pleusmann> rharper: Do I get it right, that "network-config=disabled" only works in quite recent builds, but base64-encoding the yaml-equivalent works in older releases?
<rharper> pleusmann: for kernel command line, yes
<pleusmann> Ok. Thanks. Probably I'll just get that damn DHCP-Server fixed ;)
<pleusmann> Thanks for your help, anyway
<lucasmoura> blackboxsw, should we terminate the instance after performing the test ? Currently, the ec2-sru template does not do that, but is there a reason for not terminating ?
<falcojr> blackboxsw: where did you get those issues you just gave to rharper ?
<falcojr> I'm not seeing them on our board and I thought our board had everything we had accounted for after being scrubbed
<blackboxsw> falcojr: I grabbed them from https://github.com/cloud-init/ubuntu-sru/pull/100 and https://trello.com/c/Jk4enIi0/43-09fea85fhttps-gitlaunchpadnet-cloud-init-commit-id09fea85f-1870421http-padlv-1870421-net-ignore-renderer-key-in-netplan-config-3 and I put rharper's head on them, but I was going to chat with you lucasmoura and Odd_Bloke to see if we want to keep spinning out separate cards for each item in that
<blackboxsw> verification list or just assign each item from that verification checklist and "check" the checkbox once a PR lands related to it?
<blackboxsw> falcojr: Odd_Bloke lucasmoura what would you prefer, ejecting each item to separate cards in the SRU Verification lane, or separate cards we can mark as closed
<blackboxsw> sorry my cut-n-paste leaves a lot to be desired: if you open up the card https://trello.com/c/g8Wf4n6c/9-create-trello-cards-for-each-commit-that-could-represent-a-functional-change-to-ubuntu  you can see rharper's avatar on 2 checklist items
<falcojr> yeah, I just found the card cause that was a fun link :D
<blackboxsw> do you prefer this approach to allocating ownership, or "convert to card" to kick that item out of the list
<blackboxsw> I want to cut down on pointy clicky work related to this, so whatever's easier. And we'll talk post SRU with rick about the biggest pain points with this SRU (as it's a big one) to see how we can make this process easier/simpler
<falcojr> since we can put our heads on checklists, I'm fine with just the checklist, but don't have strong opinions either way
<Odd_Bloke> I'm happy to follow what you folks decide.
<lucasmoura> I am fine with the checklist as well
<blackboxsw> yeah the heads on checklists is a newer trello feature. ok let's do that, and when the PR is merged, we'll just tick the checklist item
<blackboxsw> then the lane stays smaller
 * falcojr thumbs up
<blackboxsw> and we have less copying
<blackboxsw> thx
<falcojr> and you pruned this list already, correct?
<falcojr> geez, I thought we were almost done with the manual tasks
<rharper> two paths, manual platform tests, and then *per bug fix* verification
<rharper> the latter is the long pole since the previous SRU was  a while back
<blackboxsw> lucasmoura: review on https://github.com/cloud-init/ubuntu-sru/pull/99/files#diff-92de51f24fc6c44bb6d06af7734d8e34R34-R45 for monday
<blackboxsw> falcojr: I did, but you have plenty of room to question whether something sounds like it may not need a verification test.
<blackboxsw> I'm thinking we can punt anything that generally should be exercised by typical cloud-init successfully completing without WARNING/traceback in normal boot logs.
<blackboxsw> falcojr: as soon as I'm through SRU reviews I'll re-prune that list from top to bottom to see if I can eject any more from our SRU process
<blackboxsw> falcojr: lucasmoura biggest/most critical for SRU verification are the big cloud manual verification test runs
<blackboxsw> above and beyond the individual verification
<blackboxsw> because failure to succeed on cloud boot/upgrade/verification affects all VMs not just ancillary features (and would result in us blocking the current SRU and respinning the upload with a fix)
<blackboxsw> Whereas, with these individual feature verifications, if we find a bug in the feature during SRU validation, we have been known to categorize that as something which doesn't have to block the SRU and we can work on it after the SRU completes.
<blackboxsw> also I'm going to start grabbing items from this queue today too.
<blackboxsw> falcojr: minor change request for https://github.com/cloud-init/ubuntu-sru/pull/101#pullrequestreview-425635894
<blackboxsw> just to link all extra verification work to that SRU top-level readme
<blackboxsw> and we'll land it
<blackboxsw> lucasmoura: were you working on [70dbccbb](https://git.launchpad.net/cloud-init/commit/?id=70dbccbb) [#1876312](http://pad.lv/1876312) DataSourceEc2: use metadata's NIC ordering to determine route-metrics (#342) ?
<blackboxsw> or something slightly different
<lucasmoura> I am, but not on the individual SRU test
<lucasmoura> but on the ec2 instance directly
<lucasmoura> using the ec2-sru template for testing it
<blackboxsw> sorry I don't follow?
<blackboxsw> ahh you are doing that as part of extending the ec2 manual cloud test template?
<lucasmoura> yes
<blackboxsw> ok then, I'll take my face off that card, since it sounds like you are going to tackle part of that in the manual Ec2 test.
<blackboxsw> and I'll grab a different item
<lucasmoura> Okay
* Odd_Bloke changed the topic of #cloud-init to: Migration to travis-ci.com in progress, PR merges blocked until Monday | pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting June 16 16:15 UTC | 20.1 (Feb 18) | 20.2 (Apr 28) | https://bugs.launchpad.net/cloud-init/+filebug
<Odd_Bloke> powersj: If you're around, you might have the permissions on the repo to change the status check which blocks PRs from the old Travis CI one to the new Travis CI one; that would fix the issue I just updated the topic with.
<powersj> Odd_Bloke, any idea where I would go to do that?
<powersj> I see a webhook for travis-ci.org
<smoser> https://github.com/canonical/cloud-init/pull/416
<smoser> it passes tox -e py3
<smoser> but dont know more than that at the moment.
<blackboxsw> holy moly. smoser going for karma :)
<blackboxsw> falcojr: landed https://github.com/cloud-init/ubuntu-sru/pull/101 thx
<powersj> Odd_Bloke, I changed the status check, but I see 2x Travis CI builds on my PR
<powersj> Odd_Bloke, actually this looks right now: https://github.com/canonical/cloud-init/pull/418 so we'll need to re-check old merge requests
#cloud-init 2020-06-06
<smoser> rick_h: i just signed contributors agreement.
<smoser> not just "hooray for me", but because c-i is holding on that i think.
