#cloud-init 2014-02-18
<fandi> hello.. how to install cloud-init in centos ?
<fandi> cloud-init 7 :) 
#cloud-init 2014-02-20
<harlowja> ok i'm back
<harlowja> smoser` ^^
<harmw> back from the snow harlowja :)
<harlowja> def
<harlowja> i almost did, http://www.heitz.us/Corbet%27s%20Fabien%201.jpg but couldn't get myself to
<harlowja> lol
<harlowja> too much shitting my pants
<harlowja> lol
<harmw> wtf
<harlowja> lol
<harmw> looks nice :>
<harlowja> ha
<harlowja> thats a well known scary part, lol, but i did some other more sane parts
<harmw> where is that anyway
<harlowja> jackson hole, wyoming
 * harmw fires up maps.google
<harlowja> *jackson, wyoming (jackson hole is the ski mountain)
<harlowja> http://www.jacksonholechamber.com/
<harlowja> pretty place, also where the rich and famous buy their own ranches and crap
<harmw> hehe
<harmw> hm, is that -the- salt lake city nearby?
<harmw> lol, I always kinda thought that was in canada :|
<harmw> damn
<harlowja> haha
<harlowja> 5 hours from SLC
<harmw> harlowja: you've got word back from Sean?
<harmw> yea well, 5 hours is nothing for you USA guys right :p
<harmw> (while it's more like driving from end of the country to the other over here :p)
<harlowja> harmw sean said he was gonna get back into it
<harlowja> after mail crap is now done afaik
<harlowja> let me bug him
<harlowja> and get it up into freebsdports and stuff
<harmw> ok, cool
<harlowja> messaging him, seems like he's not around
<harlowja> harmw i'll find him
<harlowja> lol
<harmw> ok :p
<harmw> I'm trying to listen in on the notifications.info queue (rabbit), any tool I can use for that?
<harlowja> hmmm, simple kombu listener script?
<harmw> yea, or just some cmdline tool like amqp-consume
<harmw> http://www.rabbitmq.com/management.html
<harmw> looks usable as well
<harlowja> yup yup, harmw i've used that before
<harmw> ah, and after rabbitmqctl set_permissions -p / harm ".*" ".*" ".*" it all comes to life :>
<harmw> nifty web interface
<harlowja> yup
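Following harlowja's "simple kombu listener script" suggestion, a minimal sketch might look like this (broker URL, credentials, and the notification payload fields are all assumptions):

```python
import json

def handle_notification(body):
    """Pull the interesting fields out of an OpenStack-style notification
    payload (the field names here are assumptions)."""
    msg = json.loads(body) if isinstance(body, (str, bytes)) else body
    return "%s %s" % (msg.get("timestamp"), msg.get("event_type"))

# Wiring this into a kombu consumer against a live broker would look
# roughly like (untested sketch):
#
#   from kombu import Connection
#   with Connection("amqp://harm:secret@localhost//") as conn:
#       queue = conn.SimpleQueue("notifications.info")
#       while True:
#           message = queue.get(block=True)
#           print(handle_notification(message.payload))
#           message.ack()
```

The rabbitmq management plugin linked above is the zero-code alternative: it exposes the same queues through a web UI.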
<harmw> smoser`: weren't you planning on a new cloudinit release this month?
#cloud-init 2014-02-22
<harlowja> smoser` harmw https://www.openstack.org/vote-atlanta/Presentation/askflow-this-time-with-rainbow-ponies 
<harlowja> vote, thx ;)
<harmw> and just wtf is taskflow :p
#cloud-init 2015-02-16
<Odd_Bloke> j12t: Are you looking at the logs in /var/log?
<Odd_Bloke> smoser: harlowja_away: Do we have a mailing list for cloud-init development?  I have a general-ish question that I'd like to discuss...
<harmw> Odd_Bloke: afaik your best bet is through here
#cloud-init 2015-02-17
<tennis_> hi guys. What does this mean? https://gist.github.com/anonymous/d07b97605b2deb8eb2aa
<Odd_Bloke> tennis_: What version of cloud-init are you running, and which data source are you using/where are you running it?
<tennis_> Odd_Bloke: It's on a CentOS 7.0 box running on Vagrant.  Getting cloud-init version ...
<tennis_> Odd_Bloke: its version 0.7.5
<tennis_> Odd_Bloke: not sure what you mean by "data source" 
<tennis_> ?
<Odd_Bloke> tennis_: Can you pastebin /var/log/cloud-init.log?
<tennis_> Odd_Bloke: ok...just a sec
<tennis_> Odd_Bloke: here it is: https://gist.github.com/anonymous/f8df9e9903517df34a74
<Odd_Bloke> tennis_: So those warnings shouldn't be appearing; that's a bug.
<tennis_> I think I see part of the problem.  169.254.169.254 is the address for getting meta-data on cloud instances (aws).  That does not exist on Vagrant. Looks like i need to remove that part for vagrant instances (a hassle), or spoof the address if running under Vagrant (not sure if possible).
<Odd_Bloke> tennis_: But, yeah, they're just in exception handling.
<Odd_Bloke> tennis_: The exception is telling us that those metadata addresses don't exist.
<tennis_> Odd_Bloke: Still a bug then?
<Odd_Bloke> tennis_: Yeah, those exceptions are expected so they shouldn't be showing up as warnings.
<Odd_Bloke> (This might have been fixed since cloud-init 0.7.5)
<Odd_Bloke> tennis_: You can configure the data sources that cloud-init will use; that's probably what you want to do.
<tennis_> Odd_Bloke: ah, ok.  Afaik, this is the latest version in CentOS 7.0
<tennis_> Odd_Bloke: Can you point me to an example? :)
<Odd_Bloke> tennis_: Do you have anything in /etc/cloud?
<tennis_> Think so ... brb
<Odd_Bloke> (I only work with cloud-init on Ubuntu, so the packaging etc. might be quite different)
<tennis_> Here is the config file from /etc/cloud: https://gist.github.com/anonymous/8ad13661fa1be5a4cb44
<Odd_Bloke> tennis_: You can add a 'datasource_list: [ None ]' line to that (or, possibly, to a file in /etc/cloud/cloud.cfg.d) which will stop it from trying to use the default list of datasources.
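A drop-in along the lines Odd_Bloke describes could look like this (the file name is hypothetical; the `datasource_list` line is the part that matters):

```yaml
# /etc/cloud/cloud.cfg.d/99-no-datasources.cfg (hypothetical file name)
# Stop cloud-init from probing the default datasource list (Ec2 etc.):
datasource_list: [ None ]
```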
<tennis_> ok.  Do you happen to know if anyone ever spoofed the 169.254... address for vagrant so the init files can be the same everywhere?
<tennis_> Hmmmm .... I wonder, why does datasource even try to run on vagrant, it says "Ec2" as a keyword. That would imply it should ignore this def on vagrant/virtualbox
<Odd_Bloke> tennis_: The only way cloud-init knows which data sources to use is by being configured.
<Odd_Bloke> tennis_: Otherwise it tries them all until it finds one that works.
<tennis_> Odd_Bloke: ok, but "ec2" is configured, and the environment is not ec2.  Just a thought. :)
<tennis_> That is, the ec2 keyword is mostly meaningless. :)
<tennis_> Not complaining, just saying.
<Odd_Bloke> tennis_: You'd have to speak to the package maintainer in CentOS; that's not in the upstream config file: http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/config/cloud.cfg
<Odd_Bloke> tennis_: (But, yeah, it's configuration that won't be used in your use case)
<tennis_> Odd_Bloke: not sure what you mean by it not being in the upstream config file. ? 
<Odd_Bloke> tennis_: That "datasource:\n  Ec2:..." stanza isn't in the cloud.cfg distributed by cloud-init; I assume it's been added by the CentOS packaging.
<tennis_> Odd_Bloke: ah, ok...
<Odd_Bloke> But, again, it doesn't really matter.  If you configure cloud-init to not use the EC2 datasource, it won't use that configuration.
<tennis_> Odd_Bloke: yup.  Or I figure out some way to emulate the services at 169.254.169.254 ... not sure which is the better solution.  
<Odd_Bloke> tennis_: Configuration.
<tennis_> Odd_Bloke: think so?
<Odd_Bloke> tennis_: Given it's a single line to drop in, yes.
<Odd_Bloke> Much better than re-implementing EC2's entire metadata service. :)
<tennis_> Odd_Bloke: true dat. :) Thanks
<ericsnow> what would it take to get vSphere (vmware) support added to cloud-init?
<ericsnow> would it be better to take the approach of using ConfigDrive? (Ben Howard recommended this to me)
<ericsnow> what about other cloud providers (rackspace cloud and softlayer)?
<harlowja_> fyi, go vote for https://www.openstack.org/vote-vancouver//Presentation/title-evil-superusers-howto-launching-instances-to-do-your-bidding :-P
#cloud-init 2015-02-18
<mhroncok> hi, I've seen Python 3 patches were merged, that's great, thanks
<mhroncok> smoser: is there a time estimate for new release containing Python 3 support?
<fish_> hi
<fish_> I probably just need a nudge in the right direction but how does passing the actual arguments to cloud-init work technically? on AWS it reads the config from the UserData stuff, but where does cloud-init pull that from on the local system?
<fish_> is it fetching it from http://169.254.169.254/latest/meta-data/? or reading it somehow from the local instance?
<Odd_Bloke> fish_: It uses EC2's metadata server.
<fish_> Odd_Bloke: ok.. well then my next question would be how to mock that up for testing my cloud-init configs without spinning up a new instance every time
<Odd_Bloke> fish_: I _think_ if you put stuff in /var/lib/cloud/seed/ec2 then it will be picked up; not sure on the details of what would need to go there.
<Odd_Bloke> fish_: The docs helpfully have "TBD" in that section. :p
<fish_> Odd_Bloke: ah thanks. at least that's good enough to dig through the code :)
<Odd_Bloke> fish_: Try /var/lib/cloud/seed/ec2/user-data and /var/lib/cloud/seed/ec2/meta-data, maybe.
<Odd_Bloke> fish_: Line 54 of DataSourceEc2.py is the place to look.
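Based on Odd_Bloke's hints, seeding could be sketched like this. The exact contents the Ec2 seed requires are an assumption (the docs say "TBD"); SEED_DIR defaults to a relative path here for dry runs, and on a real image it would be /var/lib/cloud/seed/ec2:

```shell
# Sketch: seed the Ec2 datasource from local files instead of the
# metadata server at 169.254.169.254.
SEED_DIR="${SEED_DIR:-./seed/ec2}"
mkdir -p "$SEED_DIR"

# meta-data: minimal fields (an assumption of what the seed needs)
cat > "$SEED_DIR/meta-data" <<'EOF'
instance-id: iid-local01
local-hostname: seeded-host
EOF

# user-data: the cloud-config you want to test
cat > "$SEED_DIR/user-data" <<'EOF'
#cloud-config
runcmd:
  - echo seeded boot worked
EOF
```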
<Odd_Bloke> harlowja_: I've added support for CloudStack password servers to cloud-init: https://code.launchpad.net/~daniel-thewatkins/cloud-init/cloudstack-passwords/+merge/250028
<Odd_Bloke> harlowja_: bzr blame suggests you might be a good person to tag for a review. :)
<harlowja_> uh oh, cloud-stack
<harlowja_> :)
<harlowja_> is there anyway u can add some tests for that
<harlowja_> i'll add some other comments to
<Odd_Bloke> harlowja_: Thanks! I'm about to finish my day here, so will pick them up tomorrow morning (UTC). :)
<harlowja_> :)
<harlowja_> kk
<Odd_Bloke> harlowja_: The CloudStack password server is terrible; it doesn't return any HTTP headers.
<harlowja_> :-/
<harlowja_> sad
<Odd_Bloke> harlowja_: It literally returns those strings.
<harlowja_> but those are my passwords!!
<harlowja_> haha
<Odd_Bloke> harlowja_: Which is also why url_helper can't be used; HTTP libraries break if they don't get HTTP headers.
 * harlowja_ pretty sure u can tell it to send some headers in
<harlowja_> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/url_helper.py#L179 (headers or headers_cb there)
<harlowja_> unless u are thinking of some other headers or something
<Odd_Bloke> harlowja_: Sorry; I meant that the password server doesn't return a status line or headers.
<harlowja_> thats a weird server, lol
<harlowja_> seems like someone forgot to read the http spec when making that, lol
<Odd_Bloke> harlowja_: Which means that HTTP libraries fall over when parsing the response.
<harlowja_> :-/
<harlowja_> weird stuff
<Odd_Bloke> Yeah, it's been a fun few days. :p
<harlowja_> so its like a sorta-half-baked http/not-http server, lol
<Odd_Bloke> So we use HTTPConnection to make the request, but then go in and read straight off of the socket it created.
<harlowja_> can u at least chop out the 'http_client.HTTPConnection' and make a little mini client that has information about this?
<harlowja_> like a PasswordServerClient
<harlowja_> and add docs saying this thing is weirdo, lol
<Odd_Bloke> Good thinking.
<harlowja_> why even use a 'http_client.HTTPConnection' ?
<harlowja_> vs just a raw sockt
<harlowja_> seems like its so custom that its not even http, lol
<Odd_Bloke> harlowja_: I did try, but it did complain when the HTTP request wasn't sensible.
<harlowja_> weird http server, lol
<Odd_Bloke> (Hypocritically, I might add :p)
<harlowja_> then u should reply back your http response isn't sensible iether, lol
<Odd_Bloke> I should probably open up a bug in CloudStack.
<harlowja_> def :-/
<Odd_Bloke> Subject: Are you nuts?
<harlowja_> or, don't make your own http servers (which it seems like someone did)
<harlowja_> i'm pretty sure java has something they should have just used, lol
<harlowja_> like one of the hundreds of http servers/libraries that probably works
<Odd_Bloke> I'm cloning the source to see if I can find out. :p
<harlowja_> anyway, a 'PasswordServerClient' server client mini-class would be cool, and docs saying why this is so weird would be cool :)
<harlowja_> and links i guess to any bug or whatever u uncover
<Odd_Bloke> Will do.
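A sketch of the mini-client harlowja suggests. The request shape (including the DomU_Request header) follows the CloudStack convention discussed above, but treat the details, port, and buffer size as assumptions:

```python
import socket

class PasswordServerClient(object):
    """Minimal client for CloudStack's password server, which expects an
    HTTP-ish request but replies with a bare string: no status line, no
    headers. That is why normal HTTP libraries fall over on it."""

    def __init__(self, host, port=8080, timeout=5.0):
        self.host = host
        self.port = port
        self.timeout = timeout

    def get_password(self):
        with socket.create_connection((self.host, self.port),
                                      timeout=self.timeout) as sock:
            # The server wants something shaped like an HTTP GET...
            sock.sendall(b"GET / HTTP/1.1\r\n"
                         b"Host: " + self.host.encode() + b"\r\n"
                         b"DomU_Request: send_my_password\r\n\r\n")
            # ...but answers with the raw password string, so we read
            # straight off the socket instead of parsing a response.
            return sock.recv(1024).decode().strip()
```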
 * Odd_Bloke --> pub
#cloud-init 2015-02-20
<mmorais_> utlemming: can you tell me a little about cloud config for disk_setup?
<Odd_Bloke> mmorais_: What do you want to know?
<Odd_Bloke> harlowja_away: Your wish is my command; I've updated https://code.launchpad.net/~daniel-thewatkins/cloud-init/cloudstack-passwords/+merge/250028 with changes.
<Odd_Bloke> harlowja_away: Worth noting (in response to some of your comments), these passwords are always randomly generated by CloudStack.
<Odd_Bloke> harlowja_away: So there's very little chance that they will match the 'special' strings which indicate failure etc.
<mhroncok> smoser: ping
<Odd_Bloke> mhroncok: I think smoser is on holiday this week.
<mhroncok> Odd_Bloke: thanks
<jseun> heya! Quick peeking at the code, I don't find the python module responsible for final-message, any hints please?
<Odd_Bloke> jseun: cloudinit/config/cc_final_message.py?
<jseun> so true, thanks Odd_Bloke
<Odd_Bloke> jseun: :)
<jseun> Odd_Bloke: any clues how I can include export ENV variables in the final message? like $IPADDR
<jseun> hmm.. I could as well print that info from /etc/issue...
<jseun> thinking out loud
<Odd_Bloke> jseun: It looks like you have uptime, timestamp, version and datasource available to you there; don't think there's much else you can do...
<jseun> Odd_Bloke: that's what I thought, thanks for your input on that one
<harlowja> Odd_Bloke cool, random better than nothing, ha, although still weird :-P
<harlowja> Odd_Bloke ok, added a few more comments (nothing major)
#cloud-init 2016-02-22
<tg90nor> hi
<tg90nor> what would cause cloud-init to regenerate ssh host keys upon rebooting my ubuntu trusty server?
<Odd_Bloke> tg90nor: If cloud-init were to think that it were on a new host, it would regenerate them.  What environment are you seeing this in?
<tg90nor> Odd_Bloke: my server is running in an openstack cloud. the logs indicate there may have been a problem reaching the metadata server
<Odd_Bloke> tg90nor: cloud-init stores the data about instances it thinks it has run on in /var/lib/cloud/instances; is there more than one directory in there?
<tg90nor> yes, there are two directories
<Odd_Bloke> tg90nor: OK, so that's probably the issue you're seeing.
<tg90nor> Odd_Bloke: thanks! seems cloud-init detected it as an ec2 instance, probably after some temporary failure to talk to the metadata server
<Odd_Bloke> tg90nor: Ah, right; yeah, the assumption is that the metadata server will always be there. :)
<esierrap> hi, is it possible to set the gid of a group?
#cloud-init 2016-02-23
<esierrap> hi, is it possible to do lvm with cloud-init modules?
<esierrap> or just with runcmd?
<esierrap> can you give a clue, please?
<esierrap> josputa!
<minfrin> Hi all, ran into another problem while trying to initialise volumes.
<minfrin> When I use cloud-init to try and format a block device provided by Azure, it fails as follows:
<minfrin> 2016-02-23 11:01:50,344 - util.py[WARNING]: Failed during filesystem operation
<minfrin> Failed to exec of '['/sbin/mkfs.ext4', '/dev/sdc', '-L', 'data']':
<minfrin> Unexpected error while running command.
<minfrin> Command: ['/sbin/mkfs.ext4', '/dev/sdc', '-L', 'data']
<minfrin> Exit code: 1
<minfrin> Reason: -
<minfrin> Stdout: '/dev/sdc is entire device, not just one partition!\nProceed anyway? (y,n) '
<minfrin> Seems that for this to work, the "-F" flag needs to be added. Is there a way to do this in cloud-init?
<minfrin> (I'm not using a partition because this is impossible due to this bug: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1523921)
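One way around that prompt, assuming the disk_setup module's `extra_opts` option is available in the cloud-init version in use, is to pass -F through to mkfs.ext4:

```yaml
#cloud-config
fs_setup:
  - device: /dev/sdc
    filesystem: ext4
    label: data
    # extra flags handed to mkfs; -F makes mkfs.ext4 format a whole
    # device without asking "Proceed anyway? (y,n)"
    extra_opts: [ "-F" ]
```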
#cloud-init 2016-02-25
<waldi> moin
<waldi> can i somehow disable per-instance modules? i want to do upgrades on instances without cloud-init, and it would start overriding stuff
<smoser> waldi, ?
<waldi> smoser: i want to install cloud-init on already running systems and make sure it does not run for example things like ssh key generation
<smoser> ah.
<smoser> hmm...
<smoser> if you knwo what things you dont want to run, you can look at /etc/cloud/cloud.cfg
<smoser> and just comment stuff out
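For waldi's case, the upshot is an /etc/cloud/cloud.cfg excerpt like this (module names vary by version, and the list here is abbreviated):

```yaml
# Excerpt of /etc/cloud/cloud.cfg: comment out the modules you don't
# want to run on an already-provisioned system.
cloud_init_modules:
  - bootcmd
  - write-files
#  - ssh           # disabled so existing SSH host keys are left alone
  - users-groups
```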
#cloud-init 2016-02-26
<smatzek> does anyone know if a cloud-init fork exists, and where it is, that will consume the network_data.json produced by OpenStack https://github.com/openstack/nova/blob/master/nova/virt/netutils.py#L173
<smatzek> ?
<smoser> smatzek, fwiw, that will land by 16.04 in cloud-init
<smoser> and really like ... next week :)
<smoser> or a week after that.
<smatzek> smoser: thanks
#cloud-init 2017-02-20
<saybeano> is anyone here proficient with cloudbase_cloud-init?
#cloud-init 2017-02-21
<raphink> Hello
<raphink> Is there a project to run acceptance tests on cloud-init snippets?
<raphink> maybe launching instances of several cloud providers and checking the resulting state with serverspec for example
<smoser> raphink, hey.
<smoser> the integration tests that have recently gone in have that as a goal
<smoser> right now, the only "cloud provider" is lxd
<smoser> the next to go in would be NoCloud on kvm
<raphink> you mean there's code for that in cloud-init itself?
<smoser> raphink, well, the start of it.
<smoser> see doc/rtd/topics/tests.rst
<stanguturi> Hi all, I am working on 'curtin' related changes for one of the modules and encountered some issues. Can anyone provide some pointers to the 'curtin' usage / documentation. Thanks.
<nacc> stanguturi: https://media.readthedocs.org/pdf/curtin/latest/curtin.pdf ?
<smoser> stanguturi, fyi, there is #curtin
<smoser> ah. but you're asking about networking. that is more correct here
<stanguturi> @smoser, yeah I am asking more about the networking. Just how to specify the format in the datasource.
<smoser> stanguturi, ... i'm on a call now. can answer some questions in 1/2 hour
#cloud-init 2017-02-22
<GreatSnoopy> hello everybody
<GreatSnoopy> anybody here using cloud-init + microsoft azure + debian (credativ image) ?
<GreatSnoopy> It is never able to fetch its userdata
<GreatSnoopy> i think its not able to autodetect the platform
<larsks> GreatSnoopy: what version of cloud-init are you using?
<GreatSnoopy> cloud-init --version: cloud-init 0.7.6
<GreatSnoopy> what it does is gets to a point where it does
<GreatSnoopy> 2017-02-22 14:04:22,657 - url_helper.py[WARNING]: Calling 'http://168.63.129.16//latest/meta-data/instance-id' failed [0/120s]: bad status code [404]
<GreatSnoopy> and on and on
<GreatSnoopy> it looks like its not able to deduce it needs an azure datasource
<GreatSnoopy> i have waagent installed
<GreatSnoopy> interestingly, on a centos instance, it does not get stuck to "Calling ..."
<GreatSnoopy> also, waagent does get the userdata, but somehow cloudinit is not able to interact/fetch waagent data
<larsks> GreatSnoopy: I don't know how functional the azure data source was in 0.7.6.  There was a bunch of work done on it recently.
<larsks> Do you have the option of using a more recent version?
<GreatSnoopy> larsks: should i try with a newer one ?
<GreatSnoopy> sure,ill try
<GreatSnoopy> weirdly on centos, I have an OLDER version
<GreatSnoopy> and it works :|
<larsks> The other thing is to do is maybe inspect the logs from cloud-init on the debian system (maybe compare them to centos) and see if you can spot anything interesting.
<GreatSnoopy> except url_helper.py[WARNING]: Calling 'http://168.63.129.16//latest/meta-data/instance-id'  and the fact that cloud init is not able to find an instance id for itself
<GreatSnoopy> i see nothing relevant
<GreatSnoopy> i can even post that log maybe one of the devs can spot something i miss
<larsks> That's a red herring; that message means it's trying to access an EC2 data source...that is, it's not even trying to get metadata from azure.
<GreatSnoopy> # ls -1 /var/lib/cloud/instances -> iid-datasource-none
<GreatSnoopy> yes, my assumption is it does not know its on azure
<GreatSnoopy> tried making sure we have dmidecode and virt-what to help it
<GreatSnoopy> does not help
<GreatSnoopy> can the debug be turned up for the platform detection itself ?
<GreatSnoopy> or for the datasource interaction ?
<GreatSnoopy> maybe it's something trivial, for which reason it does not properly interact with its environment, but i see no hint on where this would be
<larsks> Usually it's pretty verbose by default, I thought.
<GreatSnoopy> hm, found this
<GreatSnoopy> Looking for for data source in: ['NoCloud', 'AltCloud', 'CloudStack', 'ConfigDrive', 'Ec2', 'MAAS', 'OVF', 'GCE', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM']
<GreatSnoopy> i see no azure there
<GreatSnoopy> on the other hand on centos there is  not a line like that at all
<larsks> GreatSnoopy: does your configuration include an explicit list of data sources?  E.g. in /etc/cloud/cloud.cfg (or cloud.cfg.d)?
<larsks> Does the package include the Azure data source?
<GreatSnoopy> larsks: not as far as i know, its debian default
<larsks> There may be folks around later who are more familiar with the debian/ubuntu side of things.   I think they may be on california time...
<GreatSnoopy> Ill try to stick around or log back in later
<smoser> GreatSnoopy, 2 thingsn to check is if the Azure datasource is listed in 'datasources' defined somewhere under /etc/cloud/
<smoser> if its not, cloud-init wont look for that (which is kind of what it looks like)
<smoser> second, just make sure you *have* a DataSourceAzure.py
<GreatSnoopy> smoser: i added now, this datasource:   Azure:     set_hostname: False     agent_command: __builtin__
<GreatSnoopy> actually let me put something on a paste
<GreatSnoopy> moment
<GreatSnoopy> so this https://pastebin.mozilla.org/8979960
<GreatSnoopy> does not work, even if i added the two lines with datasource azure
<GreatSnoopy> i also have /usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceAzure.py
<GreatSnoopy> now
<GreatSnoopy> on this debian there is also
<GreatSnoopy> 91_waagent.cfg and 92_azure.cfg
<smoser> grep -r datasource /etc/cloud/
<GreatSnoopy> /etc/cloud/cloud.cfg.d/91_waagent.cfg:datasource:
<GreatSnoopy> /etc/cloud/cloud.cfg.d/90_dpkg.cfg:datasource_list: [ NoCloud, AltCloud, CloudStack, ConfigDrive, Ec2, MAAS, OVF, GCE, None ]
<GreatSnoopy> /etc/cloud/cloud.cfg:# Example datasource config
<GreatSnoopy> /etc/cloud/cloud.cfg:# datasource:
<GreatSnoopy> /etc/cloud/cloud.cfg:datasource:
<GreatSnoopy> damn
<GreatSnoopy> https://pastebin.mozilla.org/8979963
<GreatSnoopy> this is how it looks
<GreatSnoopy> damn
<GreatSnoopy> i think it is defined in 90_dpkg and missing from there
<rharper> smoser: re cloud-init dpkg deps;  in the merge request, you said that cloud-init doesn't want to depend on those packages directly;  can you elaborate on the rationale/issues ?
<smoser> well cloud-init doesn't Depend on systemd-networkd
<smoser> right ?
<GreatSnoopy> smoser: thanks for that hint, datasources was overridden somewhere i was not expecting it to be
<rharper> smoser: it's one of those <some networking service to be defined by distro>
<rharper> but you're right in that we don't have a way to express the idea that *if* you're using networkd  as your $networking_service *then* you need systemd >= X and resolvconf >= y
<smoser> maybe slangasek would have a solution
<smoser> rharper, i'm gonna ask in ubuntu-devel
<rharper> smoser: ok
#cloud-init 2017-02-23
<pasasap_> Hi. How to run script once at boot time? Put that script to /var/lib/cloud/scripts/per-instance ?
<smoser> pasasap_, that should work,  yes.
<smoser> it will have to exist in the image
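For example, a script baked into the image under /var/lib/cloud/scripts/per-instance/ runs once per instance (the file name and log path here are assumptions):

```shell
#!/bin/sh
# Sketch of /var/lib/cloud/scripts/per-instance/10-first-boot.sh:
# cloud-init runs everything in per-instance exactly once per instance;
# the script must be executable and already present in the image.
LOG="${LOG:-/tmp/first-boot.log}"
echo "first boot of $(hostname) at $(date -u)" >> "$LOG"
```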
<pasasap_> I havent tested that, now I decided to use user-data.
<pasasap_> https://thepasteb.in/p/r0hwE8gDVqDCK It is part of log. I have prepared ubuntu 16.04 image with cloud-init, openssh. Sometimes it cannot configure interfaces, sometimes it works. I dont know, maybe I have sth wrong configured.
<smoser> pasasap_, can you get /var/log/cloud-init.log from inside ?
<smoser> and /etc/network/interfaces* ?
<pasasap_> I have deleted all files from /etc/network/interfaces.d/ with guestfish. In interfaces is only 'lo' interface.
<pasasap_> "Max retries exceeded with url" Hmm, maybe I should wait ;'|
<pasasap_> In Ubuntu should NetworkManager be enabled?
<smoser> no
<smoser> can you get that log file ?
<smoser> paste it ?
<pasasap_> I have waited and now it launches properly. So there were probably too many http requests to the metadata server.
<smoser> well, i suspect something was not quite right, and if you can paste that log it'd be helpful.
<smoser> just run
<smoser> pastebinit /var/log/cloud-init.log
<pasasap_> http://paste.ubuntu.com/24053660/
<pasasap_> Hmm, so there can be sth wrong ;'|
<rcj> smoser: just to check, with the cloud-init ds-identify changes we're limited to a single datasource specified via /var/lib/cloud/cloud.cfg.d/##-name.cfg, right?
<rcj> Asking because I was just about to propose something on a partner call that would require support for 2 different data sources
<rcj> OVF and NoCloud
<smoser> rcj, if there is more than one, then it will filter out non-present sources.
<rcj> smoser: great, so it will still work.  That's fantastic.  Thanks.
<smoser> rcj, i dont know whaty ou mean.
<smoser> if ds-identify runs and does not see a source (NoCloud) then it will remove it from the list.
<pasasap_> Huh, I got official ubuntu xenial server image and this error still appears.
<smoser> pasasap_, where are you running this ?
<smoser> need some more info
#cloud-init 2017-02-24
<pasasap_> I am back. I am running that on openstack of my team.
<pasasap_> http://paste.ubuntu.com/24053660/ btw
<pasasap_> If I pass to runcmd some commands, ie. Ansible, then even if I try to run as ubuntu user, then user is root. Can I force that cmd is launched as non root user?
<smoser> pasasap_, what openstack ?
<pasasap_> Neutron.
<smoser> pasasap_, ok. so in that paste above 2453660
<smoser> you have only Ec2 datasource enabled.
<smoser> i suspect you possibly have a config drive attached.
<smoser> and that cloud-init would/should get networking configuration from that config drive.
<smoser> then... either you have a broken metadata service (which used to be quite common on openstack with neutron)
<pasasap_> One time it can get metadata, another time it cannot.
<smoser> or you have no metadata service.
<smoser> in the log there, you have only configured cloud-init to look for Ec2.
<smoser> do you have a config drive attached do you know ?
<pasasap_> I dont know, possibly not.
<smoser> ie, there might be a small disk attached.
<smoser> so definitely  you need to fix cloud-init config so that it looks for OpenStack and ConfigDrive
<pasasap_> That was installed with packstack if I remember.
<smoser> (run dpkg-reconfigure cloud-init)
<smoser> i dont know about packstack, i dont particularly have an interest in knowing about image building tools...
<smoser> i generally think they're silly
<smoser> we make images that "just work", and if they dont then we will fix them.
<smoser> (i view building images the same way as I view building glibc or kernel... sure you can do it, but why?)
<smoser> so i'd much rather focus on trouble shooting the official ubuntu images to start.
<smoser> then you have somethign that works and you can compare.
<pasasap_> I tried official image, xenial-server-cloudimg-amd64-disk1.img also  has the same problem.
<smoser> with regard to running as non-root, cloud-init runs as root, so runcmd runs as root.  you can execute things as other users using 'su' or 'sudo'
<smoser> here is 'my-userdata' that i launch vms with generally
<smoser>  http://paste.ubuntu.com/24058888/
<smoser> see 'as_def_user' for a way to run things as non-root
<smoser> its more complicated than you need, but basically it goes looking for a user in a list (smoser , ubuntu, azuser....) and executes as that user... the reason for the searching is that some clouds have different default users and this just finds whatever that user is.
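A simpler version of the same idea, with the user name hard-coded (here 'ubuntu', an assumption that only fits clouds using that default user):

```yaml
#cloud-config
runcmd:
  # runs as root (cloud-init's default for runcmd)
  - [ sh, -c, 'echo ran as root >> /var/log/runcmd-demo.log' ]
  # runs as the 'ubuntu' user via sudo
  - [ sudo, -u, ubuntu, sh, -c, 'id -un >> /tmp/runcmd-demo-user' ]
```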
<smoser> pasasap_, the official image may well have a problem for you, but it will have more than just the Ec2 datasource enabled.
<smoser> so that is at least one problem that we can avoid trouble shooting.
<smoser> that make sense ?
<smoser> lets get *something* working, then you can have a reference that works and figure out the differences.
<pasasap_> OK, but which datasources should be enabled?
<smoser> well, in the official image, all of them are
<smoser> but most likely you need ConfigDrive and OpenStack
<smoser> but ... lets just go with the official image.
<smoser> i suggest:
<smoser>  a.) download that to an ubuntu system some where
<smoser>  b.) backdoor it: (run backdoor-image --user=backdoor --password-auth --password=passw0rd your.image)
<smoser>  http://bazaar.launchpad.net/~smoser/+junk/backdoor-image/files
<smoser> then upload that.
<smoser> then if it fails, you'll still be able to ssh in as 'backdoor' with 'passw0rd'
<smoser> and then you can poke around that way and see what failed.
<smoser> pasasap_, does that make sense ?
<pasasap_> Yes.
<rharper> smoser: around?
<smoser> here
<rharper> wanted to hangout for like 10 minutes to talk networkd during boot issues
<rharper> https://hangouts.google.com/hangouts/_/canonical.com/hangout-rharper?authuser=1
<smoser> ok
<smoser> can i have 5 minutes ?
<smoser> then meet
<rharper> yeah
<smoser> 3:30 (2:30 central)
<smoser> i'll join then
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/318282
<smoser> if you want to take a read of that i'd appreciate it.
<smoser> better commit messages needed, and some doc, and test :)
<rharper> ok
<rharper> smoser: I've got 5 mins then I'll be back at the top of the hour
<rharper> I'll re-ping in 30
<smoser> ok. sorry
<rharper> https://hangouts.google.com/hangouts/_/canonical.com/hangout-rharper?authuser=1
<rharper> smoser: ^^
<smoser> rharper, https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1667735
<stanguturi> I am trying to build cloud-init deb package and got 'Unmet build dependencies: python3-coverage' error. Any idea how I can get this resolved
#cloud-init 2017-02-25
<nacc> stanguturi: where are you trying to build it? and how?
<stanguturi> I am building in my ubuntu VM and I use 'tox and ./packages/bddeb'
<nacc> stanguturi: which version of Ubuntu?
<stanguturi> 15.04 VM
<nacc> stanguturi: uh, 15.04 is eol
<nacc> stanguturi: are you sure about that?
<stanguturi> ok. Will try another VM
<stanguturi> ok when I try to invoke 'cloud-init -h' I get "pkg_resources.DistributionNotFound: The 'argparse' distribution was not found and is required by cloud-init" error. Any idea?
#cloud-init 2018-02-20
<sidx64> Hello! I'm new to cloud-init and I am using this with an openstack set up, to run some commands (create a specific user, etc) after VM is created. I was wondering if there was a way to run Cloud-init on the same VM without rebooting it ?
<sidx64> never mind, I found it here: https://gist.github.com/maoueh/8662b8e0da0ccd99296a9a9a6b67dad0
<brunobro_> I'm trying to find a good pattern for using cloud-init to provision Raspberry Pi devices.
<brunobro_> I found that the Raspbian repo had 0.7.9 and that lacked the more in depth NoCloud features. So, I figured out how to install from source. https://stackoverflow.com/a/48845732/117471
<brunobro_> So at this point I'm trying to use /var/lib/cloud/seed/nocloud-net/ because it is pretty easy to get some positive feedback. But, I'm struggling to get the solution I need.
<brunobro_> I really have 2 goals:
<brunobro_> 1. Have the `user-data` and `meta-data` files live on the volume labeled `boot` that is mounted at `/boot`
<brunobro_> 2. Have a script (/boot/on_boot.sh) called on every boot if it exists.
<brunobro_> That's it. But, I'm facing these challenges:
<brunobro_> 1. /var/lib/cloud/scripts/per-boot seems to do nothing for me.
<brunobro_> 2. Changes to `/boot/user-data` are not picked up by the `#include` directive in `/var/lib/cloud/seed/nocloud-net/user-data`
<brunobro_> 3. The `meta-data` file does not seem to support `#include` so I'm stuck with whatever goes into `/var/lib/cloud/seed/nocloud-net/meta-data`
<brunobro_> 4. Putting ` ds=nocloud[;seedfrom=/boot/;instance-id=raspberrypi]` at the end of `/boot/cmdline.txt` did not seem to make it locate `/boot/user-data` and `/boot/meta-data`. That is why I am using `/var/lib/cloud/seed/nocloud-net/`.
<brunobro_> I think that covers it.
<rharper> brunobro_: can you share your cloud-init.log ? /var/log/cloud-init*.log
<blackboxsw> sidx64: on cloud-init v 17.1 or later you can run "cloud-init clean --logs"   instead of those rm -rf lines. Also it'd be worth checking your initially provisioned datasource name in /run/cloud-init/result.json.  If it is a datasource with Local in the name you might have to run "cloud-init init --local" before "cloud-init init"
<sidx64> thank you @blackboxsw
<sidx64> will try this
<blackboxsw> no prob: otherwise you risk using a cached local datasource which wouldn't update metadata (if your metadata has changed)
<sidx64> Oh okay!
<sidx64> Did not know that!
<sidx64> I actually have been changing the metadata
<sidx64> so this helps a lot!
<sidx64> thank you
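The sequence blackboxsw describes can be sketched as a small script (cloud-init 17.1+, run as root). It is wrapped in a function so nothing executes on load; the grep on result.json for a "Local" datasource name is an assumption about how to automate the manual check, not official cloud-init behavior:

```shell
# Sketch: re-run cloud-init on a live VM without rebooting.
rerun_cloud_init() {
    # wipe per-instance state and logs so cloud-init treats this as a fresh boot
    cloud-init clean --logs
    # if the provisioned datasource has "Local" in its name, run the local
    # stage first so the network stage doesn't reuse a cached datasource
    if grep -q Local /run/cloud-init/result.json 2>/dev/null; then
        cloud-init init --local
    fi
    cloud-init init
    cloud-init modules --mode config
    cloud-init modules --mode final
}
```

The function is only a sketch of the order of operations; on a real VM you would simply run the commands in that sequence.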
<rharper> brunobro_: you may want to instead label your volume 'cidata'; and cloud-init will find that volume and read it like the nocloud-net seed in /var/lib/cloud/seed/nocloud-net instead
<brunobro_> rharper: I can't relabel the volume. It is the boot volume. I'm not going to try to convince the Raspbian distribution to change their volume name. But, there is supposed to be a way to tell cloud-init somewhere else to look.
<rharper> brunobro_: ok
<rharper> brunobro_:  so, no way to tell cloud-init to find a nocloud-net datasource somewhere else; but in the embedded seed, you should be able to refer to your volume for additional sources of user-data
<brunobro_> rharper: I added logs to https://gist.github.com/RichardBronosky/fa7d4db13bab3fbb8d9e0fff7ea88aa2
<rharper> thx
<brunobro_> what can I do about per-boot?
<rharper> brunobro_: looking at the logs;
<rharper> brunobro_: from the logs, I think it's not finding your nocloud seed in /boot;  it emits the "Using Fallback Datasource" which isn't going to run your user-data that was provided;
<rharper> next to see why the seedfrom=/boot didn't work for you
<rharper> I need to look at the cloud-init NoCloud source to see how it processes that
<brunobro_> rharper: What should a "nocloud seed" in /boot look like? I never can get clear info on the directory structure expectations.
<brunobro_> http://cloudinit.readthedocs.io/en/latest/topics/dir_layout.html says that "seed/" is TBD and "scripts/" is also pretty vague.
<rharper> brunobro_: /boot/{meta-data,user-data} ; I think you've that part correct;
<brunobro_> okay, good. I originally did /var/lib/cloud/seed/{meta-data,user-data} and that did not work. But I found a blog example that told me about nocloud-net
<rharper> so, it's /var/lib/cloud/seed/nocloud-net/{user-data, meta-data}
<rharper> and you may have something in there; the stacktrace looks like it was comparing the instance_id found in one of the two meta-data files, and they didn't match;
<brunobro_> I do have both of those populated.
<rharper> I've never tested having multiple locations and referencing them both
<rharper> so I suspect we need to pick one location and getting your config working
<rharper> I think if you remove /var/lib/cloud/seed/nocloud-net and then 'cloud-init clean' to remove any per-instance generated data from /var/lib/cloud and logs; and reboot; that should get things going if your gist is accurate
<brunobro_> I don't intend to have multiple. I'm experimenting. And not remembering what artifacts I've left around.
<rharper> the seedfrom=/boot allows nocloud datasource to read that directory like it's a nocloud source; your files look fine w.r.t meta-data and user-data;
<rharper> if that works, then we can move to getting your scripts to run;
<rharper> I'm stepping out for lunch, but just let me know how it goes and I'll see what I can do to get you going
<brunobro_> thanks!
<brunobronosky> rharper this is brunobro_ from earlier. (I didn't have a ZNC container running before.)
<rharper> brunobronosky: hi
<brunobronosky> I'm updating that gist to make it more accurate. i will also purge the logs and start them fresh. I'll update you shortly. But for now I'll lead with a question...
<brunobronosky> In the log I'm seeing "2018-02-20 18:05:39,432 - main.py[DEBUG]: No kernel command line url found." even though `cat /proc/cmdline` gives...
<brunobronosky> 8250.nr_uarts=0 bcm2708_fb.fbwidth=656 bcm2708_fb.fbheight=416 bcm2708_fb.fbswap=1 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  dwc_otg.lpm_enable=0 console=ttyS0,115200 console=tty1 root=PARTUUID=3e8f7ed9-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait ds=nocloud[;seedfrom=/boot/;instance-id=iid-raspberrypi-nocloud]
<rharper> brunobronosky: lemme look
<brunobronosky> that last KV pair is my interpretation of the docs http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
<rharper> brunobronosky:  I don't think that is related, the code is looking for  'cloud-config-url', 'url'  keys in the kernel command line for
<rharper> it's used in a read-only-root configuration with MAAS, a baremetal cloud product
<rharper> brunobronosky: you should see something like
<rharper> Looking for data source in: ['NoCloud', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM']
<rharper> 2018-02-15 16:23:32,828 - __init__.py[DEBUG]: Searching for local data source in: ['DataSourceNoCloud']
<rharper> 2018-02-15 16:23:32,828 - handlers.py[DEBUG]: start: init-local/search-NoCloud: searching for local data from DataSourceNoCloud
<brunobronosky> The use of "url" concerns me. Can it accept a local file path? (maybe with the file:// scheme?)
<brunobronosky> Okay rharper https://gist.github.com/RichardBronosky/fa7d4db13bab3fbb8d9e0fff7ea88aa2#file-cloud-init-log is current.
<rharper> brunobronosky: thanks, reading
<brunobronosky> Also, I'm seeing https://github.com/cloud-init/cloud-init/blob/master/doc/sources/kernel-cmdline.txt is consistent with your description. Though, both you and smoser are inconsistent with my understanding of http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
<brunobronosky> I didn't want to trouble you with my lack of understanding when you said, "I don't think that is related" earlier. I think that is more important than I originally presumed. I guess my follow up should be "You don't think what is related to what?"
<rharper> the message comes from code looking for cloud-config-url, which is used separately from the NoCloud Datasource,
<rharper> it's not related to the current failure
<brunobronosky> Got it. So, the part of the docs that I am hanging onto... Are they valid? Does that mean what I think it means? Inigo Montoya?
<rharper> I think so, I'm looking at cloudinit/sources/DatasourcesNoCloud.py:parse_cmdline_data
<rharper> to see what, if anything it doesn't like
<rharper> I think the brackets
<rharper> [ ] means the contents are optional
<rharper> you can use: ds=nocloud   or ds=nocloud;seedfrom=;instance-id=bar
<rharper> adjusting it
<brunobronosky> GAH!!
<rharper> parse_cmdline_data('ds=nocloud', fill, cmdline=cmdline)
<rharper> True
<rharper> >>> fill
<rharper> {'instance-id': 'iid-raspberrypi-nocloud', 'seedfrom': '/boot/'}
<rharper> brunobronosky: I think it's worth an update to documentation to show an example there
<brunobronosky> Yes! Moar Examples everywhere!
<rharper> =)
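For the record, the square brackets in the NoCloud docs denote optional parts and are not typed literally; the working form appended to the existing single line in /boot/cmdline.txt is:

```text
ds=nocloud;seedfrom=/boot/;instance-id=iid-raspberrypi-nocloud
```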
<brunobronosky> woohoo! it created /var/lib/cloud/instances/iid-raspberrypi-nocloud that time.
<rharper> \o/
<brunobronosky> Now I'm wondering about this /var/lib/cloud/scripts/per-boot thing
<brunobronosky> for one. I notice that it gets wiped out on `cloud-init clean`
<rharper> yes, clean wipes the system to make it like "new" to cloud-init
<rharper> right, per-boot;
<brunobronosky> holy carp! per-boot is working now.
<brunobronosky> This is working pretty smooth. I've got per-boot working as a symlink to /boot/per-boot.sh
<brunobronosky> Now I need some design advice.
<brunobronosky> I guess I should describe the greater project at this point. I'll try to keep it short.
<brunobronosky> In 6 days, this project will be 5 years old! https://www.raspberrypi.org/forums/viewtopic.php?t=35224 The basic goal is to have a script in the vfat /boot partition of a newly imaged Raspbian microSD card that will get run on boot.
<brunobronosky> I've done this lots of ways over the years (mostly using rc.local) building dozens of single purpose machines.
<brunobronosky> My projects have been forked and reused and I'm feeling kind of bad about spreading bad habits. I've decided to finally do it right and get it accepted into Raspbian. My options were to create my own oneshot systemd service, or use cloud-init. I'd like to keep it as Standard™ as possible. So, I'm leaning towards this.
<blackboxsw> well you're in good company here :) it's right in cloud-init's wheelhouse
<brunobronosky> Some other users have expanded my original intent to have both an "on-boot.sh" and a "run-once.sh". Both of them get run if they are present, but the latter gets renamed run-once.$(date +%F@%H.%M.%S) after execution.
<brunobronosky> So here is the design question (finally): implementing my on-boot.sh with /var/lib/cloud/scripts/per-boot is pretty simple. Should I try to implement run-once.sh with /var/lib/cloud/scripts/per-once or stray from the cloud-init Standard™ for that?
<brunobronosky> my meaning of per-once is "per once, every time it reappears", and cloud-init's meaning of per-once is "per once, ever, dammit". (at least from what I can tell)
<blackboxsw> brunobronosky: I'm wondering why not per-instance instead of per-boot? per-boot is every boot
<blackboxsw> per-instance would only fire again if your NoCloud seed directories provide a different instance-id
<blackboxsw> and it wouldn't fire again across reboots
<blackboxsw> unless that data changed
<blackboxsw> or per-once might actually be what you need right, sorry because you only want to do that once ever
<blackboxsw> as long as your workflow never runs "cloud-init clean"  per-once scripts will never fire again after that first boot
<brunobronosky> So, the target audience for this is students and teachers. I'm trying to both make it easy and teach good practices... fine line. I want there to be a script that runs on every boot, because a LOT of users want that.
<brunobronosky> The per-instance is interesting. I didn't know the difference between per-once and per-instance. that gives me something to think about.
<brunobronosky> And yes, the per-boot is basically like rc.local, and that is the point. The difference is that a Windows (or Mac) user can mount the vfat partition and edit files in the /boot folder, but they cannot get to /etc.
<blackboxsw> if you don't expect metadata in the seed directory to change between boots when provisioning, per-instance effectively behaves like per-once.
<blackboxsw> yeah per-boot is probably what most folks would want;  in their own per-boot script they'd be able to change behavior to only perform an operation once if they want to.
<blackboxsw> well, at least it gives the most flexibility I suppose.
<brunobronosky> So, is it terrible of me to use the names /boot/per-boot.sh and /boot/per-once.sh even though my per-once is totally different? Am I doing the student a disservice?
<brunobronosky> I just really like the consistency of the per-* and that they will sort together for the student to find.
<rharper> brunobronosky: it could be useful to bikeshed a name for per-every-occurrence.sh or something
<rharper> I do think the per-once (and only once per instance) is pretty straightforward in the cloud-init space; though I think outside of cloud-init , the 'instance' concept is not quite as clear
<brunobronosky> rharper yes, instance is totally unclear outside of the cloud paradigm.
<brunobronosky> I'd love to have rpi clusters be a gateway for teaching cloud concepts... but that is a big bite for a beginner.
<rharper> brunobronosky: it may be useful then to include a script in your per-boot.sh which does an existence check on some directory, and if present call run-parts on it;
<rharper> so you can tell users, if they create a directory named something your per-boot checks, it'll then call run-parts on it (which just executes any scripts/programs marked executable )
<brunobronosky> nice!
<brunobronosky> (everything is executable on vfat)
<brunobronosky> well, mount mask 0022 so...
<rharper> brunobron-afk: lol @ vfat
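rharper's run-parts idea could look roughly like the per-boot script below. /boot/per-boot.d is an illustrative directory name (not a cloud-init convention), and the loop mimics what run-parts(8) would do so the sketch has no external dependency:

```shell
#!/bin/sh
# Sketch: dropped into (or symlinked from) /var/lib/cloud/scripts/per-boot/.
# If the user has created the directory, execute every runnable file in it.
run_boot_parts() {
    dir=${1:-/boot/per-boot.d}     # illustrative path
    [ -d "$dir" ] || return 0      # nothing to do if the user didn't create it
    for f in "$dir"/*; do
        # on vfat every file is "executable", so -x is effectively "exists"
        [ -x "$f" ] && "$f"
    done
    return 0
}
run_boot_parts "$@"
```

A user on Windows or macOS can then drop scripts into the vfat /boot partition without ever touching /etc.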
#cloud-init 2018-02-21
<brunobron-afk> Thanks again to rharper and blackboxsw for their guidance yesterday. Here is how things panned out. https://gist.github.com/RichardBronosky/fa7d4db13bab3fbb8d9e0fff7ea88aa2#result
<brunobron-afk> I will take that /var/lib/cloud/scripts/per-boot/00_run-parts.sh concept to the Raspbian community and see how they feel about it. https://gist.github.com/RichardBronosky/fa7d4db13bab3fbb8d9e0fff7ea88aa2#file-cloud-init-setup-sh-L46
<brunobronosky> I am worried about how to package it seeing as Debian is pegged to cloud-init version 0.7.9 for the foreseeable future.
<rharper> brunobronosky: hey;  we can probably help you work with 0.7.9; that's not that old despite the jump in versioning
<brunobronosky> rharper that would be great. My concern about 0.7.9 is that it seems to insist on a volume labeled cidata.
<rharper> it also supports the /var/lib/cloud/seed/nocloud-net; so you could switch to that;
<brunobronosky> That kernel cmdline flag doesn't seem to be introduced until 17.1
<rharper> the seedfrom should still work on 0.7.9 though; lemme see
<smoser> yeah, you can definitely seed from older than 17.1
<rharper> just may need to write a config file to /etc/cloud/cloud.cfg.d/datasource.cfg which specifies NoCloud and sets seedfrom: /boot/  etc
<brunobronosky> Oh. That's not too bad. Actually that may be more acceptable (upstream) than modifying the kernel cmdline.
<brunobronosky> smoser is that completely missing from the old documentation, or am I just not finding it? http://cloudinit.readthedocs.io/en/0.7.9/topics/datasources/nocloud.html
<smoser> probably missing
<brunobronosky> Okay, I guess I have to learn to grok https://github.com/cloud-init/cloud-init/blob/7fb6f78177b5ece10ca7c54ba3958010a9987f06/cloudinit/sources/DataSourceNoCloud.py
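A sketch of the config file rharper describes, assuming the NoCloud datasource and the /boot/ seed location from this thread; the key names follow the NoCloud documentation, and the commented inline meta-data is an assumption that may not apply to 0.7.9:

```yaml
# /etc/cloud/cloud.cfg.d/datasource.cfg  (sketch)
datasource_list: [ NoCloud, None ]
datasource:
  NoCloud:
    seedfrom: /boot/
    # meta-data values can also be set inline on newer releases:
    # meta-data:
    #   instance-id: iid-raspberrypi-nocloud
```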
<smoser> blackboxsw: rharper
<smoser> http://paste.ubuntu.com/p/2VxttpZFjK/
<rharper> looking
<smoser> that is what i was describing with IFS and shell.
<rharper> smoser: I was somewhat confused, I didn't realize it was set -- which was dropping the duplicates
<blackboxsw> wow
<smoser> it  is documented.
<smoser> see https://linux.die.net/man/1/bash
<smoser> 'word splitting'
<blackboxsw> Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field
<rharper> ie, no empty words
<smoser> clear as mud
<blackboxsw> what does *that* mean? That if you set IFS to non-default whitespace characters it'll also observe standard whitespace chars in addition to provided IFS value?
<rharper> no, just that, you have to have a sequence of non-IFS chars between IFS
<smoser> bash and sh are at least consistent
<rharper> no empty words
<smoser> rharper: well no. see paste
<smoser> empty words are allowed with non whitespace as delim
<blackboxsw> yeah that paste looks inconsistent based on what IFS you provide
<rharper> pipe and comma look like bugs
<rharper> I suspect something other than ifs wordsplitting is happening due to the | being used for setting up new fds and execs
<smoser> " Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field"
<rharper> not sure what comma means to bash
<smoser> *that is not IFS whitespace*
<rharper> hrm, so it should read, don't use a subset of the default IFS
<rharper> that's pretty strange thing to do
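The rule being quoted can be seen with a tiny script: with a non-whitespace IFS character, empty fields survive; with whitespace IFS, adjacent delimiters collapse into one field separator:

```shell
#!/bin/sh
# Count the words produced by IFS word splitting of $2 using separator $1.
# The function body runs in a subshell (parentheses) so the IFS change
# doesn't leak into the caller.
count_words() (
    sep=$1 str=$2
    IFS=$sep
    set -- $str     # intentionally unquoted: this is where splitting happens
    echo $#
)
count_words ',' 'a,,b'   # non-whitespace delimiter: empty field kept -> 3
count_words ' ' 'a  b'   # whitespace: adjacent spaces collapse      -> 2
```

This matches the paste: "no empty words" only holds for IFS whitespace; non-whitespace delimiters do produce empty words.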
<smoser> blackboxsw or rharper https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/338470
<smoser> i got to run, will check back in later. and i will review stanguturi and blackboxsw mps then
<smoser> later
<smoser> rharper or blackboxsw pretty please ^ ? stanguturi would like it too .
<blackboxsw> will grab it smoser
<blackboxsw> sorry, was unsuccessfully heads down  on vsphere
<blackboxsw> smoser: approved your branch, stanguturi has a comment in there that I'll let you fix/merge
<stanguturi> @smoser. Thanks Scott. I tested your patch in my test environment and it worked.
#cloud-init 2018-02-22
<smoser> stanguturi: how quick can you check something?
<stanguturi> @smoser: Quickly. May be 5 minutes
<smoser> stanguturi: on top of your mp (vmware 64 bit)
<smoser> http://paste.ubuntu.com/p/5VTmHsrCPJ/
<smoser> the first hunk is just formatting
<smoser> the second i'm 97% sure is just formatting :)
<smoser> stanguturi: just give feedback here or on the bug, and i'll be back in ~ 1h.
<smoser> and pull it
<smoser> and stanguturi did you have a bug for that one ?
<smoser> if so, edit the commit message and add a
<smoser>  LP: #XXXXXX
<smoser> thanks
<stanguturi> ok thanks Scott. I don't have a bug for that.
<smoser> ok.
<blkadder> Hey folks, looking for some advice on the best approach to using ansible w/cloud-init. Essentially I am using cloud-init for instantiation and am just using ansible for config file templating and providing secrets/credentials in those configs.
<blkadder> Timers seem so crude.
<stanguturi> one quick thing: $pre/${pkg}64/$pkg/$ppath" should be ideally $pre/${pkg}64/$ppath"
<blkadder> Like calling my spin up script waiting for x seconds then running ansible seems sub-optimal.
<blkadder> Are there any "hey I am done" hooks in cloud-init that I can provide notification on?
<stanguturi> @scott: Some fixes need to be done in that patch: like check for "${pre}64/$pkg/$ppath" and not "$pre/${pkg}64/$pkg/$ppath"
<powersj> blkadder: if you want to know if cloud-init is done, you can look at /var/lib/cloud/instance and look at the boo-finished file or in /var/lib/cloud/data at the result.json and status.json
<powersj> boot-finished file, rather
<blkadder> Ok, thanks.
<smoser> blkadder: still there?
<blkadder> Yes sir.
<smoser> blkadder: look in /run .
<blkadder> k
<smoser> blackboxsw: around ?
<smoser> and also, newer cloud-init have 'cloud-init wait'
<blkadder> Ooh.
<blkadder> That sounds shiny.
<smoser> yeah, trunk only at the moment. will be in 18.1
<blkadder> k
<blkadder> Will go look at it.
<smoser> if you look at /var/lib/cloud, you have potential of seeing stale data
<blkadder> Did you guys ever settle on a templating strategy?
<smoser> (from a previous boot)
<smoser> so /run/cloud-init/result.json and /run/cloud-init/status.json
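powersj's and smoser's pointers combine into a small polling helper. The paths are the standard cloud-init locations named above; the marker/timeout arguments and the 2-second poll interval are assumptions of this sketch so it is easy to exercise (the wait support smoser mentions in newer releases replaces all of this):

```shell
#!/bin/sh
# Poll until cloud-init reports it has finished this boot.
wait_boot_finished() {
    marker=${1:-/var/lib/cloud/instance/boot-finished}
    timeout=${2:-300}
    elapsed=0
    while [ ! -e "$marker" ]; do
        [ "$elapsed" -ge "$timeout" ] && return 1   # gave up
        sleep 2
        elapsed=$((elapsed + 2))
    done
    return 0
}
# After success, read /run/cloud-init/result.json and status.json:
# unlike /var/lib/cloud, /run is cleared at boot so the data can't be stale.
```

An ansible play could gate on this instead of a fixed timer.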
<blkadder> I'm only resorting to ansible because I don't have a good way to template stuff otherwise
<smoser> wrt templating blackboxsw has a mp
<blkadder> Good deal.
<smoser> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/335290
<smoser> and described at https://trello.com/c/xyqxyOxg
<blkadder> Will take a look, thanks.
<blkadder> Well if it's jinja2 anyways at least I should have an easy time migrating to it if I stick to ansible for now.
<blkadder> So that's good.
<blackboxsw> smoser: around
<smoser> blackboxsw: ya
<blackboxsw> but probably late for you
<smoser> i really shouldn't be here ;)
<blackboxsw> ohh, what's up? reading backlog now
<smoser> ah. was trying to get you to ack a change i made . i added a test to sankar
<smoser> and push'd
<blackboxsw> the ds-identify branch?
<blackboxsw> or sankar's branch. checking
<blackboxsw> ahh I see, push == push to tip
<blackboxsw> ok yep seen 'em. marking those branches as merged
<smoser> sankar's MP. thanks
<smoser> and i'm out
<smoser> see you tomorrow.
<blackboxsw> see ya
<blackboxsw> https://pastebin.ubuntu.com/p/rxQ6kZJnjg/ rharper smoser
<rharper> ah, yeah local-hostname
<smoser> ahasenack: did you reproduce
<smoser> https://bugs.launchpad.net/cloud-init/+bug/1751051
<ubot5> Ubuntu bug 1751051 in cloud-init "UnicodeEncodeError when creating user with non-ascii chars" [Undecided,New]
<ahasenack> smoser: twice, which was 100% of my tries
<ahasenack> but using subiquity
<smoser> i can't reproduce in lxc
<smoser> ahasenack: i can't reproduce in lxc
<smoser> lxc launch ubuntu-daily:bionic b6 --user-data='#cloud-config'$'\n''{"users": [{"gecos": "Andr\u00e9 DSilva", "name": "andre", "shell": "/bin/bash"}]}'
<smoser> (have also tried with exactly the config provided)
<ahasenack> smoser: can you try subiquity in a vm perhaps?
<ahasenack> if you prefer, I can move the bug to subiquity, but the backtrace came from cloud-init
<smoser> its just odd.
<smoser> it looked obvious.
<smoser> but didn't reproduce
<smoser> hmm.
<smoser> it feels like locale must be involved.
<ahasenack> smoser: with that config of yours, did you get a firstname of André, or a literal Andr\u00e9?
<ahasenack> you also missed the ' between D and Silva
<ahasenack> D'Silva
<ahasenack> sorry, vpn dropped
<ahasenack> going to repeat my last
<ahasenack> smoser: with that config of yours, did you get a firstname of André, or a literal Andr\u00e9?
<ahasenack> you also missed the ' between D and Silva
<ahasenack> D'Silva
<smoser> i dropped that by design
<smoser> the '
<smoser> just as it is not the problem
<smoser> and inside, it does work
<smoser> # grep andre /etc/passwd
<smoser> andre:x:1000:1000:André DSilva:/home/andre:/bin/bash
<ahasenack> ok
<rharper> ahasenack: fwiw, subiquity is writing the cloud-config
<rharper> so I suspect it's not escaping it like smoser  did
<rharper> if possible set the root password and then extract the user-data from /var/lib/cloud/seed/nocloud-net/user-data
<blackboxsw> hrm, no tracebacks on my cloud-init changes in juju's vsphere-deployed unit, but the ubuntu charm /var/lib/juju disappeared across system reboot. This is why my unit no longer talks to juju. /me digs at why that's happening
<mjh> hi, I need to encrypt a filesystem. I can't find anything useful in the disk module. anybody tackled this before?
<Odd_Bloke> Is there a cloud-init PPA that I could use to test the latest cloud-init HEAD in an image?
<Odd_Bloke> smoser: rharper: ^
<rharper> yeah
<blackboxsw> https://launchpad.net/~cloud-init-dev/+archive/ubuntu/daily Odd_Bloke
<Odd_Bloke> Thanks!
<blackboxsw> sudo add-apt-repository ppa:cloud-init-dev/daily :)
<blackboxsw> no prob
<smoser> ahasenack: i diagnosed
<smoser>  https://bugs.launchpad.net/subiquity/+bug/1751051
<ubot5> Ubuntu bug 1751051 in cloud-init "UnicodeEncodeError when creating user with non-ascii chars" [Medium,Confirmed]
<smoser> it does somewhat seem like a bug to me that python3 does encode based on LANG down some C parts when .encode(encoding) defaults to 'utf-8'
<ahasenack> encodings, ugh :/
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/338586
<blackboxsw> smoser: will review for today's cut
<smoser> blackboxsw: i don tthink weh ave to race that in
<blackboxsw> ok, wasn't sure
<blackboxsw> will wrap up on vsphere then
<smoser> it only affects systems that use utf-8 in commands (such as adduser) but do not have a utf-8 capable system default set.
<smoser> blackboxsw: i'm thinking i'm just going to cut release
<smoser> unless you have something you think needs fixing
<blackboxsw> sounds fine smoser
<blackboxsw> let me know and I can queue the upload to bionic
<blackboxsw> to keep in the habit of it
<blackboxsw> one more piece of flare for upload rights :)
<rharper> heh
<smoser> ahasenack: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/338586 if you're interested.
<ahasenack> cool
<dpb1> rharper: so nogo for loopback huh
<rharper> dpb1: not without a systemd-resolved fix
<rharper> it seems to ignore it even if networkctl doesn't
<dpb1> nasty
<smoser> rharper: or blackboxsw
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/338588
<smoser> if you want to quick ack that
<blackboxsw> reading
<smoser> i just uploaded to launchpad the source tarball. about to send email
<smoser> (yes, the order is wrong :)
<rharper> done
<blackboxsw> smoser: change log fix
<blackboxsw> - docs: Fix typos in docs and one debug message. [aRkadeFR]
<blackboxsw> looking for other names that we had to manually fix
<smoser> hm.. suck. ok i'll fix.
<smoser> https://hackmd.io/MYEwRg7AHKCsC0wCMYCm8AsWCG8wDYBmDTEZMEVMAJm0LCA=?both
<smoser> that is what i'm following
<smoser> blackboxsw: i fixed that.
<blackboxsw> smoser: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/338591 for upload to bionic
<rharper> bug fix deluge !
<dpb1> rip my inbox
#cloud-init 2018-02-23
<mjh> Hello, I am using cloud-init to create an ext4 filesystem on an encrypted LUKS device and then mount it. Documentation for LUKS suggests that every block should be written to prior to filesystem creation. I think that `mkfs.ext4 -cc /path/to/device` will do this but I can't find an option for this kind of thing in the documentation. Any suggestions?
<smoser> mjh: hm... i don't think you can really manage that with cloud-init at the moment.
<smoser> you could probably use something in a boothook or bootcmd
<smoser> bootcmd:
<smoser>  - [dd, if=/dev/zero, of=/path/to/device]
<mjh> I haven't looked at boothooks / bootcmds yet. I shall RTFM :-)
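Put together as cloud-config, smoser's suggestion might look like the following. The fs_setup and mounts stanzas are assumptions added for illustration (cloud-init won't do the LUKS open/cryptsetup itself), and /path/to/device and /data are placeholders:

```yaml
#cloud-config
bootcmd:
  # write every block once, as the LUKS docs recommend, before mkfs
  - [dd, if=/dev/zero, of=/path/to/device]
fs_setup:
  - device: /path/to/device
    filesystem: ext4
mounts:
  - [/path/to/device, /data]
```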
<gtmanfred> smoser: sorry about that license agreement, apparently i put the wrong email, i just responded to the person trying to verify me on the agreement to put the correct one
<gtmanfred> hopefully it should be good to go in a bit
<smoser> gtmanfred: great thanks.  your word here is good enough.  it does take a bit for a human to process.
<gtmanfred> :+1:
<smoser> can you reply in that mp ?
<smoser> (even t hough i just said your word here was good enough :)
<gtmanfred> yup, one second
<gtmanfred> done
<gtmanfred> i think
<gtmanfred> i can respond to launch pad emails and they post them right?
<gtmanfred> meh, commented
<smoser> yes.
<gtmanfred> i did both just to be certain :)
<gtmanfred> cool, thanks!
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 3/5 16:00 UTC | cloud-init 18.1 released (Feb 22, 2018)
<redkrieg> Hi, I'm using cloud-init on an openstack guest and have a need for static network configuration.  I can see in my cloud-init.log file that the network_data.json file was successfully requested from the metadata service and the code I've reviewed in the openstack helper seems like it should be generating a static config for ipv6 but the default interfaces file with dhcp4 is the only thing written.
<redkrieg> Can I force cloud-init to reconfigure networks?
<rharper> redkrieg: can you share your network_data.json and what your final network config is ?
<rharper> redkrieg: Also, if you want cloud-init to configure networking in Openstack, you need to specify configdrive at this time ;
<rharper> it's on the roadmap to read network config from the metadata service early enough that cloud-init can write out a network configuration like it does with configdrive, but that's not done yet
<redkrieg> the json comes out like this: https://pastebin.com/E9uJZRBn
<redkrieg> I didn't realize that wasn't implemented.  The docs seem to indicate that it is supported :(
<redkrieg> https://pastebin.com/ivRtvyJ4 is the config that it generates, but that makes sense if metadata isn't acquired early enough.
<redkrieg> how about forcing cloud-init to rebuild network config?  is that something that can be done?  it doesn't appear to be a module that I can find.
<rharper> if you change the instance to use configdrive, it will work
<smoser> yeah.
<rharper> well, after networking is up, trying to re-apply could interrupt existing connections
<redkrieg> not worried about that, it'd be something a client would do from our custom control panel and will come with all the usual warnings about what modifying network configs does
<smoser> redkrieg: we do want to make this do the right thing.
<smoser> and its not really a difficult change for first boot configuration
<smoser> (hotplug is more difficult)
<smoser> just a matter of resources at the moment.
<redkrieg> yeah, once the user gets their hands on something they tend to create edge cases :D
 * smoser shakes fist at users
<redkrieg> config drive is working great for initial boot, thanks!  Is it possible to force cloud-init to reconfigure networking on the next boot or something similar?
<smoser> redkrieg: no. not really.
<redkrieg> ouch.  guess I'll have to cook something up to runcmd on boot and do it manually :\
<redkrieg> I think a neat feature would be testing for the absence of /etc/network/interfaces.d/50-cloud-init.cfg (or related files for other distros) and rebuilding.  Might take a swing at that over the weekend.
<rharper> smoser: powersj: so  both chrony and timesyncd happily don't run in containers unless you run them with the right capabilities to adjust time;  isc-ntp seems to run anyhow even if it can't adjust the time
<rharper> that does make the integration test a bit more challenging w.r.t asking the client itself how it's configured (versus just parsing the config file)
<rharper> thoughts ?
<powersj> what are the right capabilities?
<rharper> heh, !Container
<rharper> which is annoying
<rharper> lemme get the chrony one
<rharper> ConditionCapability=CAP_SYS_TIME was not met
<rharper> timesyncd probably should use the ConditionCapability check that chrony uses  but that's an upstream change
<powersj> ha
<powersj> rharper: so if someone tries to setup time via chrony with cloud-init with a container what is the expected behavior? nothing? warning message?
<rharper> from cloud-init perspect, we did what we were asked
<rharper> chrony service it self can't start for it's one restrictions
<rharper> I mean, it's not really any different than ntpd, which runs but cannot adjust the clock without capabilities
<rharper> chrony has a "track time but don't adjust the system time" mode; however, in an unpriv container, it still wants to drop root privs, but that's a restricted capability so it can't do that
<powersj> then for the test validate the config and move on?
<rharper> well, do we do different tests on kvm vs lxd ?
<rharper> I need to run in a vm and see if I can get them to dump out any information anyhow
<rharper> systemd packages are notorious for not saying anything about their config
<rharper> the timedatectl --status shows NTP Sync=Yes
<rharper> that's it
<rharper> thanks
<powersj> and we do not run different tests for kvm vs lxd
<rharper> well, crud
<blackboxsw> rharper: I have an example  started
<blackboxsw> for the snap testing
<blackboxsw> as snap on container requires squashfuse
<blackboxsw> and non-container doesn't
<blackboxsw> our cloud_tests need to provide that platform information to the individual unit tests
<rharper> well, the thing is, I'm not really ok running priv container trying to sync time
<rharper> I think we need a way to skip tests on certain platforms
<rharper> if that's not there
<rharper> so, we'll collect the data, then run the verify on it; if it knows that it was collected on a particular platform we could raise SkipTest where we know it won't be accurate
<blackboxsw> rharper: it's not, but I agree we do need a skiptest decorator that chan check platform details
<rharper> well
<rharper> guess who gets to poke at that
<rharper> but not in the initial branch
 * rharper is going to push that up for review shortly 
<smoser> rharper: i think that cloud-init does what it was told
<rharper> yeah
<smoser> if that means installing a service that will fail to start
<smoser> then we still did what we were told
<rharper> I agree; I'm mostly concerned about trying to verify if our template is correct
<smoser> its kind of over-zealous to have a service that simply says "i dont run in a container"
<rharper> we had that ntp bug about not having config on disk prior to install and restarting the daemon; we do those things now
<rharper> but our template could still be broken
<smoser> baking in assumption that the system clock is not namespaced
<rharper> smoser: yes, systemd is obnoxious  =)
<smoser> or the container platform doesn't provide some pseudo mechanism for that.
<rharper> a day hasn't gone by where I'm not swearing at something packaged in systemd, so no change there
<rharper> I suspect we didn't add an integration test for UbuntuCore which used the timesyncd path, or we'd see this (ie, we configure it, but have no way of verifying it uses the servers) since timedatectl doesn't emit useful information other than 'ntp sync=Yes' if it's synced
<rharper> I do think we can run this in KVM and, at least for some clients (ntp at least, possibly chrony), ask the daemon what servers it connected with, and validate that matches what we configured; I'll play with chrony in a VM
<blkadder> systemd is love.
<rharper> said no one ever
<blkadder> :-)
<rharper> until *now*
<blkadder> Still amazed at how quickly it got pushed through.
<blkadder> But I just try to hold my nose and focus on other things. :-)
 * blkadder eagerly awaits the LinuxRegistry
<smoser> rharper: do you remember the bug where cloud-init was hanging due to /dev/random / kernel-config missing some options ?
<smoser> can't find it
<smoser> cyphermox i think you filed ?
<rharper> smoser: yes
<rharper> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1727358
<ubot5`> Ubuntu bug 1727358 in python3.6 (Ubuntu) "cloud-init is slow to complete init on minimized images" [Wishlist,Triaged]
<rharper> smoser: lol, I know where you're going
<dpb1> rharper: blkadder: I ordered this shirt for you both: https://goo.gl/hzFdR3
<blkadder> HAHAHA
<rharper> (â¯Â°â¡Â°ï¼â¯ï¸µ â»ââ»
<rharper> dpb1: just you wait dpb1
<blkadder> Is there an option to order one bathed in tears?
<dpb1> I think they come with a first aid kit for all the fights you will get in after putting it on.
<blackboxsw> nerd fights, I think there is an ESPN channel with that kinda action
<blackboxsw> hrm one last manual test for ubuntu SRU https://pad.lv/#1731868
<blackboxsw> or rather bug #1731868
<ubot5`> bug 1731868 in cloud-init (Ubuntu) "cloud-id: enable ESXi 6.5.0" [High,Fix released] https://launchpad.net/bugs/1731868
<blackboxsw> I can upgrade my vsphere bootstrapped env w/ latest cloud-init and the ds-identify warning is gone... but not sure if there is an easier test for that.
#cloud-init 2018-02-24
<Gaffel> I'm trying to have cloud-init configure my network interface on CentOS. I asked in #centos but there doesn't seem to be many with cloud-init experience so I thought that I could ask here to figure out what's wrong.
<Gaffel> The network-config looks like this: https://ptpb.pw/yvut
<Gaffel> And the log looks like this: https://ptpb.pw/SLWW
<Gaffel> cloud-init throws an exception so I'm wondering if my config is at fault.
<Gaffel> I cloned the repository and built the 18.1 RPM, installed it and the exceptions have gone away but I'm still unable to configure the network.
<Gaffel> "stages.py[WARNING]: Failed to rename devices: Failed to apply network config names. Found bad network config version: None"
#cloud-init 2018-02-25
<brunobronosky> I discovered that installing cloud-init on a Raspberry Pi running Raspbian (RPi Debian) has a fatal flaw. It throws stuff into `/etc/apt/sources.list`; this is inappropriate for this architecture. Where does the problem lie? I don't know who to report the bug to.
<larsks> brunobronosky: how did you install it?
<brunobronosky> apt-get
<larsks> brunobronosky: there's a cloud-init package in raspbian? Or was this a third-party repository?
<brunobronosky> 0.7.9 is in Raspbian.
<brunobronosky> I don't think anyone has ever used it, but it comes straight from debian.
<larsks> If the raspbian package installed the wrong things in sources.list, I would call that a raspbian bug.
<larsks> Raspbian bug reporting info: https://www.raspbian.org/RaspbianBugs
<brunobronosky> yeah, I found that, but it's the "Before You File a Bug" part that has me second guessing.
<brunobronosky> I'm eventually going to ask them to have cloud-init installed by default. I'm trying not to be an annoyance to the maintainer.
<larsks> What are you doing with cloud-init on the Pi?
<brunobronosky> larsks https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=206034&sid=e154adc1892a1a0d3c32fda0a94f640c
<brunobronosky> actually, I guess my followup to the thread is more to the point. https://www.raspberrypi.org/forums/viewtopic.php?p=1276583#p1276583
 * larsks continues to read...
<larsks> That's an interesting application.
<brunobronosky> I thought so.
<larsks> What was the particular feature that was undocumented until 17.2?
<brunobronosky> but, now I'm doubting my decision to use a Standard™ vs. just roll my own systemd oneshot service.
<brunobronosky> using `datasource: NoCloud: seedfrom: /path/`
<brunobronosky> Everything I was hearing was insisting that you had to use a volume labeled `cidata`.
<larsks> Huh. Pretty sure even earlier versions documented how to preconfigure a filesystem with metadata.
<larsks> But re: using cloud-init vs rolling your own: I guess it depends on what you're trying to do. I mean, cloud-init already has a bunch of modules to handle common system configuration tasks, which is maybe useful.
<brunobronosky> Even on this IRC channel at first I said "I need to have {user,meta}-data read from the partition mounted at /boot" and I was being told to change the label to cidata.
<larsks> On the other hand, if you have access to the sdcard to install cloud-init metadata, you also are able to make most of your configuration changes at that point.  Using something like qemu-user, you can even run Pi binaries and install packages, etc.
<brunobronosky> O_o "and then tell the kernel to boot from the cidata volume instead of the boot volume? Eh. No."
<brunobronosky> But using qemu is WAY beyond your average Windows using school teacher.
<larsks> E.g., here are the docs for 0.7.9 that describe pre-seeding: https://git.launchpad.net/cloud-init/tree/doc/examples/seed/README?id=0.7.9
<brunobronosky> (editing anything other than /boot is virtually impossible for Mac and Windows users)
<larsks> That's a good point.
<brunobronosky> Even that example you posted would have helped me. But sadly... http://cloudinit.readthedocs.io/en/latest/search.html?q=nocloud-net&check_keywords=yes&area=default
<larsks> Have you (or are you going to) put together a demo image with cloud-init installed?
<brunobronosky> Going to. Just as soon as I can get cloud-init to stop destroying apt.
<brunobronosky> I'm going to add cloud-init to my fork of https://github.com/RPi-Distro/pi-gen/blob/dev/stage2/01-sys-tweaks/00-packages and publish a cloud-init userdata file that can be used to compile the Raspbian image on an EC2 instance. (Since it is virtually impossible to build on Mac or Windows) How meta is that?
<larsks> brunobronosky: what sort of problems are you seeing?  I just installed cloud-init on raspbian stretch and it doesn't appear to have added anything erroneous to /etc/apt.
<brunobronosky> Hmmm. For me it overwrites /etc/apt/sources.list
<larsks> Upon installation? Or execution?  I just installed the package (apt install cloud-init) and my sources.list is unmodified.
<brunobronosky> do `sudo cloud-init init --local`
<larsks> So that would be "upon execution" :). Let me give it a try.
<brunobronosky> Keep this handy ;-) https://github.com/RPi-Distro/pi-gen/blob/dev/stage0/00-configure-apt/files/sources.list
<larsks> Still unmodified.
<larsks> /var/log/cloud-init.log looks like http://termbin.com/mgcs
<brunobronosky> Really! Hmm. Let me grab a fresh image. I've installed 17.2 from source on this one. (You probably just saved me from embarrassing myself (elsewhere))
<larsks> brunobronosky: note that this is 0.7.9!
<larsks> It's entirely possible 17.2 is doing something different.
<brunobronosky> I'd rather look like a fool here than on Raspbian. I'm not trying to get any PRs accepted to cloud-init.
<brunobronosky> (0.7.9 understood)
<brunobronosky> larsks do you have apt-configure in your /etc/cloud/cloud.cfg
<larsks> brunobronosky: looks like it...but it's in cloud_config_modules, so 'cloud-init init --local' wouldn't run it.
<brunobronosky> once you reboot, it will fill your sources.list with "## Note, this file is written by cloud-init on first boot of an instance"...
<larsks> Let me drop a seed file in place and see what happens.
<brunobronosky> I'd love some help coming up with a sane minimal /etc/cloud/cloud.cfg for this.
<larsks> I see what you mean.   It looks like the comments there tell you what to do to make it configure sources.list appropriately.
<larsks> ("if you wish to make changes you can...")
<larsks> brunobronosky: I have to take off for the night.  Good luck!
<brunobronosky> thanks!
<brunobronosky> I just don't like that the cloud.cfg by default is full of all kinds of things I don't understand and don't need.
#cloud-init 2020-02-17
<otubo> blackboxsw: I don't know the answer to those questions because I'm not directly involved with CentOS. But I'll find out.
<otubo> dustymabe: mhayden perhaps you guys know? ^^^
<mhayden> otubo: i don't unfortunately
#cloud-init 2020-02-18
<blackboxsw> ok Odd_Bloke so it's: land #204 and then follow up with a warning log if we need to after testing
<blackboxsw> ?
<Odd_Bloke> blackboxsw: Yeah, land #204 and then I'll do testing to see if we want to add any documentation/logging information about the change.
<Odd_Bloke> blackboxsw: Once landed, I'll also push through a PPA build and ask for an Azure image build for testing.
<blackboxsw> +1 Odd_Bloke
<Odd_Bloke> blackboxsw: powersj: I can't remember how to kick off a manual sync of GitHub -> Launchpad, can either of you remind me?
<powersj> blackboxsw, I don't think our status meeting was on Sunday (16) :P
<powersj> should we do one now?
<blackboxsw> hah
<blackboxsw> yes lets
<blackboxsw> #startmeeting Cloud-init bi-weekly status
<meetingology> Meeting started Tue Feb 18 17:35:26 2020 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<blackboxsw> o/  hi cloud-init folks. sorry I botched being able to read calendars last time.
* blackboxsw changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting February 18 17:15 UTC | 19.4 (Dec 17) drops Py2.7 : origin/stable-19.4 | 20.1 (Feb 18) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> let's kick off our cloud-init status meeting.
<blackboxsw> notes from previous meeting are here:
<blackboxsw> #link https://cloud-init.github.io/status-2020-02-04.html#status-2020-02-04
<blackboxsw> the topics we cover in this meeting are the following: Previous Actions, Recent Changes, In-progress Development, Community Charter, Upcoming Meetings, Office Hours (~30 mins).
<blackboxsw> today I'll add a topic for cloud-init's upstream release  20.1
<blackboxsw> #topic Previous Actions
<blackboxsw> Last meeting minutes show no carryover action items. So we can drop into recent changes
<blackboxsw> #topic Recent Changes
<blackboxsw> We have about 8 commits landed in master since last meeting: found with git log --since 02-04-2020
<blackboxsw> https://paste.ubuntu.com/p/28Y8jGTGwr/
<blackboxsw> some doc fixes, CI fixes for Azure integration testing, swap disk support for cc_disk_setup, and freebsd improvements.
<blackboxsw> thanks for contributions there all. I think we still have a long tail of improvements to review for FreeBSD and NetBSD so we'll try to keep the conversation going there.. Thanks meena and do3meli there
<blackboxsw> I mean Goneri
<blackboxsw> #chair rharper Odd_Bloke smoser
<meetingology> Warning: Nick not in channel: rharper
<meetingology> Current chairs: Odd_Bloke blackboxsw rharper smoser
<blackboxsw> forgot to set meeting chairs. sry
<blackboxsw> #topic In-prgoress Development
<blackboxsw> #topic In-progress Development
<blackboxsw> Odd_Bloke: is currently wrapping up any remaining py2/py3-isms in master branch. Dropping use of 'six' throughout the code. Paride is working on copr-build failures due to the shift to python3 packages. otubo, thanks for the ping back on finding various python3 CentOS packages. We'll also try sorting this out this week so cloud-init py3 builds can work on CentOS 7 and 8
<blackboxsw> If folks didn't see the mailing list Odd_Bloke has a branch up to shift from nosetests -> pytest https://lists.launchpad.net/cloud-init/msg00245.html
<blackboxsw> we expect to land that after 20.1 releases. Thanks for the reviews there https://github.com/canonical/cloud-init/pull/211
<blackboxsw> #topic cloud-init upstream 20.1
<blackboxsw> So, today at EOD is upstream release day for cloud-init 20.1. Just another timed release of cloud-init which we strive to make quarterly throughout the year
<blackboxsw> As mentioned on the mailing list, if there are any branches/PRs that folks really would like to get into 20.1, please raise them here. We will scrub the review queue today and see what makes sense to land for this release.
<Odd_Bloke> I'll be driving the 20.1 release.
<Odd_Bloke> We just landed https://github.com/canonical/cloud-init/pull/204 into master, which is a fix for a (low priority) CVE that we wanted in before cutting the release.
<Odd_Bloke> I'm going to perform some testing of that change before cutting the release, to determine if any doc changes are required for it and to check if it impacts boot on Azure instances that have a password provided by the Azure fabric.
<blackboxsw> I was just looking over the PR from fred in ec2 land about handling a disabled path for IMDSv2  that looks interesting, but it still needs unit tests https://github.com/canonical/cloud-init/pull/216/files
<blackboxsw> this would certainly help non-ec2 lookalikes
<blackboxsw> and is a fairly specific fix.
<blackboxsw> we can peek over it during the office hours and see if it makes sense.
<blackboxsw> Thanks Odd_Bloke for driving the 20.1 release.
<blackboxsw> So, again, plan is to cut 20.1 at end of day today.
<blackboxsw> What follows will be a tag and release to Ubuntu Focal to sync tip of master to Ubuntu development release
<blackboxsw> #topic Community Charter
<blackboxsw> This topic is a placeholder to remind folks of any project-wide development tasks that we are engaging the community in.
<blackboxsw> the general theme at the moment is cloud-config schema definitions for the config modules in cloudinit/config/cc_*py and improving/correcting datasource configuration documentation
<blackboxsw> We've queued this work as separate bugs in cloud-init at the following link
<blackboxsw> #link https://bugs.launchpad.net/cloud-init/+bugs?field.tag=bitesize
<blackboxsw> we'll revisit this set of bugs/features and reset community charter goals near the end of 2020 at the next cloud-init summit. If there are suggestions/desires for community themed tasks please feel free to set the direction there.
<blackboxsw> these community tasks are grabbed by any contributor to cloud-init.
<blackboxsw> An example of the schema definitions we are looking to add is the PR in review here. https://github.com/canonical/cloud-init/pull/152
<blackboxsw> As always, everyone's review counts. As a project we are trying to also look to 'promote' more core-contributors, with commit rights to the cloud-init project. Reviews count just as much as proposing pull requests to the project.
<blackboxsw> thanks again for all the contributions, reviews and bugs that are being contributed to date. It really helps improve this project's use
<blackboxsw>  #topic Office Hours (next ~30 mins)
<blackboxsw> During this topic, please bring up any questions, discussions, bugs or features or paper cuts that need attention. there should be a couple of cloud-init developers with eyes on the channel to actively respond.
<blackboxsw> In lieu of active discussions, we'll hit up the review queue for cloud-init at https://git.io/JeVed  and get ready for the 20.1  release
<blackboxsw> I'm going to see if I can
<blackboxsw> review  https://github.com/canonical/cloud-init/pull/216/files and propose the unit test changes there
<blackboxsw> I think that could be a valid addition for ec2-lookalikes to avoid an unnecessary 2 minute timeout
<blackboxsw> on boot
<blackboxsw> hrm on 2nd thought w/ 216, I think that patch set should  be more specific, such as actually testing HTTP status 403 instead of just checking if metadata was None and assuming it was disabled. I'll put a couple of review comments on that as I dig in, but probably not in a state that it could be landed today
<blackboxsw> thanks for tuning in folks.  See you next time
<blackboxsw> #endmeeting
<meetingology> Meeting ended Tue Feb 18 18:36:47 2020 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2020/cloud-init.2020-02-18-17.35.moin.txt
<blackboxsw> Odd_Bloke: powersj we are on a work trip Mar 3rd, which would have been our next scheduled cloud-init status meeting. Shall we shift it +1 week to March 10th?
<powersj> blackboxsw, yes please
<blackboxsw> ok folks. next status meeting Tuesday Mar 10th. same time, same channel
* blackboxsw changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting March 10 17:15 UTC | 19.4 (Dec 17) drops Py2.7 : origin/stable-19.4 | 20.1 (Feb 18) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> meeting minutes published
<powersj> blackboxsw, thanks!
<amansi26> I am trying to run cloud init 19.1 on ubuntu 16.04. The resolv.conf is not getting copied
<amansi26> can anyone guide a bit
<blackboxsw> amansi26: I'm not sure I know what you mean, copy resolv.conf from where? Ubuntu Xenial (16.04) uses the utility resolvconf. resolvconf likes to see dns-nameserver entries in /etc/network/interfaces or /etc/network/interfaces.d/*cfg files
<blackboxsw> amansi26: per the header that lives in /etc/resolv.conf on xenial # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
<blackboxsw> #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
<blackboxsw> so anything you are trying to copy into /etc/resolv.conf will likely be overwritten by resolvconf
#cloud-init 2020-02-19
<Odd_Bloke> Heads up: GitHub is having availability issues: https://www.githubstatus.com/
<Odd_Bloke> powersj: blackboxsw: So do we have a release process that I should be following?  (I don't _think_ I've cut an upstream release before.)
<blackboxsw> Odd_Bloke: I added you to the trello board
<blackboxsw> https://trello.com/b/obFWGwZY/upstream-cloud-init-201
<blackboxsw> I copied it yesterday from the template board https://trello.com/b/QQYFXpsA/template-sru-cloud-init-xy and deleted any "SRU" cards
<Odd_Bloke> Status update on 20.1: we've tested the change that we were worried about and it does not appear to be causing issues
<Odd_Bloke> blackboxsw: Cool, thanks!
<Odd_Bloke> rharper: Where are we on the IMDS change you wanted to land for 20.1?
<rharper> well, it's not a one liner =/
<rharper> writing a unittest (augmenting one to ensure we do the redaction)
<blackboxsw> Odd_Bloke: and the SRU process for that trello board template came originally from https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/upstream_release_process.md
<Odd_Bloke> rharper: OK, but in your view it's close enough that it's worth waiting for?
<rharper> it should be; but since it's not a one liner; I don't suppose we should wait
<Odd_Bloke> rharper: I'm going to go grab lunch, so you have a bit of time to get something up for review. :)
<rharper> Odd_Bloke: https://github.com/canonical/cloud-init/pull/219
<blackboxsw> minor volley back rharper
<blackboxsw> I might have to re-review the token interaction again. jussec
<blackboxsw> yeah X-aws-ec2-metadata-token-ttl-seconds header is just a seconds value, non-secret. so no need to redact
<rharper> blackboxsw: true but we don't need the value logged do we?
<rharper> I've replied; having the value won't aid in debugging; and if we lower it, we just end up obtaining a new token more frequently;
<blackboxsw> I guess we would if at some point we reduce the hardcoded AWS_TOKEN_TTL_SECONDS, which afair I thought we intended to do at some point.
<blackboxsw> fair rharper I concede it's not a big deal, just recognizing that we are redacting things that aren't necessarily sensitive, by that measure where do we draw the line.
<blackboxsw> I have a couple more nits. we're doing a lot more work in that function than we need with every retry.
<blackboxsw> will finish my review in ~30. have to afk for a bit
<Odd_Bloke> rharper: blackboxsw: OK, that lunch break ended up longer than anticipated; I see that PR is Approved so I'll wait for it before cutting the release.
<blackboxsw> back as well now. a minor set of review comments to followup with
<blackboxsw> throwing together a paste to see what you guys think
<rharper> blackboxsw: on my errand I was thinking I'd like to see if I can just copy the redacted header value instead of all headers;
<blackboxsw> Odd_Bloke: rharper here's what I was thinking https://paste.ubuntu.com/p/NXXXyMZKFH/ . I think we should be hoisting the headers_cb calculation out of the retry for loop as those headers should not change across calls to the same url.
<rharper> blackboxsw: well, that's a separate fixup  no ?
<rharper> blackboxsw: an alternative approach I thought about was having the headers_cb return the headers and a list of keys to be redacted
<blackboxsw> as it is currently in the function, we are re-calculating and copying the dict of headers on each run. with rharper's change it gets more costly as we are now extracting each header on each retry and comparing it to the redacted list on every iteration of the loop
<rharper> and I guess we don't pass back to headers_cb that we're retrying it just gets called a second time
<blackboxsw> rharper: that'd work, but it's also doing that work on every single retry against a url.
<rharper> so even if headers_cb wanted to vary based on retry it doesn't know
<blackboxsw> right, it just gets called again and again
<blackboxsw> soliciting the same response
<rharper> well, I like you suggestion but it seems like an optimization
<blackboxsw> rharper: it does seem like an optimization.
<rharper> and if I push this copy down to only if the key is in redacted list; we're copying a very small amount of data
<blackboxsw> rharper: well yes, we are copying a small amount and setting up a pointer to every key in req_args.
<rharper> blackboxsw: for now, the only time that it's hitting is if we do a retry anyhow; the happy path never does it more than once;  but I generally like it;
<blackboxsw> right, I think the optimization can wait. I'll add the paste there for reference in case we want to go that route in the future.
<Odd_Bloke> I'm not really following here, but dict operations are generally _very_ optimised in Python (because it's dicts all the way down, internally :p) so I would be surprised if this is a substantial slowdown.  Which is to say: I think we can land as-is and then follow with the optimisation.
<rharper> Odd_Bloke: I'm pushing up two changes in just a second, one blackboxsw wanted the string moved to module variable; and the other is to push the copy down to the matched keys rather than copying all headers even if we don't have anything to redact
<blackboxsw> yep I've pasted that comment to the PR and set approved from my end
<Odd_Bloke> rharper: OK, nice.
<powersj> Odd_Bloke, how's 20.1?
<Odd_Bloke> powersj: We're landing this IMDS change, then cutting it.
<Odd_Bloke> blackboxsw: Your Approved is from before rharper's latest push; are you still +1?
<blackboxsw> I'm still  +1 Odd_Bloke
<blackboxsw> yeah looks good
<Odd_Bloke> Cool, merged.
<blackboxsw> heh ohh well.
<blackboxsw> we didn't really need to deepcopy the entry and then reset the value to REDACTED
<blackboxsw> Odd_Bloke: rharper not needed for 20.1, but here's a low-hanging-fruit review. sorry I just missed that
<blackboxsw> https://github.com/canonical/cloud-init/pull/220
<blackboxsw> hrm strike that.
<blackboxsw> shame on me
<rharper> hehe
<blackboxsw> yes we did, otherwise we'd overwrite the actual headers
<rharper> I would have had this done a lot faster if I knew I was modifying my req_args
<rharper> and then we never pass tests... and the exception callback runs *forever*
<rharper> it's infinite
<rharper> so I sat there... for some time
<blackboxsw> heh
<rharper> before realizing what was happening
<blackboxsw> yeah I was wondering the same for me. oopsie
<blackboxsw> closing that PR
<rharper> even after I realized that headers were a key *in* all of the args
<Odd_Bloke> powersj: rharper: blackboxsw: So I just opened up the card for 20.1 (I used shortcuts to assign and move it earlier), and it calls for us to land https://github.com/canonical/cloud-init/pull/54 before cutting.
<Odd_Bloke> What do we think?
<powersj> that needs to come after
<powersj> I don't want to land that right before focal goes out
<Odd_Bloke> After what?
<rharper> well
<rharper> it's non master
<rharper> it's on the xenial branch only
<powersj> oh right
<blackboxsw> rharper: hrm. gonna need a second glance at that..... we are in the middle of the for loop where we are setting filtered_req_args[k] = value... if we aren't copying over filtered_req_args['headers'] = req_args[k], then we don't have to deepcopy I don't think.   Again, doesn't matter for 20.1, but I do think again we can drop that
<powersj> in which case, not an issue for 20.1, it is for the SRU
<Odd_Bloke> OK, so that's an SRU thing, cool.
<rharper> blackboxsw: I was thinking that was correct; it's just a single dict entry;  it *could* have a value of a dict; but I think we can reason that header values cannot be dict
<rharper> blackboxsw: do we know if we pay a heavier cost calling deepcopy() when we don't need it with such a small dict ?
 * rharper smells a timeit coming on 
<blackboxsw> rharper: ahh your branch didn't have my local changes. I did the if k != 'headers': filtered_req_args[k] = v     else: filtered_req_args[k] = REDACTED
<blackboxsw> I wasn't setting filtered_req_args[k] = v   if we were dealing with a 'headers' key
<blackboxsw> rharper: +1 on headers values also being a dict and being concerned about not blowing that away
<blackboxsw> but, I think our initial path of setting filtered_req_args[k] = v and then dropping into if k == 'headers' to redact is what causes the need for the copy.
<blackboxsw> if we didn't set that in the first place, we wouldn't have to copy and overwrite
<blackboxsw> anyway, not for 20.1 but I'll reopen the appropriate cleanup for us to look at for later
<blackboxsw> and yes might as well add a timeit to the PR once up :)
<rharper> blackboxsw: ok, I can see a way without copy
<rharper> https://paste.ubuntu.com/p/2XQF6CPntK/
<rharper> however, I wonder if it's really faster than copying the headers dict
<blackboxsw> https://github.com/canonical/cloud-init/pull/220
<rharper> for small sized dict;
<blackboxsw> https://github.com/canonical/cloud-init/pull/221
<blackboxsw> yeah not much performance improvement I don't think
<blackboxsw> yeah and now we've spent more time on this than we'll ever likely get back in performance runs ;)
<blackboxsw> sorry about the rathole there
<blackboxsw> especially given that our only case of redacting is a simple string value and not a large dict of values
<johnsonshi> According to https://cloudinit.readthedocs.io/en/latest/topics/modules.html#mounts , "Swap files can be configured by setting the path to the swap file to create with filename". The docs imply that cloud-init creates a swap file at the specified filename if using the swap module.
<Odd_Bloke> blackboxsw: So I'm looking at the release process, and it assumes that I can push the commit I've created locally to master.  We, of course, can't do that with branch protection.  We can either revise the docs to document disabling branch protection temporarily (probably not?), or we can update things so that the tagging of the release happens _after_ merge (via GH button) into master.  However, I think
<Odd_Bloke> that will cause CI to fail, because it will look for a 20.1 tag (to determine versions) and fail.
<johnsonshi> My swap settings contain:
<johnsonshi> ...
<johnsonshi> swap:
<johnsonshi>    filename: /mnt/resource/swapfile
<johnsonshi>    ...
<johnsonshi> ...
<johnsonshi> When the machine is provisioned, cloud-init fails to create the swapfile. The cloud-init.log file shows that the "swapon -a" command failed. When I ssh into the machine and try running "swapon -a" myself, it says that "swapon: stat failed /mnt/resource/swapfile: No such file or directory".
<johnsonshi> However, I can confirm that the /mnt/resource path exists and is accessible, and I am able to read/write files within /mnt/resource. I can also see that cloud-init added the swap entry to /etc/fstab.
<johnsonshi> When I look through the cloud-init code, I can see cloud-init adds the swap entry to /etc/fstab if it exists, then runs "swapon -a" without creating the swapfile.
<johnsonshi> Is cloud-init supposed to create the swapfile by if it is instructed to do so (the docs seem to imply that)?
<Odd_Bloke> johnsonshi: I believe you need to specify "size" for the creation to occur.
<blackboxsw> Odd_Bloke: doc reference?
<Odd_Bloke> blackboxsw: https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/upstream_release_process.md
<blackboxsw> thanks
<Odd_Bloke> blackboxsw: Specifically "Note: you need to push the tag or c-i will fail in check version."
<blackboxsw> Odd_Bloke: we have the ability to git push tags
<blackboxsw> right?
<Odd_Bloke> blackboxsw: But we don't have the commit to tag until after merge.
<Odd_Bloke> It never exists on my system until then.
<Odd_Bloke> Which means the 20.1 tag I have pushed (until I realised the conflict) points at the commit which will be squashed into master (and then discarded, of course).
<blackboxsw> ahh right, but we will merge your MP into master once you've bumped version to 20.1 right
<Odd_Bloke> Right.
<johnsonshi> swap:
<johnsonshi>    filename: /mnt/resource/swapfile
<johnsonshi>    size: 2147483648
<johnsonshi> I tried it again and the swapfile still isn't being created.
<Odd_Bloke> johnsonshi: Could you file a bug with `cloud-init collect-logs` attached to it and drop the link to it back in here?
<johnsonshi> cc_mounts.py does not seem to have code that creates a swapfile?
<blackboxsw> Odd_Bloke: ok, hrm, but your branch did pass ci https://github.com/canonical/cloud-init/pull/222
<Odd_Bloke> blackboxsw: Right, I pushed the tag.
 * blackboxsw checks it out again
<Odd_Bloke> As instructed.
<blackboxsw> with branch restrictions reduced?
<Odd_Bloke> But that tag points at a commit which will never be in master.
<blackboxsw> or tag push worked
<Odd_Bloke> Tag push is fine.
<blackboxsw> ahh ok. gotcha
<johnsonshi> Odd_Bloke: Sure thing. So missing swapfile creation is a bug due to the code not being in cloud-init?
<blackboxsw> not sure if that's a broken process that should be fixed. stale tags pointing to nowhere for previous releases
<Odd_Bloke> johnsonshi: What version of cloud-init are you using?
<johnsonshi> Odd_Bloke: The distro I am using it with (RHEL 7.7) comes with cloud-init 18.5.
<Odd_Bloke> Yeah, that should definitely have swap support, I don't know why you wouldn't be seeing that in cc_mounts.
<Odd_Bloke> There have been changes since 18.4, however, so you might be hitting an existing bug.
<Odd_Bloke> *18.5
<Odd_Bloke> johnsonshi: Are you able to try again with 19.4 somehow?
<Odd_Bloke> blackboxsw: It looks like I fixed up the 19.4 tag on the day it happened.
<Odd_Bloke> blackboxsw: (And 19.3 predated the GH move, I think?)
<blackboxsw>  johnsonshi: any of the copr dev builds work for you? https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/
<blackboxsw> Odd_Bloke:  you are right . ok so, branch restrictions causing some issues then with upstream releases.
<johnsonshi> Odd_Bloke: wait a sec
<johnsonshi> Yeah I'll try out 19.4
<Odd_Bloke> Thanks!
<Odd_Bloke> johnsonshi: Actually, if you're able, there have been further changes since 19.4, so a test of a daily build would be best.  (That cloud-init-dev link has those, so you're already doing that if that's how you're going to do it. :)
<johnsonshi> Thanks!
<Odd_Bloke> blackboxsw: I'm thinking that retagging after is probably easier in the short term.
<Odd_Bloke> But I think I'm going to remove the tag and re-run CI, to see if we actually need the tag beforehand at all.
<Odd_Bloke> Just in case that the move to Travis has happened to obviate that need.
<blackboxsw> +1 Odd_Bloke
<rharper> johnsonshi: from what I can tell, the additional path components to your file are the issue; a workaround would be to either use a file in an existing directory, or use bootcmd to run mkdir -p
<rharper> and the fix is for cloud-init to use util.ensure_file() on the swapfile path
<Odd_Bloke> powersj: FYI, we've run into some release tooling issues with the migration to GitHub.  I'm currently pursuing getting it right this time, so we don't have to revisit it, because I'm not aware of any pressing need for 20.1 to be out the door.  Am I missing some info, or are you happy for me to proceed along this path?
<powersj> Odd_Bloke, can you estimate how long this delay would take?
<Odd_Bloke> powersj: Tomorrow instead of tonight.
<powersj> that's fine
<johnsonshi> Odd_Bloke: Hmmm this is weird. The logs show that the swapfile creation command succeeds (for RHEL 7.7 cloud-init 18.4).
<johnsonshi> 2020-02-19 22:23:35,338 - cc_mounts.py[DEBUG]: swap file /mnt/resource/swapfile exists, but not in /proc/swaps
<johnsonshi> 2020-02-19 22:23:35,338 - util.py[DEBUG]: Running command ['sh', '-c', 'rm -f "$1" && umask 0066 && { fallocate -l "${2}M" "$1" ||  dd if=/dev/zero "of=$1" bs=1M "count=$2"; } && mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }', 'setup_swap', '/mnt/resource/swapfile', '2048'] with allowed return codes [0] (shell=False, capture=True)
<johnsonshi> 2020-02-19 22:23:35,400 - util.py[DEBUG]: creating swap file '/mnt/resource/swapfile' of 2048MB took 0.062 seconds
<johnsonshi> After rebooting (cloud-init clean -lr) as well as a fresh provision of another machine, the file still was not being created (despite cloud-init.log saying that it is created). When I ssh'ed into the machine and ran the command above, the command succeeded and the swapfile was created.
<johnsonshi> I just asked some of my teammates (I am in the Azure Linux provisioning team), and they said that we'd have to use this  specific version of cloud-init due to a host of reasons.
<Odd_Bloke> johnsonshi: What FS is /mnt/resource?
<Odd_Bloke> johnsonshi: And is /mnt/resource ephemeral?  I think cloud-init is going to assume that if it creates a swapfile, it stays there.
<johnsonshi> Odd_Bloke: ext4. Yeah it is an ephemeral disk.
<johnsonshi> Odd_Bloke: Oddly enough, when I included a runcmd directive, the swapfile was created.
<Odd_Bloke> johnsonshi: So that shell snippet has been replaced by Python code in cloud-init master.
<Odd_Bloke> So a one-off test with a recent daily build would give us some more information about what's going on here.
<Odd_Bloke> Because if the completely rewritten code _also_ exhibits failure then my guess is that this isn't actually directly related to the swap creation, and instead it's something more complex.
<Odd_Bloke> johnsonshi: I'm finishing up my day now, I'm afraid.  If you can reproduce this with cloud-init dailies, then please file a bug.  If it only exhibits in the version you're pinned on, then I think you'll need to talk to your team about how you've handled such issues in the past.
<Odd_Bloke> (And have a good evening once you're done with your day too, of course. :)
<johnsonshi> Odd_Bloke: OK I will try that out. Thanks for your help and have a good evening too! :)
#cloud-init 2020-02-20
<johnsonshi> Odd_Bloke: Here are the results of my experiment (RHEL 7.7, cloud-init 19.4 daily) https://pastebin.com/QiVZAbGu
<johnsonshi> Odd_Bloke: What seems to be happening is that cloud-init would run the swap directive first, then run the mount directive (towards the bottom of the pasted log).
<johnsonshi> Odd_Bloke: In the docs at https://cloudinit.readthedocs.io/en/latest/topics/modules.html#mounts , with the way that the example is laid out, it seems to hint that the swap directive is run after the mount directive.
<johnsonshi> Odd_Bloke: In the upstream code, what's happening is that the swapfile is created first at https://github.com/canonical/cloud-init/blob/87cd040ed8fe7195cbb357ed3bbf53cd2a81436c/cloudinit/config/cc_mounts.py#L465 , the swapfile is then activated at https://github.com/canonical/cloud-init/blob/87cd040ed8fe7195cbb357ed3bbf53cd2a81436c/cloudinit/config/cc_mounts.py#L511 , and finally entries in /etc/fstab are mounted at https://github.com/canonical/cloud-init/blob/87cd040ed8fe7195cbb357ed3bbf53cd2a81436c/cloudinit/config/cc_mounts.py#L520
<johnsonshi> Odd_Bloke: By the way for context, our team's goal is to mount the Azure ephemeral resource disk at /mnt/resource, and then afterwards, create a swapfile within the mounted ephemeral resource disk.
<johnsonshi> Odd_Bloke: Apologies. The first pastebin was badly formatted. Here is a better one: https://pastebin.com/3cu364dD
<rharper> johnsonshi: I believe you're an edge case here; the ephemeral device mount is not processed until cloud-init runs the mount -a; however, we also attempt to create the swap. In your config you're referencing a path that doesn't exist yet (and even if we were to "mkdir -p /mnt/resource", that would *not* be on the ephemeral disk, as it hasn't been mounted when the swap_create runs)
<johnsonshi> rharper: Yes indeed :( The code runs swapfile creation and swapfile activation before it mounts drives.
<rharper> johnsonshi: what you could do is partition the ephemeral disk into two partitions: one the size of the swap you wanted, the second whatever other space you want mounted at /mnt/resource. During disk_setup, the ephemeral drive would be partitioned and mkswap run on the swap partition, and the fs partition would get formatted with ext4 (or whatever you wanted); then in cc_mounts it will swapon -a (no need to create a swap file,
<rharper> since that's done via disk_setup/fs_setup; just an entry in fstab and swapon -a)
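rharper's workaround could look roughly like the cloud-config below, expressed here as the Python data it would serialize to. This is a sketch: the "ephemeral0" device alias, the 25/75 size split, and the ext4 choice are assumptions, not a tested Azure config.

```python
# Hypothetical cloud-config for rharper's partition-based workaround:
# split the ephemeral disk into a swap partition and a data partition
# instead of creating a swap file on a not-yet-mounted filesystem.
workaround_cfg = {
    "disk_setup": {
        "ephemeral0": {
            "table_type": "mbr",
            # 25% of the disk as partition type 82 (Linux swap),
            # the remaining 75% as a regular data partition.
            "layout": [[25, 82], 75],
            "overwrite": True,
        }
    },
    "fs_setup": [
        {"device": "ephemeral0.1", "filesystem": "swap"},
        {"device": "ephemeral0.2", "filesystem": "ext4"},
    ],
    "mounts": [
        ["ephemeral0.1", "none", "swap", "sw", "0", "0"],
        ["ephemeral0.2", "/mnt/resource"],
    ],
}
```

With this shape, mkswap happens in disk_setup/fs_setup, so cc_mounts only needs the fstab entries and swapon -a, sidestepping the ordering problem entirely.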
<rharper> have you filed a bug for us yet ?
<rharper> I can see a number of things that would need changing to support your particular goal
<johnsonshi> Not yet. Is this a bug or is cloud-init intended to run swap-related stuff first before mount-related stuff?
<rharper> johnsonshi: it is a bug; but an edge case where the swap config is referencing a path that is to-be-mounted
<johnsonshi> rharper: OK I will file a bug in Launchpad. Thanks for your help!
<rharper> the other bug is in the swap create which does not ensure it has a path to the file (if you specify a directory that does not yet exist )
<rharper> johnsonshi: thanks!
<otubo> meena: I agree I can remove -? from the regex, but the way it's written it works great for me.
<otubo> meena: I mean, I can compare kernel_version <= 4.18 and it works fine, not sure I really need to make that dance with tuples and stuff
<meena> otubo: but, float(4.4) > float(4.18), while Version(4.4) < Version(4.18)
<meena> otubo: when you compare it as (4, 4) < (4, 18), i.e. as tuples, it works.
<otubo> meena: ohhhh you're right
<otubo> meena: stupid me, gonna fix that
<meena> otubo: i'm merely relaying what i've been taught a week ago; you must've missed it
<boolman> hi guys, am I looking at this wrong, or is the network setup being done too late? http://ix.io/2ce7
<Odd_Bloke> smoser: We ran into some issues with the version tooling and new upstream releases in GitHub; I've proposed a fix here: https://github.com/canonical/cloud-init/pull/223/ and your thoughts would be much appreciated.
<Odd_Bloke> smoser: (If you don't have time to look at this today, we may need to land it to cut the 20.1 release; later conversation would be welcome to make sure we have it right going forward.)
<Odd_Bloke> blackboxsw: rharper: Your reviews on https://github.com/canonical/cloud-init/pull/223/ would also be appreciated, of course. :)
<rharper> k
<boolman> anyone? I'm trying to get Ubuntu bionic to work with cloud-init on vmware/vsphere, and it seems like the network isn't set up yet at the time of execution
<blackboxsw> Odd_Bloke: looks good. running through the package build stuff I normally run to vet it.
<Odd_Bloke> boolman: Could you file a bug at the URL in the topic and attach the tarball that `cloud-init collect-logs` creates on a failing instance?
<blackboxsw> boolman that log shows it didn't detect the datasource properly. so it's probably using invalid network config
<blackboxsw> 2020-01-30 12:32:30,158 - cc_final_message.py[WARNING]: Used fallback datasource
<blackboxsw> but yes please file a bug https://bugs.launchpad.net/cloud-init/+filebug
<Odd_Bloke> smoser: Thanks for the review!  (And the trust. ^_^)
<blackboxsw> minor question  on 223 Odd_Bloke but otherwise +1
<Odd_Bloke> blackboxsw: Thanks; responded.
<blackboxsw> oops Odd_Bloke, glad you reminded me about that, forgot we added it last upstream cut, and didn't adjust the upstream trello template for it.
<rharper> meena: do you know if the free/net bsd images have util-linux available (in particular wondering if sfdisk is present) ?
<meena> https://www.freshports.org/search.php?stype=name&method=match&query=until-Linux&num=10&orderby=category&orderbyupdown=asc&search=Search&format=html&effort=1&branch=head
<meena> https://www.freshports.org/search.php?stype=pkg-plist&method=match&query=sfdisk&num=10&orderby=category&orderbyupdown=asc&search=Search&format=html&branch=head aaaaand, maybe
<rharper> meena: do the cloud images typically have that or do they use gpart ?
<meena> of course not
<rharper> ok, thanks
<meena> let's find out what you want from sfdisk, and how to do that with what we have
<rharper> oh, working on cc_growpart changes and was wondering whether freebsd images ever end up taking the path down use of growpart (which relies on either sfdisk or sgdisk); from the code, I would expect gpart to be used on bsd; and growpart on linux;
<rharper> hoping to confirm that
<meena> it's easier on zfs, which is what i use
<meena> but, yeah, gpart resize should have you set
<Goneri> rharper, I've some prebuilt images here https://bsd-cloud-image.org/
<Goneri> rharper, you may find that useful
<rharper> Goneri: thanks !
<Odd_Bloke> smoser: master failed as expected: https://travis-ci.org/canonical/cloud-init/builds/653174091
<Odd_Bloke> rharper: blackboxsw: Could I get one of you to review https://github.com/canonical/cloud-init/pull/224 please?  (ubuntu/devel for the focal upload.)
<blackboxsw> grabbing
<blackboxsw> comparing against mine
<Odd_Bloke> Thanks!
<blackboxsw> Odd_Bloke: +1 are you going to build-and-push then
<blackboxsw> I think we intentionally don't merge those PRs
<blackboxsw> specifically:   /home/csmith/src/uss-tableflip/scripts/build-and-push  is what I was referring to
<Odd_Bloke> I've not used that script before, let me take a look.
<Odd_Bloke> I don't seem to have that script locally?
<Odd_Bloke> I'll just build/push manually as I have in the past.
<Odd_Bloke> blackboxsw: Do we have docs on how to do https://trello.com/c/5kz3z848/18-publish-to-copr-el-testing-repo ?
<blackboxsw> Odd_Bloke: only that checklist, which at the moment is broken because of our py3 switch and paride is intending to look at things after Utah py3
<blackboxsw> I think we might have to block that aspect of things, as run-container centos/7 etc won't work at the moment.
<Odd_Bloke> blackboxsw: I don't see a checklist?
<Odd_Bloke> blackboxsw: (And what's the problem with `run-container centos/7` ATM?)
<blackboxsw> https://trello.com/c/oDCHFP8W/6-publish-upstream-version-to-copr-el-testing-repository
<rharper> Odd_Bloke: cent7 + py3 = broken build
<blackboxsw> Odd_Bloke: sorry I thought we were looking  at the same card. weird the card copy didn't contain that checklist
<rharper> Odd_Bloke: since the py2 drop, we don't have a building cloud-init daily on cent7 as cent7 doesn't have proper py3 support
<Odd_Bloke> Right, I'm aware of that.
<blackboxsw> Odd_Bloke: rharper right, and otubo mentioned he might be able to find us a contact in CentOS/7 space that could help us locate the correct missing python3 package deps for CentOS/7 and or 8.
<rharper> ISTR, we effectively want cent7 daily to be held at 19.4 (origin/stable-19.4)
<Odd_Bloke> Just wasn't sure what that had to do with run-container?
<rharper> and the fedora-build branch I proposed enables F28+ and centos8 to work on py3
<rharper> Odd_Bloke: nothing directly
<blackboxsw> Odd_Bloke: right, my magic missing checklist has those steps which rely on run-container centos/7 to build the rpm spec  file that we upload to copr
<Odd_Bloke> OK, fair enough.
<Odd_Bloke> blackboxsw: OK, and (hopefully) finally: https://trello.com/c/gYoMXSdz/30-bump-cloudinit-revision-to-upstreamnewversion looks to me like it's done as part of other steps now, can it be garbage collected?
<Odd_Bloke> Or am I misunderstanding it?
<blackboxsw> Odd_Bloke: did you happen to copy over the checklist into the trello card https://trello.com/c/oDCHFP8W/6-publish-upstream-version-to-copr-el-testing-repository just now?
<blackboxsw> oooooh we have duplicate copr upload cards
<blackboxsw> one in Cutting SRU and one in pre-sru lane
<blackboxsw> and the one in pre-sru lane doesn't have the steps
<Odd_Bloke> Haha, OK, that explains it.
<Odd_Bloke> So are we skipping the COPR step entirely, or am I building a SRPM and publishing it anyway?
<blackboxsw> Odd_Bloke: and yes that card on bumping upstream version needs garbage collection (and removal from template)
<blackboxsw> it was pre me writing that silly uss-tableflip/script/upstream-release  at end of last upstream
<blackboxsw> Odd_Bloke: I think we can skip and talk about that at standup tomorrow
<blackboxsw> I think it should block on us resolving our tools/run-container issue with centos/7 & 8 builds
<blackboxsw> per paride's work
<Odd_Bloke> Cool.
<Odd_Bloke> blackboxsw: Speaking of that script, I found a few hard-coded values: https://github.com/CanonicalLtd/uss-tableflip/pull/35
<blackboxsw> Odd_Bloke: for reference as well rharper wrote up this manual build process and discussion  for el-stable and el-testing if we decide eventually to drop some of tools/run-container support https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/copr-repo-el-repos.md
<blackboxsw> but I'm guessing  you've seen that already
<Odd_Bloke> blackboxsw: I'll pick up those documentation/script fixes tomorrow.
#cloud-init 2020-02-21
<boolman> how can I force netplan on cloudinit?
<meena> boolman: https://cloudinit.readthedocs.io/en/latest/topics/network-config.html#network-configuration-outputs it's default on Ubuntu 17.10
<meena> boolman: but you can override the default: https://cloudinit.readthedocs.io/en/latest/topics/network-config.html#network-output-policy
<meena> of course, that would require that the binaries be in place… and that netplan can render for whatever backend your OS actually supports.
<meena> Netplan currently works with these supported renderers
<meena>     NetworkManager
<meena>     Systemd-networkd
<shibumi> Hi, can somebody explain this error to me? udevadm test-builtin net_setup_link /sys/class/net/lo fails with "No such file or directory"
<shibumi> The error message is at this location: https://github.com/canonical/cloud-init/blob/13e82554728b1cb524438163784e5b955c7c5ed0/cloudinit/net/netplan.py#L256
<Odd_Bloke> shibumi: What version of cloud-init are you using, and on what distro?
<rharper> shibumi: looks like your OS/kernel does not provide a loopback network interface
<boolman> I'm trying to get cloud-init to work on vmware, but it works maybe 1/10 of the time. Other times it either doesn't even get the user/metadata or configures the network too late. Different hosts each time; it seems random.
<boolman> should mention that im deploying with terraform
<rharper> boolman: hi;  vmware + cloud-init is not in a great spot;  this is a known issue;  there are some discussions with VMWare cloud-init devs to work on supporting user-data, network-config and vmware customization;
<boolman> rharper: okey, so just not do it is the current solution?
<rharper> boolman: well it depends on what "mode" you're using; https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1806133
<ubot5> Ubuntu bug 1806133 in cloud-init (Ubuntu) "OVF does not read user-data if vmware IMC is used." [Medium,Confirmed]
<shibumi> Odd_Bloke: arch linux
<shibumi> Odd_Bloke: newest release
<shibumi> Odd_Bloke: cloud-init 19.3
<shibumi> Odd_Bloke: maybe a bug in the new arch linux implementation for cloud-init?
<shibumi> rharper: it should..i have no idea what's going on
<rharper> shibumi: me neither; I don't know what creates the initial loopback interface; I was fairly certain the kernel itself does this;
<rharper> which made me wonder what kernel was in use
<boolman> rharper: not sure what you mean, Im deploying a regular vm template, without vmware customization
<rharper> OVF has many places it "fetches" configuration from
<boolman> im not using OVF at least
<rharper> then IMC ?
<boolman> Im not following, Ive manually installed a ubuntu bionic from cd, installed cloudinit and vmware's cloudinit guestinfo package. converted to template.
<rharper> if you like, you can run cloud-init collect-logs and the tarball will have info we look at what's going on;  it may be a known issue, or it could be a new one
<rharper> when you boot the template cloud-init needs a datasource; on VMware platforms, this is typically OVF;
<boolman> https://github.com/vmware/cloud-init-vmware-guestinfo
<rharper> the platform provides to the guest 1) an iso OVF format 2) OVF in the filesystem of the guest  3) vmware customization config in the filesystem of the guest
<rharper> we don't support that
<rharper> it's out of tree;  if VMware wanted to contribute that to cloud-init; that'd be nice
<rharper> it has 9 issues; maybe one of them is relevant ?
<rharper> https://github.com/vmware/cloud-init-vmware-guestinfo/issues/5
<boolman> no that doesnt seem relevant to my problem
<johnsonshi> Hi guys, I am trying to configure an Azure VM so that its per-boot scripts persist across 'cloud-init clean -lr' invocations. I have dropped some scripts into /var/lib/cloud/scripts/per-boot, but when I issue cloud-init clean -lr, the scripts are not executed, and when I look into the per-boot scripts directory, it is empty. The cloud-init.log file also makes no mention of executing any script within the per-boot directory. That means the
<johnsonshi> per-boot directory somehow gets cleaned when cloud-init clean -lr runs.
<johnsonshi> The docs mention that "Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots." Does that mean that in order to configure per-boot scripts, I would need to configure the datasource (which in this case is the Azure datasource, since this is an Azure VM)?
<Odd_Bloke> johnsonshi: `cloud-init clean` cleans out all of /var/lib/cloud (by design).  Perhaps changing your process to put those scripts in place _after_ you clean would work?
<johnsonshi> Odd_Bloke: I have just discovered that the write_files module runs first (during the first stage of cloud-init), while the per-boot module runs in the last module. I will try that out instead. Thanks!
<johnsonshi> Odd_Bloke: Perhaps I could get write-files to write the scripts into the per-boot directory.
<Odd_Bloke> I believe that would work.
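Mechanically, johnsonshi's idea works because write_files runs in an earlier boot stage and only needs to drop an executable file into the per-boot directory. A sketch of the end state (using a temp dir here; the real target is /var/lib/cloud/scripts/per-boot, and the script name is hypothetical):

```python
import os
import tempfile

# Sketch of what write_files would effectively produce when pointed at
# the per-boot directory. Paths are rooted in a temp dir for safety.
root = tempfile.mkdtemp()
perboot = os.path.join(root, "var/lib/cloud/scripts/per-boot")
os.makedirs(perboot)

script = os.path.join(perboot, "50-example.sh")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho per-boot script ran\n")
os.chmod(script, 0o755)  # per-boot scripts only run if executable
```

The permissions line matters: in actual write_files config that corresponds to setting `permissions: '0755'`, since the scripts-per-boot stage skips non-executable files.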
<Odd_Bloke> blackboxsw: rharper: The updated release process doc/script: https://github.com/CanonicalLtd/uss-tableflip/pull/37
<Odd_Bloke> blackboxsw: Can you point me at the template Trello board?  I want to be 100% sure I'm modifying the correct thing. :p
<blackboxsw> Odd_Bloke: https://trello.com/b/QQYFXpsA/template-sru-cloud-init-xy
<Odd_Bloke> Thanks!
<blackboxsw> no prob
#cloud-init 2020-02-23
<shibumi> btw we've found one reason for the cloud-init issue on Arch Linux. Looks like cloud-init heavily relies on predictable network interface names
<shibumi> https://github.com/archlinux/arch-boxes/issues/30
