#cloud-init 2014-04-07
<yann2> Hello! I am trying to use cloud-init with FOG and Rackspace, but am failing miserably. It seems that my MIME-encoded cloud-init script is properly copied to /var/lib/cloud/seed/nocloud-net/user-data , but it doesn't appear to be run... not sure where I should look to see what's wrong... any idea? the Rackspace examples I found on the web didn't work
<smoser> SpamapS, yeah, i have creds to that private key.
<smoser> but it is separate by design from my private key.
<smoser> i'll sign it and upload
<jclift> yann2: Does the script work when it's not mime encoded?
<jclift> yann2: Also, there's a log file in /var/log/cloud-init.log
<yann2> jclift, yes, actually I got it to run using nova client 
<jclift> k, so it sounds like the mime encoding is busting it somehow
<jclift> yann2: I've not tried using mime encoded version
<jclift> yann2: At a guess, you'll probably need to look through /var/log/cloud-init.log to see if there's anything there
<yann2> trying this now http://pastealacon.com/34262  - I think the problem is that rackspace only tries to get cloud init files from config disks by default and not from /var
<yann2> jclift, the only error seems to be unrelated, linked to an "ubuntu" user that is not found.. the rest seems to be rackspace stuff
<yann2> 2014-04-07 14:27:13,937 - __init__.py[WARNING]: Unhandled non-multipart userdata '   < maybe that
<jclift> yann2: Hmmm, I don't use Ubuntu so no idea there
<jclift> Yeah, that warning could be significant
<jclift> yann2: Are you able to put your proper #cloud-config file on a webserver somewhere, and then point to it using an #include?
<jclift> yann2: This is what I use as a #cloud-config file: https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/raw/master/remote_centos6.cfg
<yann2> There is also this file in /etc/cloud/cloud.cfg.d/90_dpkg.cfg that contains this datasource_list: [ ConfigDrive ]
<yann2> so I assume my only way to include additional cloud init config would be via the configdrive...
<jclift> yann2: I pass that remote_centos6.cfg in to the local VM via config drive, and then cloud-init goes and retrieves the external URLs itself
<jclift> yann2: Is the mime-encoded script showing up in /var/lib/cloud/instance/user_data ?
<jclift> Or /var/lib/cloud/instance/user_data.i ?
<yann2> jclift, cf my last paste, I am now trying to see if I can pass the cloud-init file directly to the config drive, a bit like the nova client does here  http://developer.rackspace.com/blog/using-cloud-init-with-rackspace-cloud.html
<jclift> yann2: I'm using cloud-init with Pyrax in rackspace
<yann2> but otherwise, yeah, when I used to copy files using personality, the file showed up MIME-encoded at the right place ('/var/lib/cloud/seed/nocloud-net/user-data') but I think cloud-init is not configured to read it
<yann2> jclift, and where do you put the cloud init configuration?
<jclift> Unfortunately, it's not documented (so I should write up some docs for it to help the next guy)
<jclift> yann2: Are you ok with Python?
<yann2> I can read, yes, but I need ruby
<jclift> k.  This might be helpful for reading then.  It's kind of spaghetti code atm though... I need to clean it up.  But it works. https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/master/create_servers.py
<jclift> yann2: Line 179 is where the new server instance is created
<jclift> yann2: One of the parameters to the call which creates the new server is "userdata".
<yann2> thanks a lot for your help, I ll read this :)
<jclift> yann2: I provide the #cloud-config file to that, which passes it automatically to cloud-init using config drive
<jclift> (note that I have config_drive=True as another parameter)
<jclift> yann2: The #cloud-config file I use, is that "remote-centos6.cfg" thing
<jclift> So, when cloud-init starts up, it gets given that remote-centos6.cfg file.  It reads that, noticing it starts with #include, then goes and gets the URLs on the next two lines.
<jclift> Those URL contain what actually needs to be done
<jclift> The first is a proper #cloud-config thing
<jclift> The second is a script to execute after the #cloud-config thing has fully run
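The #include flow described above boils down to a three-line user-data file. A sketch with placeholder URLs (the real ones live in jclift's remote_centos6.cfg, which is not reproduced here):

```
#include
https://example.com/proper-cloud-config.cfg
https://example.com/post-config-script.sh
```

cloud-init sees the #include on the first line, fetches each listed URL, and processes the results in order.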
<yann2> so you use config_drive as well
<yann2> and userdata = ci_config... should be very similar to what I'm doing...
<jclift> yann2: As a thought, when I was getting my head around cloud-init for the first time last week, I spun up the various VM's remotely (like you), then logged into them to try and figure out wtf was happening
<jclift> yann2: Yeah, it sounds similar
<yann2> so ci_config is not mime encoded, nor base64 encoded
<jclift> Correct. Have you logged into a VM, and mounted /dev/xvde to a temp spot to see what's being passed via config_drive?
<jclift> yann2: The userdata= argument here can actually be just a newly opened file handle.  You don't even need to read in the contents.
<yann2> nope, good idea
<jclift> yann2: Yeah, mount /dev/xvde to a temp spot, and take a look through its structure
<jclift> yann2: Helps to get a better conceptual understanding of wtf is going on :)
<yann2> indeed.. it's xvdd here, where would my userdata be?
<jclift> Ahh yeah. xvdd is right
<jclift> xvde is the data drive
<jclift> Sorry
<jclift> Been using xvde a lot over the weekend :)
<yann2> mhhh
<yann2> so I have a xvdd with a bunch of JSONs, and no xvde
<jclift> That just means you're using a VM type without an additional data disk
<jclift> eg 512 Standard or 1GB Performance
<jclift> Don't get hung up on the xvde thing.  That was my mistake
<yann2> ok
<jclift> My typing sucketh today ;)
<yann2> can't see my config copied anywhere on the config disk though
<jclift> yann2: Is this similar to what you're seeing? http://fpaste.org/92259/96882152/
<yann2> yep
<yann2> is my user_data supposed to be copied somewhere there?
<jclift> yeah
<jclift> New paste: http://fpaste.org/92262/13968823/
<jclift> That shows the same mount, but also shows the user_data file that was pulled in by the userdata= parameter in my Pyrax call
<jclift> That file gets placed into /var/lib/cloud/instance/user_data
<jclift> After the remote urls are grabbed, the result is /var/lib/cloud/instance/user_data.i, which is then run/processed/whatever
<yann2> yeah your ci_config variable actually contains the content of the file
<jclift> Yeah
<yann2> so this file is a cloud init config file
<jclift> Yeah
<yann2> and if I read it right you do not base64-encode it, nor is it MIME-encoded
<jclift> Correct
<yann2> I'll show you my script, give me a sec
<jclift> I don't base64 or otherwise touch it
<jclift> k
<yann2> http://pastealacon.com/34263
<jclift> Does it work like that?
<yann2> I don't have a user_data in openstack/latest though
<yann2> nope
<yann2> I mean it runs and creates the vm, but no userdata in the config disk
<yann2> maybe it's a question for fog...
<jclift> What happens if you try "userdata" instead of "user_data" ?
 * jclift noticed some inconsistency in naming around user_data and userdata
<jclift> Some parts of Nova require "user_data"
<jclift> But Pyrax definitely needs it as userdata
<yann2> https://github.com/fog/fog/blob/f531f0bdd715e6f0e3008b35535834ad8e854ec1/lib/fog/cloudstack/models/compute/server.rb 
<yann2> seems to be an alias, let's try it..
<yann2> still no success :'(
<jclift> Damn
<jclift> Hmmm, maybe time to ask the Fog guys then
<yann2> yep..
<jclift> Do they have an example of working Fog code with userdata you could copy to experiment with?
<yann2> don't know, I just asked on #ruby-fog
<yann2> found this, but it's old http://pastebin.com/n6fRuF0C  :D
<jclift> yann2: Heh, well "Good Luck" I guess.  I'm not into Ruby... hopefully the Fog guys can help. :)
<yann2> yeah thanks, I think it's supported by the openstack compute but not the rackspace one...
<yann2> I'll try my luck with a bug report
<yann2> opened a bug there https://github.com/fog/fog/issues/2824  let's wait and see
<yann2> jclift, trying my luck patching fog, wish me luck ;)
<jclift> :)
<jclift> Definitely good luck!
<yann2> yeah man, epic skills here :) I'll do a pull request this evening
<jclift> :)
<sauce> i am using cloud-init for the first time. using it with EC2. i created a very simple user-data #cloud-config file (see here http://pastebin.com/gE8C0M1d).  i am using CentOS 6.4 AMIs from cloudmarket.com that say "with cloud-init installed".  when I do "ec2-run-instances -f myuserdata.yaml", it boots up fine, i can see cloud-init is installed, but my user-data file was not taken into consideration. anyone know why?
<smoser> sauce, what is the version of cloud-init ?
<yann2> https://github.com/yannh/fog/commit/e335084b200e88263f739a867366afd97fc6b0cb  jclift, for what it's worth
<yann2> testing a bit more before sending my pull request :)
<harlowja> smoser so when are we switching cloud-init to git? ;)
<smoser> harlowja, i try to avoid flamewars.
<harlowja> lol
<SpamapS> smoser: cool. I just fetched the key from keyserver.ubuntu.com and it does not have your signature on it.
<SpamapS> really any signatures on it
<yann2> https://github.com/fog/fog/pull/2826/files now just need a nice soul to merge it and I should be good
<smoser> SpamapS, i can sign it. that's fine. the thing i didn't know is what i should sign.
<smoser> it's a subkey
<yann2> alright, off for today, thanks a lot for all your help jclift that was immensely useful
<yann2> have a nice day/evening
<smoser> SpamapS, i signed now. and put onto keyserver.ubuntu.com . 
<SpamapS> smoser: sign the main key. The point is to allow downloaders to verify the key without prayer. ;)
<smoser> SpamapS, yeah. i did, right?
<SpamapS> smoser: indeed, well done :)
<sauce> smoser cloud-init-0.5.15-68.el6_bashton1.noarch
<sauce> smoser nevermind, cloud-init did run. i tried it with some low-level stuff like touch /tmp/file1 and it did.  now i gotta figure out why the puppet module didn't work
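A minimal smoke-test user-data of the sort sauce describes might look like this (illustrative):

```yaml
#cloud-config
runcmd:
 - touch /tmp/file1
```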
<sauce> while i am here, whats the best way to add a host to /etc/hosts with cloud-init?
<smoser> sauce, that is old
<smoser> i really don't know, without looking, what support there would be in something called that version. so that'd be my first guess.
<smoser> wrt /etc/hosts, you can modify the appropriate template file with a boothook.
<SpamapS> smoser: note that it would be quite useful now if you started either signing the images, or producing SHA256SUMS
<SpamapS> smoser: MD5 is broken
<smoser> i am signing the images.
<smoser> http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:download.sjson
<smoser> that's the preferred way.
<SpamapS> preferred by who?
<harlowja> 0.5.15, woah
<smoser> err.. wait. bad link.
<SpamapS> I can use jq to grab those sha256sums tho .. so that works
<harlowja> sauce u are on rhel6?
<smoser> http://download.cirros-cloud.net/streams/v1/net.cirros-cloud:devel:download.sjson
<sauce> 6.4, about 1 year old
<harlowja> http://repos.fedorapeople.org/repos/openstack/cloud-init/ should be newer :)
<harlowja> although i thought u could install a newer one
<harlowja> 0.7.2 is much better ;)
<SpamapS> smoser: color me old fashioned.. but a .asc file for each image would be um.. a bazillion times preferable to that for scripting. :-P
<SpamapS> smoser: but I can work with it. :-P
<smoser> i agree that having FILE.asc is simpler for scripting.
<SpamapS> jq has at least made json available to bash
<SpamapS> smoser: would be quite helpful if this format actually had a _list_ for the products. :-P
<smoser> list ?
<smoser> it's a dict
<smoser> because each product is a key
<SpamapS> smoser: so to find latest version, I should still use /version/released ?
<SpamapS> smoser: Yeah, but I have to sort the keys to find the latest. :-P
 * SpamapS will call the whambulance
<SpamapS> smoser: just whining. :)
<jclift> sauce: There's modern cloud-init in EPEL isn't there?
<sauce> jclift im just being lazy and using someone elses AMI
<sauce> rather than create my own
<smoser> SpamapS, http://paste.ubuntu.com/7218123/
<jclift> Heh.  Doesn't seem to be working for you in this instance. :(
<smoser> that is secure and allows you to easily point it at a mirror if you want.
<smoser> and '--max=1' does the sorting for you.
<harlowja> sauce i'd recommend at least upgrading the AMI cloud-init version :)
<SpamapS> $ sstream-query
<SpamapS> sstream-query: command not found
<SpamapS> smoser: no package suggestion. :(
<SpamapS> smoser: also I have a really giant set of Fedora users so I have to be mindful of how hard it will be for them to use.
<smoser> $ dpkg -S `which sstream-query`
<smoser> simplestreams: /usr/bin/sstream-query
<smoser> SpamapS, that giant set of users installs random garbage from pip all the time.
<smoser> so... not being packaged isn't a *huge* thing.
<harlowja> bad users
<SpamapS> smoser: oh we're quite happy to abuse pip :)
<smoser> so i should get simplestreams into pip.
<sauce> ubuntu's official EC2 AMIs use: 0.6.3-0ubuntu1.10
<SpamapS> smoser: yes. but for now, jq will suffice
<smoser> jq ?
<harlowja> http://stedolan.github.io/jq/
<harlowja> those bash people
<harlowja> lol
<SpamapS> --===/o/
<harlowja> lol
<smoser> its C ?
<SpamapS> smoser: also, you have files that are '.gpg' but they have .asc content in them
<SpamapS> smoser: yeah tiny little utility, we use it everywhere to process json
<smoser> thats nice.
<smoser> SpamapS, i don't think i started that.
<smoser> the little i know came from modelling ubuntu
<smoser> http://cdimage.ubuntu.com/ubuntu-server/daily/current/MD5SUMS.gpg
<SpamapS> sure
<SpamapS> they did it wrong too :)
<SpamapS> it confuses gpg, you have to tell it that it is ascii armored detached
<harlowja> i left my ascii armor at home :-/
<smoser> SpamapS, well that sucks.
<smoser> my damned ignorance keeps getting in the way.
<smoser> the sstreams code only really handles the inline signed anyway.
<SpamapS> smoser: Yeah, and the inlined signed is kind of hard to use w/ jq.. as I have to strip off the signature.. :-P
<smoser> right. so we need to improve jq for inline signed json
<SpamapS> smoser: jq example:
<SpamapS> $ jq '.["products"]["net.cirros-cloud:standard:0.3:i386"]["versions"] | keys | sort | max' net.cirros-cloud\:released\:download.json
<SpamapS> "20140317"
<SpamapS> see how hard that is. :-P
<SpamapS> $ jq '.["products"]["net.cirros-cloud:standard:0.3:i386"]["versions"]["20140317"]["items"]["disk.img"]["sha256"]' net.cirros-cloud\:released\:download.json
<SpamapS> "f0803c2d179c8a02d029239d35fc3e752cc81ad3436ea52b757e11685ca7c074"
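For readers without jq, the same two lookups can be done with stdlib Python. The JSON below is a trimmed, hand-made stand-in for net.cirros-cloud:released:download.json; real simplestreams files carry many more products, versions, and items, but the nesting is the same.

```python
import json

# Trimmed stand-in for the simplestreams index (structure only).
doc = json.loads("""
{"products": {"net.cirros-cloud:standard:0.3:i386": {"versions": {
  "20130207": {"items": {"disk.img": {"sha256": "older-checksum"}}},
  "20140317": {"items": {"disk.img": {"sha256":
    "f0803c2d179c8a02d029239d35fc3e752cc81ad3436ea52b757e11685ca7c074"}}}
}}}}
""")

versions = doc["products"]["net.cirros-cloud:standard:0.3:i386"]["versions"]
latest = max(versions)  # version keys are YYYYMMDD strings, so max() sorts them
sha256 = versions[latest]["items"]["disk.img"]["sha256"]
print(latest)   # 20140317
print(sha256)
```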
<smoser> see how much easier it was with the tool i gave you ?
<SpamapS> smoser: is it on pypi yet?
<smoser> its not. i can put it there, although i've never done that before.
<smoser> so again with the damned ignorance getting in my way.
<SpamapS> smoser: I've got a jq solution. It's not horrible. But if/when you do get it on pypi, please let me know so I can switch.
<smoser> SpamapS, are you looking to put that into devstack ?
<smoser> or elsewhere
<SpamapS> devstack is dead to me :)
<SpamapS> but it should go there too yes
<SpamapS> smoser: TripleO devtest yes
<SpamapS> smoser: https://review.openstack.org/#/c/83347/
<smoser> well, its interesting actually. both of them are.
<smoser> from the "continuous integration" perspective.
<smoser> in that I can randomly change / break things. 
<sauce> where can i get full cloud-init docs other than http://cloudinit.readthedocs.org/en/latest/?  the Modules section is empty, and i feel that the examples don't list everything.
<sauce> i.e. it's not clear how to add a 3rd party apt repo
<sauce> i would even settle for code to look through.  where are the modules stored?
<smoser> sauce, you get docs for the version you're using in the package you're using.
<smoser> the config modules are stored in cloudinit/modules/cc_*
<smoser> (except they might be different in 0.5.x)
<smoser> to add a 3rd party apt repo is trivial 
<smoser> and is shown at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L21
<harlowja> SpamapS u say devstack is dead??
<harlowja> intersting
<harlowja> SpamapS u know u want to use anvil, lol
<SpamapS> harlowja: I say that devstack is dead _to me_
<SpamapS> harlowja: we use devtest.. because multi-node matters. :)
<harlowja> lol
<harlowja> ya
<sauce> thanks smoser !
<sauce> smoser you say the docs come with the package, i have no man cloud-init though
<smoser> sauce, in ubuntu they're in /usr/share
<smoser> in centos or whatever, I'd rpm -qa | grep -i doc
<sauce> thanks sir
<sauce> i switched to ubuntu for now. i want to learn cloud-init in ubuntu first, then i'll go to centos
<smoser> or just stay on a sane OS :)
<smoser> although, to be fair, 2.6.18 was a good kernel.
<smoser> sauce, i was making fun of centos.
<sauce> i know i know
<smoser> :)
<sauce> we are working on building a new app at work, and i have the opportunity to choose the linux distro
<sauce> i want to use ubuntu, but that's just the kid in me talking
<sauce> hey smoser, i'm gonna paste somethin in chan, forgive me
<sauce> apt_sources:
<sauce>   - source: deb http://apt.puppetlabs.com $RELEASE main dependencies
<sauce>     keyid: 4BD6EC30    # GPG key ID published on a key server
<sauce>     filename: puppetlabs.list
<sauce> i can't get that to work. i troubleshot it down to the "keyid" line. if i comment it out, cloud-init works
<smoser> sauce, indentation is important.
<smoser> you can see that what you're going to get is sane by just doing something like:
<smoser> python -c 'import yaml, pprint, sys; pprint.pprint(yaml.load(open(sys.argv[1])))' my.file
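Applied to the corrected apt_sources snippet, the point of that one-liner is that cloud-init wants a list of dicts, with keyid and filename sitting inside the same list item as source. A sketch (assumes PyYAML is installed, as the one-liner itself requires):

```python
import yaml  # PyYAML; assumed installed, same library the one-liner uses

good = """\
apt_sources:
  - source: "deb http://apt.puppetlabs.com precise main dependencies"
    keyid: "4BD6EC30"      # quoted so YAML reads it as a plain string
    filename: puppetlabs.list
"""
parsed = yaml.safe_load(good)
# apt_sources parses to a list of dicts; keyid/filename are siblings of source
print(parsed["apt_sources"][0]["keyid"])   # 4BD6EC30
```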
<sauce> also maybe yamllint.com ?
<smoser> http://paste.ubuntu.com/7218786/
<smoser> sure. 
<sauce> yamllint.com says the yaml is OK but outputs it vastly different than it appears in examples
<sauce> lemme try python
<sauce> i don't think indentation is the problem here
<smoser> sauce, it's not a question of whether it's valid or not.
<smoser> it's that the result has to be in the right format
<sauce> i hear ya
<smoser> apt_sources is a list of dictionaries
<sauce> i don't see how i can be messing up the indentation
<sauce> its just a few spaces, everything is lined up
<sauce> i deleted the line and rewrote it char by char
<sauce> "filename" works with the same indentation
<sauce> keyid doesn't
<smoser> hm.
<sauce> i know right
<smoser> pastebin /var/log/cloud-init.log ?
<sauce> absolutely hang on
<sauce> welp there ya go: Apr  7 20:42:26 ip-10-30-4-54 [CLOUDINIT] cc_apt_update_upgrade.py[WARNING]: Source Error: deb http://apt.puppetlabs.com precise main dependencies:failed to get key from keyserver.ubuntu.com
<smoser> sauce, it's actually preferable from both a security and a reliability perspective to insert the full key
<sauce> running this again, lemme check
<sauce> smoser ok i will try that
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L120
<smoser> the example there.
<smoser> it's more reliable because you don't depend on the keyserver
<smoser> and in this case all you're really using it for is an object store to map <short_key_id> to <public key>
<sauce> i gotcha.  i pasted the key block and it validated as yaml. running it now.
<sauce> many ec2 instances have died at my hands today
<smoser> sauce, for playing, you can use lxc or kvm 
<smoser> lxc you can use quite performantly inside a ec2 instance
<smoser> see example here: http://ubuntu-smoser.blogspot.com/2013/08/lxc-with-fast-cloning-via-overlayfs-and.html
<sauce> while this is running i have another question: can i specify the order that things are run? i.e. i want to install puppetlabs repo, then use the puppet module
<sauce> nice the key worked!!  
<sauce> and it installed in the right order, that was a 50/50 chance probably right?
<smoser> well, no. :)
<smoser> puppet module runs after apt_config
<smoser> and that is by design
<smoser> you can specify the order of things
<smoser> just define the list in your user-data
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/config/cloud.cfg
<smoser> the easiest thing to do is just re-define the whole list
<smoser> cloud_init_modules runs first, then cloud_config_modules, then cloud_final_modules
<smoser> and the lists there run in order.
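A trimmed illustration of redefining that list in user-data; module names vary by cloud-init version, so copy the real list out of the shipped cloud.cfg and reorder from there:

```yaml
#cloud-config
cloud_config_modules:
 - apt-configure                    # repos and keys first
 - package-update-upgrade-install
 - puppet                           # then puppet
```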
<smoser> you really should take a look at that blog post, it makes testing this stuff really easy and much faster than waiting on instances in amazon
<sauce> awesome smoser 
<sauce> just awesome
<smoser> thanks.
<sauce> why do i even need puppet, if i have cloud-init?
#cloud-init 2014-04-08
<yann2> hello! I am trying to manage my /etc/hosts file in rackspace - and as soon as I set manage_etc_hosts to True, the lines for eth0, eth1 do not get created anymore
<yann2> I found this line:  $dev_eth0 $fqdn $hostname  in this bug report on launchpad: https://bugs.launchpad.net/cloud-init/+bug/1020695  but I'm not sure, does this mean I need to change the hosts.tmpl from cloud-init, at the very beginning?
<lipinski> Are there any known problems with passing multiple CloudConfig resources to cloud-init via Heat?  I'm trying to write two files separately via write_files, but only the second actually appears
#cloud-init 2014-04-09
<smoser> harmw, hey. just thought i'd share : http://paste.ubuntu.com/7227342/
<smoser> that's just a "test cirros"
<smoser> harlowja, 
<smoser> 2014-04-09 19:04:16,975 - url_helper.py[DEBUG]: Calling 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' failed [10/-1s]: request error [(<urllib3.connectionpool.HTTPConnectionPool object at 0x7f1339863a10>, 'Connection to 169.254.169.254 timed out. (connect timeout=10.0)')]
<smoser> 2014-04-09 19:04:16,975 - url_helper.py[DEBUG]: [0/1] open 'http://169.254.169.254/openstack/2013-04-04/meta_data.json' with {'url': 'http://169.254.169.254/openstack/2013-04-04/meta_data.json', 'headers': {'User-Agent': 'Cloud-Init/0.7.5'}, 'allow_redirects': True, 'method': 'GET', 'timeout': 10.0} configuration
<smoser> 2014-04-09 19:04:26,986 - url_helper.py[DEBUG]: Calling 'http://169.254.169.254/openstack/2013-04-04/meta_data.json' failed [20/-1s]: request error [(<urllib3.connectionpool.HTTPConnectionPool object at 0x7f1339863c50>, 'Connection to 169.254.169.254 timed out. (connect timeout=10.0)')]
<harlowja> why u timing out :-P
<smoser> because its not there.
<smoser> the -1 is what i was annoyed at
<smoser> [10/-1s]
<harlowja> ya, thats odd, lol
<harlowja> are u in a time machine
<harlowja> will investigate
<smoser> well that is maxwait
<smoser> oh.
<smoser> it seems like the timeout should have happened
<harlowja> did ntpd jump?
<harlowja> looking at code, one sec
<smoser> i think there is no timeout set (or it is actually set to -1)
<smoser> for that datasource
<smoser> but for some reason it's trying again
<harlowja> smoser ya, its trying the provided urls, but failing after all of them don't work
<harlowja> http://169.254.169.254/openstack/2012-08-10/meta_data.json, http://169.254.169.254/openstack/2013-04-04/meta_data.json'
<harlowja> then after none of those work, it gives up 
<smoser> ah. i see. you're right.
<harlowja> https://code.launchpad.net/~harlowja/cloud-init/no-negative-max-wait/+merge/215020
<smoser> so i guess ideally if we got a timeout
<smoser> then we wouldn't try other entries.
<smoser> as opposed to a 404
<harlowja> well should we :)
<harlowja> might be different hostname in other urls
<harlowja> right?
<harlowja> bb, got a friend coming over
<smoser> well i think its urls are all just the MD url
<smoser> but with different versions
<smoser> unlikely a.) that you have a friend coming over :) , and b.) that you'd find one metadata version on the same metadata server after timing out on another i think
<smoser> ie:
<harlowja> lol
<harlowja> he's not here yet :-P
<smoser> wget http://$IP/2012-08-10/meta_data.json
<harlowja> but i agree
<harlowja> in this case we can abort early
<harlowja> *or likely can abort early
<smoser> so i'm not really opposed to the -1
<smoser> that seems not entirely unreasonable
<smoser> i just didn't realize that it was trying different urls
<smoser> i thought it was re-trying, waiting for some future time when -1 seconds had passed :)
<harlowja> back to the future
<smoser> if i sit around and wait long enough, it will be -1 seconds from now.
<harlowja> lol
<harlowja> let me do the change to abort it if its all the same hostname
<harlowja> bb
<harlowja> smoser https://code.launchpad.net/~harlowja/cloud-init/early-abort/+merge/215036 
<harlowja> think that should do it
#cloud-init 2014-04-10
<cppking> hello guys, i'm an openstack SA. i don't want to use dhcp-agent, so i thought i could use cloud-init to assign static IPs to VM instances. can anybody show me some documentation on how to use it?
<cppking> Also, i don't want to use the metadata server, so i have to configure cloud-init not to curl the metadata server. how do i do that?
<smoser> cppking, cloud-init needs some metadata source
<smoser> but anything recent should be able to use config drive
<smoser> so configure openstack to always use config drive
<smoser> and then cloud-init won't hit the MD server because it will have already found a datasource.
<smoser> regarding static IPs, in general I would just say not to do that
<smoser> but you may be able to manage it if your images are configured properly.
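Restricting cloud-init to the config drive uses the same datasource_list mechanism seen earlier in /etc/cloud/cloud.cfg.d/, e.g. a drop-in like this (filename illustrative):

```yaml
# /etc/cloud/cloud.cfg.d/99_configdrive.cfg
datasource_list: [ ConfigDrive ]
```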
#cloud-init 2014-04-13
<harmw> smoser: nice little script :)
#cloud-init 2015-04-06
<gamename> Hi.  After updating to the latest CentOS 7 release (1503-01) I cannot get the eth0 interface to come up on AWS.  Anyone seen this?
<alexpilotti> claudiupopa: can you join the #heat channel?
<claudiupopa> alexpilotti: yes.
<cn28h> I'm looking at the datasources doc at http://cloudinit.readthedocs.org/en/latest/topics/datasources.html ... the top section where it describes the interface, is that meant to say that I can somehow add custom data sources to my cloud-init config?
<cn28h> or would I have to hack cloud-init itself to do that?
<harlowja> not likely in cloud-init config
<harlowja> that's just an interface definition
<cn28h> hm, ok'
<cn28h> so, basically my situation is that I want to inject user data into an instance under openstack
<cn28h> however, the usual avenue for passing user-data in openstack (via the nova metadata service) is being used by another application and is not compatible in format with cloud-init
<cn28h> so I'm trying to find some alternate way to inject user data
<harlowja> hmmm, unsure, thats gonna be a tough one to get around
<cn28h> will cloud-init try every datasource or will it stop at the first one that works?
<cn28h> (in other words, is it even worth exploring my own datasource, even if I can figure out how to make that happen?)
<gamename> How does cloud-init know to pick eth0 as the preferred network interface?
<harlowja> cn28h yes it stops at the first one, so u could build your own if u really want
<cn28h> hm ok, thanks
<nk121> this had been working for me
<nk121> apt_sources:
<nk121>   - source: "ppa:mc3man/trusty-media"
<nk121> but now i'm getting errors http://pastebin.com/pMqJPape
<nk121> when i add it by hand it works
#cloud-init 2015-04-07
<smoser> cn28h, it stops at the first.
<cn28h> aha ok, hm
<Odd_Bloke> cn28h: You can configure which data sources it will look at, though.
<smoser> gamename (who is gone), cloud-init doesn't currently "pick" eth0. it starts running when the OS's networking is up.
<smoser> right, Odd_Bloke you could tell cloud-init to not use the openstack one, but another entirely.
<smoser> or you could (i think) tell cloud-init to use the openstack one, but with a different url (ie, not 169.254.169.254, and use a URL you have control of)
<smoser> those things have to be configured prior to cloud-init running.
<Odd_Bloke> (I guess you'd also need to skip EC2, as OpenStack provides an EC2-compatible metadata service)
<smoser> ie, you can't tell cloud-init where to find data in the data that you're providing it.
<smoser> it unfortunately can't read your mind.
<smoser> openstack is before EC2 in the order, so that cloud-init will prefer the openstack MD to ec2.
<smoser> but, yeah, Odd_Bloke you're right. you'd have to tell it not to look there also.
<smoser> nk121, you should 'apt-get install pastebinit'
<smoser> and use pastebinit  :)
<smoser> then i don't have to go to pastebin.com urls and look at all those ads.
<cn28h> smoser: you mean, there may be a way to configure the openstack datasource to look elsewhere? (that would be great -- I'd been thinking I'd have to actually go subclass the implementation and replace the user data logic)
<smoser> yeah, that may be configurable
<smoser> the openstack one is.
<smoser> er.. the ec2 one is.
<cn28h> ah, very interesting
<smoser> or you could use "nocloud"
<smoser> which is pretty simple also.
<smoser> you just need a base-url with a 'user-data' and 'meta-data'
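A NoCloud sketch of that: seedfrom points at a base URL serving user-data and meta-data (placeholder URL, and exact keys may differ by version):

```yaml
datasource:
  NoCloud:
    seedfrom: http://example.com/seed/
```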
<smoser> harlowja_away, what "archive" format would you want for multipart
<cn28h> yeah that's an interesting idea -- we *are* using other features like key injection via openstack, but I suppose I can use a shellscript to grab that from the metadata service and stage it.. less elegant but workable
<smoser> :)
<smoser> yeah.
<smoser> cn28h, look at doc/examples/cloud-config-datasources.txt
<smoser> that's how you configure a datasource... the key 'metadata_urls' that you see in Ec2 is also valid for openstack
<smoser> per my reading of the code
<smoser> it looks like it reads:
<smoser>  metadata_urls
<smoser>  max_wait
<smoser>  timeout
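Put together, and hedged (the documented example covers Ec2; the OpenStack spelling is inferred from the code, as described above):

```yaml
datasource:
  OpenStack:
    metadata_urls: [ "http://192.0.2.10" ]   # a URL you control; placeholder
    max_wait: 120
    timeout: 10
```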
<smoser> nk121, i think you're using vivid ?
<nk121> smoser: no trusty
<cn28h> nice, yeah that will be great.  Now I just need to think of how to correlate it so I can return the right user data for that instance, hm
<smoser> right.
<smoser> that does suck.
<smoser> you could look at the source ip
<smoser> as you should be able to get the instance's ip address
<smoser> from openstack
<smoser> that is essentially how the openstack metadata service works
<cn28h> ah, yeah that makes sense
<cn28h> map it backwards by talking to neutron and figuring out which instance has that IP or some such
<smoser> right.
<smoser> cn28h, if you do this , it might be somewhat useful generically.
<smoser> ie, an openstack-metadata-aware proxy
<cn28h> hm
<smoser> fairly simple given openstack admin credentials.
<smoser> you just take input of a IP address and render the result.
<smoser> you may even be able to re-use the openstack one.. but that might be difficult
<cn28h> yeah, I will have to do some digging
<cn28h> and discussion with the rest of my team -- they may well just tell me it's too crazy for their tastes and that I need to find a different way ;p
<cn28h> but it sounds like an interesting possibility
<cn28h> really it would be nice if the other service currently camping on the user-data field was cloud-init compatible
<cn28h> because it's just a handful of key-value pairs that could easily be written to a file and read from there instead ...
<cn28h> but that belongs to a codebase we don't own
<smoser> cn28h, so... if you're talking to that other service...
<smoser> tell them they should support mime-multipart and pick out the pieces they're interested in
<smoser> or use cloud-config-archive format (which i would admit is less "standardy")
<smoser> this is what cloud-init does... it pays attention only to things that are destined for it and ignores other things.
<cn28h> yeah, that makes much more sense so that you can share the user data
<smoser> right.
<cn28h> maybe I will open a JIRA with a feature request
<smoser> nk121, i'm not sure what i'm seeing there really... 
<smoser> nk121, /var/log/cloud-init-output.log might have more info
<gamename> Hi.  How does cloud-init know which device (e.g. eth0)  as the preferred network interface?
<Odd_Bloke> 14:45:33 < smoser> gamename (who is gone), cloud-init doesn't currently "pick" eth0. it starts running when the OS's networking is up.
<gamename> Odd_Bloke: Eh? Gone?  Is the status showing me offline?  Re: the question. Ah, ok.  cloud-init is agnostic about the network. It just uses whatever interface comes up. All correct? 
<Odd_Bloke> gamename: You were gone when smoser said that, ~3 hours ago. :)
<Odd_Bloke> gamename: I believe your conclusion is correct, yes.
<gamename> Odd_Bloke: Got it.  I didn't get the response. Sorry for the confusion. I'll check my client (or change the damn thing).
<cn28h> smoser: so I had an interesting thought and figured I'd share.  If I create a really minimal web app that proxies *locally* I can leave all the heavy lifting in terms of correlation up to openstack because I can just forward requests off to the real thing locally. Then, I can intercept user-data and send that off somewhere else, along with fields I can pull from the metadata service myself to help correlate on the remote end. Should work as
<cn28h> long as I can guarantee it's up and running before cloud-init runs.
<harlowja> smoser just some simple directory structure would seem like it could replace the whole MIME stuff
<harlowja> file + directory structure
<harlowja> ^ is also then easily examinable...
<harlowja> smoser https://code.launchpad.net/~harlowja/cloud-init/write-files-fetch-from-somewhere should be fixed up
#cloud-init 2015-04-08
<smoser> cn28h, hm...
<smoser> actually.
<smoser> you might be able to use 'vendordata'
<smoser> to accomplish what you want.
<smoser> harlowja_away, directory structure ? in a file ?
<smoser> it has to be a multi-part file. tar would work, but it has to be some sort of archive, as user-data is only a single thing.
<cn28h> smoser: is that an alternative datasource? Trying to find details in the docs
<smoser> vendor-data is user-data for vendors
<cn28h> hm
<smoser> cloud-init reads from those 2 places
<cn28h> ah, I see what you are saying
<smoser> the user wins
<smoser> but your user (the app that's using user-data) won't trump anything you're putting into vendor-data.
<smoser> you can set vendor-data in openstack
<smoser> as implemented in openstack, its static, but the class that does it allows it to be dynamic if you needed that.
<smoser> i think you're in the position where you could do that, right ?
<smoser> where you could set vendor data 
<cn28h> potentially -- the other wrinkle is that the service that's launching the VM instance doesn't allow us to have much control over the details of how they get launched (it's intended to be an abstraction layer with jclouds, but right now only supports openstack so mostly just gets in the way :x)
<smoser> cn28h, right. you dont have to.
<smoser> vendor-data goes to every instance.
<cn28h> oh I see
<cn28h> actually that could be pretty useful
<cn28h> and vendor data is per tenant I'm hoping?
<smoser> well, its global as implemented in openstack.
<smoser> basically there is a config option
<smoser> that says what class to load
<smoser> and by default it loads the one that just reads a static json file on the nova compute host
<cn28h> ah, hm
<smoser> but you can replace that class with whatever you want and it returns a python dictionary that is then provided.
<smoser> it'd be pretty easy to add your own implementation there, and it has access to stuff it would need to make it per-tenant.
<cn28h> yeah, that could be interesting, hm
<smoser> https://review.openstack.org/#/c/37964/14
<smoser> https://review.openstack.org/#/c/37964/14/nova/api/metadata/base.py
<smoser> the class configured in the config file is provided the instance, address, extra_md, network_info.
<smoser> i suspect from the instance you can get the user
<smoser> tenant
<cn28h> aha, cool
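For reference, the static-file default smoser describes landed in nova as a `JsonFileVendorData` class. A hedged sketch of the knobs involved (option names as of that era of nova; worth verifying against the review linked above):

```ini
# nova.conf -- illustrative; loads the stock static-JSON vendor-data class
[DEFAULT]
vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData
# the JSON file on the compute host whose contents become vendor-data
vendordata_jsonfile_path = /etc/nova/vendor_data.json
```

Swapping `vendordata_driver` for your own class is the per-tenant hook discussed above, since the class receives the instance, address, extra_md, and network_info.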
<suro-patz> With some locally committed modification, I built the rpm using 'make rpm' - Now when I am trying to install the rpm onto RHEL6 system, it is complaining about pyserial, even though I manually installed pyserial-2.6, as the local python version is 2.6.6
<suro-patz> Has anybody encountered something similar?
<harlowja> suro-patz do u have any more information than that?
<harlowja> like what is the error...
<harlowja> suro-patz how did u manually install pyserial?
<harlowja> smoser maybe a tar file; why couldn't the structure just be userdata/data and child directories if that includes others...
<harlowja> with some kind of sequencing of those children so we know the order
<harlowja> suro-patz if u manually installed it via pip, well pip and rpm aren't aware of each other (they are different package managers)
<smoser> harlowja, for sure we could support tar as the userdata/data format.
<smoser> but we'd still want to have a mime-type like info in there.
<harlowja> sure; so that could also exist in a data.metadata file or something
<harlowja> if we wanted
<smoser> right? ie, so your '#!/bin/sh' could be 'boot-hook' or 'user-script'
<harlowja> agreed
<smoser> we can definitely do that. also there is the other archive format
<harlowja> just thinking that regular files with some defined layout could do the same
<smoser> cloud-config-archive
<smoser> which is something to that effect.
<smoser> but tar is an option also.
<harlowja> ya, something to think about
<harlowja> tar would be nice
<smoser> or , as we support and embrace our windows brethren , zip
<smoser> :)
<harlowja> hmmm
<harlowja> i try not to embrace windows too much
<harlowja> bill gates and i don't embrace to well
<smoser> one interesting thing about mime is that it *is* streaming
<harlowja> haven't hugged in a while
<alexpilotti> harlowja: I can hear you! :-)
<harlowja> :)
<smoser> each piece can be operated on and reacted to as it's read
<harlowja> true
<smoser> tar is too, but not if you shove an index in it :)
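For anyone following along, the cloud-config-archive format smoser mentions is a YAML list of typed parts. A minimal sketch (part contents invented for illustration; check the cloud-init docs for the accepted `type` values):

```yaml
#cloud-config-archive
- type: "text/cloud-config"
  content: |
    runcmd:
      - echo hello
- type: "text/x-shellscript"
  content: |
    #!/bin/sh
    echo from-script
```

It carries the same "typed parts" information as mime-multipart, just in a YAML wrapper, which is why smoser calls it less "standardy".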
<harlowja> alexpilotti do u and billy g. hug a lot?
<harlowja> or i guess its the new CEO guy
<harlowja> whats his name
<alexpilotti> Satya
<smoser> billy g gives more hugs though
<smoser> still
<alexpilotti> :-)
<smoser> my favorite bill g clip: https://vimeo.com/70498601
<harlowja> smoser nice
<harlowja> lol
<harlowja> billy g so mean, lol
<harlowja> smoser alexpilotti claudiupopa u will be in vancouver?
<claudiupopa> harlowja: no, I won't be there.
<harlowja> durn
<suro-patz> harlowja: Yes I had installed pyserial manually using pip. Finally I could get around it using the --nodeps option for rpm, as I could see all the dependencies were present
<harlowja> suro-patz ya, u can't mix pip packaging and rpm and expect them to be aware of each other
<harlowja> they don't know of each other
<harlowja> *since they are independent packaging systems
<suro-patz> The error was a simple/generic one - "error: Failed dependencies:
<suro-patz> 	pyserial is needed by cloud-init-0.7.7-bzr1083.el6.noarch"
<harlowja> yup, thats because of all that i mentioned above
<alexpilotti> harlowja: I will be in Vancouver
<harlowja> cool
#cloud-init 2015-04-09
<harlowja> smoser shall we make a tag/branch for 0.7.x on https://github.com/stackforge/cloud-init ?
<harlowja> now that i see it has the full history there :-P
<harlowja> didn't know it had that, ha
<harlowja> i guess the last bzr move-over is https://github.com/stackforge/cloud-init/commit/d90bfd6268e8 ?
<harlowja> and then i guess stop accepting bzr changes, ha
<smoser> harlowja, hm..
<harlowja> and/or create a 0.7.x branch from that commit; update it
<harlowja> and have master be 2.x
<smoser> i think keep 0.7.x on bzr
<harlowja> seems like its on stackforge though :-P
<smoser> what do you mean ?
<harlowja> https://github.com/stackforge/cloud-init 2678 commits, ha
<harlowja> so there is some history there :-P
<harlowja> afaik nobody has done that many commits yet, lol
<harlowja> so seems useful to have some kind of branch at least
<harlowja> also then removes the need for https://github.com/number5/cloud-init 
<harlowja> ^ which is doing similar mirroring...
<harlowja> ^ whoever that is...
<smoser> hm..
<smoser> well, i basically consider 0.7.x in maintenance
<smoser> and figure just as well to do that in bzr on launchpad.
<smoser> but i dont know.
<smoser> yeah, i think keep it there for now.
<harlowja> ok, i'd make it easier for yahoo, but sureeee, since we wanted to mirror bzr -> internal git
<harlowja> makes it real easy if stackforge already does it :-P
<smoser> :)
<harlowja> sooo how about that :-P
<smoser> well, i'm not really opposed to it i guess.
<harlowja> i can push up a branch for the current 0.7.x on git (which seems to be from revision 1046 from bzr)
<smoser> do we know of anything that can mirror for us ? so that github.com/cloud-init/cloud-init is a mirror of stackforge/cloud-init ?
<harlowja> and get someone to update that branch
<harlowja> unsure
<harlowja> poke #openstack-infra folks?
<harlowja> i would think they would know something (some tool or other)
<harlowja> so i'll push up 0.7.x branch on stackforge
<harlowja> ok dokie?
<smoser> so you'll push 0.7.x branch to stackforge... how do you do that ?
<harlowja> magic
<harlowja> lol
<smoser> that wont force review ?
<harlowja> clone the stackforge repo
<harlowja> nope, no reviews for this
<harlowja> $ git checkout d90bfd6268e8
<harlowja> $ git checkout -b 0.7.x
<harlowja> $ git push gerrit 0.7.x:0.7.x
<harlowja> ^ is the sequence
<harlowja> arg, didn't work
<harlowja> ha
<smoser> hnm.
<smoser> well punt for now
<smoser> but if we wanted, i think we can make bzr (on launchpad) be a mirror of github easily enough
<smoser> launchpad supports that
<smoser> harlowja, so please confirm... python in rhel 6 is 2.6 ?
<smoser> if thats the case, i think it kind of forces 2.6 for cloud-init 2
<harlowja> yup
<harlowja> that is the case
<harlowja> stupid rhel6
<harlowja> lol
<smoser> ok. so 2.6 requirement for cloud-init 2. :-(
<harlowja> boo
<harlowja> lol
<harlowja> ya
<harlowja> not that hard, just pita, lol
<harlowja> guess i need https://review.openstack.org/#/c/172179/ to go in before branches can be made...
<harlowja> arg
<harlowja> guess we'll need that anyway so might as well get the ACLs over with...
<smoser> claudiupopa, if i wanted to write what versions of windows we'll support
<smoser> what would i write ?
<smoser> harlowja, can you make the above not refs/heads/stable/*
<smoser> but refs/heads/0.7.X ?
<smoser> or something to that effect.
<smoser> ie, at the point when there is stable 2.0.0, we dont want you going out of control
<harlowja> i think so
<harlowja> updated; let's see if people are ok with that
<harlowja> ha
<harlowja> u go out of control first, not me, ha
<smoser> whats the yum equivalent of 'apt-get update'
<smoser> ie, not upgrade
<harlowja> hmmm, forgot, thought it did the scanning automatically or something
<harlowja> yum clean && yum list (?)
<harlowja> i think that has similar effect
<smoser> makecache
<smoser> i think
<harlowja> could be
<smoser> how do i get python3?
<harlowja> in rhel6?
<harlowja> u don't, ha
<harlowja> :-P
<harlowja> u compile it, lol
<harlowja> then make altinstall or whatever
<claudiupopa> smoser: we currently support windows 2003+ upwards, but it will be EOL after 17 june (if I remember correctly). So starting from vista should be fair to document.
<harlowja> XP ftw!
<harlowja> lol
<claudiupopa> ;-) a decent os nevertheless.
<harlowja> def, my mom still runs it afaik
<harlowja> ha
<harlowja> guess i should fix that someday, lol
<smoser> claudiupopa, for the windows novice (i've just heard about this cool thing from seattle) what is that list ?
<smoser> ie, i dont know windows server names other than the ones you have just told me about above :)
<claudiupopa> one moment. ;-)
<claudiupopa> https://msdn.microsoft.com/en-us/library/windows/desktop/ms724832%28v=vs.85%29.aspx
<claudiupopa> Everything starting with Vista.
<claudiupopa> from Vista*
<harlowja> forgot all the 9x versions.... jeez
<harlowja> lol
<harlowja> or millennium (the best version ever)
<smoser> claudiupopa, thanks.
 * smoser saw windows 8.X for the first time this weekend on sisters new laptop
<harlowja> http://en.wikipedia.org/wiki/Windows_ME  was the best version ever ;-P
<harlowja> lol
<smoser> now i've used it for 3 minutes. uninstalled the antivirus software (norton or something) that was on there, and set windows defender instead.
<harlowja> last of the weird crappy windows versions, ha
<harlowja> ya, remove the crapware if the first thing i usually do, lol
<harlowja> *is the first
<smoser> oh, you'd have liked this, harlowja
<smoser> my sister was using yahoo to search for videos (youtube).
<harlowja> man, thats weird, ha
<smoser> i made a comment about not knowing i'd stepped into a time warp and being thrown back to 2002
<harlowja> :)
<harlowja> whattttt
<harlowja> haha
<smoser> i guess yahoo paid HP some money (or someone some money) to make it the default search engine in IE there.
 * harlowja can no longer associate with smoser 
<harlowja> nice, ha
<harlowja> well thx for supporting me smoser 
<harlowja> every click u do i get 1 piece of rice
<harlowja> so thx!
<smoser> hoohoo. 
 * harlowja has to feed all my kids...
<harlowja> who would otherwise starve...
<smoser> wow: http://www.ec2instances.info/
 * smoser remembers the days of m1.small, m1.large and m1.xlarge
<harlowja> ha
<harlowja> ya
<harlowja> crazy amount of choices there
<harlowja> too many...
<harlowja> way too many...
<harlowja> ^ is not what the cloud is supposed to be, lol
<harlowja> (for my version of cloud, ha)
<bodik> hi
<bodik> in our new version deployed by cloudguys at our grid site, they introduced a power_state: mode: reboot, which should reboot the machine at the end of the cloudinit work
<bodik> but at that time runcmd and bootcmds are already executed, nebula is reporting vm as running
<bodik> so i wonder how should i detect that vm is really ready after final reboot ? so i can start to really use the machine ?
<bodik> anyone has some trick beside sleep X to do it ?
<harlowja> use /var/lib/cloud/data/status.json
<harlowja> and examine that
<harlowja> ex http://paste.ubuntu.com/10786345/ 
<bodik> great thanx
<bodik> meanwhile i'm trying to use write_files to modify rc.local which is executed after the final reboot ;)
<bodik> but if that would not work i will try the status file
<bodik> thanx
<bodik> write_files does not work ;( rc.local is still executed after cloudinit but before reboot so the file is there too soon
#cloud-init 2015-04-10
<Odd_Bloke> smoser: How can I tell that cloud-init has finished running?
<Odd_Bloke> smoser: Is the existence of /var/lib/cloud/data/result.json sufficient?
<smoser> Odd_Bloke, yeah, result.json wont be written until finished. and status.json contains state as it goes.
<Odd_Bloke> smoser: Cool; and do you have any thoughts about using /run/cloud-init/... over /var/lib/cloud/data/... ?
<smoser>  /run  is what you want
<Odd_Bloke> smoser: Ah, these files aren't around on precise at all; is there an equivalent on that version?
<smoser> no.
<Odd_Bloke> utlemming: ^ is probably why you had the /tmp/cloud-init.done trick.
<Odd_Bloke> smoser: Thanks for the help! :)
<smoser> yeah.
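The result.json convention smoser confirms can be wrapped in a small polling helper. Paths are the ones from the discussion (`/run/cloud-init` preferred, `/var/lib/cloud/data` as fallback); everything else here is an illustrative sketch, not cloud-init code:

```python
import json
import os
import time

# Per the discussion: result.json is only written once cloud-init finishes,
# while status.json tracks state as it goes. /run is preferred where present.
DEFAULT_CANDIDATES = (
    "/run/cloud-init/result.json",
    "/var/lib/cloud/data/result.json",
)

def wait_for_cloud_init(candidates=DEFAULT_CANDIDATES, timeout=300, interval=5):
    """Block until one of the result.json paths exists, then return its contents."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        for path in candidates:
            if os.path.exists(path):
                with open(path) as fh:
                    return json.load(fh)
        time.sleep(interval)
    raise TimeoutError("cloud-init did not finish within %s seconds" % timeout)
```

On releases that predate these files (e.g. precise, per the exchange above) there is no equivalent, which is why sentinel tricks like `/tmp/cloud-init.done` were used.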
<kwaping> harlowja hola amigo
<harlowja> kwaping hola
<kwaping> I will try to be here more often, as suggested
<harlowja> cool
<harlowja> thx man
 * harlowja has to go poke infra to get https://review.openstack.org/#/c/172179/ in
<harlowja> so we can start fixing up the repo
<harlowja> kwaping smoser  ok https://review.openstack.org/#/c/172179/ is merging, so i'll try that branch create again soon
<harlowja> then kwaping  do u want to try to do the sync of that branch with bzr?
<harlowja> the ~10 patches that are missing
<kwaping> cool
<harlowja> thennnnn no more bzr?? ;)
<kwaping> you want me to do the bzr -> git thing?
<harlowja> sure, if u want
<smoser> bzr isn't so bad.
<smoser> :)
<harlowja> probably just need to take the last X patch files and submit them up for review
<harlowja> ;)
<harlowja> nice try smoser 
<harlowja> lol
<smoser> there are a couple nice things.
<harlowja> hmmmmm
<harlowja> smoser kwaping https://github.com/stackforge/cloud-init/tree/0.7.x is up
<harlowja> sooo feel free to start updating it ;)
<harlowja> thats revision 1046 afaik in bzr
 * harlowja would be nice to also have the right tags (or at least the 0.7.x tags)
<kwaping> woohoo
<harlowja> but need to know which git sha matches which bzr tag, ha
<harlowja> then can replicate them
<harlowja> kwaping when i run that bzr -> git thing; it appears the repo that it converts has the tags, sooo just need to push them up to gerrit, will do that
<harlowja> less work to do :-P
<kwaping> nice
<harlowja> ok, all 0.7 tags except for 0.7.6 (which i think is the most recent, which needs more commits) is up
<harlowja> i'll leave the 0.6.x tags off
<harlowja> since meh
<kwaping> should I still be building from that specific 1046 commit you mentioned?
<harlowja> kwaping yes
<harlowja> unless u push up the other 0.7.x patches and want to use those :-P
<harlowja> otherwise there are no other commits (1047...) in that repo, lol
<harlowja> but now at least u have a place to put said patches (on the 0.7.x branch)
<harlowja> vs having no place at all
<kwaping> ok
<harlowja> JayF ^ u should be happy with all that
<harlowja> ha
<harlowja> https://github.com/stackforge/cloud-init/releases shows up also, nice
<JayF> harlowja: so cloud-init v1 is in stackforge now?!
<harlowja> JayF ya, all commits are there, branch is now there, tags for 0.7.x are there...
<harlowja> i'd like to say no more bzr, but let's see ;)
<harlowja> need to update the 0.7.x branch there first
<JayF> harlowja: tl;dr: http://bit.ly/1axYBco ?
<JayF> :P
<harlowja> ha
<harlowja> def
<harlowja> https://github.com/stackforge/cloud-init/graphs/contributors is interesting to look at
#cloud-init 2015-04-11
<harmw> lol harlowja_away , I'm 5th on that page :>
#cloud-init 2016-04-11
<elsonrodriguez> Is there a way to define an environment variable in cloud-init that is visible by all child processes?
<smoser> elsonrodriguez, no. :-(. that would probably be useful.
<elsonrodriguez> yeah :(
<elsonrodriguez> I'm one of those dopes behind a proxy
<elsonrodriguez> Was just making sure I wasn't lying: https://bugs.launchpad.net/cloud-init/+bug/1089405/comments/13
<jfcastro> hi all! how can I configure cloud-init to using a device as swap?
<jfcastro> I saw that openstack do it but I'm trying to do manually without luck :(
<larsks> smoser: for testing out cloud-init, how do I tell it "just read this file, don't try to contact a data source"?  Just using --file doesn't seem to do that.  I'm looking at a solution to https://bugs.launchpad.net/cloud-init/+bug/1424710
<smatzek> larsks:  When testing cc module changes I've modified cloud.cfg to say datasource_list: ['None']
<larsks> smatzek: that seems reasonable.  Let me give that a shot...
<smatzek> once you do that, the semaphores you need to delete to allow things to re-run will be under  /var/lib/cloud/instances/iid-datasource-none/sem
<larsks> That I knew.  I was just looking for something like a --datasource cli option or something.  Your idea seems to work just grand, thanks.
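The override smatzek describes can live in a drop-in file rather than editing cloud.cfg itself; the filename here is arbitrary, any `.cfg` under cloud.cfg.d is merged in:

```yaml
# /etc/cloud/cloud.cfg.d/99-local-testing.cfg (illustrative name)
datasource_list: [ None ]
```

After that, the semaphores to delete for re-runs live under /var/lib/cloud/instances/iid-datasource-none/sem, as noted above.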
<larsks> Does anyone know if a 0.7.7 release is imminent?  We are discussing cloud-init packaging in RHEL and I would hate to get things into a distribution channel just before a new version drops.
#cloud-init 2016-04-12
<dmsimard> Hi ! I was looking to get an OpenStack environment going with CentOS guests with config-drive provided network configuration. It looks like I would need the RHEL version of http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1189?start_revid=1189
<dmsimard> harlowja: ^ I was told you might be able to help ?
<smoser> harlowja, ^
<harlowja> sup
<smoser> harlowja, dmsimard wants you to fix centos for network config
<smoser> and config drive :)
<harlowja> :-P
<harlowja> ya, damn
<harlowja> lol
<harlowja> i knew it, ha
<dmsimard> oh hai
<dmsimard> well, I'm willing to help and all
<dmsimard> friend here mgagne pointed me to an (older?) iteration of that implementation I guess, https://github.com/mgagne/cloud-init-fedora-pkg/blob/epel7/cloud-init-0.7.5-network-info-support.patch
<dmsimard> But the way it was just merged (three weeks ago, talk about timing) looks similar yet different
<mgagne> harlowja: original version here: https://github.com/racker/cloud-init-fedora-pkg
<dmsimard> cloud-init will be in core OS packages in RHEL 7.3 and CentOS 7.3 this year so it'd be great to have a version cut with that support baked in
<harlowja> mgagne dmsimard ya, i'd like to have that in also
<harlowja> do u guys want to work on that, or should i?
<harlowja> i moved companies recently, so i might have to do some of the legal CCLA crap again smoser
<harlowja> i gotta aks on that
<harlowja> *ask
<larsks> harlowja: which goes back to my question yesterday asking about a 0.7.7 release... :)
<dmsimard> I can help testing but I'm not knowledgeable enough about those bits to do it myself
<harlowja> smoser is mr.release ;)
<harlowja> he's gonna do git soon also
<harlowja> lol
<harlowja> i just know it
<harlowja> i believe
<harlowja> i'll send smoser a cookie if he does git
<harlowja> lol
<larsks> I would contribute to that cookie fund.
<harlowja> :)
<smoser> harlowja, git will come
<smoser> git willi come
<harlowja> smoser tommorow?
<harlowja> lol
<harlowja> today?
<smoser> after 16.04 releases (next thursday)... not before.
<harlowja> 16.04
<harlowja> damn it
<harlowja> another LTS
<harlowja> i just reinstalled 14.04
<harlowja> lol
<smoser> if spandhe joins maybe we can convince her to do it.
<smoser> she's still yahoo, right ?
<harlowja> ya
<harlowja> dmsimard mgagne if u guys want to get a patch working
<harlowja>  https://github.com/mgagne/cloud-init-fedora-pkg/blob/epel7/cloud-init-0.7.5-network-info-support.patch i guess ?
<harlowja> just maybe a few more tests ?
<harlowja> cause that patch doesn't seem to have too many
<harlowja> then maybe by the time those tests happen, smoser will have git
<harlowja> and we can all have cookies
<harlowja> i believe!
<harlowja> lol
<mgagne> harlowja: let me check if I have a more up2date version
<harlowja> mgagne  np
<mgagne> because we are reading network_info.json now, not vendor_data.json
<harlowja> right
<harlowja> that's fine, some network_info.json test cases, that give a sample network_info.json in and check the output would be cool
<harlowja> *and/or multiple sample inputs ---> check the outputs
<mgagne> that's scary, I think I fixed it for debian but not rhel
<mgagne> harlowja: for trusty, for what it's worth: https://gist.github.com/mgagne/46748012efa1ff3389b380a25bedb14d
<harlowja> cool, ya, tests for these would be nice ;)
<harlowja> like a bunch of input data files and expected output files
<harlowja> for debian and ubuntu and redhat and ...
<harlowja> maybe too far-fetched, butttt
<mgagne> I might not be the right person to ask to invest time on it atm ^^'
<harlowja> kk
<harlowja> dmsimard ?
<mgagne> don't know, doesn't work with us anymore =)
<harlowja> delegation not working
<mgagne> should patch be targeted for v2 or 0.7.x is fine?
<harlowja> must delegate harder, lol
<mgagne> still waiting for my clone to arrive from mail
<harlowja> UPS lose it?
<harlowja> lol
<mgagne> unfortunately yes =( they also say we can't ship bio stuff :P
<harlowja> ah
<harlowja> good point
<klindgren> I hear fedex or DHL is looser on the restrictions
<harlowja> nice nice
<harlowja> dmsimard let me know if u have time for those kinds of tests
<harlowja> if not, i can try, but no guarantees on time for that :-P
<harlowja> or smoser can get git and then we can accept patch, and i can start adding tests
<harlowja> and cookies
<larsks> Out of curiosity, what is the effort involved in moving to git?  Is it more than just import-from-bzr-export-to-github?
<harlowja> larsks we already have that
<harlowja> but its not in sync with bzr
<harlowja> larsks https://github.com/openstack/cloud-init is that (export from bzr, import to github)
<larsks> Right, I have noted that :).  Is the goal to maintain the bzr repo and keep it in sync automatically?
<harlowja> but bzr has a native mode
<larsks> Or why not just perform the manual sync once, then kill the bzr repo and go on our merry way?
<harlowja> i delegate question to smoser
<harlowja> :)
<larsks> I am using git locally (via git-remote-bzr), which works great for me but confused the heck out of dmsimard yesterday when I forgot and gave him a git commit id from my local repo...
<harlowja> i think the idea was to keep 0.7.x on bzr (and bzr-git) and anything new using that git above
<larsks> That seems clunky, because people are going to continue to make patches against the 0.7.x code for packages, and it would be nice to have it in the same repo as newer stuff for ease for cherry-picking, etc.
<harlowja> larsks  i know, its either that or we talk about killing bzr again :-P
<smoser> larsks, it would be git on launchpad.
<larsks> smoser: yeah, that's fine.
<harlowja> but only for 0.7.x
<harlowja> and then i think i'd ask the openstack-infra guys to kill the 0.7.x branch on  https://github.com/openstack/cloud-init version
<smoser> right.
<harlowja> which hopefully they would do
<smoser> the other thing that is needed is the revision number thing...
<larsks> Which revision number thing?
<dmsimard> Sorry was afk getting food, reading backlog
<smoser> harlowja, had something there. but basically i quite often make use of the bzr revision number for things (snapshots) and want some equivalent
<smoser> i forget what it is... theres a python module that gives snapshot numbers.
<harlowja> there's a few
<harlowja> pbr
<harlowja> a few others
<harlowja> https://github.com/habnabit/vcversioner (another one)
<harlowja> and i'm sure a few more
<smoser> that wasnt it i dont think
<larsks> Or just use short git commit ids (which are nice because they can be used directly in git commands, etc)
<smoser> yeah, but they dont increase.
<dmsimard> ok, I read all that
<larsks> True.
<dmsimard> I definitely can't write python tests to save my life, sorry :p
<larsks> That would be a great movie.  "Do you expect me to write a unit test? No, Mr. Bond; I expect you to die."
<harlowja> lol
 * smoser writes down not to trusty larsks's movie recommendations
<smoser> s/trusty/trust/
<smoser> (fingers type a y if i type trust)
<larsks> Yeah, Trusty is old news.
<dmsimard> harlowja: so was that patch a good fit for what landed in trunk, then ?
<dmsimard> minus the debian bits
 * harlowja hasn't looked it over too much yet
<harlowja> lol
<harlowja> dmsimard i'll get into looking at it more
<dmsimard> no emergencies or anything, I resorted to using dhcp for the time being :(
<harlowja> k
<dmsimard> Do let me know if you'd like to test anything though, this is something I can help with.
<harlowja> k
<harlowja> thx
#cloud-init 2016-04-14
<larsks> smoser: harlowja: are either of you familiar with the new systemd configs introduced between 0.7.6 and 0.7.7?  There are a couple of new targets, some udev configs, a systemd generator...are there docs on how all that is supposed to work?
<smoser> larsks, i do want to write some docs on that :)
<smoser> but yeah, i'm familiar.
<smoser> the generator really makes it so you can completely disable cloud-init easily
<larsks> That seems useful.
<smoser> just by: touch /etc/cloud/cloud-init.disabled
<smoser> then it never is even considered by systemd in boot
<smoser> also by cloud-init=disabled on the kernel command line
<smoser> so it wont do any of the annoying bottlenecks in boot that it would otherwise impose.
<smoser> the udev stuff stops networking from coming up until cloud-init-local is done
<smoser> it blocks the hotplug events
<smoser> and then when cloud-init has rendered the desired networking, it allows it to go on through
<larsks> Okay, that helps.
<smoser> that way we're not dealing with a system that started ifup on a device that cloud-init re-wrote the configuration for
<larsks> Quick question about dependencies in the unit files:  there is a dependency on "networking.service", which I'm assuming is an ubuntu-ism.  Would it make sense to simply replace that with a dependency on network.target (which is a standard systemd target)?
<larsks> Otherwise I need to muck about with the unit files for packaging, and I would like to minimize mucking as much as possible.
<larsks> Or make the dependency part of a drop-in rather than part of the main unit file, I guess (and the ubuntu packages could include the drop-in).
<smoser> larsks, yeah, networking.service is an ubuntu-ism (well, an ifupdown-ism)
<smoser> network.target isnt really good enough
<larsks> What exactly does networking.service provide?  Because there are *also* dependencies already on network-online.target...
 * larsks is obviously ubuntu-ignorant w/r/t network setup.
<smoser> well, it is really quite specific.
<smoser> networking.service is guaranteed to run before network.target is reached
<smoser> if we were just After network.target, then 'cloud-init' would run in parallel with all other things that are really only blocked on network.target
<smoser> which is a lot of things.
<smoser> since cloud-init basically consumes user-data at this stage, we want it to block as much of boot as possible
<smoser> by consumes, i mean it runs the boot-hooks
<larsks> Okay.  What about moving that dependency to a drop-in?  That would get you exactly the same behavior, but if we could keep the main unit files generic that makes packaging for non-ubuntu enviornments easier.
<smoser> o
<smoser> i'm not opposed to it if it works.
<smoser> or some other way that we could do it.
<larsks> I think drop-ins are the systemd approved way of doing this sort of thing.
<smoser> the generator could actually probably determine what the system will use and dtrt
<larsks> Oh, huh.
<larsks> That would be interesting.
<smoser> what do you mean by drop-in?
<larsks> You create a directory /etc/systemd/system/cloud-init.service.d, and you can drop files in there to extend the unit file without having to modify it.
<smoser> thats what i thought you meant.
<larsks> So from systemd's perspective, you end up with the same final configuration, but you get to keep distribution-specific dependencies out of the main unit file.
<smoser> yeah. i'm not opposed to that.
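The drop-in larsks describes might look like this for the Ubuntu/ifupdown dependency being discussed (directory name follows systemd convention; the file name is illustrative):

```ini
# /etc/systemd/system/cloud-init.service.d/ubuntu-networking.conf
[Unit]
After=networking.service
```

The main cloud-init.service unit then stays distribution-neutral, and only the Ubuntu packaging ships this fragment.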
<smoser> your'e looking to get this functional on centos/rhel ?
 * smoser would like that too
<larsks> Yeah.
<larsks> I guess we will need the network setting stuff, too.
<smoser> so, for that...
<smoser> right now in cloud-init on ubuntu it basically works like:
<smoser> a.) read openstack networking configuration format
<smoser> b.) convert it to "internal" format
<smoser> c.) render it
<smoser> c.) render it to /etc/network/interfaces and systemd.link files
<smoser> there is code i think in clod-init that will take a /etc/network/interfaces file as a "networking configuration format" and render it to redhat style networking format
<smoser> so that might be the easiest way to do this.
<larsks> Yeah, that should already exist.
<smoser> although ENI is kind of ...  sucky as a network config format
<smoser> ie, i'm afraod the conversion might be lossy
<smoser> afraid
<larsks> I will try to take a look at that soon unless mgagne or dmsimard gets to it first.
<dmsimard> yeah but "/etc/network/interfaces" format != network_data.json
<dmsimard> and network_data.json is the agreed way upstream to do it
<larsks> dmsimard: right, but smoser  was saying there exists network_data.json format -> ENI, and we also have ENI->redhat already.
<larsks> So you could use ENI as an intermediate format.
<dmsimard> ah, yes.
<larsks> Modulo smoser 's comments re: lossy conversion.
<smoser> because there exists ENI -> RH for the old crappy path where openstack declared networking in ENI format
<dmsimard> so that patch added the "translator" and ubuntu support
<dmsimard> we can definitely re-use the "translator", just need to add RHEL support
<smoser> but doing it well probably isnt that much work.
<larsks> Yeah.
<smoser> you have a fairly clean networking config format declared to you, and we can fix the "internal" stuff in cloud-init to be better if we need to
<smoser> the harder part is the next step..
<dmsimard> I can help testing an implementation for RHEL/CentOS but adding the support is a bit out of my league
<smoser> where we want to take a hotplug event and then hit the datasource and say "hey, i just got a network device, what should i do with it"
<smoser> and apply that.
<larsks> I need to fix https://bugs.launchpad.net/cloud-init/+bug/1424710 first before I can play with 0.7.7 much :)
<dmsimard> TIL about hostnamectl
<smoser> rharper, ping when youre in.
<rharper> smoser: here
<smoser> hey.
<smoser> https://bugs.launchpad.net/maas-images/+bug/1570142
<smoser> the one difference between cloud-init fallback rendered networking config (which doesnt work on iscsi root) and cloud-initramfs-dyn-netconf (which *does*) is 'auto eth0'
<smoser> rather than manual
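A sketch of the difference being described (hand-written; the actual rendered files may differ): with `auto`, ifupdown reconfigures eth0 during boot, which tears down the interface carrying the iscsi root, while `manual` leaves the initramfs-configured interface alone.

```
# cloud-init fallback rendering (breaks iscsi root):
auto eth0
iface eth0 inet dhcp

# cloud-initramfs-dyn-netconf (works):
iface eth0 inet manual
```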
<rharper> smoser: really here: was at the car shop, back home now
<smoser> k.
<smoser> so theres 2 things there really
<smoser> a.) we need a way for config to say 'manual'
<smoser> b.) cloud-init fallback needs to determine "this is manual" for the ip= fallback path.
<rharper> can you expand on (a); as it sounds like you mean more than just iface foo inet manual
<rharper> which can already be expressed in network config
<smoser> allow-auto
<smoser> or allow-hotplug
<smoser> rharper, http://paste.ubuntu.com/15831923/
<smoser> that'd be one suggestion; i arbitrarily picked the name 'control' to mean how this interface should be brought up or down
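The paste is no longer available; a hypothetical sketch of what such a `control` key could look like in v1 network config (key name and placement assumed from the discussion, not the merged syntax):

```yaml
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: dhcp
          control: auto      # would render as 'auto eth0' in ENI
    - type: physical
      name: eth1
      subnets:
        - type: manual
          control: manual    # present in config, but not brought up at boot
```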
<rharper> smoser: anything that's not 'auto' (which is an alias for allow-auto) or 'hotplug' (which we don't support in network config yet) gets ignored by ifquery as "manual"
<rharper> so the default is manual control
<rharper> certainly fallback code can use ifquery to determine which ifaces are present but manual (i.e. not in ifquery)
<smoser> not really.
<smoser> fallback is dictating which are present but manual
<smoser> it can't ask ifquery that
<rharper> fallback runs when networking didn't come up (at all or just one expected iface);  in the case that there's an eni, we can use ifquery vs. the set of nics we find from the kernel; and apply heuristics w.r.t what we select first and what we ignore;  in the case that there is no eni; then  we just have the heuristics and ignore list.
<rharper> I don't quite follow how a new field in eni would help
<smoser>  brb
<smoser> http://paste.ubuntu.com/15832136/ is what i think is sane.
<rharper> AFAICT, you're just excluding them from allow-auto (or auto); which means ifquery won't pick them up by default;  allow-manual removes it from the list of ifaces that ifquery looks for by default for bring all ifaces up;
<larsks> Slightly off topic, but...does anyone here have Azure experience?
<waldi> some
<larsks> waldi: in particular, do you have experience converting disk images to vhd format and uploading them successfully?
<waldi> yep
<waldi> code: https://gitlab.credativ.com/de/azure-manage/blob/develop/azure_manage/vhd.py
<larsks> waldi: thanks!  Looking.
<waldi> the code in qemu-img is broken unfortunately. i need to write a bug-report about it
<larsks> Interesting. At least that means I'm not crazy.  Or at least, not because of the conversion failures...
<waldi> qemu-img aligns the images to cylinder/head boundary. this is futile on Azure, which expects images aligned to whole MiB
<larsks> Do you know if earlier versions of qemu-img worked?  The microsoft docs seem to assume that there is a working version out there somewhere.
<waldi> not sure
<larsks> waldi: already fixed upstream; staged for next qemu-release: http://serverfault.com/questions/770378/problems-preparing-a-disk-image-for-upload-to-azure/770425#770425
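A sketch of the workaround, per the alignment issue waldi describes: pad the raw image to a whole-MiB size before converting to fixed VHD. It assumes a qemu-img new enough to support the `force_size` vpc option (the upstream fix referenced above); the filename `disk.raw` is just an example.

```sh
# Round a byte count up to the next whole-MiB boundary.
mib_round_up() {
    mb=$((1024 * 1024))
    echo $(( (($1 + mb - 1) / mb) * mb ))
}

rawdisk=disk.raw
if [ -f "$rawdisk" ]; then
    # current virtual size in bytes
    size=$(qemu-img info -f raw --output json "$rawdisk" \
           | sed -n 's/.*"virtual-size": *\([0-9]*\).*/\1/p')
    # pad to MiB, then convert to a fixed-size VHD as Azure expects
    qemu-img resize -f raw "$rawdisk" "$(mib_round_up "$size")"
    qemu-img convert -f raw -O vpc -o subformat=fixed,force_size \
        "$rawdisk" disk.vhd
fi
```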
#cloud-init 2017-04-10
<smoser> larsks, here. reading
<larsks> smoser: thanks. Any thoughts? Mostly I was confused by the "vendor-data is just like user-data" in the docs vs. the whole discussion about json blobs and a "cloud-init" key in the bug report (e.g., in comment #6).
<smoser> larsks, https://public.etherpad-mozilla.org/p/cloud-init-vendordata maybe helps ?
<larsks> smoser: but is this correct? http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html
<larsks> "Vendordata is handled exactly like user-data." E.g., pass it a '#cloud-config' file and everything is happy...
<larsks> That seems different from "The ideal situation is for vendor-data to come in a dictionary with a top level 'cloud-init' key."
<smoser> passing just a 'cloud-config' blob should work, yes.
<smoser> as the datasource would just load that as a string
<smoser> and pass it to convert_vendordata
<smoser> which would return that string
<larsks> But *also* a JSON document (with no #<type> header)?
<smoser> i'm not sure what you're getting at.
<larsks> Can the vendor-data blob be a JSON document? (instead of, e.g., a #cloud-config document)
<smoser> if its a '#cloud-config' blob that will work. but it will just be a single 'part'
<smoser> it could be, yes.
<smoser> but thats kind of up to the datasource code to know that this vendor might send json
<larsks> I see.  So for openstack, at least, the assumption is that there is a top-level 'cloud-init' key in the JSON, and the value of that key is a string that is a user-data style format of some sort.
<smoser> a string or a list
<smoser> but yeah, thats best. as it is then multiple parts (which is nice) and namespaced to cloud-init
<larsks> Sure.  Thanks. I'm trying to reproduce a problem someone reported and just trying to understand how vendor-data is handled. Thanks!
<smoser> yeah, those docs are not clear :-(
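Putting the exchange together, a minimal OpenStack `vendor_data.json` carrying a cloud-init payload under the namespaced top-level key might look like this (illustrative sketch; per smoser the value can be a string or a list of parts):

```json
{
  "cloud-init": "#cloud-config\npackages:\n  - htop\n"
}
```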
<utlemming> smoser: hey, sorry I just got back from vacation and was looking at https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/321623
<utlemming> it looks like that conflicts with my MP at: https://code.launchpad.net/~utlemming/cloud-init/+git/cloud-init-1/+merge/321183
<utlemming> specifically, my MP allows for arbitrary nics
<smoser> utlemming, well, i did drop the nic renaming. but that is by design.
<smoser> so the scenario that you were trying to allow for (i think) should never occur.
<smoser> you were papering over a problem where the metadata described nics attached that were not attached.
<smoser> (by mac address)
<smoser> i think
<smoser> yeah. if digital oceans metadata gives me bad data, cloud-init randomly picking a "first nic" is not really going to do anyone any good.
<utlemming> sure, that makes sense
<utlemming> I'm double checking my MP because I think yours and mine does the same thing
<utlemming> smoser: so https://git.launchpad.net/~utlemming/cloud-init/tree/cloudinit/sources/helpers/digitalocean.py?h=lp-1676908&id=ca7acab88e42d8861ea00b2e1ca12fb58ea7a7bb is what I proposed
<utlemming> and that drops the entire warn thing
<utlemming> let me fix that up real fast to give you the explicit check
<smoser> which warn thing ?
<utlemming> https://git.launchpad.net/~smoser/cloud-init/tree/cloudinit/sources/helpers/digitalocean.py?h=feature/digital-ocean-more-strict&id=09b7035d3f448b1f8b7fab29d3884c903bce8d97#n148
<smoser> utlemming, so that warn happens when cloud-init sees a 'nic type' that it doesnt understand.
<smoser> ie, 'public' or 'private'.
<smoser> if d.o added a new 'semi-private' type
<smoser> then cloud-init would warn there.
<utlemming> right, and I've dropped that logic
<smoser> which i think is sane. we could raise a RuntimeException
<smoser> but at very least we should warn.
<utlemming> I dropped it for a couple of reasons, mostly because
<utlemming> we have seen problems on our private management network with it
<utlemming> because we have a 'management' network that we use
<utlemming> and it breaks in that space
<smoser> well, what is the right thing to do there ?
<smoser> a.) handle correctly the 'management' network
<smoser> b.) WARN "hey, there is this thing called management i dont understand!"
<smoser> c.) FAIL "hey, there is this thing...."
<smoser> in my mp, i think that its just doing 'b'
<smoser> d.) add minimal knowledge of 'management' and change that to just do nothing but with a DEBUG message
<smoser> but doing nothing doesnt seem right
<smoser> utlemming, i have to run, i'm sorry. i'll be back in tomorrow AM
<erick3k> hello
<erick3k> nacc i ended up installing ubuntu 16 manually
<erick3k> 0 problems
<nacc> erick3k: rather than using cloud-init?
<utlemming> smoser: sure thing, I'll look at it and send something up :)
<utlemming> thanks for looking
<erick3k> nacc i mean rather than using cloud image
<erick3k> i installed ubuntu with the iso
<nacc> erick3k: ah
<erick3k> works like a charm man
<nacc> erick3k: but you are using the 14.04 cloud image?
<erick3k> except for the 5 mins timeout that i removed is perfect
<erick3k> yes 14.04 is cloud image and works
<erick3k> anyways
<erick3k> i installed debian 8 (iso cloud image also does not work properly)
<erick3k> and installed cloud-init but the / partition does not grow
<erick3k> tried installing cloud-initiramfs-growroot but does not exist
<erick3k> any idea whats the package needed ? i thought cloud-init came with it
<nacc> erick3k: no idea, maybe smoser or rharper know
<erick3k> ty
<erick3k> found it apt-get install cloud-initramfs-growroot i think
<rharper> growroot requires specific partitioning
<nacc> erick3k: ah, typo before?
<rharper> it basically calls resize2fs
<erick3k> nacc i guess haha
<rharper> so, if your root device has no extra partitions past where root is (like say root is on sda1 and no other partitions, but free space)
<rharper> then resize can grow
<erick3k> rharper by specific you mean a single / partition with ext4?
<rharper> y
<nacc> rharper: oh that makes sense -- just a general wrapper around that normal use-case?
<erick3k> yes thats how i install my templates so should work now
<rharper> nacc: yes
<rharper> openstack for example can have arbitrary sized block devices
<rharper> so they lay down the cloud-image, and upon first boot, will grow the rootfs to fill the disk
<nacc> rharper: right, that makes sense
<nacc> and means you don't need to know the disk size up front
<rharper> unless you inject your own config (cloud-init can also do disk partitioning and such)
<rharper> nacc: exactly
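The first-boot grow behavior rharper describes is driven by cloud-config like the following (these are, as far as I know, the defaults shipped in cloud images; shown for illustration):

```yaml
#cloud-config
growpart:
  mode: auto
  devices: ["/"]
resize_rootfs: true
```

`growpart` extends the partition to the end of the disk; `resize_rootfs` then grows the filesystem (resize2fs for ext4) to fill it.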
<erick3k> now it worked, wonder why apt-get install cloud-init doesn't install the growpart utility
<nacc> erick3k: presumably because installing that package changes functionality which should be opt-in not on by default?
<rharper> it's part of the cloud-guest-utils
<erick3k> nacc i agree but i think it is installed on ubuntu and not on debian
<rharper> so it's in the *guest* image by default;  the server ISO is not a cloud image itself
<erick3k> ok
<rharper> there's always a tradeoff w.r.t default package lists
<powersj> smoser: I found a bug with a reference to "ci-tool". Appears to be https://bazaar.launchpad.net/~smoser/cloud-init/ci-tool/view/head:/ci-tool
<powersj> Is this in cloud-init under a new name now or just a handy utility you had?
<rharper> powersj: I've not heard of ci-tool
<rharper> under bazaar means it's likely an older tool
<powersj> rharper: ok, thanks!
<rharper> looks like a handy util;  it could be merged under cloud-init/tools  if it's still useful
<powersj> rharper: there was a suggestion to put it in cloud-utils as well
<rharper> ah
<rharper> powersj: that's also a possibility
#cloud-init 2017-04-11
<sambetts> Hi cloud-init team, I have recently been experimenting with cloud-init 0.7.9 and Centos to try to make OpenStack network_data.json work and it seems I've found an issue when processing bonded interfaces in the sysconfig renderer, what is the process for getting this resolved? and/or how can I contribute?
<smoser> sambetts, please open a bug if you dont see any existing.
<smoser> if you're looking at helping out, https://github.com/openstack/cloud-init/blob/master/HACKING.rst
<smoser> powersj, just a utility.
<smoser> ideally i think that functionality gets into cloud-init as a sub command.
<sambetts> smoser: is the openstack/cloud-init repo the right repo to commit to? That repo is out of date with the launchpad.net one, launchpad is on 0.7.9, the openstack/cloud-init repo is only on 0.7.6
<smoser> shoot
<smoser> i gave a bad url
<smoser> follow HACKING. there is a push mirror of launchpad on github
<smoser>  https://github.com/cloud-init/cloud-init/commits/master
<smoser> but the openstack/cloud-init should probably get deleted.
<smoser> https://github.com/cloud-init/cloud-init is the github, but launchpad is where merge proposals are done so really use that.
<smoser> make sense ?
<sambetts> Yup thanks! whats the release process for cloud-init, e.g. if a bug fix goes in how long does it take to get tagged?
<smoser> well... it has varied a lot :)
<smoser> but recently coming more frequently, and i'd really like to be in the realm of 3 months or less between releases.
<smoser> we are nearing one now.
<smoser> utlemming, i'm around if you want to chat some about your MPs
<sambetts> awesome :) whats the cloud-init project's relationship with packaging? I noticed that the version included on centos7 is quite old
<utlemming> smoser: yup, I'm here
<smoser> sambetts, larsks i think is the best contact with centos packaging. We're working on getting integration tests flushed out so we can run tests on centos and such.
<smoser> and would like to get closer to sane packaging available in trunk so that it is easier for maintainers.... not sure if thats totally possible or desirable though
<sambetts> makes sense, thanks for answering my questions, hopefully I can get permission to contribute to the project :)
<utlemming> I also filed this little gem: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1681531
<utlemming> I was planning on taking a look at it later today. tl;dr is network.service fails because the nic's have been ifup'd already
<utlemming> (just realized that ifup'd looks like a dirty word...)
<smoser> utlemming, cloud-init should not 'ifup'... did you see it specifically doing that ?
<utlemming> smoser: yup
<utlemming> smoser: pulling a log now
<smoser> powersj, https://bugs.launchpad.net/cloud-init/+bug/1636531
<powersj> smoser: yes
<smoser> can you recreate that?
<powersj> if I modify my path then yes
<smoser> i think ideally we're not calling blkid... but even then if we are, i'd like to know what your change was there
<smoser> you said what you set it *to*
<powersj> :/sbin: is what was added
<smoser> odd that that was not in your path
<smoser> that is almost a bug itself
<smoser> ie, ubuntu has /sbin in path by default, so s390x ubuntu *not* having it is just asking for other people to find that
<smoser> dont worry about recreate, powersj : http://paste.ubuntu.com/24361950/
<powersj> smoser: I wouldn't say it was an ubuntu thing, rather a jenkins not having /sbin in the path by default
<smoser> if you made that change on all other jenkins, then i agree. but one should typically not have to set PATH ... why would jenkins not just inherit its default PATH as your user does ?
<powersj> smoser: I did update all slaves to that path as I put in the bug. the slaves run by default on a more limited path, I'm trying to find the default
<utlemming> smoser: three MP's that break up the improvements for DigitalOcean. One for the resolvers, one for configuring all Nics, and one for the NIC used for the meta-data selection. All tested and confirmed to work.
<smoser> utlemming, do those overlap too much to submit them without the others ?
<smoser> i have comments on each
<smoser> and the second and third have the first 2 so reading the diff is combined rather than individual
<utlemming> smoser: the first two definitely overlap
<utlemming> I can break the third up
<smoser> do you know how this affects sysconfig rendering ?
<utlemming> I don't....but I suspect that it will fix some of the problems that I've seen. I can try and get a test fix. Right now only Ubuntu and Debian use that DS for now.
<smoser> powersj, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/322386
<powersj> smoser: I like it, thank you
<utlemming> smoser: https://code.launchpad.net/~utlemming/cloud-init/+git/cloud-init-1/+merge/322385 is broken out from the others. Nice and clean.
<smoser> utlemming, i'm not sure ifindex is what you want
<smoser> https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net
<smoser> i dont see any indication on order or anything
<utlemming> smoser: in testing it, it represents the device order....starting with lo being 0, and ens3 being 1.
 * utlemming digs further on it
<utlemming> smoser: I actually think that its safe. The ifindex is incremented by 1 as the devices are added, regardless of the name. Since we are on the virtio-bus and there is a check to make sure that the interface is physical, the first nic will always have a lower index, even if its not 1.
<utlemming> i.e. if a user has defined 'dummy0' it wouldn't be a physical nic and therefore would be excluded from the check.
<smoser> rharper, ^ ? we were trying to kind of use ifindex to indicate order on the virtio bus
<smoser> utlemming, you have an instance up that i can ssh into ?
<smoser> or shall i launch one on digital ocean
<utlemming> smoser: root@138.197.119.118
<smoser> utlemming, where did you find '-1' ?
<smoser> utlemming, http://paste.ubuntu.com/24362776/
<smoser> that is basically "get nic with lowest ifindex". although i' still not sure if that is the right thing.
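A sketch of the "get nic with lowest ifindex" selection being debated (hand-written, not the actual DigitalOcean helper code). Physical devices expose a `device` symlink under sysfs; `lo` and virtual nics like `dummy0` do not. The sysfs root is a parameter only so the function can be exercised against a fake tree.

```sh
# Pick the physical nic with the lowest ifindex (probe order on virtio).
first_nic() {
    sysfs=${1:-/sys/class/net}
    for d in "$sysfs"/*; do
        # skip lo and virtual devices: they lack a 'device' symlink
        [ -e "$d/device" ] || continue
        printf '%s %s\n' "$(cat "$d/ifindex")" "${d##*/}"
    done | sort -n | awk 'NR==1 {print $2}'
}
```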
<smoser> powersj, would you +1 https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/322386 ?
<powersj> smoser: done
<utlemming> smoser: -1 was to indicate that a nic hadn't been tested yet
<utlemming> smoser: +1 on that....its clean
<smoser> utlemming, so update your MP then with that... and i guess drop the 'LOG.warn' that i added, just return None.
<smoser> as the caller there is going to raise RuntimeError
<utlemming> smoser: ack, doing so now
<cmart> Hi folks - I'm troubleshooting a particular OpenStack image that runs cloud-init 0.7.5 (old, I know). The image will launch on an older OpenStack deployment (running version "Havana") but not a newer OpenStack (running version "Newton"). Looking at cloud-init.log: on the newer cloud, cloud-init seems to be blocking the instance boot process. the last thing it does is finish the mode "init" (setting SSH configuration, host keys, authorized_keys, etc). it doe
<cmart> it appears that the process has hung but I don't have a usable console or way to log in locally and check -- at the moment I can only inspect the filesystem of the failed instance. Looking for troubleshooting tips :)
#cloud-init 2017-04-12
<cornfeedhobo> hellloooo
<cornfeedhobo> i am using AWS + centos 7. could someone tell me why this snippet, pretty much stolen straight from the docs, is not executing? https://gist.github.com/cornfeedhobo/35d60eca0a09ec4bbfb94a957d7520ff
<smoser> cornfeedhobo, could you paste /var/log/cloud-init.log and /var/log/cloud-init-output.log if there is anything interesting there.
<utlemming> cornfeedhobo: for CentOS? likely a dependency problem. I don't think CentOS includes growpart and sfdisk by default (I reserve the right to be wrong here...)
<cornfeedhobo> smoser: coming up
<cornfeedhobo> utlemming: interesting point
<cornfeedhobo> (i'll double check that too)
<cornfeedhobo> smoser: gist updated
<utlemming> cornfeedhobo: how about a dump of 'journalctl' too?
<smoser> well thats pretty useless
<utlemming> centos is a bit dense when it comes to logging on disk for Cloud-init, sadly
<smoser> normally cloud-init.log should have debug level output, which is quite noisy
<utlemming> you'd think so...but on CentOS/RH/Fedora, there is a packaging issue where the logs are supposed to be owned by 'adm' (as in Debian/Ubuntu) but the user is not created
<utlemming> the result is cloud-init.log is there, but empty
<cornfeedhobo> oh yeah, journalctl output is really verbose. updated again.
<cornfeedhobo> i'm going through it all now, hadn't thought to check journal.
<utlemming> cornfeedhobo: how about /etc/cloud/cloud.cfg? and /etc/cloud/cloud.cfg.d/*
<utlemming> looks like its not configured to setup the disks
<cornfeedhobo> yeah, i can paste those, but fwiw, this is just the stock AMI for centos7 ... oh interesting. incoming.
<cornfeedhobo> okay. updated. .d/* only has a logging cfg, but pasted
<utlemming> cornfeedhobo: yup, that's the problem...the default config doesn't setup disks
<smoser> cornfeedhobo, you should be able to run (and i'd be interested in the output of)
<cornfeedhobo> ah, i saw growpart and mounts and assumed
<smoser> sudo cloud-init single --debug --name=disk_setup --frequency=always
<smoser> er... maybe --debug before single
<cornfeedhobo> okay, let me make sure that module is on disk
<utlemming> cornfeedhobo: I just confirmed that the default config doesn't include that
<smoser> it'll be there. just disable.d
<utlemming> I would argue that this should be a bug in CentOS
<cornfeedhobo> hmmm, i cant find the modules folder on disk
<cornfeedhobo> sorry, i'm super new to cloud init
<cornfeedhobo> repoquery to the rescue, maybe
<cornfeedhobo> okay. testing.
<cornfeedhobo> https://gist.github.com/cornfeedhobo/64bcf6e2b760af27ddf16889b9849686
<cornfeedhobo> it doesn't seem to be loading it, but i see the python module in site-packages
<smoser> cornfeedhobo, look in /var/log/cloud-init.log see if you see
<smoser>   Running module disk_setup
<cornfeedhobo> okay. one sec. question first: is there a way to specify, in my aws userdata, that i want to add fs
<cornfeedhobo> fs_setup module to cloud_config_modules  **
<cornfeedhobo> from what i can tell, if i add a cloud_config_modules section, it overrides the one in cloud.cfg
<utlemming> cornfeedhobo: my $0.02 ZWD....copy /etc/cloud/cloud.cfg to /etc/cloud/cloud.cfg.d/99-me.cfg and add "- disk-setup" to the modules section
<cornfeedhobo> hmm, but that means i would have to create a custom AMI with that copied file in place, or kick off cloud-init again, after aws has executed it?
<utlemming> cornfeedhobo: there are other ways to convince cloud-init....
<cornfeedhobo> well, question: does cloud-init re-process the yaml files between init, config, and final?
<smoser> cornfeedhobo, yes.
<smoser> so you can feed that list in as user-data
<smoser> but it really should have run when you ran it with single there.
<cornfeedhobo> hmmm
<smoser> i might have to resort to littering /usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py with LOG.info or LOG.warn messages
<cornfeedhobo> maybe AWS invokes with more flags? something that pulls from http://169.254.169.254/latest/user-data ?
<cornfeedhobo> no, i dont see anything special in the service files
<cornfeedhobo> i'm going to retry with utlemming's $0.02, utilizing bootcmd
<cornfeedhobo> muhahahahaha, it worked
<cornfeedhobo> sed 's@cloud_config_modules:@cloud_config_modules:\n - disk-setup@' -i /etc/cloud/cloud.cfg
<cornfeedhobo> is fs_setup wrapped up in disk-setup?
<smoser> cornfeedhobo, the module name is 'disk_setup' (or disk-setup). that handles both top level config entries 'disk_setup' and 'fs_setup'
<cornfeedhobo> that is what i thought, especially after looking at the modules in site-packages. hmmm, i must just have an error in the fs_setup i just tried. i'm going to look around some logs
<cornfeedhobo> ah, interesting. "Unable to convert swap to a device"
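For reference, a user-data sketch exercising both top-level keys handled by the `disk_setup` module, plus a matching mount (device names here are hypothetical examples, not from the gists above):

```yaml
#cloud-config
disk_setup:
  /dev/xvdb:
    table_type: mbr
    layout: true
    overwrite: false

fs_setup:
  - label: data
    filesystem: ext4
    device: /dev/xvdb
    partition: auto

mounts:
  - [/dev/xvdb1, /data, ext4, "defaults,nofail"]
```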
<smoser> powersj, https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/320994
<powersj> smoser: sup
<smoser> can you tell me why i dont want to just pull that ? and why is the bug referenced there (1669306) ?
<powersj> 1) I need to update source and remove source-branch; those are pointing at my branch (that was me testing it and forgetting to change it back)
<smoser> yeah. just saw that and pushed a comment
<powersj> 2) the LP is a reference to "command: usr/bin/python3 $SNAP/bin/cloud-init" which is not the normal line
<powersj> it should have been as simple as command: cloud-init
<powersj> but there are some python3 issues with classic confined snaps documented in that LP I reference
<powersj> smoser: oh and do you want the source to be the master git url or to be '.'
<powersj> pointing at master i.e. https://git.launchpad.net/cloud-init made sense, but wanted to confirm
<smoser> powersj, i dont know, what do you think ?
<smoser> what is the "right snap" way to do it.
<smoser> i had this general question once looking at snaps
<powersj> smoser: well if we use '.' we don't checkout cloud-init yet again, however if you use '.' you can try snapping local changes. As far as convention, I didn't feel there was overarching best practice from what I saw.
<powersj> I do like using the git master URL though as it prevents you from doing something silly like making a local change and then having that make it somewhere
<smoser> well, you can pick one. and we can figure out why it was wrong later ;)
<powersj> hahaha
<powersj> smoser: updated with the URL
<cornfeedhobo> okay, got that sorted out. thanks for the help smoser & utlemming!!
<smoser> powersj, https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/322137 how can i see the pylint warnings ?
<powersj> smoser: edit the .pylint rc
<smoser> i just removed it.
<smoser> and i dont see anything
<powersj> that works too ;) but you may get even more messages
<smoser> sure. but i dont see ones, like if i run 'pylint tools/mock-meta' i see a bunch of errors, but nothing about the log.warning
<smoser> oh. now i do
<powersj> did you change something?
<smoser> just wasnt looking right
<devicenull> hey, we've started to run into issues with this change https://lists.ubuntu.com/archives/ubuntu-devel/2017-February/039697.html
<devicenull> basically, we used to use cloud-init, and dropped it ~4-6 months ago
<devicenull> however, a bunch of our customers had created OS images back when we used it
<devicenull> and now they're starting to run into error messages when they boot up from those images
<devicenull> and I don't really see any way for us to address that, given that the images already exist
<devicenull> backporting that to 16.04 seems like an issue
<devicenull> I'm guessing we'll probably resort to pretending to be ec2 if this ends up becoming a bigger issue
<nacc> if you stopped using cloud-init 4-6 months ago, how would a change in cloud-init from 2 months ago be affecting your images?
<nacc> devicenull: --^
<devicenull> customers still have images with cloud-init built in
<rharper> I think they're old images which still have cloud-init inside them
<devicenull> and that's getting updated (automatically?), which then triggers the message on next boot
<devicenull> correct
<rharper> sure, a dist-upgrade would update the cloud-init in them
<nacc> ah i thought they weren't able to boot the image
<nacc> so i wasn't sure how cloud-init was updating
<rharper> well, they're getting the warning
<rharper> it's bootable in xenial
<nacc> rharper: oh i see
<nacc> i read 'error message' as fatal
<devicenull> yea, we've only had a couple reports of it so far, so I'm not sure how big of a problem it's going to turn into
<rharper> the error message should include the cloud-config that the user can emit in to the image to disable the warning and the check
<rharper> it can also be a boot parameter
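The snippet rharper is referring to would, assuming this is the Ec2 strict platform check from the February 2017 change linked above, look something like the following (option name from memory of the 0.7.9-era Ec2 datasource; treat as a sketch):

```yaml
#cloud-config
datasource:
  Ec2:
    strict_id: false
```

The equivalent boot parameter form is a `cc:`-prefixed kernel command line argument carrying the same config.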
<devicenull> yea, it throws this https://gist.githubusercontent.com/devicenull/63f96ecf5a84c0c9163c09002726d204/raw/b4d0f508382fe4de9a33e5eaf19e7ac92fed8e23/gistfile1.txt
<devicenull> which then they end up reporting to us
<rharper> ack
<devicenull> we've just been telling them to remove cloud-init, but I thought I'd bring this up here
<rharper> thanks for reporting;  if possible, filing a bug for the issue with the details requested w.r.t the provider might help users find that it's been reported
<devicenull> yea, we'll probably add some documentation on it too
<rharper> we could also in there point back to your recommendation that best fits the cloud;  alternatively we could explore whether you could expose an identifier (ideally through SMBIOS settings)
<devicenull> I'm not sure you really want to be adding code detecting us, when we really don't use cloud-init anymore?
<rharper> possibly not; I suppose it may depend on the customer images
#cloud-init 2017-04-13
<blackboxsw> hey smoser, so where do things stand with the current SRU process? I'm trying to find an sru bug related to the work.
<smoser> blackboxsw, ok.. collecting some info
<blackboxsw> good deal, thanks.
<smoser> typing https://public.etherpad-mozilla.org/p/cloud-init-sru-info
<smoser> blackboxsw, so i guess the easiest thing to do is just start doing it... we can walk down the list and pick one by one. if you dont understand the test case / verification that i put in for anything, then you can either
<smoser> a.) ask questions
<smoser> b.) go to the next one
<smoser>  (until you run out and are forced to 'a')
<blackboxsw> yeah that sounds like a good loop. expect the a's
 * blackboxsw starts at the top
<blackboxsw> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1676460 xenial validation
<smoser> blackboxsw, many of the bugs reference 'lxc-proposed-snapshot'
<smoser> that one did not. it would be good to make yourself a lxc with proposed cloud-init in it.
<smoser> see http://pad.lv/1674766 for an example.
<blackboxsw> cool will do
<smoser> that requires 'mount-image-callback' (cloud-utils) .. and it requires lxc1 for 'lxcnsuserexec'
<smoser> err... lxc-nsuserexec
<smoser> so i just ran
<smoser>  ./bin/lxc-proposed-snapshot -vvv xenial --proposed xenial-proposed --publish
<smoser> which means i can now:
<smoser> lxc launch xenial-proposed xp1
<smoser> and it starts with cloud-init inside from -proposed
<blackboxsw> hrm /home/csmith/src/server/lxd:xenial-proposed-11791681: not a file   re-reading the script intent
<blackboxsw> and I have lxc1 & cloud-utils
<blackboxsw> per http://pastebin.ubuntu.com/24375059/
<smoser> newer cloud-utils needed. :-(. that support i think went into zesty.
<blackboxsw> updating... :/
<smoser> if you want, you can probably just grab it
<smoser> its a simplish script
<smoser> and put it in $PATH
<blackboxsw> yeah might just do that while waiting on my desktop upgrade.
<smoser> yikes. you are brave.
<smoser> (i run the development release but most do not :)
<smoser> the other option is doing this on an instance somewhere. that is why lxd is nice.
<smoser> blackboxsw, http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/view/head:/bin/mount-image-callback
<blackboxsw> ok, looks better thanks smoser with the path tweaked to reference the mount-image-callback script thanks.
<blackboxsw> have a quick errand, back in 20
<smoser> k
<ahasenack> blackboxsw_bbl: smoser reading the etherpad and backlog
<blackboxsw> meh, looks like my xenial flavor of lxc doesn't accept push -p
<ahasenack> smoser: ok, read it all, +1 on the plan to go down the list and note which bug we are verifying
<ahasenack> smoser: https://bugs.launchpad.net/cloud-init/+bug/1674946 has been commented by someone saying the fix worked
<smoser> blackboxsw, yeah. you can use the snap i think for lxd
<ahasenack> not the reporter, though
<smoser> but migth still have the issue with mount-image-callback there.
<smoser> ahasenack, so what i'd do is just comment:
<smoser>  I'm marking this verification-done-xenial based on comment X
<smoser> and then do that.
<ahasenack> ok
<smoser> ahasenack, blackboxsw i think you are both aware, but you tag with 'verification-done-xenial' and 'verification-done-yakkety'  rather than just 'verification-done'
<ahasenack> right
<blackboxsw> smoser, hmm looks like my xenial proposed validation fell over for https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1674766  http://pastebin.ubuntu.com/24375361/
<blackboxsw> the pastbin indicates ds-identify didn't process BigStep support right?
<blackboxsw> I'll double check that I have setup the seed files properly
<smoser> blackboxsw, right. let me try too
<blackboxsw> smoser this might be because I had started this lxc in the first place before adding the seed files in an attempt to work around my lxc file push not supporting -p option
<smoser> quite possibly. yes.
<blackboxsw> start would have run through cloud-init I expect
<blackboxsw> yeah sorting my lxc command version
<ahasenack> it's good to see the "before" case
<smoser> blackboxsw, yeah. so when working with cloud-init and lxc  i have some tools (like the lxc-proposed-snapshot) that help you to adjust a container without starting it.
<smoser> the other option is to start it and then "clean" it.
<smoser> the bin/do-reboot  script there helps do the clean
<ahasenack> smoser: config-drive newbie here, sorry
<ahasenack> how do I pass that to a container?
<ahasenack> a config-drive configuration, and where can i find out what it should look like?
<blackboxsw> TIL, thx smoser
<ahasenack> going through https://bugs.launchpad.net/nova-lxd/+bug/1673411 for some reference, looks like it has a helper script attached
<ahasenack> ah, it's in tools/
 * ahasenack sees the commented bits about network_data.json
<smoser> i've been walking up from the bottom
<smoser> but i'm going to leave https://bugs.launchpad.net/cloud-init/+bug/1665694 for one of you as an introduction to the integration test suite
<blackboxsw> smoser, great. ok finally sorted things and have a zesty box at my disposal
<blackboxsw> lxc file pull $name/run/cloud-init/result.json -
<blackboxsw> {
<blackboxsw>  "v1": {
<blackboxsw>   "datasource": "DataSourceBigstep",
<blackboxsw>   "errors": []
<blackboxsw>  }
<blackboxsw> }
<blackboxsw> ok progress. now time to do some work
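The result.json check above can be scripted; a minimal sketch, assuming only the structure shown in the paste (this is an illustrative reader, not a cloud-init tool):

```python
import json

# result.json as pulled from /run/cloud-init/ in the paste above
raw = """
{
 "v1": {
  "datasource": "DataSourceBigstep",
  "errors": []
 }
}
"""

result = json.loads(raw)
# A successful run reports the detected datasource and an empty error list.
assert result["v1"]["datasource"] == "DataSourceBigstep"
assert result["v1"]["errors"] == []
print("datasource verified:", result["v1"]["datasource"])
```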
<smoser> woot!
<smoser> blackboxsw, you can take the https://bugs.launchpad.net/cloud-init/+bug/1665694
<blackboxsw> will do thx
<ahasenack> smoser: can I trigger this bug with just lxd, faking hyperv somehow? https://bugs.launchpad.net/cloud-init/+bug/1674946
<ahasenack> I have a config drive in /config-drive in this lxd, with network_data.json, but it looks like it's not being read (I want to trigger the bug first, then check the fix)
<ahasenack> or maybe I'm being hit by https://bugs.launchpad.net/nova-lxd/+bug/1673411, but this cloud-init in -proposed should have that fix
<smoser> reading
<ahasenack> which is also in our list
<ahasenack> ah
<ahasenack> ok, got it
<smoser> ahasenack, (/me realizes for the first time that you have different internal and external personalities... andreas -> ahasenack)
<ahasenack> I wanted to test the "before" case, but I can't because I need the fixed cloud-init for it to parse /config-drive according to #1673411
<ahasenack> smoser: :)
<ahasenack> andreas was already taken here, my ninjas didn't find him
<smoser> so before the new version, you should be able to populate /var/lib/cloud/data/seed/ConfigDrive
<smoser> i think
<smoser> (with the same contents as /config-drive)
<ahasenack> smoser: ok
<smoser> and it should look there.
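A sketch of that workaround as a script; the real paths are /config-drive and /var/lib/cloud/data/seed/ConfigDrive inside the container, but temp dirs stand in here so it can be dry-run anywhere:

```shell
# Mirror a config-drive directory into the seed path pre-fix cloud-init reads.
SRC="$(mktemp -d)"              # stands in for /config-drive
SEED="$(mktemp -d)/ConfigDrive" # stands in for /var/lib/cloud/data/seed/ConfigDrive
echo '{"links": [], "networks": []}' > "$SRC/network_data.json"
mkdir -p "$SEED"
cp -a "$SRC"/. "$SEED"/         # copy contents, preserving attributes
test -f "$SEED/network_data.json" && echo "seeded ok"
```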
<ahasenack> yay
<ahasenack> I haz result.json
<blackboxsw> interesting in running tox on the cloud-init tests is that my cd drive started spinning up. wondering if we're leaking some calls to mount that drive from the unit tests
<smoser> blackboxsw, it probably is.
<smoser> someone working on freebsd found one of those the other day ... let me se
 * ahasenack injecting errors to see what happens
<ahasenack>    "Unknown network_data link type: dvs-andreas-was-here",
<ahasenack> good
<smoser> nice ahasenack !
<ahasenack> you made an MP against upstream to stop sending these deprecated link types, right
<ahasenack> https://review.openstack.org/#/c/400883/2
<smoser> yeah, and it just made it into trunk
<smoser> they're really not "deprecated". as they're real and useful link types at the hypervisor level
<smoser> but not at the guest level
<ahasenack> right, wrong choice of words
<ahasenack> smoser: is cloud-init also responsible for creating the ubuntu user?
<smoser> yes
<smoser> > precise at least
<ahasenack> smoser: ok, because once I injected that failure in the network, the ubuntu user wasn't created: https://pastebin.canonical.com/185908/
<smoser> yeah. if the datasource traces it wont go on
<ahasenack> ok
<smoser> well, in that way at least.
<ahasenack> sorry about the canonical pastebin
<ahasenack> habit
<ahasenack> and bookmarks
<smoser> ahasenack, well you can just use 'pastebinit' now!
<smoser> i am constantly doing:
<smoser>  xsel -o | pastebinit
<ahasenack> indeed!
<ahasenack> finally
<blackboxsw> smoser, looks like I'm hitting a dependency error trying to install pylxd as the tox -e citest environment
<smoser> (highlight something, run that, and then paste the url to someone)
<smoser> hm..
<blackboxsw> http://paste.ubuntu.com/24375992/
<smoser> i suspect you need python3-dev
<rharper> blackboxsw: ping powersj ; he's worked through those before
<smoser> yeah
<blackboxsw> powersj, has been invoked :)
<smoser> tox is wonderful
<smoser> until you have c extensions
<smoser> then it becomes honestly kind of silly in my opinion
<smoser> but still better than lots of other stuff
<smoser> blackboxsw, install python3-dev with apt
<smoser> and try again
<smoser> pylxc needs c bindings to lxc and thus the silliness. python3-dev is pretty light though.
<blackboxsw> getting closer. ok new traceback, missing C headers as well. I'll iterate over these to get them fixed. and report a list of the deps I had to install to get things working
<blackboxsw> ok needed only libssl-dev too. So, the list of packages I've installed on a fresh zesty system:
<blackboxsw> lxc1, cloud-utils, devscripts, python3-dev, libssl-dev
<nacc> we use lxc1?
<smoser> lxc1 provides lxc-usernsexec
<smoser> which is used by mount-image-callback lxd:foo
<smoser> to get the right user namespace when it chroots to the target
<rharper> nacc: yeah; I suggested we get lxd to implement lxc-usernsexec but I forget why they said not to
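Collected as one command, the dependency set from this exchange (package names as reported for zesty; a starting point, not an exhaustive list):

```shell
# Packages blackboxsw needed for 'tox -e citest' on a fresh zesty system:
# lxc1 supplies lxc-usernsexec (used by mount-image-callback lxd:foo),
# python3-dev and libssl-dev satisfy the C-extension builds.
pkgs="lxc1 cloud-utils devscripts python3-dev libssl-dev"
echo "sudo apt-get install -y $pkgs"
```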
<smoser> blackboxsw, http://paste.ubuntu.com/24376111/
<smoser> so inside an lxc container ^ worked.
<smoser> well, it shows me jumping into the container
<smoser> but that should list everything you need on xenial which i suspect is the same on zesty
<blackboxsw> excellent, will not disrupt my other systems then
<smoser> blackboxsw, well. you dont *really* want to run inside lxc
<smoser> because then you can't use zfs
<smoser> :)
<smoser> i'm not certain if it works or not fully. but lack of zfs is reason enough for me to not want to do it.
<blackboxsw> certainly. makes sense
<smoser> if you want... you can walk the merge proposal path ...
<smoser> with a doc enhancement to doc/rtd/topics/tests.rst
<smoser> with the above
<smoser> (the dependencies needed and such)
<blackboxsw> +1 smoser sure I'll put up an mp.
<ahasenack> smoser: ok, so I verified xenial and yakkety and added the respective verification-done-<release> tags. Someone else will flip the verification-needed tag into verification-done?
<ahasenack> ops, for bug https://bugs.launchpad.net/nova-lxd/+bug/1673411 that is
<smoser> ahasenack, no. you drop the verification-needed.
<smoser> changing it to verification-done-<x>
<smoser> really with 2 srus in flight at the same time... there really should be verification-needed-xenial and verification-needed-yakkety
<smoser> the sru process really just doesn't handle the different releases at once all that well
<ahasenack> smoser: how do they know they should hold pushing this to updates since we have the other bugs for the same package still being verified? Via debian/changelog?
<smoser> ahasenack, https://people.canonical.com/~ubuntu-archive/pending-sru.html
<smoser> look for cloud-init there. after everything gets verified, all its bugs should be green or gold (i dont understand the difference)
<ahasenack> ok, I see the proposed package associated with all the bugs
<blackboxsw> smoser, so I added verification-done-xenial verification-done-yakkety tags to https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1674766
<Rafael> Hello folks
<Guest678> can somebody help to clarify a doubt that I have
<Guest678> do I need any special configuration to enable cloud-init to reset SSH keys?
<Guest678> The first time I create an instance it assigns the configured SSH key to it, however, if I change the key, cloud-init is not updating it in the VM
<smoser> change the key ?
<smoser> hi Raboo
<smoser> bah. sorry Raboo tab complete fail.
<Guest678> yeap
<Guest678> I mean change the public key that is injected into the VM
<Guest678> When the VM is created cloud-init successfully installs the keys under ~/.ssh/authorized_keys
<Guest678> However, if I change the keys in my cloud orchestrator and then reboot the VM, cloud-init is not changing the keys
<smoser> Guest678, yeah, that is only done on first instance boot.
<Guest678> I am guessing it has something to do with these logs ""
<Guest678> yep
<Guest678> that is what I thought from reading the log files
<smoser> it is not kept in sync. that is a feature that we'd like to have at some point.
<Guest678> is there a way to configure this?
<Guest678> ah
<smoser> um... maybe.
<Guest678> is there a work around?
<Guest678> I thought that there would be a way to indicate which modules I want to run once per instance, or once per boot, or constantly with the VM running
<smoser> actually, not easily. you could change the ssh config module (which is what puts those keys into the default user's .ssh/authorized_keys)
<smoser> but changing that to run PER_ALWAYS (every boot) would not be ideal
<smoser> as that is the same module that wipes the host keys
<smoser> which you probably do not want to happen every boot
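For reference, the per-module frequency smoser mentions is set in /etc/cloud/cloud.cfg by listing a module together with a frequency; a sketch of the (not recommended) every-boot override:

```yaml
# /etc/cloud/cloud.cfg (excerpt, sketch only -- per the caveat above,
# re-running the ssh module also wipes and regenerates host keys)
cloud_init_modules:
  # ...other modules...
  - [ ssh, always ]   # default frequency is once per instance
```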
<smoser> what cloud provider is this ?
<Guest678> ah
<Guest678> the same script that configures SSH keys is setting up host keys?
<Guest678> CloudStack
<Guest678> the module you are talking about is the one called "ssh"?
<Guest678> I noticed some other module called "ssh-authkey-fingerprints"
<smoser> blackboxsw, on https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1674766
<smoser> (and the others you have done)
<smoser> the sru person will want to see a comment at least stating "i verified both xenial and yakkety following the test case template above"
<smoser> i like to copy and paste evidence of having done that
<smoser>  
<smoser> https://bugs.launchpad.net/cloud-init/+bug/1674685
<smoser> Guest678, yeah, that does sound silly doesnt it :)
<smoser> yes, it's ssh. we really should separate that out so you could tweak one without the other.
<smoser> but as it is right now... the datasources cache the data on disk (so as to not rely upon a metadata service on reboot)
<Guest678> hmm
<smoser> so.. they'd just re-apply the cached data.
<smoser> this is a feature we'd like to have, but it's not present now. sorry.
<blackboxsw> +1 smoser will add that now and close those bugs appropriately
<smoser> well, you dont change the state of the bugs..
<blackboxsw> s/close/tag
<Guest678> thanks @smoser
<smoser> once the bug moves from -proposed to -updates it will automatically do that.
<smoser> ahasenack, https://bugs.launchpad.net/cloud-init/+bug/1570325
<smoser> you can do that one... that is a integration test also. (for learning to at least run the integration test)
<ahasenack> deal
<smoser> ok. /me has to run.
<ahasenack> ok, https://bugs.launchpad.net/cloud-init/+bug/1674946 verified
<ahasenack> this will go much faster from now on
<ahasenack> ok, I'm EOD
 * blackboxsw is spending some time walking through those integration tests on cloud-init
<blackboxsw> see you smoser/ahasenack
<powersj> blackboxsw: https://cloudinit.readthedocs.io/en/latest/topics/tests.html also worth a read
<blackboxsw> excellent
<blackboxsw> keep 'em coming
<blackboxsw> actually, that's what I'm going to put up a simple doc mp for
<blackboxsw> just trying to wrap up some verification first
<powersj> blackboxsw: are you familiar with tox and running it?
<powersj> both cloud-init and curtin use tox as a wrapper around our unit tests and style tests. When we get a MR that is what we expect to pass
<blackboxsw> just a bit, TIL about environments. you guys have a bit more tox.ini config options/setup than services I've worked on
<powersj> you can look at tox.ini in the root dir of both projects
<powersj> ah ok :)
<dpb1> powersj: MR... wth
<powersj> O.o
<powersj> merge request?
<dpb1> step child of MP and PR
<powersj> lol
<blackboxsw> haha
 * powersj is split brain due to github and launchpad usage
<dpb1> :)
<powersj> Looks like pylint 1.7 was released today O.o
#cloud-init 2017-04-14
<powersj> https://paste.ubuntu.com/24377231/ are the new errors we get
<smoser> powersj, i dont understand those E0702
<smoser> powersj, https://github.com/PyCQA/pylint/issues/1419
<smoser> rharper, fun bond fix spinoff
<smoser>  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1682871
<smoser> blackboxsw, are you actually doing the maas one ?
<smoser> (bug 1677710)
<smoser> thats great if you are.
 * smoser hasn't had a maas around for a while.
<blackboxsw> smoser, yeah was going to give it a go
<rharper> smoser: =(
<rharper> I saw another thread about vlan_id=0 is somewhat undefined behavior
<smoser> i didnt try anything other than xenial and yakkety
<rharper> https://lwn.net/Articles/719297/
<rharper> 'In theory, VLAN 0 means "no VLAN". But the Linux kernel currently handles this differently depending on whether the VLAN module is loaded and whether a VLAN 0 interface was created. Sometimes the VLAN tag is stripped, sometimes not.
<blackboxsw> smoser, per 1677710, I deployed  a xenial (!proposed) machine on maas 2.20rc1.  I see  the following in /run/cloud-init/ds-identify.log:
<blackboxsw> single entry in datasource_list (MAAS) use that.
<blackboxsw> [up 4.70s] returning 0
<smoser> hm..
<blackboxsw> does this actually mean that ds-identify properly parsed the datasource?
<smoser> no. it means the install only had that enabled
<smoser> rharper, maybe you were missing that ?
<smoser> rharper found this when doing some ubuntu core stuff
<smoser> some versions of maas must pre-seed cloud-init with just MAAS
<smoser> which, yeah, means that ds-identify shortcuts.
<smoser> i had wondered why there wasnt much screaming over this
<smoser> :)
<rharper> what's going on ?
<rharper> some versions of maas?  how can it preseed the target?
<smoser> through the cloud-init preseed
<rharper> I don't know what that is (ephemeral or deployment) ?
<smoser> curtin passes the debian preseed through and dpkg-reconfigure
<rharper> for *ubuntu* images
<smoser> right.
<rharper> deb-conf set selections work
<smoser> and obviously that doesnt work for core images
<rharper> the issue for the bug is for images that don't use debs
<rharper> of course no  one complained as no one is using core yet
<smoser> well, yes.
<smoser> but i didn't realize it was still working in ubuntu
<rharper> you mentioned something that I missed ?
<smoser> i didn't think you understood it to be working in ubuntu either
<rharper> ds-identify detecting maas in ubuntu images without the change to ds-identify to parse cloud.cfg.d/ ?
<rharper> it entirely depends on where dpkg-reconfigure cloud-init wrote the cfg file
<rharper> if it updates a file in /etc/cloud/*maas*.cfg; and had MAAS in the file, it would work;  the issue was that it didn't search the cloud.cfg.d/*.cfg
<smoser> hm..
<rharper> oh
<rharper> no
<smoser> oh. so it was busted. and blackboxsw verified it fixed.
<rharper> they use pxe kernel cmdline, right ?
<rharper> in addition to the cloud config
<smoser> i didn't realize blackboxsw above said proposed
<rharper> they had two possible paths to tell it
<smoser> for ephemeral they do cmdline
<smoser> and it worked there.
<rharper> right
<smoser> but was busted in install images.
<smoser> but i'm confused why there wasnt more screaming.
<rharper> hrm
<smoser> wasnt/isnt/
<smoser> oh.
<smoser> i know
<smoser> it was just yakkety that was busted
<smoser> xenial is report only
<rharper> yes
<rharper> that's right
<rharper> so it'd say it found nothing but then work anyhow
<rharper> ie, not disable itself
<rharper> that doesn't quite line up with saying it found MAAS though
<rharper> I'd be happy to see /run/cloud-init/* and /etc/cloud/*
<rharper> from that instance
<rharper> then we can sort out what it did and did not find
<smoser> ?
<smoser> in above, it found a single datasource configured (MAAS)
<smoser> which indicates it searched /etc/cloud/cloud.cfg.d/
<smoser> blackboxsw, you could pastebin /run/cloud-init/ds-identify.log
<smoser> blackboxsw, ahasenack thanks. you guys have been a huge help
<rharper> smoser: the newer code does
<rharper> he said !proposed
<rharper> so that'd be the previous one, right ?
<rharper> which didn't search cloud.cfg.d
<smoser> i dont know
<smoser> i dont know how it could work.
<smoser> i'd like to see more.
 * smoser really wishes he had a maas again
<smoser> i do intend at some point to get beisner's work up joined with some updated "virtual-maas" stuff
<smoser> so that we can easily deploy a maas on diglett
<rharper> well, I know it didn't search the .cfg dir before; the code was pretty clear about what files it searched
<rharper> what's not clear is what cloud-init was tested in the above report
<rharper> report
<rharper> supposedly not the proposed one
<rharper> but we'll need the log and cloud dirs
<smoser> hey. i'm out. have a nice weekend all.
<rharper> ok
<rharper> you too
<blackboxsw> sorry smoser got pulled into a phone call.
<blackboxsw> getting that now
<blackboxsw> http://paste.ubuntu.com/24382246
 * blackboxsw catches up on backlog.  So yeah I had a maas 2.2.0rc1 available and when I deploy xenial to bare metal I didn't see any warnings on login about missing datasources per 1677710. I didn't tweak anything in preseeds on this install as I wanted/hoped to see the original failure mode
<blackboxsw> this is commandline login BTW I don't have physical access to the hardware
<blackboxsw> grabbing cloud-init/* and /etc/cloud/*
<blackboxsw> rharper, for future reference on this one http://paste.ubuntu.com/24382309/ here are my /var/cloud-init/* files . I'll keep poking at this
<blackboxsw> I had tested xenial (not xenial-proposed). ii  cloud-init                         0.7.9-48-g1c795b9-0ubuntu1~16.04.1           all          Init scripts for cloud instances
<rharper> blackboxsw: can you paste "find /etc/cloud"
<rharper>  /etc/cloud/cloud.cfg.d/90_dpkg.cfg set datasource_list: [ MAAS ]
<blackboxsw> grabbing that too. http://paste.ubuntu.com/24382329/
<rharper> there may be some issues here with report mode vs the strict handling
<blackboxsw> http://paste.ubuntu.com/24382335/
<blackboxsw> ^ /etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg with keys redacted
<smoser> ok. that makes sense.
<smoser> it always read that file
<smoser> and maas seeded cloud-init with that so it wrote it.
<smoser>  /etc/cloud/cloud.cfg.d/90_dpkg.cfg
<smoser> later
<blackboxsw> cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
<blackboxsw> # to update this file, run dpkg-reconfigure cloud-init
<blackboxsw> datasource_list: [ MAAS ]
<blackboxsw> have a good one smoser
#cloud-init 2018-04-09
<smoser> o/
<blackboxsw> minor nits on https://code.launchpad.net/~james-hogarth/cloud-init/+git/cloud-init/+merge/333657
<blackboxsw> oops I meant on https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342007
<blackboxsw> smoser: rharper the artifacts that jenkins is keeping around for a proposed run are 3.4 GB. The heavy hitters seem to be that we are keeping around qcow images in our artifacts. Think we can prune these ? https://jenkins.ubuntu.com/server/view/cloud-init/job/cloud-init-integration-proposed-a/lastSuccessfulBuild/artifact/
<rharper> blackboxsw: I'd think so
<rharper> in curtin, we purge images if we succeed and only keep them on failure
<rharper> also, you really should tar any qcow2 file you capture as the copy on jenkins isn't sparse aware and will fill up master
<blackboxsw> yeah I'd like to be able to attach the artifacts zip to cloud-init's SRU, but attaching 3.X gig file seems a bit irresponsible
<smoser> blackboxsw: i think we want to not collect qcow2 and .img files
<blackboxsw> +1 I'll put up a PR for jenkins-jobs
<blackboxsw> to prune that action
<blackboxsw> s/action/bloat
<smoser> so.. out of all those cloud-test-* files..
<smoser> i'd like to keep the console log
<smoser> the others i'd like to ditch
<smoser> actually the console logs are saved in the test output
<smoser> so dont need them from the top level
<blackboxsw> yeah was just discovering that too.
<smoser> let me see if we can put those files elsewhere so that they're not collected.
<rharper> https://bugs.launchpad.net/netplan/+bug/1664844
<ubot5`> Ubuntu bug 1664844 in netplan "No distinction between link-up and link-down interfaces" [High,In progress]
<rharper> we now have an 'optional' boolean to determine if wait-online cares about an interface
<rharper> I think any control-manual interfaces should get optional: False , but need to think some more
<smoser> well, reversed, but yes
<smoser> control-manual is optional: True
<rharper> ah, right
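A sketch of a netplan stanza using that boolean (interface name illustrative); a control-manual-style interface marked optional so wait-online does not block boot on it:

```yaml
network:
  version: 2
  ethernets:
    ens4:              # illustrative secondary NIC
      dhcp4: true
      optional: true   # wait-online will not block boot on this link
```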
<blackboxsw> smoser: rharper https://github.com/canonical-server/jenkins-jobs/pull/33 ?
<blackboxsw> this should avoid collecting that artifact for us in jenkins (we'd still be able to look through it if we connect to the jenkins cli)
<rharper> blackboxsw: smoser: ok, updated the ntp branch with something I think covers what we need; the unittest breakage was minimal.  smoser, in particular in the ntp branch I wanted to get your eyes on the decode_text() changes: it appears that we need to decode the template contents before passing to jinja for rendering on python2.7, so I've made changes to handle that but definitely want your comments there
<blackboxsw> good deal rharper
<blackboxsw> checking
<smoser> blackboxsw: so 'data_dir' as we use it is 98% "results dir"
#cloud-init 2018-04-10
<blackboxsw> another round of manual SRU results https://github.com/cloud-init/ubuntu-sru/pull/6
<blackboxsw> gce ^
<blackboxsw> all good there. just wanted to highlight it for tomorrow
<blackboxsw> rharper: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1762759
<ubot5`> Ubuntu bug 1762759 in cloud-init (Ubuntu) "cloud-init analyze: event timing gap in azure's events" [Undecided,New]
<rharper> blackboxsw: thx!
<blackboxsw> smoser: if you get a chance today, we can clean up and land https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342007
<smoser> blackboxsw: yeah.
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342940
<blackboxsw> hah, ohh we didn't turn it on in bionic
<blackboxsw> ok
<blackboxsw> smoser: was there a bug reported with that?
<blackboxsw> approved https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342940
<blackboxsw> if there was a bug there, please associate it with the changelog. If not, then good to land
<smoser> blackboxsw: there was not a bug... but i might as well report one
<blackboxsw> it'd help our ffe case
<blackboxsw> :)
<blackboxsw> rharper: smoser https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342428 is ready for re-review. just ran through a container test to confirm expected behavior
<rharper> blackboxsw: smoser: so the 10 seconds in the azure ds, is 100% attempting to resolve the __cloud_init_expected_not_found url
<rharper> https://paste.ubuntu.com/p/ftpgWdmMyk/
<smoser> :-(
<rharper> https://paste.ubuntu.com/p/2JDgrnybxy/
<smoser> rharper: does that reproduce after ?
<rharper> why would nslookup be faster than gethostbyname
<rharper> yeah
<rharper> it's permanent
<rharper> and it's not our upgrade, it's slow pre sru anyhow
<smoser> because .... wait for it ... systemd-resolved !
<rharper> yes
<rharper> but
<rharper> on artful
<rharper> I don't have bionic on azure to compare to see if it's fix there
<smoser> smells maybe like https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1730744
<ubot5`> Ubuntu bug 1739672 in systemd (Ubuntu Bionic) "duplicate for #1730744 Regression in getaddrinfo(): calls block for much longer on Bionic (compared to Xenial), please disable LLMNR" [High,Fix released]
<rharper> y
<rharper> yes it does
<rharper> LLMNR setting: yes
<rharper>  
<rharper> is in --status
 * rharper tries to disable per bug 
<rharper> restarting systemd-resolved with new config doesn't help
 * rharper reboots
<rharper> strange, status says it's enabled but resolving is fast
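For context, the workaround from the linked bug that rharper is trying here is disabling LLMNR in systemd-resolved, roughly:

```ini
# /etc/systemd/resolved.conf
[Resolve]
LLMNR=no
```

followed by restarting systemd-resolved (which, per the exchange above, did not take effect until a reboot).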
<rharper> smoser: the ntp branch is ready, if you've not already started, https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438
<smoser> rharper: yeah... i plan to, sorry so slow
<rharper> np, just wanted to get any changes in soon so we can upload to bionic
<rharper> I'm out on Friday
 * blackboxsw hits that final pass on ntp
<dpb1> its time has come.
<rharper> time to synchronize with master
<dpb1> yes, best not to let it drift too far
<rharper> but we don't want to get ahead of ourselves
<powersj> *slow clap*
 * rharper hands powersj some chrony 
<rharper> powersj: we'll fix that right up
<powersj> lol
 * rharper goes back to watching vmtest runs 
<smoser> blackboxsw: some feedback on https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342428
<blackboxsw> fielding now smoser thanks
<blackboxsw> rharper: review comments on ntp posted. just wondering about a bit more validation of the config fields
<blackboxsw> it's a bit of work, but that's high likelihood of  user-introduced failures
<blackboxsw> take or leave what you think is worthwhile
 * blackboxsw has to run a bit early today
#cloud-init 2018-04-11
<smoser> blackboxsw: i'm going to upload with the IBMCloud to ubuntu
<smoser> just the ubuntu/devel as it is
<blackboxsw> +1 smoser
<apollo13> hi, I am testing with the NoCloud provider and noticed that cloud-init will try to reconfigure if I change the instanceid; is there a way to only allow cloud-init to run once?
<apollo13> this is currently on a fedora atomic image
<blackboxsw> apollo13: I think you could touch /etc/cloud/cloud-init.disabled after first boot and cloud-init won't do anything anymore
<blackboxsw> per http://cloudinit.readthedocs.io/en/latest/topics/boot.html#generator
<rharper> apollo13: also manual_cache_clean: true  would work as well;  https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config.txt#n450
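The two options suggested above, spelled out; the snippet filename is illustrative:

```yaml
# Option 1 (blackboxsw): disable cloud-init entirely after first boot:
#   touch /etc/cloud/cloud-init.disabled
# Option 2 (rharper): keep cloud-init enabled but never act on an
# instance-id change unless the cache is cleaned manually --
# e.g. in /etc/cloud/cloud.cfg.d/99-manual-cache.cfg:
manual_cache_clean: true
```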
<blackboxsw> smoser: comments on https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342948
<blackboxsw> rharper: I'll pull together a quick patch adding ntp:config key validation and post it
<rharper> thx
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342948 updated.
<dpb1> ntp?
<smoser> on that now.
<blackboxsw> rharper: smoser my final patch suggestion ntp  validation http://paste.ubuntu.com/p/dcKcYrM5ty/
<smoser> i had just posted some coments there.
<smoser> rharper: i will check back in later. i have to go now.
<smoser> but your issues with util.read_file_or_url
<smoser> is that it was just busted.
<smoser> it is supposed to return bytes in .contents
<smoser> and str() should give you text on that thing.
<blackboxsw> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342948 merged
<smoser> you could just as well use load_file since you're loading a file in all cases
<smoser> http://paste.ubuntu.com/p/fHSjws2hR9/
<smoser> that set of changes works correctly, and uses the response of read_file_or_url "correctly" such that it works with a url or a file.
<blackboxsw> addressed review comments on https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342761
<blackboxsw> changed description
<rharper> smoser: my issue was that in tox -e python2.7 without my code changes, jinja blows up, not the unittests;
<rharper> jinja2 's render_template expects decoded str (ie, not utf-8); but it does load_file() on the template (chrony) and gets back utf-8 chars; those have to be decoded before you can call jinja.render_template()
<rharper> once we decode the contents before rendering, the next issue was python2.7 calling str.decode() fails because the default encoding for python2.7 wasn't UTF-8;
<rharper> that's my analysis;
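The failure mode rharper describes can be sketched with stdlib pieces (string.Template stands in for jinja2 here; util.load_file/util.decode_text are the cloud-init helpers under discussion):

```python
from string import Template  # stand-in for jinja2's renderer

# Simulate util.load_file() returning raw UTF-8 bytes from a template
# containing a non-ASCII character (as chrony.conf.debian.tmpl did).
raw = "# can\u2019t guarantee NTP pools: $pools\n".encode("utf-8")

# The fix under discussion: decode bytes to text *before* rendering,
# i.e. what util.decode_text() does. On python2.7, skipping this step
# makes the renderer fall back to ASCII and blow up on the U+2019.
text = raw.decode("utf-8")
rendered = Template(text).substitute(pools="0.pool.ntp.org")
assert "0.pool.ntp.org" in rendered
```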
<rharper> I'm working on switching to attribute; I like that better than the calling in init; let's see if that makes it easier
<smoser> something is fishy there still
<rharper> smoser: yes, the simple recreate is the test using the "real" templates in test_handle_ntp*.py;  if you run that one under py2.7 and remove the util.decode_text() calls, you'll see the jinja error
<rharper> blackboxsw: thanks for the schema update; pulling that in
<rharper> smoser:  I'm just about ready to push an update switch to property for the preferred_ntp_clients; that works quite well and avoids mocking in non-ntp related test cases that use distro objects
<blackboxsw> np rharper
<rharper> ok, i've pushed, I've not squashed yet but will do so before we land
<blackboxsw> lander squashes so you don't have to
<rharper> blackboxsw: interesting
 * rharper likes to squash 
#cloud-init 2018-04-12
<blackboxsw> personally I like seeing your separate commits post my last review. there's gotta be a way to do that once someone has already git push --force'd to a remote.
<blackboxsw> I guess I could just git diff rhrper-repo/branch-name and see it before I pull
<blackboxsw> personally I like seeing your separate commits after my most recent review.*
<smoser> rharper: so templates/chrony.conf.debian.tmpl:
<smoser> has a non-ascii char in it.
<smoser> and as you diagnosed, that is getting decoded somewhere as ascii in python27 jinja2
<smoser> and it just isnt ascii
<smoser> http://paste.ubuntu.com/p/c5jxV6Rg45/
<smoser> that recreates the basic failure
<smoser> https://stackoverflow.com/questions/22181944/using-utf-8-characters-in-a-jinja2-template
<smoser> well, ignore that. that failure wasnt right. but its somewhere around there.
<apollo13> blackboxsw, rharper: thanks, any reason to prefer manual_cache_clean over /etc/cloud/cloud-init.disabled or just personal pref?
<rcj> rharper: I see that you're working on chrony support (according to blackboxsw in https://cloud-init.github.io/status-2018-03-19.html#status-2018-03-19) Will that land for bionic?
<smoser> rcj: yes. will land today
<smoser> rharper: we need your network-ipv6 fix today too
<smoser> rcj: but the images to my knowledge do not have chrony installed.
<smoser> cloud-init will not install chrony if systemd-timesyncd is present, it will just configure that.
<rharper> smoser: network-ipv6?
<smoser> 10 second timeout
<rharper> oh, netplan
<rharper> yes, that would be really good to get an upload into bionic at least
<rharper> even if it needs an SRU to artful
<rharper> I have the PR in
<cyphermox> I'll review in a bit and upload that with some other bugfixes
<rharper> thx
<smoser> rharper: i had thought that that was in cloud-init ... sorry i missed that.
<rharper> smoser: I had a branch to disable-ra in cloud-init but after discussion with stgraber I realized the underlying issue was related to how systemd-networkd handles RA
<rharper> and what the default netplan setting was
<rharper> so that's probably why you thought we had some cloud-init work to do
<smoser> rharper: and yes... thank you for seeing that. i had in 2 places seen the ~ 13 second network config and just thought "wow, their dhcp server is really slow"
<rharper> yeah; it actually came up under the autopkgtest failures on bionic in lxd containers because the network wasn't yet online and apt update/install failed
<smoser> well,. that needs fixing :)
<smoser> sleep 5 && echo "everything ready!"
<rharper> they added some retry logic
<rharper> https://lists.ubuntu.com/archives/ubuntu-devel/2018-February/040138.html
<smoser> very much a need in Ubuntu for "wait until system is booted" command.
<rharper> folks disagree what "booted" means; I think that thread demonstrates that
<Cyclohexane> Is there a known issue with any of these packages? https://pastebin.com/kwT1Fs8c I'm following https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-migrate-ipv6.html#ipv6-dhcpv6-rhel and it works absolutely fine on a fresh install, then I run yum update and the whole thing goes caput, networking dies and the server becomes inaccessible...
<rharper> did you reboot or just after the yum update completes ?
<rharper> cloud-init isn't active after boot, so an update to cloud-init won't affect a running instance;
<Cyclohexane> i reboot yeah, as the amazon guide says
<rharper> cloud-init at that level isn't going to affect the network config; it's just dhcp on eth0
<rharper> if you look at the instance console-log, that should show if cloud-init ran, it dumps the network state of the instance
<Cyclohexane> yeah cloud-init runs and updates per 99-custom-network.cfg, but then when cloud-init runs netstat -rn the route table is dead, so it loses connectivity
<Cyclohexane> it works fine prior to running yum update, can reboot and it comes back up with ipv6 connectivity
<rharper> maybe the dhcp update then ?
<rharper> I suspect you may need to bisect your update ; upgrade a few at a time to see if keeps networking, or not
<smoser> http://paste.ubuntu.com/p/FsDxksfVS3/
<smoser> rharper: $ grep -r can.t templates/
<Cyclohexane> rharper: it seems to be this https://bugs.centos.org/view.php?id=14585
<rharper> smoser: ok, pushed an update to the branch to update template and drop the related changes
<rharper> Cyclohexane: yeah, not much info there;   you could 1) yum upgrade  2) make a copy of /etc/sysconfig/{network,network-scripts}  3) add the 99-custom-networking.cfg to /etc/cloud/cloud.cfg.d/ and then 4) cloud-init --force --debug init --local  which will re-run the initial cloud-init stage that renders network config;  then compare /etc/sysconfig/{network, network-scripts} contents with your copy and see whats different
<Cyclohexane> rharper: https://gist.githubusercontent.com/bytestream/cb6fa966875b902cdb34986047eca1b3/raw/1183714df137a7df9441600f242cab46df52ffa7/gistfile1.txt
<rharper> Cyclohexane: if /etc/sysconfig/network-scripts/ifcfg-eth0 is the same before and after, then it's not the config that cloud-init is generating
<rharper> Cyclohexane: if possible, you could add a second interface and configure that with a public ip, or a second vm in the same VPC both with dual nics in the same network; you could  then hop through the second interface to see what things look like after reboot
<blackboxsw> smoser: we good on https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342761?
<Cyclohexane> rharper: it's something to do with changes between cloud-init-0.7.9-9.el7.centos.2.x86_64 and cloud-init-0.7.9-9.el7.centos.6.x86_64, I downgraded cloud-init after yum update back to .2 and it's working fine again
<dpb1> Cyclohexane: you should file a bug, or add to that one you found already
<smoser> blackboxsw: acked.
<smoser> thanks
<smoser> rharper: you must have cherry picked my commit from trunk ?
<smoser> 0f7745619ab0a61d7dee5bde43e1e970ddf4a9b6 is on your branch but is
<smoser> never mind
<smoser> noise here
<smoser> rharper: responded on your mp
<smoser> i like it. thank you
<rharper> smoser: checking, thanks
<rharper> smoser: fixed  https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438
<smoser> blackboxsw:
<rharper> https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/343122
<smoser> rharper: isn't there some argparse -> bash completion ?
<rharper> smoser: maybe
<rharper> I'll look
<blackboxsw> ooooh raharper... nice I did bash completion for landscape, will check
<rharper> blackboxsw: yeah, dpb1 mentioned that
<blackboxsw> rharper: yeah per smoser's comment python3-argcomplete might do some of this work for us, and doesn't add any more dependencies besides that package
<rharper> well, I was interested in static generate of the file
<rharper> like build-time run argparse to shell code and pack that up
<blackboxsw> time-saver :)
<rharper> rather than having python runtime during tab-tab
<smoser> yeah. you'd think something like that would exist
<dpb1> ntp merged?
<rharper> dpb1: y
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/343127
<dpb1> rharper: woop!!!
<dpb1> upload?
<smoser> rharper: the files you got for templates
<smoser> they were exactly from debian/ubuntu ?
<smoser> with the "can?t"
<rharper> yes
<rharper> look at bionic's chrony.conf
<rharper> lemme fire up sid
<rharper> yeah
<smoser> blackboxsw: around?
<blackboxsw> smoser: yep
<blackboxsw> hangout?
<smoser> yeh
<blackboxsw> release related
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/343120
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/343127
<smoser> man. that exception_cb is a mess.
<smoser> who wrote this stuff
<smoser> readurl and wait_for_url are much different in what they pass
<smoser> blackboxsw: i'll be back in in ~ 3 hours i guess.
<blackboxsw> sounds good. just pushed robjo's changes  into my nettools branch, will have a devel branch up for you  before you get back. if you find the net-tools changes good I'll check in later to see if we want to land that
<blackboxsw> pushed https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/343136 for bionic release to include ntp/chrony fixes
<smoser> blackboxsw: merged yours and uploading
#cloud-init 2018-04-13
<blackboxsw> thx smoser I'm reworking ifconfig-drop branch to incorporate more review comments, it'll be tomorrow before it's ready
<blackboxsw> just found a bug in the dropping ifconfig/route branch on devices which don't have addresses (like deactivated wireless interfaces).
<blackboxsw> tweaking the logic and also attempting a regex approach instead of this separate for loop nonsense to see if it's faster
<blackboxsw> s/separate/decoupled/
<smoser> blackboxsw: help ?
<smoser> http://paste.ubuntu.com/p/sWZg7wbPNH/
<blackboxsw> checking. /me is neck deep in regex atm
<dpb1> blackboxsw: don't die
<blackboxsw> so python only imports a module once, right? once the mock is active it would apply to the global, wouldn't it, until the mock is released.
<nacc> smoser: are cloudinit.util and cloudinit.ssh_util.util the same thing?
 * blackboxsw checks to be sure
<blackboxsw> yeah they are the same
<nacc> i think this is a case where you need to mock the local util.write_file
<nacc> not the one in the class
<nacc> because it's already been imported at this point
<nacc> https://docs.python.org/3/library/unittest.mock.html#id5 the bit about mocking where something is used
<nacc> and importing puts things in the local namespace, not the source namespace (iirc)
<nacc> it's different if you do a from ... import too
<smoser> blackboxsw: yeah, they're the same thing. and yeah nacc is right i'm sure, but i just should have somehow known this before. i'm sure we do stuff like this other places in tests.
<smoser> but oh well
<nacc> smoser: i ran into this while writing some of our tests for git-ubuntu
<nacc> smoser: if you had just run 'import cloudinit', i believe your code would dtrt
<nacc> smoser: but since you import a specific module from cloudinit, it 'imports' it into the current modules' namespace
<nacc> so there's now a testme.util which is what you actually want to mock, i think
<nacc> (or so, there are some examples i've found in the past for what you are supposed to use for the module/class name)
<nacc> i can keep the logic in my head for about 1 minute and then it slips away again :)
<nacc> smoser: the frustrating part is, it also depends on how the tests run -- so sometimes things work when they really probably shouldn't (or you end up getting one test affecting all other tests in the same class/test file)
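nacc's point about patching the name where it is *used* rather than where it is defined can be sketched like this. All module names here (`testme`, the stand-in `util`) are invented for illustration and built in-memory so the example is self-contained:

```python
import sys
import types
from unittest import mock

# Stand-in for cloudinit/util.py (invented for illustration).
util = types.ModuleType("util")
def write_file(path, content):
    return ("wrote", path)
util.write_file = write_file

# Stand-in for the module under test, which effectively did
# `from cloudinit.util import write_file` at import time.
testme = types.ModuleType("testme")
testme.write_file = util.write_file       # the import copied the reference
testme.save = lambda: testme.write_file("/tmp/x", "data")
sys.modules["testme"] = testme            # so mock.patch can find it by name

# Rebinding util.write_file now does NOT change testme's copy...
util.write_file = lambda *a: ("patched",)
assert testme.save() == ("wrote", "/tmp/x")

# ...so the mock has to target the name in testme's namespace instead:
with mock.patch("testme.write_file", return_value=("mocked",)) as m:
    assert testme.save() == ("mocked",)
    m.assert_called_once_with("/tmp/x", "data")
```

The same reasoning applies to a real `from cloudinit.util import write_file`: the import binds the function into the importing module's namespace, and that binding is what the test has to patch.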
<seba> heyja
<seba> I'm currently playing around with the cloud init user_data config
<seba> I'm wondering... if the image I want to configure with cloud-init has a lot of stuff configured in its /etc/cloud/cloud.cfg (a debian in this case) how can I override these defaults?
<seba> the debian default cloud.cfg adds a default "debian" user. I don't want that, but it seems to be specified there and it gets added even if i specify my own users in my user_data
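One way to address seba's question, as a sketch (the user name and key below are invented): supplying a `users:` list in user-data that does not include the special `default` entry should replace the distro default user list rather than extend it; alternatively, a file dropped into /etc/cloud/cloud.cfg.d/ can override settings from the stock cloud.cfg.

```yaml
#cloud-config
# Because this list does not contain the special "default" entry,
# the image's default user ("debian" here) should not be created.
users:
  - name: admin                     # invented example user
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example  # your public key
```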
#cloud-init 2020-04-06
<andras-kovacs> I really don't get it now how the max_wait parameter works with OpenStack :(
<andras-kovacs> max_wait: 100 without any other timeout related settings tries to curl the metadata for what seems like forever (I was waiting for more than 5 minutes)
<andras-kovacs> But max_wait: 100, timeout: 10, retries: 5 waits for 130 seconds before giving up (which would be fine). Maybe it's my dumbness but I can't make the math work.
<andras-kovacs> and it doesn't work like it should... again :D
<andras-kovacs> so cloud-init with rhel 7.8 is not timing out for me
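A hedged reading of andras-kovacs' numbers: in cloud-init's URL polling, max_wait generally caps the *total* elapsed time while timeout bounds each *individual* request, so the total can overshoot max_wait by up to one more request. This toy loop is an illustration of that interaction, not cloud-init's actual code:

```python
import time

def poll_metadata(fetch, max_wait, timeout, sleep_between=1.0):
    """Toy model of a metadata wait loop: max_wait caps total elapsed
    time, timeout bounds each individual request, so the worst case is
    roughly max_wait plus one final request's timeout."""
    start = time.monotonic()
    attempts = 0
    while True:
        attempts += 1
        if fetch(timeout):          # pretend HTTP GET with per-request timeout
            return attempts
        if time.monotonic() - start >= max_wait:
            return -attempts        # gave up; negative marks failure
        time.sleep(sleep_between)
```

Under this model, max_wait: 100 with timeout: 10 can legitimately run past 100 seconds into a final request; and with no per-request timeout at all, a single hung request can stall far beyond max_wait, which would match the "seems like forever" behaviour.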
<blackboxsw> woot!  cloud-init | 20.1-10-g71af48df-0ubuntu3         | focal           | source, all
<blackboxsw> looks like focal cloud-init cherry-picks got released Odd_Bloke/rharper
<rharper> blackboxsw: \o/
<Odd_Bloke> \o/
<neale> I'm building my own cloud. How can I get cloud-init to configure the network if I don't have access to the local filesystem (it's a PXE booted squashfs that I don't want to regenerate for every node)? Is my best bet using the EC2 dataset provider and retooling my management network to include the link-local subnet?
<rharper> neale: cloud-init can read network-config via /proc/cmdline
<rharper> it also parses ip=  klibc formats
<neale> rharper: the base64-encoded v2 configuration?
<rharper> https://cloudinit.readthedocs.io/en/latest/topics/network-config.html
<rharper> or v2
<rharper> or v1
<neale> rharper: okay, cool. Is there no way to do it with the cloud-init yaml file?
<rharper> so, network-config needs to be provided by the datasource or the system config as we bring networking up to fetch user-data
<rharper> https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
<neale> Right, I'm already doing that configuration with `ip=::::${IPv4}:BOOTIF` and the `hwaddr=` cmdline argument.
<neale> But I think I have my answer, thank you very much!
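For reference, the two kernel command line forms rharper mentions look roughly like this (values invented; see the network-config docs linked above for the exact encoding):

```text
# klibc-style, parsed by cloud-init:
#   ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>
ip=192.0.2.10::192.0.2.1:255.255.255.0:node1:eth0:off

# or a complete network config passed on the cmdline as
# gzip-compressed, base64-encoded v1/v2 YAML:
network-config=<base64 of gzipped network-config YAML>
```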
<rharper> cool
<Odd_Bloke> FYI folks, I have a unit test doc PR at https://github.com/canonical/cloud-init/pull/311 and a pytest improvement PR at https://github.com/canonical/cloud-init/pull/304
<Odd_Bloke> blackboxsw: rharper: FYI: ^
<rharper> Odd_Bloke: thanks,  will review ... pushing up all the curtin PRs stuck behind the multipath one
<rharper> blackboxsw: I didn't forget your PR for daily builds too
<blackboxsw> rharper: no problem, was just wrapping up a review in ua-tools land
<blackboxsw> rharper: on my daily recipe build now.
<blackboxsw> will likely use git revert -n notation instead of git reset HEAD~#
<blackboxsw> will be cleaner, and less individual commits for our bulk --pop-all, new cherry-pick etc.
<rharper> blackboxsw: ah, ok
<rharper> sounds nicer
<andras-kovacs> your bugzilla is great! I really don't mind now that I got registered
<andras-kovacs> Do you have any idea which would be the most sophisticated way to increase LVM with cloud-init?
#cloud-init 2020-04-07
<blackboxsw> rharper: https://github.com/canonical/cloud-init/pull/308 is updated. and so is https://github.com/CanonicalLtd/uss-tableflip/pull/45
<blackboxsw> new instructions :)
<blackboxsw> will check back later
<blackboxsw> otherwise we can do this in the morning.
<marlinc> Are there some official Ubuntu instructions on how to clean an image after a build for the next boot? I'm already running a cloud-init clean, deleting SSH host keys and clearing apt caches
<andras-kovacs> marlinc:  search around for docker image bulding best practices
<andras-kovacs> and don't forget to reset the /etc/machine-id
<andras-kovacs> Do any of you know why runcmd doesn't work at all on a machine which uses LVM?
<Odd_Bloke> marlinc: Are you asking because you're seeing issues, or just to be sure?
<marlinc> Just to be sure Odd_Bloke, there's just tons of things that get generated and it's easy to forget something
<marlinc> Systemd's /etc/machine-id for example, I don't think I'm deleting that right now
<Odd_Bloke> So cloud-init will detect that it's running on a new instance, even without a clean, and regenerate host keys etc.
<Odd_Bloke> I don't know that we have much documentation around what else you might want to clean up.
<rharper> marlinc: it's really quite difficult.  I do have some hand tuned files that I remove to return an image to "pristine" ; it's not exhaustive;  I started with running find . -cnewer /etc/machine-id ;  it's challenging to know which files are created or modified; and to know that, you need to have the filesystem layout captured *before* it's been booted; and then you can diff the two lists and then make some patterns for removal
<rharper>  find / -xdev -cnewer /etc/machine-id
<Odd_Bloke> For most cases, you don't need to restore an image to its fully pristine state.  (That's more of a concern for us when testing, because we want to ensure that cloud-init behaves properly specifically against a pristine image.)
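rharper's `find / -xdev -cnewer /etc/machine-id` approach can also be sketched in Python (illustrative only; symlinks and special files are handled crudely):

```python
import os

def files_newer_than(root, ref="/etc/machine-id"):
    """List files under root whose inode change time (ctime) is newer
    than ref, staying on one filesystem like find's -xdev."""
    ref_ctime = os.stat(ref).st_ctime
    dev = os.stat(root).st_dev
    newer = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune directories on other filesystems (the -xdev part)
        dirnames[:] = [d for d in dirnames
                       if os.lstat(os.path.join(dirpath, d)).st_dev == dev]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.lstat(path).st_ctime > ref_ctime:
                    newer.append(path)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return newer
```

Run it once against the freshly built image and once after first boot, then diff the two lists to build removal patterns, as rharper describes.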
<HellMInd> Hello!
<HellMInd> Im trying to use proxmox on ovh, but in order to use cloud-init to assign an ip, i need a static route to make the gw valid. How should I do that ?
<HellMInd> Hello!
<sarnold> hey HellMInd, the webchat is working fine, I've just got no idea, and others who might are probably occupied elsewhere for the moment :)
<blackboxsw> rharper: if there is time today. all instructions are updated on https://github.com/canonical/cloud-init/pull/308 and  https://github.com/CanonicalLtd/uss-tableflip/pull/45
<Odd_Bloke> HellMInd: I'm not sure I fully understand the question.  Could you explain the problem you're having in a bit more detail?
<rharper> blackboxsw: yes, I'll review both today
<HellMInd> Im using Proxmox, and I want to create a debian kvm  template
<HellMInd> So  i need to use cloud init to assign an ip
<HellMInd> Since my box is in ovh, you need to setup a fake route to enable the gw because it's outside the subnet
<HellMInd> But how can I set that rule.
<HellMInd> I need to do some post-up  cmds, not just the ip
<Odd_Bloke> Let me make sure I'm understanding right: you're running Proxmox on an OVH node, and you would like to configure the instances launched on Proxmox with network configuration?
<HellMInd> yes
<HellMInd> I need a way to change the network templates to add some static routes
<Odd_Bloke> When you say "the network templates", you mean /etc/network/interfaces{,.d/*}?
<HellMInd> I think cloud-init uses some template to add  the interface file. But im not sure
<HellMInd> doesn't cloud-init use /etc/cloud/cloud.cfg/custom-networking.cfg ?
<rharper> HellMInd: https://cloudinit.readthedocs.io/en/latest/topics/network-config.html  ;  it's not a template; instead cloud-init will read configuration files written to disk, or in some platforms, it reads the configuration from a metadata-service ;
<rharper> if you need a static route, there are a few examples;  static routes  in eni are added as post-ups ,  https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html
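A v1 sketch of what rharper describes (addresses invented): static routes attached to a subnet, which the eni renderer writes out as post-up/pre-down commands:

```yaml
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 203.0.113.10/24
        gateway: 203.0.113.1
        routes:
          - network: 10.0.0.0
            netmask: 255.0.0.0
            gateway: 203.0.113.1
```

For the OVH case, where the gateway sits outside the assigned subnet, a host route to the gateway itself may be needed first; check the network-config-format-v1 docs linked above for the exact keys.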
<blackboxsw> Odd_Bloke, or rharper, if there is time, can you peek at a json schema addition that needs eyes https://github.com/canonical/cloud-init/pull/152?
<blackboxsw> I didn't want it to timeout and get closed
<rharper> blackboxsw: ok
<HellMInd> I did this, but no change: https://prnt.sc/ruxzbs
<HellMInd> "not a new instance. network config is not applied" says :S
<HellMInd> I dont  know  if that is some error,
<HellMInd> how can I debug that .cfg
<rharper> HellMInd: after each change, you'll want to cloud-init clean ; which will remove the instance-id file and tell cloud-init that it's booting a new instance ..
<HellMInd> thank you rharper
<HellMInd> tharper, if I want to change the ip from  proxmox?
<rharper> HellMInd: also, you can manually render the file, this is an awkward interface, but it exists:  cloud-init devel net-convert --network-data /path/to/input.cfg --kind yaml --distro <debian|ubuntu> --output-kind eni --directory /tmp/test-network-config/
<rharper> HellMInd: I'm not sure what you mean?  If you have proxmox give you a different IP for a VM, you'll need to reset cloud-init with the 'clean' command and reboot;
<HellMInd> devel: error: argument subcommand: invalid choice: 'net-convert'
<rharper> hrm, what version of cloud-init ?
<HellMInd> 18.3
<rharper> just shy, landed in 18.4
<HellMInd> dont know how to test the damn cfg,
<HellMInd> I just need to see some error, there are no error on log files
* powersj changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting April 14 16:15 UTC | 20.1 (Feb 18) | 20.2 (Apr 28) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> rharper: if you haven't reviewed https://github.com/canonical/cloud-init/pull/308 or https://github.com/CanonicalLtd/uss-tableflip/pull/45 yet, I think I can further simplify debian/cloud-init-cherry-picks by dropping the 'applied' column in that file.
<blackboxsw> any objections to be force pushing new ubuntu/devel based on that ?
<blackboxsw> any objections to *me* force pushing a new commit?
<Goneri> review of https://github.com/canonical/cloud-init/pull/298 welcome.
<blackboxsw> Goneri: good reminder. cheers. will get some eyes on this today
<blackboxsw> https://github.com/canonical/cloud-init/pull/308 and https://github.com/CanonicalLtd/uss-tableflip/pull/45 good
<rharper> blackboxsw: ok, finished my review, but not before you've pushed more changes; sorry some local drama interceded;  one question though;  should we land the cloud-init PR to fix ubuntu/devel while we work on improving #45 ?  or should we get #45 in shape first and then redo the cloud-init PR ?
<blackboxsw> rharper: I had drama related to that changeset too. I'm sorry about that. I wanted to make sure we were in a good state currently on ubuntu/devel by performing that initial "git add debian/cloud-init-cherry-picks; git commit -am 'Add debian/cherry-picks file'"
<blackboxsw> rharper: I suppose we could address the debian/cloud-init-cherry-pick* files afterwards.
<blackboxsw> that'd make 308 a standalone
<blackboxsw> yet you wouldn't have the --pop-all option for cherry-pick.
<blackboxsw> so the manual steps would be longer
<blackboxsw> hrm, you think it's worth maybe having cherry-pick put up a temporary tag before cherry-picking to ensure we reset back upon failure?
<blackboxsw> rharper: ^?
<blackboxsw> hrm this --push-all  is gonna be fragile especially if we have to use quilt directly to refresh patches (which then has no knowledge to update debian/cloud-init-cherry-picks with new commitishes)
<rharper> sorry
<rharper> here
<blackboxsw> I worry about putting too much stock in designing a fully-featured cherry-pick cmd. as generally I think initially we want to just fix daily recipe or revert all
<blackboxsw> as most of our releases also tend to be new-upstream-snapshots too
<blackboxsw> but I don't know the best path forward here at the moment.
<blackboxsw> I really think we need to unblock daily recipe builds on focal for testing purposes
<rharper> I guess it just seems to me that if we can push a single one on, we can pop one off.;
<blackboxsw> rharper: we can, but I tried being smart and collectively doing that in bulk to avoid git commit log noise
<blackboxsw> but, again, it's probably not worth the effort there given how complex I've made it
<blackboxsw> so, if individual push/pop noise is acceptable during cherry-pick reverts and re-applies, then it will make the tooling simpler
<rharper> ok, I see where you're coming from
<blackboxsw> and we can support the pop-all and push-all
<rharper> I'm going to have to run now; let's discuss tomorrow;
<blackboxsw> rharper: will do take care
#cloud-init 2020-04-08
<apollo13> hello, anyone around to help me with network configuration?
<apollo13> my cloud-init on debian properly writes the config to /etc/network/interfaces.d/50-cloud-init.cfg
<apollo13> but also generates a dhcp template in /run/network/interfaces.d which obviously takes precedence but won't work because there is no dhcp server in that network
<apollo13> any idea what is causing that?
<apollo13> also I need to rename 50-cloud-init.cfg to 50-cloud-init so that it works
<apollo13> okay, I fixed that by upgrading cloud-init to > 19.2
<apollo13> sadly that is not yet in the buster images, but apparently on its way
<apollo13> so, cloud-init network configuration configures dns-nameservers in /etc/network/interfaces yet the debian cloud images do not have resolvconf, how is that supposed to work?
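The rendered ifupdown stanza apollo13 is describing looks roughly like this (addresses invented); the `dns-nameservers` line is only acted on by resolvconf's ifupdown hook, which is why an image without resolvconf (or an equivalent) ends up with no working DNS:

```text
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    dns-nameservers 9.9.9.9
```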
<tribaal> apollo13: just a heads up - most of the active cloud-initers are in US timezones (I don't know how to answer your question personally)
<apollo13> I have opened https://salsa.debian.org/cloud-team/debian-cloud-images/-/issues/24
<apollo13> tribaal: thanks, my bouncer will stay online :)
<tribaal> apollo13: hehe ack :) I guess opening the bug report was the right reflex - it is a good asynchronous contact point as well.
<apollo13> in all fairness I am surprised how badly cloud-init seems to work every time I try it :(
<apollo13> I mean I guess it is probably my fault for using a static network configuration instead of DHCP and I fully understand that getting those things right is hard, but I fail massively every time I try cloud-init :(
<Odd_Bloke> apollo13: What is creating the file in /run/network/interfaces.d?
<apollo13> Odd_Bloke: the ifupdown-cloud-init-helper
<apollo13> but it only does this because cloud-init writes 50-cloud-init.cfg into /etc/network/interfaces.d which is an invalid name
<apollo13> so that part got fixed by upgrading to cloud-init > 19.2
<Odd_Bloke> I'm not familiar with ifupdown-cloud-init-helper, what is that?
<Odd_Bloke> Well, regardless, sounds like you're past that part of the issue.
<apollo13> yes
<apollo13> all that remains (and I also fixed that by remastering the debian cloud images) was DNS which got fixed by adding resolvconf to the image
<Odd_Bloke> So it sounds like this is an issue with the mastering of the Debian cloud images more than it is with cloud-init?
<apollo13> yes, seems like it
<apollo13> hence I opened an issue at https://salsa.debian.org/cloud-team/debian-cloud-images/-/issues/24
<apollo13> apparently they only test the dhcp boot cases, dunno…
<Odd_Bloke> OK, cool, was just making sure I'd understood fully.  Thanks for bringing the issue here (even if you fixed it before the core team woke up ;).
<apollo13> hehe, no issue after all…
<apollo13> although one could argue that the cloud-init debian package should probably depend on resolvconf (or at least something that is able to write dns into resolv.conf or somewhere else)
<apollo13> or can I use a different network renderer on debian that would work
<apollo13> Open for suggestions if anyone has any
<Odd_Bloke> I believe Debian package cloud-init themselves, so a Depends change would be a bugs.debian.org conversation.
<apollo13> Odd_Bloke: yeah, or at least for the cloud-images, anyways thanks for trying to help! much appreciated. It works kinda lovely from proxmox now
<apollo13> now if I were to figure out how to easily remove snap from the ubuntu server images, I could also use that :D
<Odd_Bloke> You could potentially automate that with cloud-init user-data.
<apollo13> sure, sadly proxmox cloud-init support is rather limited
<kqkq95> hi all ! i hope you are well :) , is cloud-init ready for CentOS 8 ?
<blackboxsw> kqkq95: cloud-init supports CentOS. tip of cloud-init has python3 support and centos8 looks to be python3.6. We should be good there, though I see stock centos8 has a dated cloud-init version 18.5 available
<blackboxsw> we are at 20.1 now
<blackboxsw> ~ 1.5 years of missing bugfixes/features
<blackboxsw> rharper: Odd_Bloke here are the  use-cases our release tooling is trying to automate https://hackmd.io/VbmtcZLyR4650aqqmfMMYg
<meena> blackboxsw: and how many bugs? have we introduced since then? :P
<rick_h_> meena:  3 definitely 3
<blackboxsw> haha!
<blackboxsw> kqkq95: also you can get latest cloud-init development bits from https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/
<Odd_Bloke> blackboxsw: Scanning through that now, thanks for putting it together!
<Odd_Bloke> My first note: Feature Freeze only applies to the devel release, but the wording suggests that we should behave differently for non-devel series during FF.
<blackboxsw> good pt Odd_Bloke changing
<Odd_Bloke> Thanks!
<blackboxsw> fixed
<kqkq95> blackboxsw: ok i have the latest version ;)  i have a problem with my vmware template (OS: CentOS 8, with open-vm-tools and cloud-init), the network configuration is not set :(
<blackboxsw> kqkq95: not quite sure, might be related to https://bugs.launchpad.net/cloud-init/+bug/1835205 maybe?
<ubot5> Ubuntu bug 1835205 in cloud-init "OVF datasource should check if instant id is still on VMware Platform" [Medium,Triaged]
<Odd_Bloke> So I am beginning to wonder if Robie's idea for a third git branch would be a cleaner way of handling this.  Currently, we merge `master -> release` and build from that.  If instead we merged `new_branch -> release` and then `master -> release`, we could (a) revert cherry-picks on `new_branch` as soon as we apply them to `release`, and (b) resolve any other merge conflicts in `new_branch` rather than
<Odd_Bloke> `release`.  (The key here is that `new_branch` only need be updated when it's necessary to fix the builds, most of the time it won't need to be touched.)
<kqkq95> blackboxsw: not really, at boot my network interfaces are disconnected, but only with cloud-init; i tested without cloud-init in my template and that works well :(
<Odd_Bloke> Basically `new_branch` is where we handle the munging required for `master` to cleanly merge into `release`.  This would be instead of us doing it in `release` (because doing it there is what leads us to have to think about popping cherry-picks etc.).
<Odd_Bloke> (Oh and when I say "if instead we merged", I'm talking in the daily recipe build.)
<rharper> Odd_Bloke: I guess I don't understand what's so objectionable to popping of cherry picks in the release branch;
<rharper> s/of/off
<rharper> my question with a third branch (or a daily sub-branch of the release branch) is: where do you land release changes first? (say I want to make a change to debian/control on ubuntu/devel); we will ultimately release this *eventually* but certainly we want daily to get it first ... so if we put it in ubuntu/devel first, we have to remember to merge that into ubuntu/devel/daily (for example), or reverse this, and we still need to remember to land the change
<rharper> on two branches
<rharper> if we are changing recipes, I would prefer the idea of having daily recipe "fix itself" by popping off cherry-picks, if any
<Odd_Bloke> Well, we don't have to remember per se, daily builds will fail and then we can fix it (or they won't and we don't).
<rharper> heh
<rharper> we already have that today;
<rharper> I was hoping to not have daily builds fail if we have a well-defined process for updating release branches and automatically cleaning up after releasing
<rharper> at least now we have exactly one release branch to maintain  vs. two per release going forward
<Odd_Bloke> From a practical perspective, popping off cherry-picks fails in (5) in Chad's doc.
<Odd_Bloke> Having to reapply them to add another one, and then pop them all back off feels fragile.
<Odd_Bloke> From a less directly practical perspective, having the release branches not represent what's actually in those releases is confusing (and annoying in a practical sense, `git diff ubuntu/devel` won't show me how my current tree differs from focal, for example).
<rharper> hrm, I would suggest that if only the recipe build popped cherry-picks off; that seems the least fragile;    the release branch stays in its "released" state;  daily reverts any cherry picks if present and builds it;
<Odd_Bloke> Well, I don't believe that's an option in the recipe builder.
<Odd_Bloke> And wouldn't help in the d/control scenario you laid out.
<rharper> https://help.launchpad.net/Packaging/SourceBuilds/BzrBuilder   suggest that we can use run <shell code here>
<Odd_Bloke> "Note: Launchpad does not support the run command."
<rharper> ah I see, the bzr-builder did but lp doesn't
<Odd_Bloke> Yep, for security reasons I believe/assume.
<rharper> so forced to put what we'd run into a branch;  it does help in the d/control scenario as we merge in the release branch where the new d/control change was made, and then just need to pop off the cherry picks before building
<rharper> due to the recipe build restrictions it sounds like we have to maintain a different branch if we don't want to push/pop for daily builds if the release has cherry picks
<Odd_Bloke> "merge in the release branch" <-- merge from where into where?
<rharper> master merges release into itself for recipe builds
<rharper> it's git checkout master git merge <release>
<rharper> that's daily ppa recipe
<Odd_Bloke> Sure, just wasn't sure if that's specifically what you meant. :)
<rharper> I wonder if we could test to see if a branch was present ...
<blackboxsw> yes per recipe builder, I was trying to utilize the 'run ' command in the build recipe to do that popping for us. but it seems unsupported in git build recipes
<rharper> it would be nice if the recipe would prefer ubuntu/devel/daily  if present, and if not fallback to ubuntu/devel
<rharper> blackboxsw: yeah, lp does not support run for security reasons
<Odd_Bloke> I guess the d/control case I'm thinking of is the pytest one: the daily build and the release build needed _different_ d/control values.
<rharper> but we cherry picked in a d/control fix though IIUC
<Odd_Bloke> So I would expect that if we merge in the correct order, `ubuntu/devel/daily` would not need to be kept up-to-date.
<rharper> so it's just one more cpick to pop off
<Odd_Bloke> We accepted having both test frameworks in our Build-Depends for a while, I think.
<rharper> Odd_Bloke: interesting ...
<rharper> tell me more  as there is multi branch merging  in the recipe
<blackboxsw> hrm ahhh
<blackboxsw> right so if all we care about for dailies is to drop debian/patches/cpicks- we could merge the ubuntu/devel/daily/debian/patches directory that only contains the base patches that matter for the release branch
<Odd_Bloke> That isn't where I was going (which isn't to say that it isn't worth going there :p).
<blackboxsw> so any cpick patches that we happen to apply to ubuntu/devel would be absent once ubuntu/devel/daily/debian/patches was 'merged' ontop
<rharper> blackboxsw: I don't think you can merge away changes ...
<rharper> unless the fix includes a commit that removes the files ..
<Odd_Bloke> So essentially what we're trying to avoid when we pop cpicks back off is merge conflicts, right?
<blackboxsw> right
<rharper> Odd_Bloke: yes
<rharper> so, if we create ubuntu/devel and ubuntu/devel/daily  at the same time;  then update our recipe to be:  lp:cloud-init master; merge ubuntu-pkg lp:cloud-init ubuntu/devel; merge ubuntu-pkg-fix  lp:cloud-init ubuntu/devel/daily ;  the last merge is no-op as all of the commits in daily are already merged ;; if that doesn't break
<Odd_Bloke> I think we need to merge ubuntu-pkg-fix before ubuntu-pkg, so that it can resolve merge conflicts.
<rharper> then if we added cpicks to ubuntu/devel and  update ubuntu/devel/daily with the same cpicks;  then we pop off on ubuntu/devel/daily and commit that
<rharper> Odd_Bloke: maybe;  I'm weak around the ordering ...
<Odd_Bloke> So my proposal is that we merge a third branch into `master` before `release`, which can resolve those merge conflicts.  When we have a conflict, we manually merge `master -> new_branch` and then `release -> new_branch` (which should cause the conflicts, which we can manually resolve).
<Odd_Bloke> That would mean, in the immediate term, that the second merge in the recipe would be a noop.
<Odd_Bloke> (The second merge being `release ->` the result of `new_branch -> master`.)
<rharper> I think two steps here 1) sorting out the right recipe with the third branch  2) updating the tooling to keep daily recipe sub branch in sync when we modify the release branch ;  for (2)  I know if we cpick we need to copy and then revert in daily    for new changes to release branch; (like d/control)  it's not clear to me if we need to keep daily in sync or not
 * blackboxsw was starting to document the possibility here https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?both#proposed-build-recipe
<blackboxsw> ahh but the 'merge' attempt of u/d/daily would result in merge conflicts
<Odd_Bloke> So, to be clear, I'm proposing all three are full trees that are merged together fully.
<Odd_Bloke> But there might be a better solution that doesn't do that.
<rharper> why would there be merge conflicts; sorry I'm a little git merge slow
<rharper> only if ubuntu/devel changes touched a file that is cpicked, right ?
<blackboxsw> rharper: following Odd_Bloke's suggestion of 3 separate branches. I guess we wouldn't if we did the following during cherry-pick
<blackboxsw> (from ubuntu/devel): cherry-pick mycommit; git push ubuntu/devel:ubuntu/devel/daily;  git checkout ubuntu/devel/daily; cherry-pick --pop-all;
<blackboxsw> so we cherry pick into u/d; push to u/d and u/d/daily; checkout u/d/daily and pop-all;
<blackboxsw> then we won't have merge conflicts; and we won't have to revert ubuntu/devel
<rharper> Odd_Bloke: we have master; we have ubuntu/devel, which is a fork of master on which we keep the debian dir (and sync it up with master from time to time); and ubuntu/devel/daily, which is a fork of ubuntu/devel that is updated only when we modify ubuntu/devel in a way that would make the daily recipe fail; the cpick scenario
<rharper> blackboxsw: yes
<rharper> and if ubuntu/devel has a new non-cpick commit to d/control, if we merge ubuntu/devel/daily over it;  there shouldn't be any conflicts AFAICT ;  daily is just behind ubuntu/devel but they don't overlap in changes
<blackboxsw> rharper: right any non-cpick is present already in ubuntu/devel/debian/patches and untouched by the --pop-all
<blackboxsw> so u/devel/daily would still contain the non-cpick debian/patches file
<blackboxsw> ok I *think* this clarifies what we  can do in cherry-pick for me if we are ok with a separate ubuntu/series/daily branch that automatically is updated during new-upstream-snapshot and cherry-pick calls
<blackboxsw> if cherry-pick/new-upstream-snapshot pushes to both ubuntu/<series> and ubuntu/<series>/daily then we don't have to do any reverts in ubuntu/<series>
<blackboxsw> and we don't have to manually maintain those two branches as it'll be hidden under the tooling
<Odd_Bloke> Let's figure out what the behaviour should be (e.g. when applying a cherry-pick to ubuntu/$series, a corresponding revert should be pushed to ubuntu/$series/daily) before thinking about automating it.
 * blackboxsw tries writing it out on the hackmd doc
<blackboxsw> Odd_Bloke: rharper if we were ok supporting a separate full ubuntu/<series>/daily branch anyway, there may be no reason to merge ubuntu/<series>/daily into ubuntu/<series>, as ubuntu/<series>/daily should have all the commits it needs plus only the drop of any cpicks.
<blackboxsw> so build recipe could remain simple. but I'm writing that out now
<Odd_Bloke> OK, hang on, I'm a little confused: we haven't popped the latest round of cherry-picks, but daily builds are succeeding.
<rharper> https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-devel
<rharper> says no
<Odd_Bloke> Oh haha, I was looking at bionic.
<Odd_Bloke> Very smart.
<rharper> blackboxsw: AFAIK we always want to merge ubuntu/devel/daily  ; it just may not have any content to add
<rharper> unless we have cpicks
<Odd_Bloke> OK, so the issue isn't really merge conflicts, actually, it's that quilt patches don't necessarily cleanly apply.
<rharper> blackboxsw: we want a stable recipe that always works; and once we diverge a release branch then daily contains the "fixups"
<rharper> Odd_Bloke: we can also put our refresh patches work on to daily
<Odd_Bloke> Right, though we would probably just revert them there?
<blackboxsw> rharper: if ubuntu/devel/daily has no cpicks to drop ubuntu/devel == ubuntu/devel/daily
<blackboxsw> if we just always push ubuntu/devel -> ubuntu/devel/daily with new-upstream-snapshot and cherry-pick, we never really have to merge
<blackboxsw> I think I tried capturing this in https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?view#proposed-branching-strategy
<Odd_Bloke> That would require a force push to ubuntu/$series/daily, I think
<Odd_Bloke> ?
<rharper> I don't want to force push;  and I don't see why we need to push anything into daily except cpicks
<Odd_Bloke> "tip of ubuntu/<series> plus commits to revert of any debian/patches/ named cpick-* or fix-cpick-*" implies force-pushing, I think.
<rharper> daily only needs to hold refresh debian/patches  and cpick pop-all
<blackboxsw> Odd_Bloke: I believe a force push would be required to daily right
<rharper> why ?  we merge in cpicks from release branch to apply a revert ;
<Odd_Bloke> Yeah, to have it be "tip + ..." it would have to be a force push.
<rharper> and we hold changes to debian/patches needed for refreshing release branch patches
<Odd_Bloke> (Maybe I'm just getting hung up on the wording here.)
<blackboxsw> rharper: because we reverted cpicks in u/$series/daily  but didn't revert in u/$series base branch
<rharper> if I branch ubuntu/devel and then ubuntu/devel/daily from ubuntu/devel; and I have master, merge in ubuntu/devel, and then merge in ubuntu/devel/daily
<blackboxsw> so how do we update u/$series/daily from u/$series without a force push?
<rharper> but the reverts play on top of the commits already merged in from ubuntu/devel ?
<Odd_Bloke> Merge?
<blackboxsw> yeah I guess we could keep merging u/$series
<rharper> daily only needs to apply on top of release
<Odd_Bloke> We shouldn't really need to touch /daily all that often.
<rharper> and it's "undo commits only"
<rharper> yes
<rharper> that's the idea
<Odd_Bloke> (Not even for most releases, I would hope.)
<rharper> normally daily will be in the same state as ubuntu/devel was at the start of the new series
<rharper> Odd_Bloke: yes;
<Odd_Bloke> I'm putting together a test recipe now.
<rharper> nice
<blackboxsw> yeah I think that recipe would be just this right? https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?view
<Odd_Bloke> Oh ubuntu/devel/daily is an invalid name because we have ubuntu/devel, I've gone for ubuntu/daily/devel for now.
 * blackboxsw didn't understand that Odd_Bloke . why is that invalid?
<blackboxsw> because the branch name we reference looks like a path under ubuntu/devel?
<Odd_Bloke> Because `.git/refs/heads/ubuntu/devel` is already a file, so it can't also be a directory, basically.
<blackboxsw> ahh righto
<rharper> bummer
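Odd_Bloke's point about ref files can be seen in any throwaway repo: a branch ref lives at `.git/refs/heads/<name>`, so an existing `ubuntu/devel` ref blocks creating anything under `ubuntu/devel/`. A quick demonstration (scratch repo only; messages are illustrative):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email t@example.com; git config user.name t
git commit -q --allow-empty -m init
git branch ubuntu/devel
# This fails: 'refs/heads/ubuntu/devel' is already a file, so it can't also
# be a directory holding 'refs/heads/ubuntu/devel/daily'.
git branch ubuntu/devel/daily 2>/dev/null || echo "rejected: ubuntu/devel already exists as a ref"
# Reversing the path components avoids the conflict.
git branch ubuntu/daily/devel
echo "ubuntu/daily/devel created fine"
```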
<Odd_Bloke> OK, so I have a working prototype recipe here: https://code.launchpad.net/~daniel-thewatkins/+recipe/oddbloke-cloud-init-daily-devel
<Odd_Bloke> My ubuntu/daily/devel has reverts of every cherry pick added since the previous release.
<blackboxsw> nice. /me wonders why the order seems backwards
<blackboxsw> I have the same understanding. if daily/devel
<Odd_Bloke> You have to revert in reverse order.
<blackboxsw> but I'd expect merge conflicts when you are merging ubuntu/devel into lp:cloud-init  on line 2
<Odd_Bloke> The issue is not merge conflicts, the issue is patches failing to apply.
<Odd_Bloke> And they aren't applied until the recipe is complete and both merges have happened.
<blackboxsw> ahh right. right.
<blackboxsw> +1
<blackboxsw> yes that makes sense in my head now
<Odd_Bloke> (I was earlier mistaken about this, so my earlier proposals presupposed that we needed to order the merges to be able to resolve conflicts.)
<Odd_Bloke> (Which is why I expressed them in a different order.)
<Odd_Bloke> blackboxsw: So I would suggest that we document this process and see how it works for us manually a couple of times before enshrining it in tools.
<blackboxsw> but again, why couldn't we just drop the merge ubuntu-pkg lp:~daniel-thewatkins/cloud-init/+git/cloud-init ubuntu/devel
<blackboxsw> as ubuntu/daily/devel will be devel + drops
<Odd_Bloke> It is currently.
<blackboxsw> I mean 'omit the line that says " merge ubuntu-pkg lp:~daniel-thewatkins/cloud-init/+git/cloud-init ubuntu/devel" in your recipe
<Odd_Bloke> I understood.
<Odd_Bloke> ubuntu/daily/devel is currently devel + drops.
<Odd_Bloke> But we don't want to have to update ubuntu/daily/devel every time we update ubuntu/devel.
<Odd_Bloke> OK, so I just did a `new-upstream-release master` and pushed to my repo and the build is now failing.
<blackboxsw> ok Odd_Bloke I see. right we only need to update it when we cherry-pick or drop cherry-picks
<blackboxsw> Odd_Bloke: I'd expect that if we aren't pushing to ubuntu/daily/devel every change to ubuntu/devel
<Odd_Bloke> It looks like because the fix-cpick-* patch wasn't dropped automatically by new-upstream-release.
<blackboxsw> Odd_Bloke: right that fix is also in my PR #45 on uss-tableflip
<blackboxsw> fix-cpick* and cpick-* get handled
<blackboxsw> s/handled/popped
<blackboxsw> by both new-upstream-snapshot and cherry-pick --pop-all
<Odd_Bloke> OK, with all patches dropped, the build succeeded.
<Odd_Bloke> Without updating ubuntu/daily/devel.
<blackboxsw> so Odd_Bloke if we only update ubuntu/daily/devel when cherry-picks are applied. then procedure looks like this maybe?:   git checkout ubuntu/devel; cherry-pick <upstream_cpick_commit>; git checkout ubuntu/daily/devel; git merge ubuntu/devel; git revert <local_cpick_commit>; git push ubuntu/daily/devel;
<blackboxsw> so individually each time we cherry-pick, we git revert that local cherry-pick commit and push to ubuntu/daily/devel
<Odd_Bloke> Something along those lines, yes.
<rharper> Odd_Bloke: catching up;   "with all patches drop the build succeeded/Without updating ubuntu/daily/devel"   -- that exactly what I was hoping.  Excellent!
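The flow the three of them converge on can be sketched against a throwaway repo, so the fork/cherry-pick/revert mechanics are visible end to end. Branch names come from the discussion; the file name and commit messages are made up:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email t@example.com; git config user.name t
echo upstream > src; git add src; git commit -qm "upstream master"
git checkout -qb ubuntu/devel                 # packaging branch forked from master
echo pkg > control; git add control; git commit -qm "add packaging"
git branch ubuntu/daily/devel                 # daily forks from ubuntu/devel
echo fix > cpick-1234-fix; git add cpick-1234-fix   # hypothetical cpick patch file
git commit -qm "cpick: backport upstream fix"
git checkout -q ubuntu/daily/devel
git merge -q ubuntu/devel                     # bring the cpick commit over...
git revert --no-edit HEAD >/dev/null          # ...then revert it on daily only
test ! -e cpick-1234-fix && echo "daily carries the revert; ubuntu/devel keeps the cpick"
```

Because the revert lands only on ubuntu/daily/devel, merging daily on top of ubuntu/devel in the recipe drops the cpick without touching the release branch.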
<johnsonshi> Anybody have issues with running `tox -e pylint` recently? I constantly hit the "unable to import pytest" error, but when I try to do so through the py2/py3 console everything seems fine. This is on cloud-init master btw so unsure if this is a bug.
<rharper> johnsonshi: I've not, let me see if I can reproduce
<rharper> johnsonshi: tox -r -e pylint works for me
<rharper> I used -r to recreate the env fresh which means pulling down the pip packages into the virtenv
<johnsonshi> rharper: Yeah -r passed. Thanks!
<rharper> cool!
#cloud-init 2020-04-09
<kqkq95> hi all ! i use Cloud-init with foreman, I have a vmware template under Centos 8, and when my vm starts my network interfaces are disconnected, I use open-vm-tools-10.3.10-3.el8 and cloud-init v18.5-7.el8, and when I delete the cloud-init package from my template it works fine, I have been looking for several hours but nothing
<kqkq95> ?
<kqkq95> nobody uses cloud-init and open-vm-tools ?
<Odd_Bloke> johnsonshi: tox does attempt to detect when declared dependencies have changed and automatically recreate, but it isn't always perfect (as you now know ;).
<Odd_Bloke> kqkq95: The cloud-init core team are all based in US timezones, so we're only just coming online now.  There are ongoing issues with VMWare and cloud-init, which the cloud-init core team don't have the expertise to address.  We work with the VMWare developers who contribute to cloud-init, but as we don't have a depth of VMWare knowledge (or access to different environments etc.), there's only so much we
<Odd_Bloke> can do independent of them.
<Odd_Bloke> kqkq95: All that said, if you could run `cloud-init collect-logs` in a failing instance and make that tarball available somewhere, then I can see if there's anything obvious going on. :)
<apollo13> kqkq95: did you get guest customizations to work with a golden image against vmware?
<apollo13> I always fail because vmware won't detect that the open-vm-tools are installed till I start the vm the first time
<apollo13> which I especially do not want to do because it is a clean template
<kqkq95> hi all
<kqkq95> yes i use a vmware template under CentOS 8, I don't really know how to configure cloud-init with the value: disable_vmware_customization: false
<kqkq95> because i configure my static network with foreman who use open-vm-tools
<apollo13> kqkq95: curious, if you do not mind, how did you create the template? and does vmware detect guest customizations in the template without ever starting the template on vmware?
<kqkq95> the template was created with the official CentOS ISO
<kqkq95> everything works fine without cloud-init :(
<kqkq95> ok i follow this : https://kb.vmware.com/s/article/59557
<kqkq95> the network is now good :)
<kqkq95> but i have error in log : https://pastebin.com/X92rReed
<kqkq95> why eth0 show nothing while my network is good "ip address list" is ok
<apollo13> kqkq95: so you created the template by installing it on vmware and then convert to template?
<apollo13> Odd_Bloke: are you around by any chance and can help me with an ubuntu cloud image? I am trying to get rid of the console changes they do for grub
<kqkq95> appollo13: yes
<kqkq95> apollo13: yep
<apollo13> kqkq95: ah okay, how do you clean the system up afterwards?
<apollo13> although I guess you might just rely on cloud-init to set machine-id to something new etc
<kqkq95> apollo13: iarf
<apollo13> I went another way and used virt-sysprep to pregenerate clean images but then vmware won't realize that they include guest tools *shrug*
<kqkq95> apollo13: I don't clean...
<kqkq95> apollo13: is there a procedure or script to put on the model ?
<apollo13> na I scripted it myself with packer from hashicorp to automate the install and then cleaned it with virt-sysprep and generated an ova that I could import in vmware
<kqkq95> i can use : cloud-init clean
<apollo13> interesting, didn't know of that one :)
<kqkq95> ok cloud-init shows all my interfaces when i run cloud-init init :)
<kqkq95> but why does it say "Did not find any data source, search classes"
<kqkq95> i setup a file name 10_foreman.cfg in directory /etc/cloud/cloud.cfg.d/
<Odd_Bloke> apollo13: Sure, what's up?
<apollo13> Odd_Bloke: so ubuntu cloud-init images inject console=tty1 console=ttyS0 into grub.cfg
<apollo13> somewhere very early, I am trying to find out where so I can remove that
<Odd_Bloke> I believe it's in the shipped image.
<apollo13> not exactly sure why one would want a serial console :)
<apollo13> yes, but where :)
<apollo13> I am remastering the image with guestfish and I'd like to get rid of that
<Odd_Bloke> Oh, I see, you're asking where it's configured in the image?
<apollo13> debian had it in /etc/default/grub, ubuntu has it (after boot) in /etc/default/grub.d/50-cloudinit-settings.cfg
<Odd_Bloke> /etc/default/grub.d/50-cloudimg-settings.cfg
<apollo13> but the latter only exists after first boot?
<Odd_Bloke> I don't think so, I think it's in the shipped image.
<apollo13> let me recheck
<Odd_Bloke> (As an aside, the Ubuntu cloud images are designed to boot in as many scenarios as possible, and some places need a serial console for that.)
<apollo13> openstack I presume?
<apollo13> you might be right, I just checked the sha on my image and it doesn't match the one from https://cloud-images.ubuntu.com/focal/current/SHA1SUMS -- maybe I deleted it already
<apollo13> let me redownload that
<Odd_Bloke> I'm not sure off-hand which scenarios require it, but given how configurable OpenStack is, I'm sure there are OpenStacks that require it, yeah. :p
<apollo13> the funny thing is that by requiring a serial console it doesn't work on proxmox etc by default
<apollo13> or rather then I have to switch to serial and I rather have spice displays
<apollo13> Odd_Bloke: perfect the file is there, thank you.
<Odd_Bloke> OK, phew. :)
<Odd_Bloke> rharper: blackboxsw: openssh now (in focal since February) supports .d directories for ssh_config and sshd_config.  I've filed two bugs related to this (basically, one for reading, one for writing): https://bugs.launchpad.net/cloud-init/+bug/1871858 https://bugs.launchpad.net/cloud-init/+bug/1871859
<ubot5> Ubuntu bug 1871858 in cloud-init "cloud-init should support parsing ssh_config/sshd_config files with Include directives" [Undecided,New]
<ubot5> Ubuntu bug 1871859 in cloud-init "cloud-init should write ssh_config.d/sshd_config.d snippets (when supported) instead of modifying config files" [Undecided,New]
<Odd_Bloke> These seem like good candidates for our roadmap for next cycle to me.
<Odd_Bloke> powersj: rick_h_: ^ FYI.
<rick_h_> Odd_Bloke:  cool ty
<rharper> Odd_Bloke: ok
<andras-kovacs> Hi! Is there a way to disable cloud-init to write in the root users's authorized-keys file?
<Odd_Bloke> andras-kovacs: What is currently being written in there that you would prefer wasn't?
<andras-kovacs> Odd_Bloke:  no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"myusername\" rather than the user \"root\".';echo;sleep 10" + my pubkey
<andras-kovacs> Nothing, I decided to remove the whole file in the end with runcmd.
<kqkq95> ok I have made good progress in my problem, my configuration seems good, but it seems that the client cannot reach foreman, the flows are however very open
<kqkq95> error here : https://pastebin.com/hqnMrZFy
<andras-kovacs> kqkq95: what OS do you have exactly?
<andras-kovacs> what does systemctl status cloud-init.target says?
<blackboxsw> andras-kovacs: I think you are looking at #cloud-config`disable_root` and `disable_root_opts` settings
<blackboxsw>     disable_root: <true/false>
<blackboxsw>     disable_root_opts: <disable root options string>
<blackboxsw> https://cloudinit.readthedocs.io/en/latest/topics/modules.html#authorized-keys
<blackboxsw> you should be able to provide disable_root: false to avoid adding that to /root/.ssh/authorized_keys file
<blackboxsw> andras-kovacs: or alternately you could provide a different set of opts than the default .... no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command
<blackboxsw> by setting disable_root_opts: 'something else'
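blackboxsw's two suggestions assembled into a user-data snippet (a sketch: the alternative opts string is a made-up example, not a recommended value):

```yaml
#cloud-config
# Option 1: stop cloud-init from disabling root login over ssh entirely
disable_root: false
# Option 2 (instead): keep disabling it, but customize the options prefix
# written before root's keys
# disable_root_opts: 'no-port-forwarding,no-agent-forwarding'
```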
<andras-kovacs> thank you! and what about the ssh pubkey?
<andras-kovacs> I get it from a datastore but it would be enough if the default user would get it.
<kqkq95> andras-kovacs: CentOS 8
<andras-kovacs> kqkq95: if cloud-init services are enabled, no file or kernel parameter which disables it, try:
<andras-kovacs> mv /etc/systemd/system/cloud-init.target.wants/cloud-* /etc/systemd/system/multi-user.target.wants/
<andras-kovacs> cloud-init clean --reboot
<blackboxsw> andras-kovacs: ssh_import_id: [gh:<youruser>]   or [lp:<youruser>]   or ssh_authorized_keys per https://cloudinit.readthedocs.io/en/latest/topics/examples.html#configure-instances-ssh-keys
<blackboxsw> any of those keys get assigned to the default user
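As a user-data snippet, the two forms blackboxsw mentions look like this (the username and key are placeholders):

```yaml
#cloud-config
# Import public keys from a GitHub (gh:) or Launchpad (lp:) account...
ssh_import_id: [gh:exampleuser]
# ...or list keys inline; either way they go to the default user
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@example
```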
<kqkq95> andras-kovacs: why mv ?
<andras-kovacs> There was a significant change with cloud-init in RHEL. Previously all these services were wanted by multi-user.target
<andras-kovacs> now it moved to its own custom cloud-init.target
<andras-kovacs> It's not working for me either in RHEL 7.8
<andras-kovacs> https://bugzilla.redhat.com/show_bug.cgi?id=1820540
<ubot5> bugzilla.redhat.com bug 1820540 in cloud-init "cloud-init package broken post 7.8 upgrade" [Medium,Assigned]
<andras-kovacs> I know the mv part is dirty, but it is the easiest way to test it IMHO
<kqkq95> my unit cloud* is already on multi-user.target.wants
<Odd_Bloke> Moving units around is very heavyweight (and could cause upgrade problems), it would be better to add additional dependencies IMO.
<kqkq95> andras-kovacs: cloud-init target does not exist, I have cloud-config.service, cloud-final.service, cloud-init-local.service and cloud-init.service
<andras-kovacs> which version of cloud-init do you have there? :O
<kqkq95> 18.5-7.el8
<andras-kovacs> Odd_Bloke: these are not the units just the symlinks :) Easy to revert/disable them.
<Odd_Bloke> Aha, right.  Still, better to express a dependency on cloud-init.target, probably?
<andras-kovacs> with systemctl edit yes
<andras-kovacs> but it was the shortest and fastest idea to test it
<andras-kovacs> but cloud-init.target is not there
<kqkq95> yes
<apollo13> ha I have the same question as andras-kovacs, disable_root: false and ssh_authorized_keys still writes to root as well as the configured user
<apollo13> just tested on ubuntu focal
<andras-kovacs> runcmd:
<andras-kovacs>  - rm -f /root/.ssh/authorized_keys
<andras-kovacs> it will be fine for me
<apollo13> that will work, but if my user actually set root as the default user I'd remove that again…
 * apollo13 checks cloud-init source
<andras-kovacs> oh you are right :S
<andras-kovacs> but the funny thing is runcmd doesn't work if there is LVM on the server (which is sick in cloud env, I know I know.. .but still a requirement)
<andras-kovacs> And there is no info in the logs about it why :(
<andras-kovacs> at the end of the day I made a custom systemd unit which replaces the runcmd part and destroys itself
<apollo13> looking at current master of cloud-init: https://dpaste.org/zi5J/raw
<andras-kovacs> and no one should use the root user as the default one
<apollo13> doesn't look as if there were __any__ way to disable setting the key for root
<andras-kovacs> openssh is not bulletproof (nothing is)
<blackboxsw> apollo13:/andras-kovacs hrm right disable_root is actually saying any configured ssh keys are put in both root and <default_user> because we expect users may have accidentally tried to login as root and we steer them to the actual default user instead with a printed message breadcrumb.
<rharper> andras-kovacs: there's no intersection between lvm and runcmd ... are you seeing an error in cloud-init status or cloud-init.log? or expecting output from runcmd to be somewhere on the filesystem and it's not?
<apollo13> blackboxsw: yeah I'd like to prevent that because it leaks user information
<blackboxsw> so apollo13 andras-kovacs is the feature, that you want cloud-init to avoid touching root at all?
<blackboxsw> and only setup default_user
<kqkq95> andras-kovacs: no idea for me :(
<andras-kovacs> yes
<apollo13> blackboxsw: if user X is configured as default_user I do not want the ssh keys applied to root
<apollo13> yes
<andras-kovacs> blackboxsw: I totally got your point and I was thinking previously like how smart was it.
<apollo13> andras-kovacs: funny though that we have the same problem the same day :D and I am pretty sure I do not know you
<andras-kovacs> to think about this scenario
<blackboxsw> I know there is a way to avoid dropping keys in root && default_user. just trying to think it through.
<andras-kovacs> set an immutable flag on it (but cloud-init would probably fail?)
<apollo13> blackboxsw: looking at the link I just posted from cloud-init master I doubt it
<andras-kovacs> I'll remove it at the end, that should be ok.
<apollo13> andras-kovacs: setting immutable would work, but that also means you have to generate .ssh/authorized_keys in the first place since it's not there by default
<andras-kovacs> apollo13: yes and I have a feeling cloud-init would fail (maybe silently) if it can't write there
<andras-kovacs> so I'll remove the file in the runcmd part at the end
<apollo13> is there any way to allow password logins?
<apollo13> and yes I know it's insecure, but it is just to allow initial login on the box and easy testing, our cfg mgmt will forbid it globally anyways
<rharper> ssh_pwauth: True
<rharper> password: XXXXXX
<apollo13> ah thx
<rharper> chpasswd: { expire: False }
<rharper> that's in my debugging user-data
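rharper's three lines assembled into one user-data snippet (the password value is a placeholder; as noted below, this is only sensible for throwaway test boxes):

```yaml
#cloud-config
ssh_pwauth: true             # allow password authentication over ssh
password: passw0rd           # applied to the default user
chpasswd: { expire: false }  # don't force a password change on first login
```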
<blackboxsw> andras-kovacs: apollo13 yeah looks like a feature request for cloud-init to avoid reflecting/disabled keys into 'root' user's authorized_keys file.
<blackboxsw> as you mentioned.
<blackboxsw> a bug would be nice for that and upstream can discuss whether this is a feature is something that we will be tackling in short term.
<apollo13> rharper: mhm, I am having a hard time finding what "password" does
<apollo13> ah it's from the default user, nevermind
<apollo13> the docs are confusing sometimes
<apollo13> interesting enough proxmox sets chpasswd.expire: false but not ssh_pwauth :D
<rharper> apollo13: yeah;  some of this config is quite old so it's not well scoped under which module uses it;
<apollo13> blackboxsw: https://bugs.launchpad.net/cloud-init/+filebug ?
<blackboxsw> yes please apollo13
<blackboxsw> just describe the feature and we'll triage it and determine how best to address it.
<apollo13> ffs I tried searching if such a bug already exists but timeout error on search
<apollo13> so might be a dupe :/
<blackboxsw> no worries, can check that.
<blackboxsw> thanks
<apollo13> blackboxsw: https://bugs.launchpad.net/cloud-init/+bug/1871879 -- not the end of the world for me, but would be nice if there were an option to disable it. Thanks for caring!
<ubot5> Ubuntu bug 1871879 in cloud-init "Configuring a user should not configure root's authorized_keys" [Undecided,New]
<blackboxsw> Odd_Bloke: rharper I have updated the branching/cherry-pick process spec https://hackmd.io/VbmtcZLyR4650aqqmfMMYg  to reflect yesterday's conversation and I have updated tooling to codify that.
<blackboxsw> I'm about to push up doc changes to uss-tableflip PR#45 and cloud-init PR 308 with the steps to fix daily build recipe
<blackboxsw> ok rharper Odd_Bloke I've closed 308  in favor of manual PR https://github.com/canonical/cloud-init/pull/312 for fixing ubuntu daily build recipe
<blackboxsw> thanks for the discussions yesterday about the best approach for branch management to avoid push/pop of cpicks in the ubuntu/series branches
<rharper> blackboxsw: ok ... I'll try to look;  grinding through some more curtin bits first
<blackboxsw> rharper: responded to the big question https://github.com/CanonicalLtd/uss-tableflip/pull/45#discussion_r406505316
<blackboxsw> and updated it to note that I misread your 2nd question there
<blackboxsw> I think I may have missed the point of your question about 'only needing the commit' from ubuntu/devel in ubuntu/daily/devel in order to revert.
<blackboxsw> so I think on rereading the 3rd time you may have meant that ubuntu/daily/devel could actually just 'git cherry-pick cpick_commit_from_ubuntu_devel; git revert cpick_commit_from_ubuntu_devel' ? maybe
<blackboxsw> the issue with just git cherry-picking a single commit into ubuntu/daily/devel and reverting it is that ubuntu/daily/devel is essentially a downstream, so that revert we'd perform there is on a committish that is local only to the ubuntu/daily/devel branch, so it won't revert the parent's committish.
<blackboxsw> ... when we merge back ubuntu/daily/devel into ubuntu/devel for daily recipe builds
#cloud-init 2020-04-10
<blackboxsw> happy easter folks. off for a few days.
<remilapeyre> Hi everybody! I'm trying to run a per-instance script after user_data has been run. Is this possible without having to loop and wait? During my tests I noticed that per-instance scripts were run before user_data, but they depend on files that are created by it.
<kqkq95> hi all, I'm trying to configure cloud-init on my vmware template with foreman, but after configuring I still get the same error: "no instance data source found! Likely bad things to come", can someone help me?
<kqkq95> how can i debug this error : "Seed from https://myforemanurl/userdata/ not supported by DataSourceNoCloudNet [seed=None][dsmode=net]
<AnhVoMSFT> rharper Odd_Bloke blackboxsw is there any way to dump out the entire config cloud-init used to deploy a particular instance, such that if a user supplied this humongous user-data file it would generate this exact deployment?
<rharper> AnhVoMSFT: for user-data only, this is present in /var/lib/cloud/instance/user-data.txt
<rharper> AnhVoMSFT: assuming the image has not been modified with other cloud-config, that should be sufficient; we also now write out the system_config in /run/cloud-init/instance-data.json which incorporates the sourced config from /etc/cloud/cloud.cfg.d/*; I suspect that the user-data is sufficient though
<AnhVoMSFT> thanks rharper
<andras-kovacs> kqkq95: write a custom systemd unit / shell script what can do the stuff for you and make it happen in the runcmd part. What datasource do you have there exactly?
<andras-kovacs> kqkq95: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/s1-one-time_script_on_next_boot_using_systemd_unit_file
<andras-kovacs> apollo13: how did you resolve the root + key problems?
<andras-kovacs> Did any of you see this.... WALinuxAgent whatnot? I have a project where I should upload custom vm templates in Azure also. But this thing is really... such a Microsoft way of thinking. All the others accept cloud-init, but them...
<andras-kovacs> Moreover it does that telemetry-thing like windows does.
#cloud-init 2020-04-11
<apollo13> andras-kovacs: I did not resolve it at all :)
<apollo13> andras-kovacs: in our special case we are using root as user now and then create the other users via ansible and lock root down again
