#cloud-init 2014-08-04
<RaginBajin> Does anyone know if Cloud-init works with Freebsd? in particular with cloudDrive.  I saw that there were some changes made to it 
<harmw> there is some preliminary support, though I should really take some time to get it on par with linux :)
<harmw> plus, there is no pkg yet
<RaginBajin> Totally understand. At this point, what we have is sufficient (keys, and user creation).
<RaginBajin> I looked at the bsd-cloudinit that is recommended by the openstack people, but that only supports the HTTP metadata service, and I really need clouddrive to work as well
<RaginBajin> I guess, one thing is that I am having an issue in getting it built on FreeBSD 10. I cloned out the repo, ran python setup.py build and python setup.py install 
<RaginBajin> but it seems to have installed for ubuntu and not recognizing freebsd
<RaginBajin> I'm wondering if I am missing a trick or a setting when I did all this
<harmw> bsd-cloudinit? whats that :)
<harmw> and you mean configdrive, right?
<harmw> and the default config file assumes ubuntu, so you'll need to change it to 'distro: freebsd'
<harmw> ah: http://pellaeon.github.io/bsd-cloudinit/
<RaginBajin> Ok, and there isn't anything special during the setup.py install that I have to do to reflect freebsd?
<RaginBajin> and yes I meant configDrive, sorry 
<harmw> you need to tell cloud-init which distro it should configure, which in this case would be freebsd
<RaginBajin> Maybe I missed that. I did change it in the /etc/cloud/cloud.cfg file 
<RaginBajin> but not during the install or build. 
<RaginBajin> when I try to run /etc/rc.d/init.d/cloud-init it just brings me back to the prompt. Doesn't look like it actually ran anything. 
<RaginBajin> So I'm wondering if it maybe installed the wrong scripts. (ubuntu instead of freebsd) 
<harmw> it should just install everything it knows :)
<harmw> I'm sorry, I've just nuked my fbsd testvm so I can't verify anything at the moment :(
<RaginBajin> Hmm ok. I'm not sure how it knows that it's FreeBSD. It is a custom build and kernel, but that's the only thing that's really different. Uname still says freebsd. 
<RaginBajin> So, I'm definitely closer.  I can now run it manually /usr/local/bin/cloud-init (which by the way doesn't get set right in the install, it sets it to /usr/bin/cloud-init).  
<RaginBajin> Now it's trying to run ubuntu-based commands. 
<RaginBajin> util.py[WARNING]: Failed to run debconf-set-selections for grub-dpkg
#cloud-init 2014-08-05
<RaginBajin> harmw Did you ever get ConfigDrive to work with freebsd or did you do everything through the HTTP service?
<RaginBajin> harmw I got ConfigDrive to work successfully in BSD.  There are a couple of things that I needed to do, but it's now working. What I do see is that setting the hostname in FreeBSD doesn't work. It keeps failing on that part
<harmw> RaginBajin: I didn't put in code for it to read anything from configdrive
<RaginBajin> harmw: That's ok. I got it working. Once I figure out the issue with setting the hostnames, I planned on cleaning it up and submitting the patches for it 
<RaginBajin> but this hostname thing is a bit weird. 
#cloud-init 2014-08-06
<smoser> harmw, if you see RaginBajin around or know who he is, patches are welcome for that.
<harmw> of course :)
<harmw> oh, he's gone now 
<harmw> anyway, I'll see if I can take a look again at that fbsd code. It should be setting a hostname in /etc/rc.conf just fine
<harmw> smoser: any words on the buildroot merge?
<harmw> I noticed the absence of ssh keys btw, up until first connect - meaning cirros can't print them in the final stage, since the files simply aren't there :p
<smoser> harmw, i so appreciate your help and persistence.
<smoser> i'm really sorry that i just never find time.
<harmw> haha, np
<harmw> that's the difference between doing stuff for $work and doing them out of personal interest :)
<harmw> or perhaps even penalty :p
<pquerna> smoser: is there anything extra I should be doing to be getting make test to work on trunk?  https://gist.github.com/pquerna/116d43838844cce47304 (after having pip install -r test-requirements.txt requirements.txt)
<vbernat> Does CentOS provide cloud-init-enabled images?
<smoser> pquerna, hm.. i'm not sure. it "should work".
<smoser> the second failure i think you dont have 'file'
<pquerna> smoser: ah. confirmed, installing file fixes the second case.
#cloud-init 2014-08-07
<RaginBajin> Can someone help me understand the difference between say DataSourceConfigDrive and DataSourceConfigDriveNet?
<RaginBajin> For some reason, when I boot up, cloud-init can't find the configDrive. But if I log into the box, and run cloud-init init, it works just fine. 
<RaginBajin> When I run cloud-init on boot, it fails to find my configDrive. If I log into the console and then run it, it seems to work just fine.  Any ideas
<harmw> RaginBajin: still fbsd?
<RaginBajin> yeah
<RaginBajin> It looks like I got everything working, and then tried on a new host and rebooted it 
<RaginBajin> harmw I just don't get it. Everything looks right. I am just wondering if a part of the boot process hasn't finished and it's holding back certain data from being accessed. 
<RaginBajin> harmw: Figured it out. BLKID needed to be set to an exact path, since it wasn't being found. 
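The fix RaginBajin describes (an absolute path for blkid, because the early-boot environment runs with a minimal PATH) could be sketched like this; the fallback directories below are illustrative guesses, not cloud-init's actual search list:

```python
import os
import shutil


def find_cmd(name, extra_dirs=("/sbin", "/usr/sbin", "/usr/local/sbin")):
    """Resolve a command name to an absolute path.

    shutil.which() honours the current PATH; the extra_dirs fallbacks
    cover locations a stripped-down rc boot environment may not have
    on PATH. These directories are assumptions for illustration.
    """
    found = shutil.which(name)
    if found:
        return found
    for d in extra_dirs:
        candidate = os.path.join(d, name)
        if os.access(candidate, os.X_OK):
            return candidate
    return None
```

Calling the resolved absolute path instead of the bare name sidesteps the "not found at boot, works from a login shell" symptom described above.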
<harmw> hm ok
<harmw> well, I haven't ever tested it with 1) a recent version of c-i and 2) clouddrive 
<harmw> if you have patches you're more than welcome to submit those :)
<RaginBajin> I definitely will. I need to do some thinking about them though. I had to insert some specific fbsd things to make it work. I think if I can figure out how to know which distro is being used, then I can make this nicer
<RaginBajin> right now, it's pretty rough. 
<harmw> I'll look into running it again sometime soon as well
<harmw> did you run into an issue regarding hostnames btw?
<harmw> (setting the hostname)
<RaginBajin> I think I did, and let me see how I fixed it. 
<RaginBajin> Ahh, yes.  It's actually in the loadrcconf method.  It doesn't handle comments very nicely: since we try to split on = it doesn't find a match and basically stops working, so the hostname never gets updated
<RaginBajin> so, I did a match for ^# and if the line didn't start with a comment, then actually do the split. If it does, it just skips over it. 
<RaginBajin> The problem with that is that someone will lose all their comments in the rc.conf
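The comment-skipping fix discussed above could look roughly like this (a sketch of the idea, not the actual cloud-init `load_rc_conf`; function and argument names are illustrative):

```python
def load_rc_conf(lines):
    """Parse FreeBSD rc.conf-style key="value" lines into a dict.

    Lines that are blank, comments (^#), or not key=value are skipped,
    instead of making the parser stop as described in the chat.
    """
    conf = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # the original code choked on comment lines here
        if "=" not in line:
            continue  # nothing to split on; ignore rather than abort
        key, _, val = line.partition("=")
        conf[key.strip()] = val.strip().strip('"')
    return conf
```

As noted below, a dict-based load/rewrite still drops comments when the file is written back out, which is the remaining problem.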
<harmw> oh lol, it's not taking care of #?
<harmw> I knew that regex was a bit harsh, but I didn't know it was going to break stuff
<RaginBajin> I was thinking of really just getting rid of the loadrcconf, and moving it into the updatercconf. So if a value was to be updated, then it just parsed the file, and updated it once it found it. 
<RaginBajin> That would leave the comments in place and the order in place as well. 
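The rewrite-in-place approach RaginBajin outlines (preserve comments and line order, touch only the matching key) could be sketched as follows; this is an illustration of the idea, not the eventual cloud-init patch:

```python
def update_rc_conf_text(text, key, value):
    """Return rc.conf contents with `key` set to `value`.

    Comment lines and ordering are preserved; only the matching
    key=value line is rewritten. If the key is absent, the setting
    is appended at the end.
    """
    out, found = [], False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#") or "=" not in stripped:
            out.append(line)  # comments and non-assignments pass through
            continue
        if stripped.partition("=")[0].strip() == key:
            out.append('%s="%s"' % (key, value))
            found = True
        else:
            out.append(line)
    if not found:
        out.append('%s="%s"' % (key, value))
    return "\n".join(out) + "\n"
```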
<harmw> I'll have a look sometime soon
<harmw> thanks for pointing this out :)
<RaginBajin> Not a problem. I'm not sure I mentioned it, but everything else seems to be working, and it's been a lifesaver to have all the functions you've added. 
<RaginBajin> and I'm not sure if the changes were more of a FreeBSD10 thing than anything else. I want to try it on 8 and possibly 9 as well. 
<harmw> its only tested on 10
<harmw> though the things being done so far aren't really specific to any version
<harmw> well, perhaps the ifconfig rc.conf syntax
<RaginBajin> Yeah, I think it may be really just blkid that may not exist in those other versions
<harmw> ah yes, well, since that regards configdrive I'm not ready yet to comment on that :p
<harmw> btw, you're always welcome to file bugs at launchpad, and even supply patches with them :)
<RaginBajin> will do 
<harmw> RaginBajin: how did you create your fbsd image? using oz?
<alexpilotti> smoser: good morning!
<RaginBajin> harmw:  I created the image using the bootonly iso.  Then I checked out the latest build of freebsd, and rebuilt the environment.  (make world, etc)
<harmw> RaginBajin: ok, you could try looking into oz then
<harmw> nice little tool to create all sorts of images which can be directly inserted into glance 
<harmw> ok, uploading a shiny new fbsd10 image to glance now so I'm almost ready to checkout the new ci on that
#cloud-init 2014-08-08
<harmw> ah, Icehouse.2 is out
<harmw> RaginBajin: I can confirm fbsd support is pretty broken, based on a first try to run c-i on it
<RaginBajin> harmw:  I should put a copy of all the horrible hacks I put into the code so you can see it. That may help a bit. 
<harmw> rm -rf /var/lib/cloud/ ; /usr/local/bin/cloud-init -d init
<harmw> that's how I just started it
<harmw> but I really should first have it print more debug info, because it used to print stuff like which datasource and distro it was using - info I don't get to see now
<harmw> I think the original code was against 0.7.4
<harmw> and worked :p
<harmw> aha, this is kinda lol
<harmw> RaginBajin: there is a pending merge request to address some issues :p
<harmw> https://code.launchpad.net/~harmw/cloud-init/freebsd-static-networking
<harmw> smoser: boohooo
<RaginBajin> harmw: That's how I was initially testing it as well, till I put it into an image and rebooted it. That's when I found the PATH problems. I had the debug issue of nothing printing for a while, but I'm not sure what I did; it then started printing to a debug.log file which provided more info than --debug
<harmw> exactly
<harmw> ah, ofcourse
<harmw> RaginBajin: there is a /etc/cloud/cloud.cfg.d/05_logging.cfg that takes care of certain log stuff
<harmw> the dirty solution: move that file to some place else
<RaginBajin> ahh. I bet I wasn't making it to that stage of the init. So that wasn't even getting started. That's why, when things started to work, I started seeing that file 
<harmw> and now I can clearly see a regex IndexError
<harmw> regarding the reading of rc.conf
<harmw> ok, on a newly created branch I see yet another issue: error: You must specify one of (systemd, sysvinit, sysvinit_deb, upstart) when specifying init system(s)!
<harmw> fixed
<harmw> copying sysvinit/redhat/cloud-config -> /etc/rc.d/init.d
<harmw> copying sysvinit/redhat/cloud-final -> /etc/rc.d/init.d
<harmw> copying sysvinit/redhat/cloud-init -> /etc/rc.d/init.d
<harmw> copying sysvinit/redhat/cloud-init-local -> /etc/rc.d/init.d
<harmw> thats something to look into as well
<harmw> that and installing the config file in /etc/cloud/
<harmw> (that should be /usr/local/etc/..)
<harmw> RaginBajin: while the code should just leave comments in /etc/rc.conf alone instead of silently deleting them, I think one can perfectly defend the current behavior from a cloudy point-of-view :p where there is no real need to put anything in rc.conf because that's managed from either nova (with userscripts) or (as far as bsd is concerned: someday) from tools like puppet
<smoser> harmw is awesome.
<harmw> oh lol
 * smoser has to go afk. later all.
<smoser> i'll be more available next week
<harmw> :)
<RaginBajin> harmw:  Yeah that makes sense.  That's what I was thinking as well. Somehow those comments got into that file and were causing the problems. But I do know puppet normally leaves comments in those files. Not sure what effect that has just yet 
<harmw> well, the code was expecting key=val 
<harmw> and ONLY lines that match that pattern
<harmw> hm, for some reason its trying to set hostname to the current hostname instead of what it got from nova/metadata
<harmw> hm, or is it...
<RaginBajin> That should mean that it's using None datasource, so it's using the current values and going through the init process. 
<RaginBajin> I believe. 
<harmw> yup, just got to that conclusion :)
<harmw> importer.py[DEBUG]: Failed at attempted import of 'DataSourceOpenStackEc2' due to: No module named DataSourceOpenStackEc2
<harmw> that doesn't sound good
<harmw> (and same goes for all the other datasources)
<harmw> aha, so when I specify datasource_list: ['OpenStack'] in cloud.cfg, it all works
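Putting together the settings mentioned in this and the earlier session, the working cloud.cfg fragment would be along these lines (illustrative, not a complete config file):

```yaml
# /etc/cloud/cloud.cfg (fragment)
distro: freebsd
datasource_list: ['OpenStack']
```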
<harmw> https://code.launchpad.net/~harmw/cloud-init/freebsd, thats current now
<harmw> it includes the staticnetwork branch as well
#cloud-init 2014-08-09
<harmw> ah great, my FreeBSD image has / mounted on the middle partition; making it impossible to resize
<pquerna> smoser: https://github.com/pquerna/cloud-init/pull/1 -- mostly a heads up, don't intend it to be quite submittable to y'all yet, but it takes JSON similar to the one seen in https://review.openstack.org/#/c/85673/ to build a debian or rhel net conf.
<pquerna> smoser: i'll get it into bzr, tests, etc in the next week i guess.
#cloud-init 2015-08-03
<minfrin> Quick question - I'm trying to debug the following crash inside cloud-init when it tries to partition disks:
<minfrin> [CLOUDINIT] util.py[DEBUG]: Failed partitioning operation
'list' object has no attribute 'splitlines'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py", line 57, in handle
    func=mkpart, args=(disk, definition))
  File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 1875, in log_time
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py", line 682, in mkpart
    if not overwrite and (is_disk_used(device) or is_filesystem(device)):
  File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py", line 308, in is_disk_used
    if len(use_count.splitlines()) > 1:
AttributeError: 'list' object has no attribute 'splitlines'
<minfrin> Does this look familiar?
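The traceback above boils down to `is_disk_used` assuming a string where it received a list. A minimal reproduction, plus one possible defensive fix (a sketch, not the upstream patch):

```python
def is_disk_used(use_count):
    # Mirrors the failing check in cc_disk_setup.py: it assumes
    # use_count is a string, so a list raises AttributeError.
    return len(use_count.splitlines()) > 1


# Reproduce the crash with a list, as in the log above.
msg = ""
try:
    is_disk_used(["/dev/vda1"])
except AttributeError as err:
    msg = str(err)


def is_disk_used_fixed(use_count):
    # Defensive variant: normalise list output (e.g. from a command
    # runner that returns lines) to a single string before splitting.
    if isinstance(use_count, (list, tuple)):
        use_count = "\n".join(use_count)
    return len(use_count.splitlines()) > 1
```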
<Odd_Bloke> smoser: If someone wanted to reset passwords on their (OpenStack) instances, could vendor-data specify a module to re-run at each boot to do it?
<clouduser> Hi everyone. Is there a way to configure cloud-config to not use an sshkey for login.
<clouduser> I only wish to provide a password for the user.
<tpeoples> root user or some other user?
<clouduser> some other user. 'ubuntu'
<tpeoples> clouduser: can try something like http://paste.openstack.org/show/406770/
<tpeoples> can also do something like http://paste.openstack.org/show/406772/
<clouduser> Thanks.
<clouduser> I tried the first method. I still get "Permission denied (publickey)".
<Odd_Bloke> clouduser: Instances, by default, only allow you to use SSH keys (because they are substantially more secure than passwords); I think the second should allow password auth, so it's worth trying that.
<tpeoples> yeah, i think the lock_passwd set to false is the key to get password login working
<smoser> tpeoples, ssh_pwauth: True
<smoser> that's what you need to allow password login
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L562
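Combining tpeoples' `lock_passwd` point with smoser's `ssh_pwauth` setting, the user-data would look roughly like this (the password value is a placeholder, and the exact key set is from memory of the cloud-config examples, so treat it as a sketch):

```yaml
#cloud-config
ssh_pwauth: true
users:
  - name: ubuntu
    lock_passwd: false
    plain_text_passwd: 'changeme'
```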
<doesntunderstand> Hello, I've been reading through the docs and I'm just trying to figure some things out. I get that there are 3 for cloud-init; init, config, and final. What I don't understand is where user-data files fit in. Do those get executed in the config stage, or after the final stage?
<doesntunderstand> I get that there are 3 stages for cloud-init*
<smatzek> doesntunderstand:  while there are probably more technical answers, you can think of user_data as providing input to the modules that run at those 3 stages.
<smatzek> run cat /etc/cloud/cloud.cfg and you can see the modules listed in each stage.  The userdata can provide input to any/all of them depending on the module.
<doesntunderstand> Ah, so user_data applies to all stages?
<smatzek> doesntunderstand: yes, depending on the content of your userdata
<doesntunderstand> Cool. Thanks, I appreciate it!
#cloud-init 2015-08-04
<smoser> joy https://github.com/testing-cabal/mock/issues/259
<smoser> current tox doesn't support 2.6
<smoser> er... mock. not tox
<openstackgerrit> Daniel Watkins proposed stackforge/cloud-init: Pin mock at 1.0.1.  https://review.openstack.org/209036
<Odd_Bloke> smoser: ^ fixes that.
<smoser> yeah, but that kinda sucks
<Odd_Bloke> True.
<smoser> i'm not sure but i know that some recent version of mock actually had a bunch of fixes
<smoser> that broke lots of bad tests.
<Odd_Bloke> https://github.com/testing-cabal/mock/commit/a6367a9a2b6166d7d032ec91288294ec47177649
<Odd_Bloke> Looks like it might be re-supported, actually.
<smoser> nice
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add ReportingEventStack  https://review.openstack.org/209045
<smoser> Odd_Bloke, your thoughts on that are appreciated.
<arnaud_orange1> hi guys
<Odd_Bloke> smoser: Looking at it now.
<Odd_Bloke> arnaud_orange1: o/
<arnaud_orange1> until ubuntu 14.04 I was using a metadata file like this one:
<arnaud_orange1> http://paste.ubuntu.com/12000034/
<arnaud_orange1> but with ubuntu cloud image 15.04 it does not work anymore
<arnaud_orange1> the /etc/network/interfaces file is well configured
<arnaud_orange1> but the network is not going up
<arnaud_orange1> for info, i am providing the meta-data and user-data in a /dev/vdb drive
<arnaud_orange1> is there any new way to configure network with datasourcenocloud on ubuntu 15.04?
<doesntunderstand> I'm looking for some clarification. Although user-data and meta-data are different, meta-data can be assigned through user-data correct?
<smatzek> doesntunderstand: no, not really.
<smatzek> this is what userdata can have:  http://cloudinit.readthedocs.org/en/latest/topics/format.html and as mentioned yesterday, it can provide input to the various modules at the 3 stages.
<smatzek> metadata, at least from OpenStack, provides these items:  system hostname, VM UUID, SSH public named key injection if you're passing a key name on the OpenStack create server request, and a pointer to the network configuration.
<smatzek> the pointer to the network configuration may only be there if you're using config drive, I'm not 100% sure on that.
<doesntunderstand> so meta-data can only be assigned from the datasource interface?
<smatzek> let's change the conversation, what are you trying to do, what problem are you trying to solve?
<doesntunderstand> I'm just trying to understand how all the pieces work together. I'm not currently trying to solve any problem.
<smoser> cloud-init reads some things (such as hostname) from meta-data
<smoser> but the user can override those values in user-data.
<smoser> it would be a nice feature to allow user data to generically patch over any meta-data. but that doesn't happen now.
<doesntunderstand> Ah, thanks for clearing that up for me
<arnaud_orange1> is there any way to reboot a cloud VM and reinitialise the cloud-config to simulate a first boot?
<smatzek> arnaud_orange1: depends what you want to do.  You can simulate a first boot or if you're interested in re-running a particular stage you can do different steps.  To simulate a first-boot do:  "Modify the instance ID found in /var/lib/cloud/data/instance-id and then rename the corresponding directory in /var/lib/cloud/instances/ to the new instance ID, then reboot the VM to have cloud-init re-execute your module."
<smatzek> or you could just delete the instances directory and the instance-id file and reboot, depends if you want to keep the old instances directory around for some reason
<arnaud_orange1> smatzek: ok thanks
<smoser> arnaud_orange1, rm -Rf /var/lib/cloud && sudo reboot
<smoser> will do pretty much what you want
<doesntunderstand> Is the default frequency for modules once-per instance?
<harlowja> i think so doesntunderstand , from what i remember
<doesntunderstand> Alright, that makes sense with what I've been seeing
<harlowja> ya, unless overridden by the module, once-per-instance
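The per-module override harlowja mentions is expressed in the cloud.cfg module lists, where an entry can carry an explicit frequency instead of the once-per-instance default; a fragment from memory of that syntax (so verify against your installed cloud.cfg):

```yaml
# cloud.cfg fragment: a module entry may name a frequency
# ('always', 'once', or the default once-per-instance)
cloud_final_modules:
 - [scripts-user, always]
 - final-message
```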
<clouduser_> Hi All. I'm trying to provide static ip configuration in cloud-config as shown below but it doesn't work. can anyone help.
<clouduser_> http://paste.openstack.org/show/407229/
<clouduser_> I also tried to bring down the interfaces and restart them using "bootcmd"
 * harlowja didn't think anyone processed 'network-interfaces' yaml sections
<harlowja> not in the cloud-init version i know of
#cloud-init 2015-08-05
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add cloud-init main  https://review.openstack.org/202743
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add ReportingEventStack  https://review.openstack.org/209045
<arnaud_orange> smoser: thanks for the tip yesterday
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Make ReportingHandler a proper base class  https://review.openstack.org/209454
<openstackgerrit> Merged stackforge/cloud-init: Make ReportingHandler a proper base class  https://review.openstack.org/209454
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/209520
<smatzek> smoser:  Last week you mentioned a weekly 10ET Wed meeting.  Is that going to be held here or on one of the openstack-meeting channels?
<smoser> it'd be here.
<smoser> and that is now.
<smoser> and ... hm..
<Odd_Bloke> smoser: We're in the hangout. :)
<claudiupopa> hey guys.
<smoser> claudiupopa, Odd_Bloke i'm sprinting, not really available for a call.
<claudiupopa> We're already here.
<Odd_Bloke> smoser: Ack.
<smoser> :-(
<claudiupopa> Ah.
<Odd_Bloke> smoser: Available for IRC meeting instead?
<smoser> i've been working on the reporting stuff.
<claudiupopa> Well, I think we can talk on IRC then.
<Odd_Bloke> Or just plain busy?
<smatzek> I'm on the phone on a sprint scrum as well.
<smoser> Odd_Bloke, probably i can sort of do this. but i need to change location. so 5 minutes?
<Odd_Bloke> Well claudiupopa and I have abandoned the hangout.
<Odd_Bloke> So we can meet here. :)
<smoser> k
<smoser> k. back.
<Odd_Bloke> Cool.
<Odd_Bloke> So I'm planning on _actually_ getting the webhook stuff done by the end of the week.
<Odd_Bloke> At the very least an implementation without support for any sort of complex auth.
<Odd_Bloke> And hopefully a patch on top of that to add OAuth.
<smoser> Odd_Bloke, that would be fabulous
<smoser> Odd_Bloke, i have an oauthhelper that i had worked on last night that might be useful for you
<smoser> the gist of it is in http://paste.ubuntu.com/12007052/
<Odd_Bloke> smoser: Oh, if you're already working on a web hook handler, why don't you just take that all the way and I can start looking at something else?
<smoser> Odd_Bloke, and then the use of it http://paste.ubuntu.com/12007107/
<smoser> in that first paste, i changed the reporting a bit.. we can talk later.
<smoser> claudiupopa, 'childrens_finish_info' is "the children's finish info"
<smoser> as opposed to multiple childrens
<smoser> does that make sense?
<smoser> wrt https://review.openstack.org/#/c/209045/4/cloudinit/reporting/__init__.py
<claudiupopa> Ah, it wasn't obvious. ;-)
<claudiupopa> smoser: by the way, a review of this and the general approach will be appreciated: https://review.openstack.org/#/c/209520/
<smoser> k.
<smoser> if you want me to drop the s, i can do that.
<claudiupopa> it's okay, it's not a big deal.
<smatzek> can you refresh my memory on where / how the reporting framework will be used?  I see it being used to write log entries.  Will it also be used to send events for parallel datasource discovery?  Is it also intended to be used if we do post-first-boot configuration changes via some notification from the metadata service?
<Odd_Bloke> smatzek: The intent is that it will be used to report on events that are happening in cloud-init; the log is the simplest case.
<Odd_Bloke> smatzek: It will also be used to update MAAS/Juju on the status of instances as they come up.
<smatzek> ok, and I could see a future use to notify OpenStack Heat?
<smoser> primarily its "status" at this event.
<smoser> so something (in this case maas or juju) could see that stuff is happening
<smoser> or failing
<Odd_Bloke> smatzek: Yeah, I don't see why not in principle.
<smatzek> so given https://trello.com/b/HoPNdiTI/cloud-init-development-roadmap, is the general direction of the work to get several of the datasources working, and then move into the stages like the network stage, and get the OS distros in, likely as part of that?
<smoser> yeah, we'd like to get the 'main' going. . i have a very stub branch for that
<smoser> but we'd like a basic main first.
<smoser> one that can find a datasource, and import ssh keys as a minimal first step
<smoser> Odd_Bloke, i'd appreciate your thoughts on these changes
<smoser>  http://paste.ubuntu.com/12007369/
<smoser> claudiupopa, what does list_all_modules actually do ?
<claudiupopa> Getting all the modules that it can find, while list_valid_modules will return only those which are interesting for a particular finder.
<claudiupopa> I'm working on changing the API, since I don't like it so much that implementation details leak outside (finder from pkgutil).
<smoser> :)
<smoser> but what does it do ?
<smatzek> I've been studying that code for the past 30+ minutes and was working on some comments.
<smoser> i basically dont want to stat every single possible python path
<smatzek> We intend to reuse the finders for config modules and possibly distros right?  We probably don't want to have the design be that the import of the module does some registration of the module given that.
<smoser> that will be slow
<smoser> right
<claudiupopa> It will not stat every possible path, only those starting from a given root, such as cloudinit.sources.
<smoser> ok, that wasn't clear.
<claudiupopa> smatzek: yeah, that's the point, to reuse the same infrastructure for config modules.
<claudiupopa> While distros are already loaded specifically.
<claudiupopa> smoser: check cloudinit.sources.base.DataSourceLoader, that's the place where list_all_modules is used.
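The rooted-scan behaviour claudiupopa describes (only paths under one package are examined, not every entry on sys.path) maps onto `pkgutil.iter_modules`; a sketch, demonstrated with a stdlib package since `cloudinit.sources` may not be importable here:

```python
import pkgutil


def list_all_modules(package):
    """List module names directly under one package root.

    Only directories in package.__path__ are scanned, which is why
    this stays cheap compared to walking all of sys.path.
    (Illustrative; the real loader is in the review linked above.)
    """
    return sorted(name for _, name, _ in pkgutil.iter_modules(package.__path__))


# Stand-in for cloudinit.sources: the stdlib email package.
import email

mods = list_all_modules(email)
```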
<smoser> because of 'search_paths'
<smatzek> my comment was in reply to  Odd_Bloke's comment on BaseModuleFinder's find_module method.
<smatzek> v0.7 has the ability to set the datasource list in cloud.cfg.  This is very handy in customized images in private clouds.  Could we get the ability to trim the datasource list before class initialization based on that in a future patch set?
<claudiupopa> yeah, why not.
<Odd_Bloke> smoser: It's much easier to see what the changes are if you submit a WIP code review. :p
<Odd_Bloke> smatzek: I'm not sure I follow; if the registration is performed by the module, then the plugin loading code can be even more generic (as it doesn't need to know _what_ it's importing, just where it should be looking).
<Odd_Bloke> I still don't actually have a strong opinion either way though. :p
<Odd_Bloke> Except to say that if we want {meta,user,vendor}-data to supply plugins, then we are going to need to have a way for them to register themselves anyway.
<Odd_Bloke> smoser: So if you take the webhook stuff, I can look at what the next step towards getting something bootable is.
<Odd_Bloke> Whether that is picking up your main stuff, or looking at a distro for Ubuntu.
<smoser> i'd say picking up main, Odd_Bloke .
<smoser> i will want some of your help... but i'll try to get you a review proposal
<smatzek> I see your point.  One thought is that if we let the modules do the initialization / registering, it would make it harder to use cloud.cfg to trim out datasources you don't want or need in private clouds from being inited.
<Odd_Bloke> True.
<Odd_Bloke> Yeah, in fact, sold.
<Odd_Bloke> We will need to do things differently for config though, I think.
<Odd_Bloke> (Though probably not that differently)
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add ReportingEventStack  https://review.openstack.org/209045
<smoser> claudiupopa, or Odd_Bloke if you could review ^
<smoser> that'd be good. i have somethign to stack on top of it.
<smoser> i think i addressed each of your concerns.
<smoser> fudge.
<smoser> and i just realized that having reporting/__init__.py call 'add_configuration' is a pita
<smoser> as anything that imports it gets the default config added
<Odd_Bloke> smoser: Yeah, that's not necessarily a long-term thing.
<Odd_Bloke> (As in, once we have actual configuration stuff, that should disappear)
<smoser> Odd_Bloke, well, i think i have that coming
<smoser> this is just so painful.
<smoser> just getting there (not your code)
<Odd_Bloke> smoser: A module should only get imported once, and be cached from then on.
<Odd_Bloke> smoser: Are you seeing something other than that?
<smoser> oh.
<smoser> embarrassing i didn't know this.
<smoser> so then your
<smoser>  add_configuration(DEFAULT_CONFIG)
<smoser> will only be called once
<Odd_Bloke> That _should_ only happen once.
<smoser> thank you.
<Odd_Bloke> Unless we're doing something weird (i.e. plugin loading).
<smoser> right
<Odd_Bloke> And even then, we might hit the module cache depending on how we do it.
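The caching behaviour Odd_Bloke describes is easy to demonstrate: a module body runs once per interpreter, and a second import is served from sys.modules, so a module-level `add_configuration(DEFAULT_CONFIG)` fires only once. A throwaway module name is used here for illustration:

```python
import os
import sys
import tempfile

# Write a module whose body bumps a counter at import time.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_reporting.py"), "w") as fh:
    fh.write(
        "import sys\n"
        "sys._demo_imports = getattr(sys, '_demo_imports', 0) + 1\n"
    )
sys.path.insert(0, tmpdir)

import demo_reporting  # noqa: F401  body executes here
import demo_reporting  # noqa: F401  cache hit: body does NOT run again
```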
<Odd_Bloke> Looking at #209045 now.
<smoser> https://gist.github.com/smoser/6714b6c741a6f658b41e
<smoser> hey
<smoser> harlowja, around ?
<harlowja> sup dawg
<smoser> http://paste.ubuntu.com/12010061/
<harlowja> cool
<harlowja> u been busy
<harlowja> ha
<clouduser_> hey. what is the key for setting the hostname? i see two: 'hostname' and 'local-hostname'. does anyone know where these should go, user-data or meta-data?
<harlowja> smoser feel free to use https://github.com/openstack/taskflow/blob/master/taskflow/types/tree.py if u  want :-P
<harlowja> seems like u made a tree like thing, ha
<harlowja> if u so desire
<smoser> i want to do update_configuration(DEFAULT_CONFIG, reset=True) to set instantiated_handler_registry
<smoser> but it wants to set the local variable instantiated_handler_registry
<smoser> what is the proper way to do that ?
<harlowja> global instantiated_handler_registry in 'update_configuration'
<harlowja> first line
<harlowja> 	global instantiated_handler_registry
<harlowja> *makes python know to look for global
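The pattern harlowja spells out, reduced to a minimal example (the registry contents here are made up for illustration): without the `global` declaration, the assignment inside the function would create a local name instead of rebinding the module-level one.

```python
# Module-level registry, as in cloudinit's reporting code.
instantiated_handler_registry = {"stale": True}


def update_configuration(config, reset=False):
    # Without this declaration, `instantiated_handler_registry = {}`
    # below would bind a new local variable and leave the module-level
    # registry untouched.
    global instantiated_handler_registry
    if reset:
        instantiated_handler_registry = {}
    instantiated_handler_registry.update(config)


update_configuration({"webhook": object()}, reset=True)
```

The alternative harlowja suggests next, making `update_configuration` a method on the registry class, avoids `global` entirely because `self` carries the reference.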
<smoser> i didn't know if that was generally considered acceptable
<harlowja> other option
<harlowja> make update_configuration a method on DictRegistry
<smoser> clouduser_, https://github.com/openstack/taskflow/blob/master/taskflow/types/tree.py
<harlowja> seeing that its like def update_configuration(config, reset=False):
<harlowja> that makes me wonder if it should be `def update_configuration(self, reset=False):`
<smoser> thanks i'll look at that
<harlowja> sure
<harlowja> smoser https://review.openstack.org/#/c/209661/ since i know u care, haha
<harlowja> * no haha, serious talk only
<smoser> harlowja, help my gerrit foo
<harlowja> smoser ?
<harlowja> whats up
<smoser> http://paste.ubuntu.com/12010267/
<harlowja> git-review -R ?
<harlowja> try that one instead i think
<smoser> same thing
<smoser> i have the one review at https://review.openstack.org/#/c/209045/4
<smoser> and wanted to put another one that depended on it
<openstackgerrit> Merged stackforge/cloud-init: add ReportingEventStack  https://review.openstack.org/209045
<harlowja> hmmm
<harlowja> typically what i do is say checkout the code @  https://review.openstack.org/209045
<harlowja> then add new code, commit it locally, then git-review -R
<harlowja> which creates new review with parent of the other review
<harlowja> typically that works for me :-/
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add unregister and reset to DictRegistry and use  https://review.openstack.org/209696
<smoser> got there one way or another
<smoser> https://review.openstack.org/209696
<harlowja> cools
#cloud-init 2015-08-06
<claudiupopa> smoser, Odd_Bloke: any reason why cloudinit.logging is called `logging`?
<claudiupopa> Are we expecting it to always replace the builtin logging module?
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add unregister and reset to DictRegistry and use  https://review.openstack.org/209696
<smoser> Odd_Bloke, https://review.openstack.org/#/c/209696/
<smoser> it is nonobvious to me how to test the update_configuration
<smoser> could you quickly add 2 ?
<Odd_Bloke> smoser: 2?
<smoser> i dont know why i said 2
<smoser> i guess 3
<Odd_Bloke> smoser: Oh, add some tests?
<claudiupopa> why isn't the reset separated?
<smoser> one for reset, update and update_with_null
<smoser> claudiupopa, as in you want to reset with no config
<smoser> right?
<claudiupopa> yeah.
<claudiupopa> I tend to not like boolean flags that modify something.
<claudiupopa> I mean functions with boolean flags.
<claudiupopa> Just a question, is expected from DictRegistry to be thread safe?
<Odd_Bloke> claudiupopa: No.
<Odd_Bloke> claudiupopa: But only because there was no reason to at this point, rather than from some deep opposition to thread-safe data structures. :p
<smoser> claudiupopa, i think i'd just drop the 'reset'
<smoser> and if you want reset, you just instantiated_handler_registry.reset()
<claudiupopa> smoser: yeah, that would be better imo.
<Odd_Bloke> smoser: If you still want me to add tests after that, let me know when I can pull the change to play with it. :)
<smoser> Odd_Bloke, if you could, i'd really appreciate it.
<Odd_Bloke> smoser: Cool, give me a shout when it's ready.
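The shape being discussed — dropping the `reset` boolean in favour of an explicit `registry.reset()` call — might look roughly like this (a sketch with assumed method names, not the actual change in review 209696):

```python
# Illustrative DictRegistry carrying the unregister/reset operations
# under discussion; nothing here is copied from the reviewed patch.
class DictRegistry(object):
    def __init__(self):
        self.reset()

    def reset(self):
        """Drop all registered items."""
        self._items = {}

    def register_item(self, key, item):
        if key in self._items:
            raise KeyError('%s is already registered' % key)
        self._items[key] = item

    def unregister_item(self, key):
        """Remove a single item; a no-op if the key is unknown."""
        self._items.pop(key, None)

    @property
    def registered_items(self):
        return dict(self._items)


registry = DictRegistry()
registry.register_item('cloud-config', object())
registry.unregister_item('cloud-config')
registry.reset()
```

With `reset()` as its own method, callers that want a clean slate say so explicitly instead of threading a flag through `update_configuration`.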
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: add unregister and reset to DictRegistry and use  https://review.openstack.org/209696
<smoser> Odd_Bloke, ^
<claudiupopa> by the way.  any reason why cloudinit.logging is called `logging`?
<Odd_Bloke> smoser: ^
<smoser> to be like loggin
<smoser> logging
<claudiupopa> So we'll use that instead of the builtin logging module?
<smoser> yeah. by name it is the same and the intent is it looks the same.
<claudiupopa> Because in that case, it should be made explicit in url_helper and templater.
<claudiupopa> as in from cloudinit import logging instead of a plain `import logging`.
<claudiupopa> Since absolute import is not activated in those modules.
<smoser> you're right. those should use cloudinit.logging
<smoser> at least that is the intent.
<smoser> good catch claudiupopa
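The shadowing problem claudiupopa is pointing at can be demonstrated with a throwaway package (a constructed example, not cloud-init's actual layout): under absolute imports, a bare `import logging` inside a package resolves to the stdlib module even when the package ships its own `logging.py`, so the package module must be imported explicitly.

```python
import importlib
import os
import sys
import tempfile

# Build a disposable package `pkg` that ships its own logging.py.
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, 'pkg')
os.makedirs(pkgdir)
open(os.path.join(pkgdir, '__init__.py'), 'w').close()
with open(os.path.join(pkgdir, 'logging.py'), 'w') as f:
    f.write('MARKER = "package logging"\n')
with open(os.path.join(pkgdir, 'consumer.py'), 'w') as f:
    # A bare import: under absolute imports this is the stdlib module,
    # NOT the pkg/logging.py sitting right next to it.
    f.write('import logging\n')

sys.path.insert(0, tmp)
consumer = importlib.import_module('pkg.consumer')
pkg_logging = importlib.import_module('pkg.logging')
```

On Python 2 without `from __future__ import absolute_import` the bare import would instead pick up the sibling module, which is exactly the ambiguity the `from cloudinit import logging` change removes.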
<Odd_Bloke> smoser: Going in to a meeting now; will look at those tests in ~30 minutes.
<smoser> k.
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/209520
<claudiupopa> Odd_Bloke, smoser ^ still work in progress, but I'd appreciate a review regarding the direction.
<smoser> so as you suggested, https://review.openstack.org/#/c/209520/2/cloudinit/plugin_finder.py
<smoser> should have
<smoser>  from . import logging
<smoser> right ?
<claudiupopa> Yeah, I noticed that just now.
<claudiupopa> Thanks.
<smoser> https://review.openstack.org/#/c/209520/2/cloudinit/sources/openstack/httpopenstack.py
<smoser> can we just talk here.. ?
<claudiupopa> yeah.
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Use an explicit absolute import for importing the logging module  https://review.openstack.org/210035
<openstackgerrit> Daniel Watkins proposed stackforge/cloud-init: Add unregister and reset to DictRegistry and use  https://review.openstack.org/209696
<smoser> so there, data_sources() does not know if it should do network data sources or local ?
<Odd_Bloke> smoser: I think it just needed the one test, which I've pushed up.
<claudiupopa> No, not at that point.
<claudiupopa> That was the intent of the strategies.
<claudiupopa> Which is a fancy word for implementing filters.
<claudiupopa> The idea is that if you want the network data sources only or any other kind of data source, you implement a strategy and you pass it to get_data_source.
<Odd_Bloke> Now I'm EOD'ing. :)
<smoser> Odd_Bloke, thanks
<claudiupopa> I did it in this way since it really separates the concerns.
<claudiupopa> Each data source will not be interested in how it will be chosen.
<smoser> claudiupopa, ok, i think thats probably fine, but the example strategy of 'serial' implied to me 'parallel'
<claudiupopa> yep.
<claudiupopa> At that point you can combine network + parallel.
<claudiupopa> Or any other combination you like
<claudiupopa> https://review.openstack.org/#/c/209520/2/cloudinit/sources/base.py
<claudiupopa> check valid_data_sources.
<smoser> i think that is ok. ... but i want to be clear somehow when loading the datasource to search
<smoser> that it does not have (or is not guaranteed) functional network
<smoser> ie, the network could be wrong at this point and it shouldnt consider using it
<claudiupopa> yep, I see your point. Probably the actual strategies that will be used will be specific per each step in the stages.
<smoser> claudiupopa, thats the real extent of my comments then i think
<claudiupopa> no other obvious issue?
<claudiupopa> thanks!
<claudiupopa> I'll push the tests tomorrow.
<smoser> claudiupopa, can you https://review.openstack.org/#/c/209696/ ?
<smoser> just as it is mine, i think you agree with it at this point.
<smoser> can you push yes for workflow ?
<claudiupopa> yep, looks nice.
<openstackgerrit> Merged stackforge/cloud-init: Add unregister and reset to DictRegistry and use  https://review.openstack.org/209696
#cloud-init 2015-08-07
<openstackgerrit> Merged stackforge/cloud-init: Use an explicit absolute import for importing the logging module  https://review.openstack.org/210035
<Odd_Bloke> claudiupopa: Could you workflow +1 https://review.openstack.org/#/c/202743/ ?
<Odd_Bloke> Oh, is it short a +2, actually?
<claudiupopa> Is Scott happy with it?
<Odd_Bloke> claudiupopa: I think so.
<Odd_Bloke> claudiupopa: And I think we said that I'd push forward with main stuff.
<claudiupopa> Cool.
<claudiupopa> Then I'm happy with it as it is.
<claudiupopa> +1ed
<claudiupopa> For workflow.
<claudiupopa> By the way, could you take a look again at the plugin patch?
<claudiupopa> I don't have tests, but I'd appreciate a comment regarding the direction.
<openstackgerrit> Merged stackforge/cloud-init: add cloud-init main  https://review.openstack.org/202743
<Odd_Bloke> claudiupopa: So with parallel discovery, we'd still load the code from the disk serially?
<claudiupopa> Good question. I think it depends on the iterator's flavour.
<claudiupopa> Right now the loading is serial.
<Odd_Bloke> claudiupopa: Should filtering by name be a strategy?
<claudiupopa> It could be.
<Odd_Bloke> claudiupopa: We don't actually have anywhere calling get_data_source with a list of strategies yet, right?
<claudiupopa> Yep.
<Odd_Bloke> claudiupopa: How would a FilterByNamesStrategy be created?
<claudiupopa> writing an example right now.
<Odd_Bloke> Thanks!
<claudiupopa> Something like this http://paste.openstack.org/show/412159/
<claudiupopa> Although _names should be passed somehow to the strategy.
<Odd_Bloke> Yeah, that was the bit I couldn't quite work out.
<Odd_Bloke> The strategies could be instantiated, and have a method that does the filtering?
<claudiupopa> You mean a separate method?
<claudiupopa> One for loading the data sources and another one for filtering?
<claudiupopa> Mm, the idea is to combine multiple of them to do the filtering, since trying to see if a data source is available or not is still considered a filtering operation.
<claudiupopa> I could instantiate them beforehand, in get_data_source.
<claudiupopa> And I could pass names only to the FilteringByNameStrategy.
<Odd_Bloke> So BaseSearchStrategy.__init__ wouldn't take any parameters by default, and search_data_source would become search_data_sources(<list of data sources>).
<Odd_Bloke> And you'd pass the return of that in to the next search_data_sources.
<Odd_Bloke> (Rather than in to the constructor of the next strategy, as you do now)
<claudiupopa> Oh, that could work.
<Odd_Bloke> So I think you would instantiate them in get_data_source, yeah.
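Putting that exchange together, the chaining Odd_Bloke proposes might look like this (all class and method names are guesses for illustration, not the patch's actual API): each strategy's `search_data_sources` takes an iterable of candidates and returns the survivors, so strategies compose by feeding one's output into the next.

```python
class BaseSearchStrategy(object):
    """A strategy filters an iterable of data sources."""

    def search_data_sources(self, data_sources):
        raise NotImplementedError


class FilterByNamesStrategy(BaseSearchStrategy):
    def __init__(self, names):
        self.names = set(names)

    def search_data_sources(self, data_sources):
        return [ds for ds in data_sources
                if type(ds).__name__ in self.names]


class SerialAvailabilityStrategy(BaseSearchStrategy):
    def search_data_sources(self, data_sources):
        return [ds for ds in data_sources if ds.is_available()]


def get_data_source(data_sources, strategies):
    # Instantiated strategies apply in order; the survivors of one
    # become the candidates of the next.
    for strategy in strategies:
        data_sources = strategy.search_data_sources(data_sources)
    return next(iter(data_sources), None)


# Two dummy sources stand in for real ones purely for the demo.
class ConfigDriveSource(object):
    def is_available(self):
        return True


class HttpSource(object):
    def is_available(self):
        return False


chosen = get_data_source(
    [HttpSource(), ConfigDriveSource()],
    [FilterByNamesStrategy(['ConfigDriveSource', 'HttpSource']),
     SerialAvailabilityStrategy()])
```

This keeps the separation of concerns claudiupopa describes: a data source never knows how it gets chosen, and name filtering, availability probing, or a parallel variant are all just additional strategies in the list.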
<trueneu> Hi. How can I run a cloud init script on an already installed instance? I've found that I gotta trick cloud-init into thinking this is a fresh boot, but I can't understand where I should place my cloud init file.
<Odd_Bloke> trueneu: Why do you want to run cloud-init, rather than just running a shell script etc.?
<trueneu> It's in a neat cloud config form, and it failed to execute at boot somehow, so I need to re-do it.
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/209520
<smoser> Odd_Bloke, or harlowja or claudiupopa your thoughts on my https://code.launchpad.net/~smoser/cloud-init/trunk.reporting/+merge/266578 (0.7) would be appreciated.
<Odd_Bloke> smoser: Are registry and reporting copy-paste backports from 2.0?
<Odd_Bloke> smoser: Oh, no, there's a WebHookHandler in there?
<Odd_Bloke> smoser: Still don't know why you aren't getting stuff in to 2.0 so we can do a copy-paste backport.
<Odd_Bloke> Rather than doing a copy-paste backport, a change, and then a forward-port.
<smoser> copy & paste + imports + http://bazaar.launchpad.net/~smoser/cloud-init/trunk.reporting/revision/1155
<smoser> and the webhookhandler.
<smoser> Odd_Bloke, because of the timeline is all.
<smoser> and now that i think about it i think that code in that one doesnt work.
<smoser> the goal of the change there is to re-initialize if different.
<smoser> but i think the check there is comparing a dict to a class.
<openstackgerrit> Daniel Watkins proposed stackforge/cloud-init: Fix running cloud-init with no arguments on Python 3.  https://review.openstack.org/210381
<Odd_Bloke> smoser: claudiupopa: Minor fix to main. ^
<claudiupopa> Why doesn't parsed have the func attribute?
<smoser> because it didn't have a subcommand.
<Odd_Bloke> claudiupopa: It's a bug in Python 3, I think.
<smoser> maybe you can set_defaults on func to get it to call help ?
<Odd_Bloke> smoser: That works on Python 3, but not on Python 2.
<Odd_Bloke> smoser: claudiupopa: So that change gives us consistent behaviour on Python 2 and 3.
<Odd_Bloke> smoser: claudiupopa: Getting Python 2 to do something different will mean pre-empting the parser, because just parsing the arguments is what throws up the error.
<claudiupopa> I see.
<claudiupopa> Then it seems fine for me.
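The inconsistency being fixed can be reproduced with a bare argparse setup (a minimal sketch, not cloud-init's actual `cloudinit.shell`): on Python 3, parsing an empty argv succeeds but leaves no `func` attribute to dispatch on, so the entry point has to handle that case itself.

```python
import argparse

parser = argparse.ArgumentParser(prog='cloud-init')
subparsers = parser.add_subparsers(title='subcommands')
search = subparsers.add_parser('search')
search.set_defaults(func=lambda ns: 'searching')

# With no subcommand, Python 3 parses successfully but sets no `func`;
# guard with getattr and fall back to printing usage, which matches
# Python 2's "error out with usage" behaviour closely enough.
parsed = parser.parse_args([])
func = getattr(parsed, 'func', None)
result = func(parsed) if func else parser.format_usage()

# With a subcommand, set_defaults gives us the handler to dispatch to.
parsed2 = parser.parse_args(['search'])
result2 = parsed2.func(parsed2)
```

On Python 2, `parse_args([])` with subparsers exits with an error before the dispatch is ever reached, which is why the guard (rather than smoser's `set_defaults` suggestion alone) gives consistent behaviour on both.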
<Odd_Bloke> smoser: We have several different stages defined in cloudinit.shell, but I thought we were going to be running cloud-init as an agent (which would, presumably, only involve a single call to cloud-init).
<Odd_Bloke> smoser: claudiupopa: harlowja: I'm trying to work out how to name things; I'm going to work on persisting a discovered data source to disk (so that future runs don't have to perform discovery).  What should I name the data that cloud-init has derived from its environment?
<Odd_Bloke> It's not metadata, vendor-data or user-data; those are all inputs.
<Odd_Bloke> Maybe 'configuration', but that would seem to be more appropriate as the stuff in /etc that defines how cloud-init will run on an instance.
<Odd_Bloke> Any thoughts?
<claudiupopa> persisting data source to disk, as in caching?
<Odd_Bloke> claudiupopa: So one of the stub commands in cloudinit.shell is 'search', which will 'search available data sources'.
<smoser> ok. Odd_Bloke sorry, didnt respond before
<smoser> so the stages... there are still stages that have to run in boot
<smoser> there might be a daemon that starts very early, and the stages communicate with that daemon. that is a possible implementation.
<smoser> also possible is that a daemon just starts later.
<smoser> but either way, as far as my vision can see, we'll have upstart or sysvinit or systemd jobs that run at points in boot
<smoser> that is what those stages are for.
<Odd_Bloke> I think making it possible to not run a daemon would be good; I can imagine people who are happy with cloud-init as-is not wanting an extra process running.
<smoser> wrt storing data, i think 'cache' sounds reasonable
<smoser> you'll never have to run the daemon
<smoser> even if it ran in boot, that'd just be an implementation detail
<smoser> and then it'd shut itself down.
<smoser> but we can worry about that later.
<Odd_Bloke> I'm not sure it is, strictly speaking, a cache though; some data sources will only be able to fetch information a single time.
<Odd_Bloke> (For example, CloudStack passwords can only be read once)
<claudiupopa> so metadata, userdata and vendordata all represent the same thing, input data that's used to drive cloud-init.
<claudiupopa> How about drive data?
<claudiupopa> Or execution data.
<Odd_Bloke> Actually, this is basically what would go in /var/lib/cloud/instance ATM; how about 'instance data'?
<claudiupopa> Yep, that sounds good as well.
<Odd_Bloke> smoser: Thanks for the info on the commands. :)
<Odd_Bloke> claudiupopa: smoser: So, next question: what do we want the data to look like when serialised on-disk?
<Odd_Bloke> claudiupopa: smoser: I'm thinking we could persist a dictionary as JSON, but I don't know if we have lessons from 0.7.x that suggest that's a bad idea.
<claudiupopa> why should it be a bad idea? I was thinking of JSON as well.
<Odd_Bloke> claudiupopa: Well, that's not how we do it in 0.7.x; I wasn't sure if that was intentional or not. :p
<smatzek> JSON would be nice if there aren't gotchas from 0.7.x that Odd_Bloke refers to.
<claudiupopa> by the way, is the caching persistent per cloud-init run or is it always there?
<Odd_Bloke> claudiupopa: I would expect it to always be there.
<claudiupopa> because some portions of data shouldn't stay there for long, such as passwords.
<Odd_Bloke> Potentially the consumers of that data should be responsible for clearing it out?
<claudiupopa> before it's serialized on disk?
<Odd_Bloke> It would be good to be able to separate the "fetch all the data we need" step from the "use the data" step.
<Odd_Bloke> No, I think it would be serialised to disk.
<Odd_Bloke> And then whatever handles passwords removes passwords from the serialised data.
<Odd_Bloke> (Side note: If someone can read the password from the disk, they're probably already in a position to do whatever they want anyway. :p)
<claudiupopa> that doesn't seem very good, since it's not separating the concerns properly.
<claudiupopa> Yeah, that's also true.
<claudiupopa> But anyway it's harder to read it from memory rather than from disk. ;-)
<Odd_Bloke> I'm thinking that special-casing passwords isn't particularly useful.
<Odd_Bloke> claudiupopa: It's easier to just set it to whatever you want than read it from disk. ;)
<Odd_Bloke> Because there could be other private data that shouldn't be persisted long-term.
<claudiupopa> Maybe having a way to specify that a piece of data should never be serialized?
<claudiupopa> @dont_serialize_this
<Odd_Bloke> claudiupopa: That does mean (e.g.) setting passwords in the same process as fetches the password from wherever the password is fetched from
<smoser> agree with most of what is above.
<claudiupopa> in order to avoid ipc? If the agent is not involved, I  would expect it to happen in the same process nevertheless.
<smoser> json i think is fine with me. i used pickle in cloud-init 0.7 largely because it is simpler (i pickled the class).
<Odd_Bloke> What if we just deprecate passwords in cloud-init 2.0 (and Ubuntu 16.04 cloud images)? :p
<smoser> i think we kind of *have* done that
<claudiupopa> well, on windows they're still somehow required.
<Odd_Bloke> You never know, perhaps 2016 will finally be the Year of Windows on the Cloud. ;)
<Odd_Bloke> smoser: What are your thoughts on persisting passwords to disk?
<Odd_Bloke> Hmm, could we hash the passwords ourselves before putting them on disk?
<Odd_Bloke> (This is, obviously, special-casing passwords like I said I didn't want to do :p)
<claudiupopa> how about specific exemption?
<claudiupopa> Having a decorator that marks a particular piece of data as non serializable.
<Odd_Bloke> Right, but that then means that we have to use that data before this particular process dies.
<smoser> well, you may need to persist them for some time
<smoser> right.
<smoser> yeah.
<smoser> we can do something. like hashing, i dont think its unreasonable.
<smoser> if the perms on the data are correct, its sane
<smoser> and then after we consume it we can remove that data.
<smoser> it obviously did get written... maybe we'd need to shred
<claudiupopa> The same thing happens with hashing, the password will not be available anymore after deserialization.
<claudiupopa> as in we'll have a hash that can't be used.
<Odd_Bloke> Why couldn't it be used?
<Odd_Bloke> Ah, I'm guessing you can't use the hash of a password to set a password on Windows?
<claudiupopa> Nope. ;-)
<Odd_Bloke> *buys a cheap Windows laptop on eBay, so he can throw it out of the window* :p
<Odd_Bloke> OK, I think I can implement the first pass as 'serialise all the things' and then we can work out the nuance later.
<smatzek> we still have operators that use password and may want it set.  I'm not defending the practice but it is still done.
<smatzek> do we know for sure we'll have separate processes serializing the data vs those that consume it?
<Odd_Bloke> smatzek: Currently there are two different cloud-init sub-commands defined which would do each bit.
<smatzek> as stated above I think there may be other cases of private or sensitive data that we may not want sitting around on disk, so the sensitive tag idea might be worth pursuing.
<smoser> Odd_Bloke, this does go towards a larger thread.
<smoser> with the goal of cloud-init query
<smoser> whether that hits a daemon or hits a cache, we want user to be able to get some bits of data
<smoser> and some bits to be privildged access only
<smatzek> another item that may be sensitive is the chef module's validation_key which is a private RSA key.  That might be good to delete/shred once the chef module is done running.
<Odd_Bloke> So my proposal is (1) we persist all the data to disk, and then (2) individual modules are responsible for shredding whatever data they consider sensitive (and no longer needed).
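One way the exemption idea floated earlier (`@dont_serialize_this`) could combine with the JSON persistence plan — purely a sketch; the field names and the `DONT_SERIALIZE` mechanism are invented for illustration:

```python
import json


class InstanceData(object):
    # Fields that must never reach the on-disk copy; sensitive values
    # stay in memory only, for consumers running in the same process.
    DONT_SERIALIZE = {'admin_password'}

    def __init__(self, **fields):
        self.fields = fields

    def serialize(self):
        persistable = {k: v for k, v in self.fields.items()
                       if k not in self.DONT_SERIALIZE}
        return json.dumps(persistable, sort_keys=True)


data = InstanceData(hostname='vm-1', local_ipv4='10.0.0.5',
                    admin_password='s3cret')
on_disk = data.serialize()
```

This is the trade-off Odd_Bloke raises: whatever is exempted from serialisation must be consumed before the fetching process exits, which pushes toward either one combined fetch-and-use step or the module-level shredding he proposes.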
<Odd_Bloke> Actually, we could have data sources provide a way of fetching passwords.
<Odd_Bloke> And then the modules that care about passwords use that.
<Odd_Bloke> But that doesn't solve the case where the password(s) are in user-data.
<claudiupopa> why are they two steps?
<claudiupopa> data retrieval and persistence, and execution?
<claudiupopa> I think I'm missing context here.
<Odd_Bloke> I don't see why they would be one step (except for the issue we are discussing now). :p
<Odd_Bloke> I'm taking my lead from smoser having stubbed out 'search' and 'config' as separate subcommands.
<Odd_Bloke> 'search' need not necessarily encompass actual fetching of the data, I guess.
<Odd_Bloke> Which I have been assuming.
<Odd_Bloke> I guess cloud-provided data can also change in the meantime.
<Odd_Bloke> So maybe we shouldn't be persisting much of this stuff at all...
<harlowja> hmmmm, Odd_Bloke i suck at naming things :-P
<Odd_Bloke> :D
<harlowja> put stuff into a little sqlite.db file , profit?
<harlowja> persistence.db
<harlowja> there u go
<harlowja> lol
 * harlowja is brilliant
<harlowja> honest question, why not just store it in some /var/cloud/persistence.db or something
<harlowja> might be nice to have a little sqlite thing
<harlowja> i know i know the filesystem is currently used for this
<Toger> Hello, I am trying to use cloud-init on centos7, v0.7.5.  from cloud-init-0.7.5-10.el7.centos.1.x86_64.  I am using it to install chef, however the AMI I have is pre-hardened and has noexec set on /tmp.  The chef init script tries to download and run the installation out of /tmp which fails. The chef script honors tmpdir, so if I can reset the tmpdir environmental variable prior to the chef module then it'll work. Is there a way to do that in
<Toger> cloud-init?
<smatzek> chef runs during cloud_config_modules.  Looking at that module list I don't see any module where you could run arbitrary commands or scripts before it runs.  You may be able to use bootcmd, which runs in cloud_init_modules, to change the system env so that the process running cloud_config_modules would pick it up, but I'm not sure if that would work.
<Toger> bootcmds would run in a subshell though, wouldn't it?
<Toger> so the env change would be lost
<Toger> I was hoping there was a way in cloudinit natively to set environmental variables for commands
<Toger> Or, changing       util.subp([tmpf], capture=False)      to       util.subp(['sh', tmpf], capture=False)
<Toger> For the chef module, I'd like node_name to be something like 'prefix-$INSTANCEID' as opposed to a static prefix
<Toger> and not just instance-id
<Toger> is there any way to do that?
<Toger> in other words, when using this in a autoscale group I can't use one single node-name w/chef, but its not very friendly to use just i-a234tg names
<Toger> so for each autoscale group I'd put something like groupname-instanceid
<Toger> ideally
<Toger> the chef mechanism also needs a way to lay down the encrypted data bag key
<Toger> mm perhaps with write_files
<Toger> but it needs a way to at least specify the location
<Toger> and chef only seems to run if its installed via gems?
#cloud-init 2015-08-08
<krisi> hi!
<krisi> is there a way to preseed deb packages with cloud-init?
<krisi> so their installation is automatic
#cloud-init 2015-08-09
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/209520
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/209520
#cloud-init 2016-08-08
<pava2> Hello! Is there someone who can spare 5 minutes to help me with a cloud-init problem ?
<smoser> pava2, ask
<pava2> smoser: I want to config growpart to expand /dev/vdb1 partition
<smoser> you can just arrange for it to do that. via runcmd or boothook.
<smoser> if you know its going to be /dev/vdb1
<pava2> smoser: I know I just saw the .cfg version
<pava2> smoser: I was trying that but it's ignored for some reason
<smoser> pava2, i'm sorry, i dont follow
<smoser> #cloud-config
<smoser> bootcmd:
<smoser>  - [growpart, /dev/vdb, 1]
<pava2> smoser: thank you
<smoser> probably quote the 1
<smoser> maybe not needed.
<pava2> smoser: ok, thanks.
<harlowja> ok, @smoser any idea whats up with this https://gist.github.com/harlowja/4419d6dfa3c16cd2795d55ac11a89cb8 :-P
<harlowja> seems to be borking now ;)
<harlowja> did something just merge
<smoser> harlowja, works here. i think
<harlowja> hmmm
<harlowja> is that actually opening a google url
<harlowja> hmmm
<smoser> no. doesnt work here.
<smoser> wth
<harlowja> ya, jenkins started puking on this one
<smoser> something in tox must have changed
<smoser> in pip
<harlowja> hmmm
<harlowja> durn
<harlowja> ok, i'll see about that, it's killing my cloud-init pipeline of recent
<smoser> can you look ?
<smoser> i know that i ran tox on friday
<smoser> and functional
<harlowja> k
<smoser> harlowja, did you dig at all ?
<smoser> i have a tox on one system that works. need to compare pip list in working versus current
<smoser> harlowja,
<smoser> -requests (2.10.0)
<smoser> +requests (2.11.0)
<smoser> hooray!
<harlowja> smoser haven't yet
<harlowja> it was requests?
<harlowja> weird
<harlowja> did mocking bork though, or is some mock missing
<smoser> i dont know. on one system it works for me, on one it doesnt
<smoser> http://paste.ubuntu.com/22735242/
<harlowja> k, i'll mess around
<smoser> harlowja, just confirmed, this does "fix" http://paste.ubuntu.com/22738487/
<harlowja> hmmmm
<harlowja> kk
<harlowja> wonder what changed
<harlowja> maybe httpretty needs a change
<harlowja> though 'KeyError: 'public-keys'' seems like a valid error, just need to dig into it
<harlowja> ya, smoser the whole httpretty crap isn't getting called
<harlowja> which seems odd
<smoser> does it with the older ?
<smoser> i agree, i'm kind of confused on how it was working.
<harlowja> ya
<smoser> not sure what to do. i planned on releasing something as 0.7.7 tomorrow
<harlowja> https://gist.github.com/harlowja/363ac5a228a79abca0c7c90ab7a3dff3 if u want to see
<harlowja> :(
<harlowja> that whole request callback never gets called with the newer requests
<smoser> harlowja, what do you think of this:
<smoser>  http://paste.ubuntu.com/22741578/
<harlowja> seems ok with me
<smoser> harlowja, so i was / am expecting to make 0.7.7 release like tomorrow with what is in trunk.
<smoser> this tox not working is a pita.
<harlowja> ya, pretty sure its cause of some urllib3 bundling change
 * smoser thinks we'll ultimately end up with tox envs of 'xenial', 'trusty', centos7...
<smoser> and 'pip-tip' or something
<harlowja> cools
<smoser> harlowja, do you have a handle on this ? or should i just put == 2.1.0
<smoser> or 2.10.0 rather
<harlowja> looking into it, unsure exactly where it got broke
<harlowja> trying to find out :-p
<harlowja> but blocking it will work until then
<smoser> k. i will just do that for now.
<harlowja> k
<harlowja> ok i think https://github.com/kennethreitz/requests/commit/bd9e8f2271 did it
<harlowja> at least before that it was working (minus a key error i think we need to handle)
<harlowja> so thats at least part of the puzzle
<harlowja> let me see if i can fix that easily
<smoser> well, if you get a fix and its proper, please let me know.
<smoser> if not i'll fix with 2.10.0
<harlowja> k, give me a few :-P
<harlowja> need more time captain
<harlowja> lol
<harlowja> ok smoser i got it
<harlowja> https://gist.github.com/harlowja/428fd8efb769c1efa57597873bc4b7d2
<harlowja> smoser ha
<harlowja> thats it :-P
<harlowja> the new outgoing header validation not liking bools
<harlowja> https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+merge/302350
<harlowja> don't use non-string in headers
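What changed: requests 2.11 started validating outgoing header values and rejects non-strings, and cloud-init was passing a boolean in a header. The fix harlowja describes boils down to stringifying header values before the request is made; a minimal sketch of that sanitising step (the header names here are invented, and this is not the actual merge-proposal diff):

```python
def sanitize_headers(headers):
    # requests >= 2.11 rejects non-string header values, so coerce
    # booleans, ints, etc. to str before handing the dict to requests.
    return {key: value if isinstance(value, str) else str(value)
            for key, value in headers.items()}


clean = sanitize_headers({'X-Retries': 3, 'X-Insecure': False,
                          'Accept': 'application/json'})
```

Pinning `requests == 2.10.0` (smoser's fallback) masks the same symptom without fixing the underlying non-string header.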
<smoser> harlowja, works with 2.10.0 too
<smoser> ?
<smoser> wow.
<smoser> thats nice. thank you.
<harlowja> yaya
<harlowja> np
<harlowja> np
<harlowja> here to help
<harlowja> will unbork my jenkins, ha
<smoser> harlowja, can you rebase ?
<smoser> and push
<harlowja> sure
<smoser> you're one commit behind
<harlowja> k, updated
<harlowja> should now be in sync
<smoser> harlowja, pushed. dankeshan
<harlowja> np
<harlowja> dankemeinbubble
<harlowja> dankemoser
<harlowja> lol
<harlowja> (i made those up)
<harlowja> lol
<harlowja> ok, time to see about packaging being built now (the part of my pipeline thats missing)
#cloud-init 2016-08-09
<harlowja> so one thing i'm seeing smoser on cent6
<harlowja> https://gist.github.com/harlowja/9ee63a2e66f62c91128c31d18fad74ea
<harlowja> seems like it doesn't know about tar.gz, lol
<harlowja> https://gist.github.com/harlowja/bf96bb96cd1d59aeeff5ae3e3df99859 is from cent7
<harlowja> so perhaps thats still missing some things
 * harlowja thought one of larsks commits fixed that
<harlowja> ok, fixed that, had to explicitly install cheetah
<harlowja> gotta get that spec template to jinja
<smoser> hm.. harlowja  did i miss something.
<smoser> oh. ok.
<smoser> we can for sure fix that.
* smoser changed the topic of #cloud-init to: cloud-init 0.7.7 due today . reviews: https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+ref/master/+activereviews
* smoser changed the topic of #cloud-init to: cloud-init 0.7.7 due 20160809 . reviews: https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+ref/master/+activereviews
<harlowja> smoser cool, i'll get it up and fixed
<harlowja> so that cent6 builds
<harlowja> and maybe so that without cheetah brpm dies, lol
<rharper> smoser: updated and rebased add-ntp
<harlowja> smoser have u seen
<harlowja> $ ./tools/make-tarball  HEAD
<harlowja> fatal: No annotated tags can describe 'b56d7a191fc695be364430f8428a17591c523403'.
<harlowja> However, there were unannotated tags: try --tags.
<harlowja> is that script supposed to work?
<harlowja> $ ./tools/make-tarball  0.7.6
<harlowja> fatal: No annotated tags can describe '797de394e5395f39b7f17403999e25cbe7f7a126'.
<harlowja> However, there were unannotated tags: try --tags.
<harlowja> :-/
<harlowja> (thats on git version 1.7.1 in cent6)
<harlowja> same happens on mac
<smoser> harlowja, you have to get the tags
<smoser> git pull --tags ?
<smoser> errr. fetch
<smoser> there are annotated tags though.
<harlowja> ah
<harlowja> ok
<harlowja> that would make sense
<harlowja> error was useless :-P
<smoser> harlowja, i'm going to do release of 0.7.7 today
<smoser> do you think anything there is necessary from your perspective ?
<smoser> that is not in now
<harlowja> ummmmm
<harlowja> ya
<harlowja> smoser  https://gist.github.com/harlowja/d8797ed6c8db9e119020c64a160118f4
<harlowja> the git on cent6 has tar mode, lol
<harlowja> just not tar.gz mode :-/
<harlowja> want to pull that in, it should be harmless
<harlowja> cause git on those systems is stupid, ha
<smoser> yeah.
<harlowja> though @smoser the following is still weird, ha
<harlowja> $ ./tools/make-tarball
<harlowja> fatal: Not a valid object name make-tarball
<harlowja> is that supposed to happen :-P
<harlowja> argument parsing fail?
<harlowja> (when no args)
<harlowja> $ ./tools/make-tarball  HEAD
<harlowja> fatal: Not a valid object name make-tarball
<harlowja> maybe its just a mac thing, ha
<smoser> harlowja,
<smoser> $ ./tools/make-tarball HEAD
<smoser> cloud-init-0.7.6+1027.gd0b2863.tar.gz
<smoser> $ ./tools/make-tarball
<smoser> cloud-init-0.7.6+1027.gd0b2863.tar.gz
<harlowja> hmmm, ya, must be a mac thing
<smoser> can you run that with sh -x
<smoser> and pastebinit ?
<harlowja> k
<harlowja> https://gist.github.com/harlowja/5704eb5dff529416240577aff3abb5ab
<smoser> harlowja, that is really weird
<smoser> can you just add this at top:
<harlowja> # Don't use this on a mac
<harlowja> lol
<smoser> i=0; for arg in "$0" "$@"; do echo "$i: '$arg'"; i=$((i+1)); done
<harlowja> @smoser https://gist.github.com/harlowja/cf39b846f72cc5ff08b78a7783ffd865
<smoser> harlowja, getopt is busted i think on your platform
<harlowja> damn apple, lol
<smoser> harlowja, what does this show
<smoser> getopt --name "arg0" --options="h" --long="help," -- 1 2
<harlowja> $ getopt --name "arg0" --options="h" --long="help," -- 1 2
<harlowja>  -- arg0 --options=h --long=help, -- 1 2
<smoser> wth
<smoser> $ getopt --name "arg0" --options="h" --long="help," -- 1 2
<smoser>  -- '1' '2'
<harlowja> xlol
<harlowja> macs
<harlowja> maybe this is a bsd thing, ha
<smoser> what gives you your getopt?
<smoser> $ dpkg -S `which getopt`
<smoser> util-linux: /usr/bin/getopt
<harlowja> probably whatever mac built
<harlowja> lol
<harlowja> mr.steve built
<harlowja> ha
<harlowja> which i'm going to guess came from the bsd getopt from sometime ago?
<smoser> i really dont know what to do there.
<harlowja> its ok
<harlowja> just won't run that on a mac :-P
<smoser> harlowja, :-(.
<smoser> i will pull your tarball creation thing tonight and release
<smoser> and then we can get other things, possibly fixing this in some other way.
<smoser> maybe dropping sh there and using python
<harlowja> fine with me
#cloud-init 2016-08-10
* smoser changed the topic of #cloud-init to: cloud-init 0.7.7 released 2016-08-09. 0.7.8 open. reviews: https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+ref/master/+activereviews
<smoser> harlowja, larsks i just uploaded a 0.7.7 and pushed a signed tag.
<smoser> https://launchpad.net/cloud-init/trunk/0.7.7
<cpaelzer> smoser: does that check_version have any good use-case other than blocking one from just running make check to get everything while developing :-)
<cpaelzer> well, the check was always there, but now if you take cloud-init and make any change it seems to create a new version (based on the git hash) which then trips check_version
<smoser> cpaelzer, check_version should work
<smoser> its there to stop me from releasing something without the change to the cloudinit/version
<cpaelzer> smoser: http://paste.ubuntu.com/22917489/
<cpaelzer> it fails as soon as any commit is made
<cpaelzer> which is probably ok to block you from releasing
<cpaelzer> but stopping "make check" from checking anything else as it does an early exit on that
<smoser> cpaelzer, you need to pull tags
<smoser> git pull --tags
<smoser> i'm open to suggestions as that bit harlowja also
<cpaelzer> smoser: tags don't unblock me (tells me already up to date and still fails the version check)
<cpaelzer> I need to read what the check is actually doing
<cpaelzer> smoser: for the start I'd suggest putting it at the end of the line in the check: target
<cpaelzer> smoser: and then probably only calling it for release-checks or so
<cpaelzer> smoser: where do you refer to the check when building a release - in the d/rules?
<cpaelzer> ah yeah found the packages/debian/rules.in
<cpaelzer> smoser: how about this
<cpaelzer> create a release-check target
<cpaelzer> which holds: check check_version
<cpaelzer> and remove check_version from the "check" target
<cpaelzer> call release-check from the packages/debian/rules.in
<cpaelzer> smoser: if that would be ok for you I can quickly send an MP
<cpaelzer> what do you think?
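A hypothetical sketch of the check being discussed: a release should only ship when the version recorded in cloudinit/version matches what `git describe` reports for the tree. The function and strings here are illustrative, not cloud-init's actual implementation:

```python
# Illustrative release-only version check. On the tagged commit,
# `git describe` prints the bare tag ("0.7.7"); one commit later it prints
# something like "0.7.7-1-g<hash>", which should fail a release check but
# (per the discussion above) not an ordinary `make check`.
def versions_match(code_version, git_describe):
    return git_describe == code_version
```

Splitting this into its own `release-check` target, as proposed, means developers with local commits keep a working `make check` while the release path still refuses a version mismatch.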
<smoser> cpaelzer, you're right. its busted.
<cpaelzer> if LP wouldn't have had network issues you'd already have an MP
<cpaelzer> I'll retry later
<smoser> cpaelzer, http://paste.ubuntu.com/22922872/
<smoser> when i changed the format, i missed fixing that.
<cpaelzer> ok, if you are on it I consider it done
<cpaelzer> thanks
<harlowja> smoser cools
<harlowja> i'm wondering if people would be interested in knowing what godaddy is doing with jenkins (via myself, ha) for this
<harlowja> https://gist.github.com/harlowja/748ddc47dd327ba25de6f60d77c5c5e0 is the http://docs.openstack.org/infra/jenkins-job-builder/ stuff for my little jenkins (currently)
<smoser> cpaelzer, i pushed fix for make check
<smoser> harlowja, thanks.
<harlowja> i'm thinking though that i might try to get out the pkg_resources entrypoints stuff soon though
<harlowja> cause i really don't like our injecting a module crap i am doing, lol
<harlowja> it makes version numbering all weird also :-P
<smoser> what makes version numbering weird?
<smoser> do you not like my git-describe versions ?
<harlowja> nah, just when i have to copy in a 'patch' it sort of distorts the real version of cloud-init
<harlowja> vs just having the module loading look outside for modules, and then i can have a cloud-init-godaddy-addons package or something
<harlowja> and can manage that myself and ...
<smoser> rangerpbzzzz, around ?
<smoser> rangerpbzzzz, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/302604
#cloud-init 2016-08-11
<harlowja> smoser larsks https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+merge/302609 this should be the start of using entrypoints for (at least for now) config modules
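The entry-point loading being proposed might be sketched like this: config modules are discovered from any installed package that advertises them, instead of being injected into cloudinit's module namespace. The group name here is hypothetical:

```python
# Illustrative pkg_resources entry-point discovery. A third-party package
# (say, a hypothetical cloud-init-godaddy-addons) could register extra
# config modules under the agreed group name and have them picked up
# without patching cloud-init itself.
import pkg_resources

def load_config_modules(group="cloudinit.config.modules"):
    modules = {}
    for ep in pkg_resources.iter_entry_points(group=group):
        modules[ep.name] = ep.load()  # imports the advertised object
    return modules
```

An unknown group simply yields no entry points, so the loader degrades to an empty mapping rather than erroring.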
<smoser> harlowja, will look tomorrow
<harlowja> cools
<smoser> you have some comments i left on some of your other MPs
<harlowja> oh
<harlowja> hmmm
<smoser> https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+ref/master/+activereviews
<harlowja> kk, will get to those
<boolman> how do I set the timezone correctly ? I use timezone: Europe/Stockholm, it sure displays the correct time when running 'date'. but our application still logs in the wrong format.
<boolman> running the command 'timedatectl', is showing a weird timezone: "Timezone: # Created by cloud-init v. 0.7.5 on Thu, 28 Jul 2016 11:59:14 +0000 (CEST, +0200)"
<boolman> so, either I have to recreate the symlink for /etc/localtime or run 'timedatectl set-timezone Europe/Stockholm'. then it works, but it's a workaround
<boolman> seems like this bug: https://lists.launchpad.net/yahoo-eng-team/msg18386.html
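The manual fix described above, sketched as code: write only the zone name to /etc/timezone and repoint the /etc/localtime symlink. The bug appears to be a "# Created by cloud-init ..." comment landing in /etc/timezone, whose first line tools then display as the timezone. Paths are parameterized purely so the sketch is testable; this is not cloud-init's actual cc_timezone code:

```python
# Illustrative timezone fix: no comment header in /etc/timezone, plus a
# fresh /etc/localtime symlink into the zoneinfo database.
import os

def set_timezone(zone, etc_dir="/etc", zoneinfo="/usr/share/zoneinfo"):
    with open(os.path.join(etc_dir, "timezone"), "w") as f:
        f.write(zone + "\n")  # just the zone name, nothing else
    localtime = os.path.join(etc_dir, "localtime")
    if os.path.lexists(localtime):
        os.unlink(localtime)
    os.symlink(os.path.join(zoneinfo, zone), localtime)
```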
<mgagne> smoser: so I tried fixing the bonding support and there is a lot of problems and challenges
<mgagne> smoser: I updated the bug report with latest details
<mgagne> https://bugs.launchpad.net/cloud-init/+bug/1605749
<smoser> mgagne, yeah, i'm aware theres an issue there.
<smoser>  i pulled it and looked.
<smoser> we need to look up the name of the interface
<mgagne> so far, I'm stuck at 3.2)
<smoser> mgagne, i'll try to take a look today.
<smoser> harlowja, https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+merge/301728
<smoser> is failing tox
<harlowja> kk
<smoser> harlowja, also rebase that thing.
<harlowja> k
<smoser> doesn't even have the fix you put in for newer httpretty
<smoser> later.
<harlowja> doesn't it rebase when merged? automagically?
#cloud-init 2016-08-12
<rangerpb> smoser, looks like 0.7.7 is out now right? Care to hit the button to gimmie some merge love?
<smoser> rangerpb, yeah, i pinged you more than once...
<rangerpb> oh?
<rangerpb> oh crap, we had a power outage here and I lost my irc proxy for a while
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/302604
<rangerpb> ok, i will test this afternoon
<smoser> you can diff it against the state of your last bzr. its not much different.
#cloud-init 2017-08-07
<ys__> Hi I am running openstack with cloudinit pre-installed in all images. Just wondering how cloudinit recognize whether a vm is a new vm or it is a old vm if vm is rebooted?
<rharper> ys__: hi, cloud-init uses an instance-id, provided by OpenStack's metadata service; this is stored (along with other per-instance variables) in /var/lib/cloud/instances/<instance-id>. if you boot the same image as a new instance (say you captured the disk and booted a new one) OpenStack assigns a new instance-id, and cloud-init knows that it hasn't run as the new instance-id and does its init as if for the first time
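That first-boot decision can be sketched minimally: state for each instance lives under /var/lib/cloud/instances/<instance-id>, so a missing directory for the reported id means a new instance. This is a simplification; real cloud-init also consults a cached, pickled datasource object:

```python
# Illustrative first-boot check based on the per-instance state directory.
import os

def is_new_instance(instance_id, cloud_dir="/var/lib/cloud"):
    return not os.path.isdir(os.path.join(cloud_dir, "instances", instance_id))
```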
<ys__> rharper: Hi thanks. What I actually wants to do is that, I am running openstack and boot vm from cinder volume. After I terminate the vm, I can keep the root volume and boot a new instance from this volume. However, I don't want to reset the password for that instance since the password has been configured previously. Is there a way I can do this?
<rharper> ys__: I'm not sure how we'd do that; but let me look; in general, openstack is going to give you a new instance-id when you boot a new instance
<ys__> rharper: Thanks!
<blackboxsw> ok couple minutes folks and we'll be starting the cloud-init meeting
<blackboxsw> smoser: is out this week so I'll chair. Just digging up the meeting details now
<blackboxsw> #startmeeting
<blackboxsw> hi all, Ok here we go cloud-init status meeting starts now we'll go through recent changes, inprogress work and office hours for open discussions
<blackboxsw> #topic recent changes
<blackboxsw> here is a list of content that has landed on tip  over the last 2 weeks
<dpb1> o/
<blackboxsw> - We no longer run locale-gen if the requested system locale is the default, saving time on instances which are already configured with the appropriate locale
<blackboxsw> - small SRU verification of #1690430
<blackboxsw> - Removed Yakkety tests due to series end of life
<blackboxsw> - cc_ntp timesyncd support (when ntp is missing from the environment): Ubuntu Core or any image that doesn't have a package manager  bug#1686485
<blackboxsw> - Fix /etc/resolv.conf comment added on each reboot (LP: #1701420)
<blackboxsw> - Fix integration test building local tree (avoid symlink chasing of sockets)
<ubot5> Launchpad bug 1701420 in cloud-init "Created by cloud-init comment being added on every reboot to /etc/resolv.conf" [Low,Confirmed] https://launchpad.net/bugs/1701420
<blackboxsw> rharper: dpb1 can you think of anything else I've missed here?
<blackboxsw> if not, we can head to InProgress work
<dpb1> nothing here
<blackboxsw> ok yeah looks like smoser also landed a quick one related to     archlinux: Fix bug with empty dns, do not render 'lo' devices.
<blackboxsw> ok next topic
<blackboxsw> #meetingtopic Ongoing (In-progress) Work
<blackboxsw> As smoser mentioned last meeting we were trying to get AWS IPv6 support in shape to land. 2/3 of that work is up; DataSourceAWSLocal will perform an initial dhcp4 discovery on a nic and query the meta-data source to find any ipv6 support designated for the nics on the instance.
<blackboxsw> we will likely land the 3rd branch (querying the ipv6 from ec2 metadata and configuring all nics accordingly).
<blackboxsw> this change required the datasource to know about a newer metadata version, 2016-08-02 I think, instead of the current default which is 2009-04-04.
<dpb1> newer "metadata format" ?
<ajorg> this still relies on DHCPv6 to configure the address, right?
<blackboxsw> correct dpb1 , the 2016-08-02 version surfaces ipv6 info we care about consuming.
<blackboxsw> ajorg: the branch you looked over does the initial dhcpv4 on the fallback nic(eth0) then hits the 2016-08-02 metadata if it exists.
<blackboxsw> ajorg: the followup (work in progress) branch will check ipv6 configuration defined in the metadata and then properly configure all devices which need dhcpv6  because the metadata service itself doesn't quite give us enough information to fully (statically) configure ipv6
<ilianaw> is it just missing the gateway?
<blackboxsw> ilianaw: I believe it was minimally missing gw. let me see what else (....digging up the ec2 metadata description page)
<ajorg> okay, that's correct, afaik. configuring it statically probably isn't the right choice anyway.
<blackboxsw> ajorg: ok thx. Yeah so this dhcp4 discovery we do initially is only required because we can't contact the link-local metadata addr 169.254.169.254 with our own statically set link-local 169.254.x.y address. Something in aws only allocates/authorizes a specific dhcp-assigned source IP address to allow communication to the metadata service
<blackboxsw>   so our discovery lifecycle is now: cloud-init init-local -> dhcp4 discovery on eth0 -> wait for base metadata 2009-04-04 to come up -> check for 2016-08-02 -> check ipv6 true/false -> dhcp6 on desired interfaces
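The version-selection step of that lifecycle can be sketched as a pure helper (so no network is involved); a real implementation would GET http://169.254.169.254/ to list the available metadata API versions first:

```python
# Illustrative choice of EC2 metadata version: prefer the one that
# surfaces IPv6 info, fall back to the old default otherwise.
def pick_metadata_version(available,
                          wanted="2016-08-02", fallback="2009-04-04"):
    return wanted if wanted in available else fallback
```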
<rharper> it could be cool to pass in the correct ip address that was assigned via non-network methods (like DMI table entry)
<ajorg> right.
<blackboxsw> right on DMI/smbios or some system env variable. it would save the first couple steps (timewise)
<blackboxsw> but that's a cloud platform change and would take some time we assume, so expedience of ipv6 support is currently the above
<blackboxsw> so ajorg I noticed comments on my branch. I'll add more docs to it, but that's the general approach for now
<blackboxsw> and as mentioned before DigitalOcean datasource I believe also does the init-local timeframe link-local network setup to discover more net config info from their metadata service too. So this approach is probably one we will grow in other datasources in the future
<blackboxsw> if there are other question, feel free to fire 'em off.  I'll move to the next topic, but we can still chat about it
<blackboxsw> We started another ubuntu SRU getting some bug fixes back into xenial and zesty https://trello.com/c/CAjwe8LX/273-cloud-init-sru-zesty-xenial
<blackboxsw> hopefully that trello board we use is public for all folks. If you can't see it holler :)
<ajorg> yup, it's visible
<blackboxsw> feel free to track what we are doing there. we should have most interesting things updated
<blackboxsw> sweet
<blackboxsw> we hope to get that SRU in this week. waiting on some checks and balances to queue it up for consumption.
<blackboxsw> we also started work on pulling in a cloud-init analyze tool rharper wrote into the cloud-init commandline tool so folks can debug their cloud runs
<blackboxsw> Pulling cloud-init analyze scripts into cloud-init devel sub-command
<blackboxsw> https://trello.com/c/k5F3KftA/126-cloudinitanalyze-mp-and-land-blackboxsw-smoser
<blackboxsw> this'll be really cool for tracking (and blaming) what costs clouds instances the most time during boot/discovery etc.
<blackboxsw> so we can improve efficiency across all datasources
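The kind of computation an analyze/"blame" tool can do might look like this sketch: given ordered (timestamp, stage) events pulled from /var/log/cloud-init.log, report how long each stage took. The timestamp format here is an assumption, not necessarily what cloud-init actually emits:

```python
# Illustrative boot-stage timing: each stage is charged the time until the
# next event's timestamp.
from datetime import datetime

def stage_durations(events):
    fmt = "%Y-%m-%d %H:%M:%S,%f"
    times = [(datetime.strptime(ts, fmt), name) for ts, name in events]
    return {
        name: (later - earlier).total_seconds()
        for (earlier, name), (later, _) in zip(times, times[1:])
    }
```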
<rharper> *cough* locale-gen *cough*
<blackboxsw> rharper: - We no longer run locale-gen if requested system locale is default . saving time on instances which are already configured with the appropriate locale ?
<ajorg> this was discovered using the analyze thing?
<blackboxsw> what should we chat about there do you think?
<rharper> blackboxsw: right, if the image pregenerates and configures locales, we'll skip unless user sets it (and it differs from the in-image setting)
<blackboxsw> rharper: do you recall how much time that costs to rebuild?
<rharper> it depends on the system, but it was roughly 1 second or so on a 3 year old laptop (with ssd)
<blackboxsw> I thought it was on the order of seconds
<blackboxsw> yeah
<blackboxsw> so anything we can do to expedite is a big win.   ohh BTW the Ec2 init-local extra dhcp4 query/response seems to cost around a 10th of a second (so not too bad)
<blackboxsw> though it'd be great if we could either use link local, or be told by the environment somehow what the appropriate address is in order to contact the metadata service.
<blackboxsw> anyhow. I guess that's it for initial topics we can move to office-hours for anyone that has questions, concerns, requests etc.
<blackboxsw> #meetingtopic Cloud-init open office-hours
<blackboxsw> regarding these #commands, I filed a request to get a meetingbot to join this channel.  I'll followup on that to see if we can't get something in here to harvest links/notes etc and publish them somewhere for easy reference in the future
<blackboxsw> I think this probably wraps up our meeting if there are no pressing needs to discuss.  If anyone has any bugs in their bonnet, please raise them here. We'll be hitting up a small summit in a couple weeks and plan a working session to squash bugs.
<blackboxsw> Otherwise thank you all for your eyes/cycles and have a good one
<blackboxsw> #endmeeting
<ajorg> :)
<blackboxsw> thx again ajorg ilianaw
<ilianaw> blackboxsw: i do worry about not enabling dhcpv6 on interfaces because at a later point someone can go add an IPv6 address to that interface
<ilianaw> i.e. not running dhclient (or equivalent) because at boot there weren't any v6 addrs
<rharper> ilianaw: are you thinking of hotplug? or something else?
<ilianaw> rharper: a simple `aws ec2 assign-ipv6-addresses` call on an already-plugged-in interface
<ilianaw> that didn't have v6 addrs at boot
<rharper> ilianaw: does that update the metadata service configuration ?
<ilianaw> it does (i think? i hope so)
<ilianaw> the only way it informs the instance, as far as i know, is an ICMP6 router advertisement
<rharper> ok, I Don't think there are expectations that an OS reconfigure itself at runtime *yet* but we do want to have cloud-init react to metadata changes
<blackboxsw> sorry was grabbing coffee
 * ilianaw nods
<rharper> if cloud-init was watching the metadata service, and someone issues the aws update, that could trigger cloud-init to update network-config (at least rerender it);  possibly trigger a network-update;
<rharper> it's not clear if it's desirable (dhcp6 could take over default gateway)
<rharper> which could affect applications/routes
<rharper> so automatically doing that reconfigure has impacts that need to be weighed
<ilianaw> yup
<blackboxsw> it'd be interesting if we surfaced a configuration option to allow the instance to reconfigure network on metadata change...
<ilianaw> although would detecting a change just be "hit the metadata service every five minutes looking for updates"?
<rharper> well, that's certainly under discussion w.r.t cloud-init handling metadata changes
<blackboxsw> but this work would probably fall into the camp of some of the hotplug feature discussions you've already started w/ smoser rharper.
<rharper> ilianaw: normally we'd do this through something like a long-poll
<rharper> websocket
<rharper> so, consumes some resources, but more dynamic (and lower latency) than regular polling intervals (and lower resource cost w.r.t socket setup/teardown)
<blackboxsw> it's an interesting proposal. so, does anyone have familiarity w/ cfn_hup? I just got a google hit
<blackboxsw> http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/updating.stacks.walkthrough.html
<blackboxsw> was just wondering generally in the aws case whether a pub/sub service for watching metadata deltas was "out there" somewhere
<ilianaw> i need to test what all cfn-hup listens at
<ilianaw> i was pretty sure it was only cloudformation related but i suppose i ought to check
<blackboxsw> yeah per the docs it looks fairly specific, but was wondering if the foundation it uses is something generic to all instances.
<ilianaw> i would probably not be as hopeful. but i'll definitely check
<blackboxsw> .... but seems like the same general concept rharper mentioned. occasional polling loop on data we care about, check for deltas and react
<blackboxsw> welcome meetingology :)
<blackboxsw> ok ready for next week
<blackboxsw> s/week/meeting/
<msaikia> Hi, I just submitted a changeset addressing a review comment. But it is failing continuous integration test for tip-pyflakes. It states that subprocess is imported but not used in tools/read-dependencies. That is not a part of my changeset. What should I do?
<rharper> msaikia: do you have a link to your changeset ?
<msaikia> yes. https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/322991/+index?ss=1
<rharper> let me see
<rharper> msaikia: it looks like you need to rebase your branch to origin/master ;
<msaikia> i actually did that as well. let me try it one more time
<rharper> if you have, you may need to force push to your launchpad git repo
<msaikia> ok..
<rharper> what I cloned from your repo reproduces the failure;  so on your local rebased branch, you can run the check directly yourself, tox -e tip-pyflakes
<rharper> once that passes locally, you can push to update your launchpad repo; then the jenkins ci job should pick it up when you push and confirm it's fixed as well
<msaikia> rharper: Thanks. its passing locally. so i will force push it now.
<rharper> great!
<blackboxsw> ajorg, I just ran some additional tests on AWS with the Ec2 init-local with the dhcp initialization to talk to the metadata service. Overall cloud-init reaps a 1 second benefit from setting up in init-local phase instead of the init-network phase, even despite the dhclient setup & teardown.
<blackboxsw> ajorg, I added comments to my merge proposal. Even with the dhclient discovery in init-local and the attempt to load minimal metadata version 2009-04-04 before checking 2016-08-02, it's still a big win with this approach.
<blackboxsw> meh, he's not here at the moment.
<blackboxsw> rharper: I'm pretty happy about this metadata approach. Things are happy.  https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/328241 we'll await ajorg's feedback on the 'Needs information' but I think it's a go (so I'll pull together that IPv6 metadata query/setup branch which should be minimal).
<blackboxsw> s/things are happy/things appear fast/
<rharper> blackboxsw: looking
<blackboxsw> msaikia: will put some eyes your branch now too. thanks.
#cloud-init 2017-08-08
<ys__> Hi I am trying to use cloudinit to reset password, but not sure if it is doable.
<ys__> My idea is running a user script on every boot to query a url. If the response is True, then rm sem/config-set_passwords file. After reboot vm, the password will be reset to original password
<ys__> wondering will it work?
<rharper> ys__: if you're going to query a URL to determine if you want to reset a passwd;  you might avoid using the cloud-config entirely and just have your script run; and modify the passwd  itself
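The standalone approach suggested here, sketched as a boot-time script: query a URL and reset the password directly, bypassing cloud-config and its per-instance semaphores. The URL is hypothetical, and the reset itself needs root for chpasswd:

```python
# Illustrative password-reset-on-signal script. Only the decision helper
# is pure; maybe_reset_password shells out to chpasswd(8), which reads
# "user:password" lines on stdin and requires root.
import subprocess
import urllib.request

RESET_URL = "http://config.example.invalid/should-reset"  # hypothetical

def should_reset(response_text):
    return response_text.strip().lower() == "true"

def maybe_reset_password(user, password):
    with urllib.request.urlopen(RESET_URL) as resp:
        if should_reset(resp.read().decode()):
            subprocess.run(["chpasswd"], check=True,
                           input="{}:{}\n".format(user, password).encode())
```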
<ys__> rharper: yes you are right, it might be cleaner if just use a script to reset the password after URL query
<ys__> rharper: BTW, I saw a file cloud-config.txt in /var/lib/cloud, is it retrieved from metadata service endpoint (169.254.169.254) ?
<ys__> I tried to rm this file along with user-data.txt and user-data.txt.i in the same dir, then reboot the vm, cloudinit will re-created those files again. Is it retrieved from 169.254.169.254 this time? or it was cached somewhere?
<rharper> it's the combined cloud-config that was sent to the instance from the datasource used;   depending on the cloud (and datasource) it may come from a metadata service url (like the one you see)
<ys__> the reason i ask is because after I create the vm, I manually updated userdata in openstack and curl 169.254.169.254 reflected the update.
<ys__> Then I removed those files, reboot the vm, however cloud init re-created those files with original user-data
<rharper> different types of configuration is applied either once per instance, once per boot, or always, depending on the type
<rharper> cloud-init does not (yet) dynamically watch metadata service and automatically reapply if the configuration has changed
<ys__> ok
<rharper> if you want to re-run cloud-init on the same instance with new user-data, you'll need to do something like:  rm -rf /var/lib/cloud/*  and reboot;  this wipes out the per-instance data, and cloud-init will run like firstboot
<rharper> much of the data consumed is cached in /var/lib/cloud/instances/<instance-id>/*
<ys__> yes i actually rm /var/lib/cloud/instances/<instance-id>/cloud-config.txt, user-data.txt and user-data.txt.i (any file with userdata)
<ys__> but still re-created with original user-data (guess didn't query 169.254.169.254)
<rharper> you'll need to remove anything under /var/lib/cloud/
<rharper> if it finds the instance-id of an instance matches an existing directory in /var/lib/cloud/instances/ then it will assume it can use the cache and reloads it from the pickled object file
<rharper> I've got to step out for a bit, but blackboxsw is still here and can help answer any more questions you might have
<ys__> ok thanks
<blackboxsw> +1
<blackboxsw> yeah generally for clean reboot testing and validation we 'sudo rm -rf /var/log/cloud-init* /var/lib/cloud; sudo reboot' .. In this case, cloud-init will perform all initialization as if it were greenfield (unbooted) install and it will query your metadata service to pull all updated data
<ys__> yeah, i didn't realize there is a pickled obj file...
<ys__> i thought if cloudinit found some missing file, it would go back to the metadata service and retrieve the data. Now it makes sense.
<ys__> thanks!
 * blackboxsw also wondered initially about also attempting something like 'sudo cloud-init single --name cc_users_groups --frequency always', since the user password was configured via userdata, but I *think* that might run into the caching problem too.
<ys__> blackboxsw: will that reset user's password (to default) on every boot if frequency always?
<blackboxsw> that'd be something that would have to be manually run each time I believe (it doesn't permanently designate frequency == always for the module). It just allows the module to look at the existing cache I thought and determine if the module was already run. I'll test now to confirm.
<blackboxsw> ys__: validated that even  --frequency always hits the cache despite changed user-data. only fully proven approach is to rm -rf /var/lib/cloud in that case currently.
<ys__> blackboxsw: Thanks. I guess best way i can do currently is to use a user script to query a url and rely on another script to reset password.
<blackboxsw> I like the idea of ignoring the cache with some of these "cloud-init single --frequency always runs" I'll look at how tough that would be to implement
<blackboxsw> but yeah probably rely on a script to reset password for the time being
<ys__> ok thanks
<blackboxsw> ok just integrated cloud-init analyze subcommand into cloudinit.cmd.main http://pastebin.ubuntu.com/25272516/
<blackboxsw> took a  bit of restructuring on this today to allow an existing standalone tool (cloudinit-analyze)  to be called as a subcommand under cloud-init
<blackboxsw> but it'll set a good precedent (once we review/agree) for pulling other tools into the fold
#cloud-init 2017-08-09
<blackboxsw> man solid test coverage takes a while.... it's been all day to pull together unit tests on a functional branch.
<rharper> blackboxsw: indeed
#cloud-init 2017-08-10
<blackboxsw> rharper: what's the best way to test this https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/328800
<blackboxsw> kvms?
<rharper> net-convert
<rharper> basically what the unittest is doing
<blackboxsw> true ok.
<blackboxsw> I have a branch for net-convert as cloud-init devel subcommand that we can look at after I get that analyze in shape.
<rharper> yeah, I really should see about doing v2 passthrough to targets that can render netplan
<rharper> otherwise, I'll keep finding incompatibilities where we map v2 into v1 for network_state, but miss something
<rharper> however, we still want the v2 to v1 mapping for v2 -> non-netplan rendering
<blackboxsw> rharper: approved  https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/328800 take inline comments as you will.  :) onto curtin
<rharper> blackboxsw: thanks, I'm going to push an update to that to handle passing through the original configuration if it's in v2 format when netplan is the renderer;  this avoids alot of v2 to v1 back to v2 conversion issues
<blackboxsw> will watch for that
<blackboxsw> rharper I don't see any readthedocs content from us on using cloud-init on the commandline. Am I missing something, or is it worth a section describing cmdline client usage?
<blackboxsw> which would pull in cloud-init analyze too
<rharper> I don't think there are any, but things like running cloud-init single would be super useful
<blackboxsw> Yeah, I thought so given we sometimes point folks (or some of our SRU validation scripts)  at using that as a test/validation framework
<rharper> yeah
#cloud-init 2017-08-11
<boxrick> Hello! I have a simple question. Will cloud init provide me with a mechanism to configure a network within an LXD container before it comes online?
<rharper> boxrick: yes;  lxd already supports this; lemme find some docs
<rharper> boxrick: https://askubuntu.com/questions/617865/is-there-a-way-to-configure-lxd-containers-with-cloud-config-at-provision-time ;  in there the "lxc config set CONTAINER user.network-config - < CONTAINER.network-config.yaml"  is what you want
<rharper> the v1 config format is here: http://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html#network-config-v1
<boxrick> Cheers for that, I was having some problems. But its an image problem as opposed to a cloud init problem. But that will definitely help along the way.
<blackboxsw> heh
<blackboxsw> usage: cloud-init [-h] [--version] [--file FILES] [--debug] [--force]
<blackboxsw>                   {init,modules,query,single,dhclient-hook} ...
<blackboxsw> cloud-init: error: unrecognized arguments: --force
<blackboxsw> it seems our help doesn't match up w/ allowed args in all cases
<blackboxsw> ahh --force needs to be before the subcommand
#cloud-init 2017-08-12
<marlinc> How does cloud-init decide which DataSource to use? I'd like to test this locally using the EC2 datasource internally (with a metadata service) without the need for custom DataSource code, which would probably take a few years to get into main Ubuntu
#cloud-init 2017-08-13
<marlinc> After a lot of experimenting I figured it out
#cloud-init 2018-08-06
<htaccess> are there any other docs about https://cloudinit.readthedocs.io/en/latest/topics/examples.html#adding-a-yum-repository besides this? I want to know if i can point gpgkey: at a url rather than a file
<htaccess> found it here https://www.zetta.io/en/help/articles-tutorials/cloud-init-reference/
<TJ-> I'm trying to diagnose why a Vagrant ubuntu/bionic64 image isn't accessing the userdata in the 2nd box, and I can't find any docs on how that is expected to happen. /run/cloud-init/cloud-init-generator.log reports "...enabled but no datasource found, disabling"
<TJ-> OK, seems that vagrant-libvirt is setting the wrong disk type ('raw' instead of 'qcow2').
<otubo> Hey guys, any plans to have a NetworkManager renderer for cloud-init in the near future?
<dpb1>  netplan can render to nm
<TJ-> otubo: the cloud-init package (on Ubuntu 18.04) has a NM hook
<dpb1> read here: https://cloudinit.readthedocs.io/en/latest/topics/network-config.html
<TJ-> otubo: I see /etc/NetworkManager/dispatcher.d/hook-network-manager in the cloud-init package
<otubo> dpb1: TJ- thanks I'll take a look
<powersj> o/
<blackboxsw> #startmeeting Cloud-init bi-weekly status meeting
<meetingology> Meeting started Mon Aug  6 16:04:05 2018 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<dpb1> o/
<blackboxsw> hi folks, let's kickoff another cloud-init status meeting. Welcome back. Lots of summer vacations disrupting our typical meeting schedule.
<blackboxsw> Our last meeting's minutes should be up on our github site
<blackboxsw> #link https://cloud-init.github.io/
<blackboxsw> for this meeting we'll go through the following topics: previous actions, recent work, in-progress development and office hours
<blackboxsw> #topic Previous Actions
<blackboxsw> from our last meeting we had a couple of actions to carry over
<blackboxsw> we landed the following branch which added support for a datasource to re-write network config across each boot.  https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000
<blackboxsw> #action rharper: and I need to review https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/333904
<meetingology> ACTION: rharper: and I need to review https://code.launchpad.net/~rjschwei/cloud-init/+git/cloud-init/+merge/333904
<blackboxsw> the above is still a carryover
<blackboxsw> that's all for actions from last meeting
<blackboxsw> #topic Recent Changes
<blackboxsw> the following has landed in cloud-init tip:
<blackboxsw> * oracle: fix detect_openstack to report True on OracleCloud.com DMI data (LP: #1784685)
<blackboxsw> * tests: improve LXDInstance trying to workaround or catch bug.*
<blackboxsw> * update_metadata re-config on every boot comments and tests not quite right [Mike Gerdts]
<blackboxsw> * docs: note in rtd about avoiding /tmp when writing files
<blackboxsw> * ubuntu,centos,debian: get_linux_distro to align with platform.dist
<blackboxsw> * Fix boothook docs on environment variable name (INSTANCE_I -> INSTANCE_ID) (Marc Tamsky)
<blackboxsw> * update_metadata: a datasource can support network re-config every boot
<ubot5> Launchpad bug 1784685 in cloud-init "Oracle: cloud-init openstack local detection too strict for oracle cloud" [High,Fix committed] https://launchpad.net/bugs/1784685
<blackboxsw> * tests: drop salt-minion integration test
<blackboxsw> * Retry on failed import of gpg receive keys.
<blackboxsw> * tools: Fix run-container when neither source or binary package requested.
<blackboxsw> * docs: Fix a small spelling error (Oz N Tiram)
<blackboxsw> * tox: use simplestreams from git repository rather than bzr.
<blackboxsw> generally speaking we had been spending some cycles on a stable release update (SRU) for cloud-init into Xenial and Bionic with top of tree cloud-init.
<blackboxsw> notably, we discovered a potential regression in Oracle datasource detection of their OpenStack implementation so that fix is queued for publish into xenial and bionic
<blackboxsw> 18.3-9 is what folks are looking for. in xenial/bionic/cosmic for latest cloud-init containing all the above fixes
<blackboxsw> Also powersj has been working on an auto-lander for cloud-init branches to get a few of us out of the way once a branch hits "Approved" status.
<blackboxsw> #link https://jenkins.ubuntu.com/server/job/admin-lp-git-autoland/
<powersj> yep that is live and with a recent fix to remove the extra "Author" line now
<powersj> hopefully it is saving blackboxsw, smoser, and rharper time ;)
<blackboxsw> powersj: can you explain what it does (so I don't have to type)
<blackboxsw> :)
<powersj> If a merge request is put in the "Approved" state, it will get test merged with the master branch
<powersj> the tests will run the same as during a review and verify that it can merge cleanly
<powersj> the commit message will get linted to verify it fits our format
<powersj> and if everything looks good, get merged in and pushed to master
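The lint step powersj mentions can be sketched roughly as below. The actual autolander rules live in the Jenkins job linked above, so the specific checks here (a "topic: summary" subject line, a length cap, a blank second line) are illustrative assumptions, not the real implementation:

```python
import re

# Hypothetical sketch of a commit-message lint pass; the real autolander's
# rules may differ. Assumed convention: "topic: summary" subject, then a
# blank line, then the body.
SUBJECT_RE = re.compile(r"^[A-Za-z0-9_./,* -]+: .+")


def lint_commit_message(message, max_subject_len=74):
    """Return a list of problems found in a commit message (empty if clean)."""
    problems = []
    lines = message.splitlines()
    if not lines:
        return ["empty commit message"]
    subject = lines[0]
    if len(subject) > max_subject_len:
        problems.append("subject longer than %d chars" % max_subject_len)
    if not SUBJECT_RE.match(subject):
        problems.append("subject should look like 'topic: summary'")
    if len(lines) > 1 and lines[1].strip():
        problems.append("second line should be blank")
    return problems
```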
<blackboxsw> thanks for that work powersj. it looks/works great so far.
<blackboxsw> #topic In-progress Development
<blackboxsw> All our current work is visible at the following trello board
<blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
<blackboxsw> I expect we'll have a couple of branches landed shortly in the following areas:
<blackboxsw> - smoser is working on: A datasource specific to Oracle, because of their specific implementation of OpenStack. Oracle will no longer use just the stock DataSourceOpenStack.
<blackboxsw> - I'm trying to wrap up a branch for Azure to write network data from their IMDS per-boot
<blackboxsw> #link https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348704
<blackboxsw> - Joyent (SmartOS) per-boot network config review
<blackboxsw>   - a couple netplan config option bugs for bionic ++
<blackboxsw> - and standardize instance-data sourcing in #cloud-config files (like referencing the hostname as detected from instance metadata)
<blackboxsw> I think that probably wraps it up for stuff in progress
<blackboxsw> anything I'm missing?
<blackboxsw> ... without further ado
<blackboxsw> #topic Office Hours (next ~30 mins)
<blackboxsw> eyes will float on this channel for any bug/feature discussions, review requests, etc. any cloud-init topic is acceptable.
<blackboxsw> a number of us are going to be prepping for a cloud-init summit meeting in the weeks to come.  A number of attendees from various vendors and clouds are attending as well to do a bit of planning on what cloud-init should look like next year. If folks get a chance, think about any feature or topic suggestions that would benefit cloud-init users and we'll see if we can discuss them at the summit.
<blackboxsw> while I'm at it, I think I'll set the topic to next status meeting time so folks know it's coming.
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 8/20 16:00 UTC | cloud-init 18.3 released (06/20/2018)
<blackboxsw> also just noticed the following branch, which admittedly is a bit stale, but adds hyperv logging via kvp. kinda cool for stuffing data into the registry on windows vms. Might have to get a review on that before the next status meeting.
<blackboxsw> #link https://code.launchpad.net/~andyliuliming/cloud-init/+git/cloud-init/+merge/351742
<blackboxsw> it looks a bit noisy on the debug front with adding out/err messages for all subp calls, but other than that fairly straightforward.
<blackboxsw> looks like that's a wrap for today.
<blackboxsw> #link https://cloud-init.github.io   for meeting minutes
<blackboxsw> see you next time: 2 weeks from today
<blackboxsw> #endmeeting
<meetingology> Meeting ended Mon Aug  6 17:04:11 2018 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2018/cloud-init.2018-08-06-16.04.moin.txt
<blackboxsw> smoser: I'm not quite sure how to handle your comments that DataSourceAzure._is_platform_viable should be a separate function in the Azure module. Ultimately, what I want is for get_data to call _is_platform_viable  prior to re-running the private _get_data on a given datasource instance. If we separate it out to only a module function instead of a class method, then we may have to deal with importing the right function
<blackboxsw> per module in the DataSource parent class when running get_data().   I could see wrapping the module function as a classmethod so we have that class method available to us when we are operating on the DataSource.get_data() call of _is_platform_viable. Having the separate module function does ease testability without all the other datasource setup required so I'm +1 on the approach: separate function plus
<blackboxsw> DataSource._is_platform_viable.
<blackboxsw> Long term, I want all datasources to have an is_platform_viable public method we can call to quick-check DMI/SMBIOS/environment for 'maybe' before attempting to walk metadata services (because that costs more). is_platform_viable would only do the quick sniff/test ds-identify(cloud-id) does and exit False if known not-compatible
<blackboxsw> if we surface that method as a public class method, we can also easily test it via the CLI to ensure platform changes didn't break cloud-init python datasource detection
<smoser> blackboxsw: having all datasources have a 'is_platform_viable' is a reasonable goal for sure.
<smoser> i was just suggesting that that method is either a @classmethod
<smoser> and/or it simply (for ease of test) wraps a _is_platform_viable
<smoser> in the .py file
<blackboxsw> +1 smoser thanks. so you don't like the side-effect in Azure.network_config to remove the cpc image's netplan yaml. I tend to agree; since get_data has side-effects, and needs to be run before network_config anyway, why not do maybe_remove_ubuntu_network_config_scripts
<blackboxsw> after we detect we are on azure
<blackboxsw> wow thinkos.   maybe_remove_ubuntu_network_config_scripts from network_config -> _get_data
<smoser> is DataSource.activate too late ?
<blackboxsw> checking,
<blackboxsw> I think that should be good
<blackboxsw> bah wrong, ok can't do it at activate timeframe as we are attempting to generate network config in init-local, ds.activate doesn't happen until init stage.  So, we'd have collisions with cloud-init's netplan from IMDS and /etc/netplan/90-hotplug-azure.yaml
<blackboxsw> which we are trying to remove
<blackboxsw> during network-config generation
<blackboxsw> ok I'll move it to _get_data and out of network config at least. But not sure where a better home would be for this call.
#cloud-init 2018-08-07
<smoser> ~/join #ubuntu-release
<blackboxsw> smoser: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348704 is up with your comments addressed
<blackboxsw> just tested on azure to make sure it works
<smoser> blackboxsw: it's perfectly fine to tell me to go away when i suggest things like additional functions that just get called from other functions.
<smoser> i just find it easier to mock the function '_is_platform_viable' than to use mock.patch.object
<blackboxsw> heh.
<blackboxsw> I agree with testability. I actually forgot to address your separate network_config logic into a standalone function.
<smoser> do you think that 'is_platform_viable' should be a class method ?
<smoser> i guess there may well be some datasource that needs to look at system config in order to figure that out.
<blackboxsw> smoser: at some point yes as the superclass will be able to walk through is_viable check before get/crawl_metadata processing once subclasses all have it.
<blackboxsw> it's an evolution though. I don't mind taking small steps to get there (and simpler test coverage)
<blackboxsw> the unittests in azure are notably bad as there are so many mocks/setup
<blackboxsw> decoupling from the datasource makes it easier as you said
<blackboxsw> simpler test  == more certainty and better test coverage
<smoser> ok. i'm fine with instance method for now.
<blackboxsw> smoser: instance method for  network_config? or are you talking about _is_platform_viable
<smoser> blackboxsw: maybe the diff just didn't update?
<smoser> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348704
<smoser> shows <<<<<
<smoser> i was talking about _is_platform_viable
<blackboxsw> smoser: yeah that diff is out of step, I've tried --force etc.
<smoser> please do move the convert out to a method
<smoser> function
<smoser> whatever
<blackboxsw> ok network_config -> function
<smoser> (/me probably used 'method' confusingly above)
<smoser> yeah
<blackboxsw> yeah I got what you meant
<blackboxsw> smoser: just pushed separate _parse_network_config function for Azure branch
<smoser> blackboxsw: did you ping about sru release ?
<blackboxsw> smoser: I did, yesterday I pinged sil2100, this morning I pinged RAOF
<blackboxsw> smoser: no response to either ping https://pastebin.ubuntu.com/p/DDYsvBmd8Y/
<blackboxsw> same comment set both days
<blackboxsw> shall we ping robie?
<smoser> :-(
<blackboxsw> smoser: I'll add net-convert format option type for azure. good idea (from a test/validation perspective)
<[42]> is there an easy way to skip the first boot config? i'm trying to migrate a vm without reinstall to use cloud-init for network config
<[42]> without applying anything else
<[42]> basically how do i mark it as "first boot already run"?
<smoser> [42]: what is it that you do not want to run
<[42]> i definitely want to keep my ssh hostkey
<[42]> and i don't need user generation
<smoser> but you need cloud-init to read networking information and apply it?
<blackboxsw> smoser: pushed net_convert --kind to support azure-imds  https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348704
<[42]> yes
<smoser> what cloud platform?
<[42]> kvm on proxmox
<smoser> [42]: well, its not really a common / supported option
<smoser> but if you look in /var/lib/cloud/instance/sem
<smoser> you'll see a bunch of files
<[42]> doesn't exist before first run
<smoser> that represent markers for things in /etc/cloud/cloud.cfg 'cloud_init_modules', 'cloud_config_modules', 'cloud_final_modules'
<smoser> hm.. ignore the first part of that statement.
<smoser> but you can probably basically go and comment out anything you think you might not want to have run/re-run in /etc/cloud/cloud.cfg
<[42]> i guess that's a better option to just comment out everything but network for now
<blackboxsw> added test instructions to  https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348704 for the new cli
<[42]> what's /etc/cloud/templates for?
<smoser> files from there are rendered into other places.
<smoser> blackboxsw: annoying
<smoser> i think launchpad went to lunch permanently on updating the diff there.
<blackboxsw> smoser: I can remove the branch and resubmit and it'd be fine. shall I do that (it'll lose our comment review history though)
<smoser> well you can just reject it
<smoser> and then link to it from the other
<smoser> then it's not deleted.
<smoser> but maybe you forgot to push ?
<[42]> which module sets the network config?
<smoser> [42]: it doesnt happen in a module
<[42]> where does it happen?
<smoser> so it will happen if cloud-init thinks its a new instance.
<[42]> okay
<smoser> from cloud-init init (possibly --local )
<blackboxsw> smoser I forgot I rebased. just hit --force on the MP
<blackboxsw> should see commit-ish 1d429c3cb514b35b84efc40533f5935ec2abdf33
<smoser> blackboxsw: ?
<smoser> im confused
<[42]> why does it explicitly add `post-up ifup eth0:1` to the interfaces config?
<[42]> at least when i used it in the past it would automatically start anyways
<smoser> what is "it" ?
<[42]> `cloud-init init`
<smoser> well... iirc it was related to getting some static routes to be guaranteed applied.
<smoser> interfaces is kind of messy.
<smoser> can you show what the input network config was ?
<smoser> blackboxsw: i still see <<<<
<smoser> (line 455 of visual diff ?)
<[42]> smoser: https://gist.github.com/Nothing4You/fadcc21d17a35da106842fdd7101ebf1
<blackboxsw> smoser: I just hit resubmit in LP. https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/352639
<blackboxsw> it should clear up the diff
<blackboxsw> I don't see it when I merge into master
<blackboxsw> the diff is devoid of <<<<<<
<blackboxsw> weird, new merge proposal has it too
<smoser> your maybe_remove_ubuntu_network_config_scripts is in _get_data
<smoser> were you going to move that to activate ? or is that too late.
<smoser> i have to run though. i'll lok more tomrrorw.
<blackboxsw> smoser: last try https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/352660
<blackboxsw> smoser: activate is too late
<blackboxsw> it happens in init stage instead of init-local when we need to render netplan
<blackboxsw> and if we don't remove /etc/netplan/99-azure-hotplug.yaml  cloud-init's netplan will collide
<blackboxsw> in init-local timeframe
<smoser> [42]: fwiw, you must have a down-level cloud-init. newer cloud-init will render that with multiple 'iface' entries
<blackboxsw> smoser: woo hoo last merge proposal listed doesn't contain merge conflict markers
<[42]> smoser: 0.7.9
<[42]> debian stable
<smoser> http://paste.ubuntu.com/p/ddyXzmBwxs/
<smoser> [42]: 0.7.9 is old :-(
<smoser> i really do have to run.
<smoser> later
<[42]> are there pre-built packages for debian?
<[42]> thanks smoser
#cloud-init 2018-08-08
<otubo> Just recapping the NetworkManager support: I was told that netplan supports NetworkManager, but only when using netplan, right? There's no renderer for NetworkManager directly.
<otubo> Fedora 29+ won't use initscripts anymore by default, it should use only networkmanager, any objections to develop a renderer for NM?
<otubo> I think dpb1 and TJ- helped me last time ^^^
<rharper> otubo: no networkmanager renderer in cloudinit/net ;  I think there are some things in the sysconfig renderer that touch NM configs
<rharper> otubo: a NM renderer in cloudinit would be fine
<dpb1> or using netplan in fedora. :)
<rharper> dpb1: for sure; there are older images which have NetworkManager (fedora, centos, rhel) but won't/can't get netplan;
<[42]> are there pre-built recent packages for debian?
<powersj> [42], don't think so; this was discussed on the debian-cloud list earlier this year
<powersj> https://lists.debian.org/debian-cloud/2018/05/msg00027.html
<[42]> is there much work involved in creating that package?
<[42]> i never packaged stuff for debian so i have no idea
<rharper> it depends; typically you're packaging up a new release, which means there may be some patches to the current version that may or may not be needed, and if needed, may not apply. So it can take some time; also, the longer the gap between the current version and upstream, the harder it becomes to maintain the delta
<rharper> ubuntu packaging and debian are very similar, so I don't think that part will be hard; but understanding what patches might be in the current debian package that apply against 0.7.9 and figuring out if they're still needed (and if so, porting the patch to apply against 18.3) is likely to be the bulk of the work
<[42]> i see
<blackboxsw> [42] you might want to try our build tools in cloud-init source.   On debian,   apt-get install devscripts git; git clone -b master https://git.launchpad.net/cloud-init; cd cloud-init; make deb   should give you hints on building
<blackboxsw> if there are things that prevent you from building the full package, it should be fairly straightforward to add the support to cloud-init with a patch
<[42]> i'll probably try that within the next few weeks
<blackboxsw> sounds good
<[42]> maybe also try to get the debian package to work with the latest version
<blackboxsw> hrm, per fginther's branch, ssh_disable_users in cloud-config:     would allow disabling a user just like we do w/ disable_root: True... The thing is, it also creates a user first. /me thinks that this 'feature'  doesn't quite make sense.
<blackboxsw> it feels like the config option ssh_disable_users: [...]  should disable a list of known users if they exist or are configured, not create a user and then disable it
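The semantics blackboxsw argues for here, disabling only users that already exist rather than creating a user just to disable it, reduce to a simple filter. Note that `ssh_disable_users` is a proposed option from fginther's review branch, not a released cloud-init config key, and this helper is purely illustrative:

```python
# Illustrative sketch of the proposed ssh_disable_users semantics: act only on
# users that already exist (or are otherwise configured), never create-then-
# disable. Names and signature are assumptions, not cloud-init code.


def users_to_disable(requested, existing_users):
    """Return the subset of requested users that actually exist."""
    return [user for user in requested if user in existing_users]
```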
#cloud-init 2018-08-09
<otubo> rharper: thanks for the reply. I'll start working on NM renderer for Fedora, then. Thanks for the support guys :)
<otubo> rharper: still about network-manager support: If I can hack the netplan.py available() function to detect 'netplan' OR 'network-manager', I could just use the netplan renderer to have as input a v2 config file and render network-manager configuration, is that correct?
<otubo> rharper: not as beautiful as having a proper nm renderer but perhaps a quicker and cleaner way.
<rharper> otubo: yes, we'd need to think about the policy about which netplan backend to specify when rendering; for example, some images may have netplan with both systemd-networkd *and* NetworkManager
<otubo> rharper: but I think this is not exactly a problem, right? It should be specified on the configuration file which one to use.
<rharper> otubo:  possibly, but one has to consider these things for existing images when changing behavior
<otubo> rharper: I understand. I'll write a patch and create a pull request. Then we can continue the discussion from there. But apart from 'how to detect and enable', you think this would be a reasonable approach?
<rharper> otubo: for sure;  it does require netplan in the image;  but it's a good extension of the netplan rendering (to also render when networkmanager is present)
<otubo> rharper: oh so, I'll still need to have netplan present? Even if I hack the detection method?
<otubo> rharper: if there's no way out of having netplan present, I'll really have to write the NetworkManager standalone renderer
<rharper> otubo: yes, cloud-init netplan.py writes netplan yaml to /etc/netplan/50-cloud-init.yaml  (which is then read by netplan's generate tool which converts yaml to either systemd or networkmanager configs)
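The availability check being debated boils down to probing for the renderer's tooling on disk, roughly as below. Function names and search paths are illustrative, not cloud-init's actual `available()` implementations; the point is that the netplan renderer's check is tied to the `netplan` tool itself, since its YAML output is useless without netplan's generate step:

```python
import os

# Sketch of a renderer available() check: probe well-known directories for an
# executable. Real cloud-init renderers do something similar (plus config
# checks); this is a simplified illustration.


def tool_available(candidates, search_dirs=("/usr/sbin", "/usr/bin", "/sbin", "/bin")):
    """Return True if any named tool exists and is executable."""
    for name in candidates:
        for directory in search_dirs:
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                return True
    return False


def netplan_renderer_available():
    # Accepting NetworkManager here wouldn't help: the rendered YAML still
    # needs netplan's generate tool to become backend config.
    return tool_available(["netplan"])
```

This is why a standalone NetworkManager renderer (writing keyfiles/ifcfg directly) is the path that works on images without netplan.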
<smoser> blackboxsw: do a cloud-init upload today to cosmic
<smoser> or just "right now"
<smoser> just lets get something up
<blackboxsw> agreed smoser
<blackboxsw> upload proposal for cosmic https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/352825
<smoser> blackboxsw: reading
<smoser> blackboxsw: approved... should i put 'Approved' to the Status ?
<smoser> that kind of scares me :)
<blackboxsw> smoser: I'm not sure here now that we have autolander.... let's not until we sort it.
<blackboxsw> yeah
<blackboxsw> I don't want to add the wrong commit message to the merge
<smoser> my build-and-push is
<smoser>  http://paste.ubuntu.com/p/7fJy6hPVBD/
<smoser> just make sure you do the things it does
<smoser> (basically, build it locally... and then push the branch (ubuntu/devel) and the tag)
<blackboxsw> thanks. I found I have stale incorrect pgp keys on my dev box, so I'm going through the backup of those keys now
<smoser> :)
<blackboxsw> then I can sign where I built it
<blackboxsw> man now that I can finally import my keys, I'm seeing timestamps on the build outfile which don't exist.
<blackboxsw> smoser: do you have any sbuild env vars defined in your environment
<blackboxsw> like BIN_NMU_TIMESTAMP ?
<blackboxsw> when trying to run the sbuild cmd I get "E: Failed to open build log /home/csmith/ubuntu/logs/../out/cloud-init_18.3-24-gf6249277-0ubuntu1_amd64-2018-08-09T21:13:35Z.build: No such file or directory
<blackboxsw> E: Error creating chroot"
<blackboxsw> whereas ../out/cloud-init_18.3-24-gf6249277-0ubuntu1_source.build exists
<blackboxsw> got it. fixed my sbuild env, I didn't have a cosmic schroot setup
<blackboxsw> though seeing lintian errors when I try to build
<blackboxsw> E: cloud-init source: untranslatable-debconf-templates cloud-init.templates: 6
<blackboxsw> E: cloud-init source: not-using-po-debconf
<blackboxsw> not sure if that should block upload or not
<blackboxsw> hrm well, it seems my upload was successful yet my signature failed to verify. https://pastebin.ubuntu.com/p/M7DCpqC4MP/
<blackboxsw> I don't see any launchpad emails yet about this failed upload
#cloud-init 2018-08-10
<smoser> blackboxsw: i don't know what that error checking the signature is.
<smoser> maybe you haven't trusted your own key?
<smoser> i do not set BIN_NMU_TIMESTAMP
<smoser> I think these are the only things i have:
<smoser> http://paste.ubuntu.com/p/gYKS7skDGd/
<blackboxsw> Yeah I hadn't earlier verified my key in launchpad. I have since pushed and verified.
<blackboxsw> smoser: forgot to ask about the lintian error that I saw which complained about debian/cloud-init.templates not having po translation files
<blackboxsw> E: cloud-init source: untranslatable-debconf-templates cloud-init.templates: 6
<blackboxsw> 16:13 E: cloud-init source: not-using-po-debconf
<blackboxsw> do we care?
<blackboxsw> enough to ignore the lint on that template file? or setup the pot files instead
<smoser> oh yeah... ENOCARE :)
<blackboxsw> excellent
<smoser> i'd like to get rid of it, but...
<smoser> not a big deal.
<smoser> we could and arguably should add potential translations for that i guess.
<blackboxsw> Soy increíble en la traducción (I'm amazing at translation)
<smoser> my keyboard doesn't have one of those funny i or o keys
<blackboxsw> just a couple of curls away against google.translate
<smoser> :)
<blackboxsw> smoser: or rharper if there is a chance today for the azure branch landing, then I can easily re-upload cloud-init :) https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/352660
<blackboxsw> shameless self-interest on that branch :)
<smoser> blackboxsw: i had asked about 'remove_ubuntu_network_config_scripts' in _get_data()
<smoser> and suggested in .activate
<blackboxsw> Smoser I responded I thought that activate was too late as we are generating network config in init local timeframe and we would collide with the cloud image's seeded netplan file
<blackboxsw> So we need something to clean out the /etc/netplan/90-azure-hotplug.yaml before we kick netplan
<smoser> oh. ok. yeah, that is why we should talk on the MP :)
<smoser> rather than me asking out of band
<smoser> :)
<blackboxsw> Yeah... and me deleting the original merger proposal broke that :/
<blackboxsw> ... bad solution to a stale visual diff with merge conflict markers
<smoser> blackboxsw: responded
<blackboxsw> rharper  one of smoser's comments on my branch which adds the ds-identify detect_Azure logic to DataSourceAzure._is_platform_viable was that the following test is not really valid  has_fs_with_label "rd_rdfe_*" && return ${DS_FOUND}
<blackboxsw> I just validated that adding data disks on azure instances resulted in the following partitions and labels
<blackboxsw> sudo python3 -c 'from cloudinit.util import blkid; print(["%s:%s" % (k, v.get("LABEL", "")) for k,v in blkid().items()])'
<blackboxsw> ['/dev/sda1:cloudimg-rootfs', '/dev/sda15:UEFI', '/dev/sdb1:', '/dev/sdc1:', '/dev/sda14:']
<blackboxsw> ubuntu@SRU-worked:~$
<blackboxsw> so most disks have empty labels.  /me wonders about the initial dvd/cdrom that shows up on azure during first boot... maybe it had that label
<blackboxsw> my question was...
<blackboxsw> if we are thinking the rd_rdfe_* prefix is not valid for DataSourceAzure._is_platform_viable, should I also drop it from ds-id
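The ds-identify check under discussion, `has_fs_with_label "rd_rdfe_*"`, amounts to globbing the labels blkid reports. A rough sketch against `blkid -o export`-style output is below; the parsing is simplified compared to cloudinit.util.blkid, and the sample data in the test is made up:

```python
import fnmatch

# Sketch of has_fs_with_label: collect filesystem labels, then shell-glob
# against them. Real code reads blkid output per device; this parser only
# handles the LABEL= lines of `blkid -o export`-style text.


def labels_from_blkid_export(output):
    """Extract LABEL values from blkid -o export style output."""
    labels = []
    for line in output.splitlines():
        if line.startswith("LABEL="):
            labels.append(line.split("=", 1)[1])
    return labels


def has_fs_with_label(labels, pattern):
    """True if any filesystem label matches the shell glob pattern."""
    return any(fnmatch.fnmatch(label, pattern) for label in labels)
```

On the instance shown above, every data disk had an empty label, so only the first-boot provisioning DVD (if present) could ever match the `rd_rdfe_*` glob.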
<blackboxsw> the thing that stinks about irccloud is that if my browser is sucking wind, it times out on keyboard character input
<rharper> blackboxsw: that's prolly a question for the MS folks w.r.t  cdrom
<rharper> blackboxsw: can you confirm if /dev/sr[0,1] has anything in it ?
<rharper> I don't think we can yet drop it until we know that it's invalid;  we usually have a collection of information that jointly confirms which platform we're on
<smoser> blackboxsw: so the dvd/cdrom definitely used to start with 'rd_rdfe_'
<smoser> its possibly obsolete now.
<smoser> and i recall bein told that it might be.
<smoser> but as in the 'dscheck_Azure' function "header", you can see that is where it came from
<rharper> blackboxsw: AFAICT the check is still valid; we don't know of another cloud where there are (iso) filesystems with such labels;  they may not be in used in your instance
<rharper> but it's possible in other azure deployments (AZ, or private)
<rharper> it may still be used
<rharper> blackboxsw: what was smoser objecting to w.r.t the label check being invalid ?
<smoser> well, but that only ever gets considered if the azure asset tag is not present.
<rharper> a fallback
<rharper> sure
<smoser> and we really should have azure asset tag present everywhere.
<rharper> yes
<rharper> that makes sense
<blackboxsw> So smoser should I drop that filesystem prefix check from ds-id or leave our fallback in both places
<smoser> and if not... well, i think its completely unrelated to your changes.
<smoser> so if you want you can do that in a separate MP and we can just take that in first.
<blackboxsw> woot! +1 . let's do that as I'm itching to get an upload in
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/352921
<smoser> thats my no-unit test oracle ds
<blackboxsw> nice smoser will play with it
<blackboxsw> smoser: I'll pull out the prefix check from the python is_viable_platform
<blackboxsw> then we only have it in one place to remove
<blackboxsw> I won't touch ds-id logic until we confirm otherwise
<blackboxsw> ok pushed my branch. it can sit until monday as it's EOW for you gents
#cloud-init 2019-08-05
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting Aug 5 16:15 UTC | cloud-init v 19.2 (07/17) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> #startmeeting Cloud-init bi-weekly status
<meetingology> Meeting started Mon Aug  5 16:16:04 2019 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<tribaal> o/
<blackboxsw> Heya Chris!
<blackboxsw> Welcome to another cloud-init community status meeting folks.
 * blackboxsw finally back from a much needed vacation and have dug myself out of backlog
<blackboxsw> #chair rharper
<meetingology> Current chairs: blackboxsw rharper
<blackboxsw> cloud-init upstream uses this meeting as a platform for community updates, feature/bug discussions, and an opportunity to get some extra input on current development.
<blackboxsw> All interjections, updates, and questions welcome
<blackboxsw> we may be a bit light this meeting as well as some folks have holidays and travel that coincide with this meeting
<blackboxsw> our format is the following topics: Previous Actions, Recent Changes, In-progress Development, Office Hours
<blackboxsw> we host the meeting every two weeks at the date and time indicated in the IRC channel topic ^
<blackboxsw> I'll update that topic now as I think we forgot to last meeting
<blackboxsw> #topic #cloud-init Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting Aug 19 16:15 UTC | cloud-init v 19.2 (07/17) | https://bugs.launchpad.net/cloud-init/+filebu
<blackboxsw> next meeting will be two weeks from today, same time
<blackboxsw> #topic Previous Actions
<blackboxsw> #link https://cloud-init.github.io/status-2019-07-22.html#status-2019-07-22
<blackboxsw> grokking the meeting last episode, looks like rharper needed to update status on https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1832381
<ubot5> Ubuntu bug 1832381 in cloud-init (Ubuntu) "vm fails to boot due to conflicting network configuration when user switches from netplan to eni" [Undecided,Incomplete]
<blackboxsw> I think we were awaiting feedback there from AnvoMSFT for a specific reproducer
<blackboxsw> so that'll carry over till next meeting if it's a priority
<blackboxsw> the other action from last session was for rharper to ping me on status publishing to github.
<blackboxsw> I've pushed meeting minutes from last two cloud-init status meetings up to cloud-init.github.io so we are closed there
<blackboxsw> no other actions seen
<blackboxsw> #topic In-Progress Development
<blackboxsw> Upstream 19.2 was cut on 7/17 and there are plans to SRU cloud-init within the next week or two into Xenial, Bionic, Disco and Eoan. I know that we are waiting on closure of a few branches in tip before we SRU cloud-init tip to Xenial ++
<blackboxsw> tribaal: your exoscale branch I believe is one of the ones we want landed before we start our SRU process
<tribaal> I was hoping to address Odd_Bloke 's comments today, but that didn't happen. Tomorrow, or "this week" at the very least is my new target.
<blackboxsw> #link https://code.launchpad.net/~tribaal/cloud-init/+git/cloud-init/+merge/369516
<tribaal> Most of the non-blocking comments should be easy - I want to double check the on-reboot behavior on an actual instance though
<blackboxsw> tribaal: excellent, Odd_Bloke was able to get the review in Friday as he knew he'd be on holiday today and wanted to get you feedback
<tribaal> ack
<tribaal> (the blocking comment about the copyright header should be trivial as well thankfully :) )
<blackboxsw> yeah agreed
<blackboxsw> we also have the following branches we'd like to get "in" and merged to tip before SRU
<blackboxsw> #link https://code.launchpad.net/~daniel-thewatkins/cloud-init/+git/cloud-init/+merge/370927   (doc updates)
<blackboxsw> #link https://code.launchpad.net/~vtqanh/cloud-init/+git/cloud-init/+merge/369785  (Azure telemetry)
<blackboxsw> and some of goneri's FreeBSD support branches look like they are straightforward for review/landing
<blackboxsw> #link https://code.launchpad.net/~goneri/cloud-init/+git/cloud-init/+merge/368507
<blackboxsw> #link https://code.launchpad.net/~goneri/cloud-init/+git/cloud-init/+merge/365641
<blackboxsw> If anyone else out there today is interested in getting reviews/merges before we SRU to Xenial, please feel free to raise a request in channel here or on the mailing list.
<blackboxsw> Also in progress, I just drew up a minispec for DataSourceOVF so that VMware can support merging configuration sources from IMC and OVF if both are present. This allows OVF datasource to configure both static IP config as well as do ssh user imports (which was previously not possible)
<blackboxsw> As always, our in progress development generally will also be represented on trello
<blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
<blackboxsw> #topic Recent Changes
<blackboxsw> the following has landed in tip of master since last cloud-init status meeting
<blackboxsw>  % git log --oneline --since 2019-07-22
<blackboxsw>     - net/cmdline: split interfaces_by_mac and init network config
<blackboxsw>       determination [Daniel Watkins]
<blackboxsw>     - stages: allow data sources to override network config source order
<blackboxsw>       [Daniel Watkins]
<blackboxsw> #topic Office Hours (next ~30 mins)
<blackboxsw> feel free to ask for help, reviews, discussions on any cloud-init items you're looking at.   Otherwise I'll spend some time today getting through the review queue for cloud-init branches.
<blackboxsw> and doing some bug triage
<blackboxsw> thanks tribaal for jumping in BTW.
<tribaal> blackboxsw: my pleasure :)
<tribaal> I'm working on the review points in parallel during office hours as well, that should move things forward hopefully.
<blackboxsw> excellent just ping when ready this week and we'll give a quick pass.
<cyphermox> blackboxsw: yeah, I'm not sure about that eni/netplan conflict; you do need to remove old config from one to the other, otherwise they might fight, but not something I'd expect to break boot.
<blackboxsw> hiya cyphermox. agreed, I *think* we decided that cloud-init needed to be smart in the transition from netplan -> eni if someone does that on a system and cloud-init can warn about the behavior change, cleanup old netplan config and render eni in that case.
<blackboxsw> since cloud-init should be smart enough to know what it used to render
<cyphermox> ack
<cyphermox> yeah, now that I think of it we said the exact same thing last meeting, I think
<blackboxsw> +1, I'm just dusting the vacation cobwebs off. so didn't know if something else happened on that front last week
<cyphermox> well, I recall the convo, that's what I meant
<blackboxsw> I think that about wraps cloud-init status meeting for today folks. Thanks again. And drop us a line on the mailing list (cloud-init@lists.launchpad.net) or here in IRC anytime with questions/discussions.
<blackboxsw> #endmeeting
<meetingology> Meeting ended Mon Aug  5 17:06:17 2019 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2019/cloud-init.2019-08-05-16.16.moin.txt
<ahosmanMSFT> Good morning, I was wondering where the Platform classes (image, instance, platform, snapshot) are used during cloud_tests? I've checked the testcases files and other main files.
<powersj> ahosmanMSFT, the collect.py file does the actual launching of the platform
<powersj> and setup_image.py does well... setting up the image :)
<robjo> rharper: still here?
<powersj> robjo, we just back from a sprint so I think he is off today
<robjo> OK, thanks
<powersj> sorry for the missing word lol
#cloud-init 2019-08-06
<otubo> powersj, hey! First time attending cloud-init summit this year. Just wanted to understand the format of the event. Should I (can I?) prepare some slides to talk about cloud-init and Red Hat?
<rharper> robjo: here now
<robjo> rharper: in a meeting will ping you after, thanks
<rharper> sure
<tribaal> Odd_Bloke: I think I addressed all of your reviews points, if you have some time to have another look :)
<Odd_Bloke> tribaal: Sure thing!  Today's looking pretty busy (meetings and catch-up after a public holiday yesterday), so it may be tomorrow before I get to it, FYI.
<amansi26> I am trying to install cloud-init v19.1 on Ubuntu 16.04. The install finishes with status: done, but when I check the logs for cloud-final.service I see http://paste.openstack.org/show/755571/. I also tried capturing the image live and deploying another VM from it; the new VM gets assigned the earlier VM's IP. Can someone please suggest something to resolve this?
<Odd_Bloke> amansi26: cloud-init comes installed in the Ubuntu cloud images by default; when you say "trying to install", can you clarify what you mean?
<Odd_Bloke> (I'm heading in to a block of meetings in a couple of minutes, FYI, so I won't be as responsive here.)
<amansi26> I added some custom modules and made a debian package, then installed that package on an Ubuntu machine
<amansi26> The same version works fine for RHEL.
<tribaal> Odd_Bloke: no worries, thanks for your time!
<Odd_Bloke> amansi26: I would suggest you file a cloud-init bug (the link is in the topic) describing the issue in detail, and ensuring to attach the tarball generated by `cloud-init collect-logs` to it. :)
<blackboxsw> rharper: only blocker to shifting generate_fallback_config from speaking network v1 to network v2 seems to be that when generating driver/device_id udev rules from NetworkState, the driver parameters are lost in a couple of cases. So the rules get emitted like this: DRIVERS=="?*" instead of DRIVERS=="hv_netvsc". I'm gonna track that down now
<amansi26> Odd_Bloke:Sure
<blackboxsw> amansi26: good to hear of people packaging custom plugin modules. I wasn't sure how often that 'feature' of cloud-init was used
<rharper> blackboxsw: yes, that sounds right;  I think we need a way to hang that in the NetworkState so that a v2 -> eni does the right thing;  for v2 -> v2, that should be converted to match: {'device_driver': 'hv_netvsc'}
<blackboxsw> rharper: +1 though I thought the match section had 'driver' and 'device_id' keys. I'll double check.. maybe that's my misunderstanding due to cloudinit/net/__init__.py :extract_physdevs._version_2
 * blackboxsw checks netplan.io examples
<cyphermox> only driver, mac, or name
<rharper> driver works
<rharper> that's what we want
<rharper> the azure code wants to explicitly ignore the mlx4_core devices; and the dhcp should apply only to the devices with driver=hv_netvsc
<blackboxsw> https://netplan.io/reference#common-properties-for-physical-device-types
<blackboxsw> roger
<rharper> so, we should be able to match with both driver and mac
<blackboxsw> +1
<blackboxsw> rharper: the reference for netplan doesn't say anything about device_id property in match. (only driver). Should v2 emit a device_id 0x3(or whatever?) or is there another reference that might document that
<rharper> driver is fine
<rharper> does the device_id even show up in the udev rule ?
<blackboxsw> don't think so SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth1"
<rharper> right
<blackboxsw> ok omitting
<rharper> right, azure only calls into fallback with the driver name to blacklist
<blackboxsw> agreed
<rharper> I think the fallback just stuffed the device_id in there because it was part of the device_driver() return
<blackboxsw> yeah and if we don't need/use it, then no need to call device_devid() anymore either
<blackboxsw> cyphermox: thanks for the confirmation on !device_id
<cyphermox> I'm not against adding new matching ways, but it largely depends on what networkd can realistically do, and that's a bit of a pain
<blackboxsw> +1, I don't think we have a use case that currently requires device_id. If we do we'll file a bug/feature request
<cyphermox> ack
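[editor's note] A minimal netplan v2 sketch of the driver/mac matching discussed above (the interface label and MAC address are made up for illustration):

```yaml
network:
  version: 2
  ethernets:
    hv-nic:
      # Match a physical device by its kernel driver and MAC address,
      # per the netplan common match properties for physical devices.
      match:
        driver: hv_netvsc
        macaddress: "00:11:22:33:44:55"
      dhcp4: true
```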
<robjo> rharper: still have to get back to the other network issue, but something new popped up yesterday
<rharper> refresh my mind on other issues
<robjo> openstack setup in "dual stack mode, i.e. ipv4 & 6"
<rharper> this is the new one
<robjo> the metadata server produces:
<robjo> curl http://169.254.169.254/openstack/2018-08-27/network_data.json
<robjo> {"services": [], "networks": [{"network_id": "4aae8709-b4e6-4cf7-84f7-b7cbddfe3ecb", "link": "tapc48a243a-e6", "type": "ipv4_dhcp", "id": "network0"}, {"network_id": "4aae8709-b4e6-4cf7-84f7-b7cbddfe3ecb", "type": "ipv6_slaac", "services": [], "netmask": "ffff:ffff:ffff:ffff::", "link": "tapc48a243a-e6", "routes": [{"netmask": "::", "network": "::", "gateway": "fd29:c112:2871::1"}], "ip_address": "fd29:c112:2871:0:f816:3eff:fe64:b0d6", "id": "network1"}], "links": [{"ethernet_mac_address": "fa:16:3e:64:b0:d6", "mtu": 1450, "type": "ovs", "id": "tapc48a243a-e6", "vif_id": "c48a243a-e623-4ef6-a363-98564e59fade"}]}
<robjo> the openstack helper then produces:
<robjo> 2019-08-05 14:14:10,631 - stages.py[DEBUG]: applying net config names for {'version': 1, 'config': [{'mtu': 1450, 'type': 'physical', 'subnets': [{'type': 'dhcp4'}, {'type': 'static', 'netmask': 'ffff:ffff:ffff:ffff::', 'routes': [{'netmask': '::', 'network': '::', 'gateway': 'fd29:c112:2871::1'}], 'address': 'fd29:c112:2871:0:f816:3eff:fe64:b0d6'}], 'mac_address': 'fa:16:3e:64:b0:d6', 'name': 'eth0'}]}
<robjo> so we have a "static" and a "dynamic" subnet on the same interface
<robjo> these get processed in order and so the renderer clobbers "bootproto=dhcp" with "bootproto=static"
<robjo> that of course breaks ipv4 access to the system
<robjo> one fix is to declare dhcp the winner in the renderer, i.e.:
<rharper> is it ?
<robjo> yes, because ifcfg-eth0 will have "bootproto=static", which means the dhcp client will not start to request an IP address, and thus the instance only has the static IPv6 address
<rharper> but dhcp will allow static ip assignment in addition to dhcp ?
<robjo> yes, but the interface is configured as "static" therefore no dhcp request will be issued
<rharper> ok, is that consistent on rhel/suse ?
<robjo> the fix is to pick dhcp as the winner, i.e. in the renderer it should be:
<robjo> elif subnet_type == 'static':
<robjo>     if iface_cfg['BOOTPROTO'] != 'dhcp':
<robjo>         iface_cfg['BOOTPROTO'] = 'static'
<robjo> it's on openSUSE/SLES, but I would be surprised if RHEL behaves differently in this case
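[editor's note] A toy model of the clobbering behaviour and the proposed guard; this is illustrative Python, not the actual cloud-init sysconfig renderer:

```python
# Toy sketch of the renderer logic discussed above: subnets are
# processed in order, so a later 'static' subnet would overwrite
# BOOTPROTO=dhcp unless the static branch checks first.
def render_bootproto(subnets):
    iface_cfg = {}
    for subnet in subnets:
        subnet_type = subnet['type']
        if subnet_type in ('dhcp', 'dhcp4', 'dhcp6'):
            iface_cfg['BOOTPROTO'] = 'dhcp'
        elif subnet_type == 'static':
            # The proposed fix: dhcp wins if it was already set.
            if iface_cfg.get('BOOTPROTO') != 'dhcp':
                iface_cfg['BOOTPROTO'] = 'static'
    return iface_cfg

# Dual-stack interface: a dhcp4 subnet followed by a static IPv6 subnet.
print(render_bootproto([{'type': 'dhcp4'}, {'type': 'static'}]))
# {'BOOTPROTO': 'dhcp'}
```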
<rharper> https://paste.ubuntu.com/p/drPGpmW8KT/
<rharper> I can't reproduce with cloud-init master
<rharper> both suse/centos have bootproto=dhcp
<rharper> robjo: the dual_stack.yaml is the v1 config you pasted from the log
<robjo> OK, looking at the code, the issue was reported with 18.5
<robjo> master produces the expected results, I agree
<robjo> going fishing through the code .....be back in a bit
<rharper> robjo: in your 18.5 branch, I think you can just repeat the net-convert command like I did and see if it renders differently
<robjo> OK, that's weird the 18.5 renderer does not set bootproto to static either, more digging required, thanks for the help
<rharper> robjo: sure
<blackboxsw> rharper: just force pushed generate_fallback_config -> talking network v2 to https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/370970 for azure v2 support
<rharper> cool
<AnhVoMSFT> Hi folks, I'm seeing this error from one customer's deployment
<AnhVoMSFT> 2019-07-31 17:09:16,279 - util.py[DEBUG]: Failed mount of '/dev/sr0' as 'auto': Unexpected error while running command. Command: ['mount', '-o', 'ro', '-t', 'auto', '/dev/sr0', '/run/cloud-init/tmp/tmphmsab0y2']; Exit code: 32; Reason: -; Stdout: (empty); Stderr: mount: /dev/sr0 is already mounted or /run/cloud-init/tmp/tmphmsab0y2 busy
<AnhVoMSFT> looks like the command took about 2s and then failed
<AnhVoMSFT> there was a dump of /proc/mounts earlier but /dev/sr0 wasn't on there, so definitely it wasn't "already mounted" scenario, so likely /run/cloud-init/tmp/... was busy - that was created with "with temp_utils.tempdir() as tmpd:" in mount_cb , how can it be busy ?
<Odd_Bloke> AnhVoMSFT: Are you able to file a bug with `collect-logs` attached?
<AnhVoMSFT> what logs do collect-logs get? We have cloud-init-output log and cloud-init log, kernel log. The instance was deleted though
<smoser> if it said 'already mounted', then i'd think the most likely scenario is that it was already mounted.
<smoser> collect-logs will also get /run/cloud-init which is useful.
<Odd_Bloke> If that's what we have, that'll probably do.  I'd recommend doing a `collect-logs` in future if possible, though, because it gathers a bunch of useful stuff.
<smoser> lack of /dev/sr0 in /proc/mounts that is dumped could be a race. cloud-init tried mount, failed, and then unmounted.
<AnhVoMSFT> no, the dump of /proc/mounts is part of util.py mounting code, it checks before trying to mount. While there is potential race issue it's unlikely because other than cloud-init nothing else is mounting /dev/sr0 during init-local phase
<AnhVoMSFT> (which is quite early in the boot process)
<ahosmanMSFT> 3.14
<blackboxsw> rharper: Odd_Bloke powersj, just published tip of cloud-init to eoan. should be seeing tip in cloud images tomorrow
<Odd_Bloke> \o/
<blackboxsw>  cloud-init 19.2-5-g496aaa94-0ubuntu1 (Accepted)
<rharper> nice
<blackboxsw> rharper: if I'm parsing IMDS in azure and the vm has 3 nics, should I think about adding dhcp4-overrides: route-metric: 100 * <intf_num> so the larger the interface number, the higher the metric? or should all non-primary be 200?
<blackboxsw> powersj: I'm going to validate that eoan-proposed systemd=243~rc1-0ubuntu1 fixes Azure multi-ip on primary nic
<powersj> excellent
<blackboxsw> I expect it will.
<blackboxsw> but that'll tie off azure multi-ip primary nic support
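[editor's note] A hedged sketch of what such per-nic dhcp4-overrides could look like in rendered netplan; the interface names and metric values are illustrative:

```yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 100   # primary nic: lowest metric, preferred route
    eth1:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 200   # secondary nic: higher metric, deprioritized
```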
<djhaskin987> If i run a `reboot` command as part of a runcmd, then I have subsequent commands after it, do all commands get run?
<djhaskin987> example:
<djhaskin987> ```
<djhaskin987> runcmd:
<djhaskin987>   - echo 'hi' | tee /tmp/a-file
<djhaskin987>   - reboot
<djhaskin987>   - echo 'hee hee' | tee -a /tmp/a-file
<djhaskin987> ```
<djhaskin987> I want both `hi` and `hee hee` to be written to the file `/tmp/a-file` across reboot. Is this possible?
<rharper> djhaskin987: no, the reboot is going to run and the remainder of your commands won't be run again; it defaults to running commands once per instance;
<Odd_Bloke> rharper: https://code.launchpad.net/~daniel-thewatkins/cloud-init/+git/cloud-init/+merge/370927 <-- ready for re-review
<djhaskin987> rharper thanks for the info, very helpful.
<rharper> djhaskin987: https://cloudinit.readthedocs.io/en/latest/topics/examples.html#reboot-poweroff-when-finished   will allow you to take the reboot out, and I would suggest that you write a script with write_files to /var/lib/cloud/scripts/per-boot/XXXX ; that script will be called on every boot; and in your script, you can check for some marker file that your firstboot runcmd would touch;
<rharper> Odd_Bloke: ok
<djhaskin987> rharper sounds good thanks
<djhaskin987> i'll do that
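[editor's note] A rough cloud-config sketch of the suggestion above: move the reboot into power_state, and have a per-boot script act only after a marker touched by the first-boot runcmd exists. All paths and marker names here are made up for illustration:

```yaml
#cloud-config
runcmd:
  - echo 'hi' | tee /tmp/a-file
  # Marker showing first-boot runcmd completed (hypothetical path).
  - touch /var/lib/cloud/firstboot-done
write_files:
  - path: /var/lib/cloud/scripts/per-boot/post-reboot.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      # Runs on every boot; do nothing until first boot finished,
      # and only do the work once (second marker is hypothetical too).
      [ -e /var/lib/cloud/firstboot-done ] || exit 0
      [ -e /var/lib/cloud/post-reboot-done ] && exit 0
      echo 'hee hee' | tee -a /tmp/a-file
      touch /var/lib/cloud/post-reboot-done
power_state:
  mode: reboot
```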
<blackboxsw> rharper and Odd_Bloke, also Azure support for route-metrics on secondary nics is up https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/370970
<blackboxsw> just pushed changes. (I think I have lints to fix)
<rharper> blackboxsw: nice
<rharper> blackboxsw: so that one 'driver: hv_netvsc' ; that's in the example comment for the method right ? It's hard to see from the diff
<blackboxsw> rharper: right yeah, I just extended the docstring on a method
<blackboxsw> to represent expected method/function input
<kbZ> What about things?
<kbZ> Who knows about things
<kbZ> I have some questions
<rharper> ask away
<kbZ> in this path, cloud-init/cloudinit/sources
<kbZ> I'm looking to add a new source
<rharper> yes!
<kbZ> but I am pure dog scientist
<kbZ> if I wanted to query a metadata URL, does it have to be via IP or will I be able to rock'n'roll with a domain?
<rharper> you can use a dns name,  DataSourceGCE.py for example uses that
<kbZ> so it does, thanks, will look there
<kbZ> I've never used Launchpad before :|
<kbZ> and uhh, I mean, I can barely write python, so this should be fun
<rharper> https://cloudinit.readthedocs.io/en/latest/topics/hacking.html
<rharper> that should have most of the steps you can use to get started on a contribution
<kbZ> why spend ten minutes in the docs when I can spend ten hours toiling away?
 * kbZ looks
<kbZ> >It assumes you have a Launchpad account
<kbZ> assumptions already, yikes
<rharper> well, at least it told you
<kbZ> lol, thanks :D
<kbZ> in Launchpad's defense, I usually only end up here angry and at the end of a bug
<kbZ> so we kinda got off on the wrong foot
<rharper> fair enough
<kbZ> one thing I noticed
<kbZ> DataSourceAzure.py has different permissions than the rest
<kbZ> -rwxr-xr-x vs -rw-r--r--
<kbZ> I want to make jokes
<kbZ> Microsoft bully you into this? (the joke)
<rharper> hehe
<kbZ> is that an accident? if I'm in here fixing things?
<rharper> the execute bit isn't needed on the Datasource, so sure, send in a fix
<kbZ> cool
<kbZ> well, apparently I have been an Ubuntu One member for a long time
<kbZ> Member since:    2005-11-04
<kbZ> awe
<rharper> nice
<kbZ> someone sent my real Ubuntu 6 CDs in the mail a long time ago
<rharper> I've a few of those
<kbZ> lets see if I can slip in this chmod thing and then I'll try taking a crack at the bigger issue
<kbZ> >To contribute, you must sign the Canonical contributor license agreement
<kbZ> who gets my first born?
<rharper> up to you =)
#cloud-init 2019-08-07
<kbZ> powersj you around?
<rharper> kbZ: he's out already, but should be back tomorrow morning Pacific time,
<blackboxsw> rharper: for tomorrow https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/370970/comments/970219 my azure validation on v2 metrics branch for secondary nic. Why isn't route-metric 200  showing?
 * blackboxsw skips out
<otubo> Hi guys, I still can't seem to find out the reason why this thing happens, does anyone have a little bit of time to help me debug this issue? https://bugzilla.redhat.com/show_bug.cgi?id=1593010
<ubot5> bugzilla.redhat.com bug 1593010 in cloud-init "cloud-init network configuration does not persist reboot [RHEL 7.8]" [High,Assigned]
<otubo> (oh, cool bot!)
<otubo> Right now I'm using 18.5 to work on, but it's no big deal to have anything newer to have backported
<otubo> The only environment I have right now is ESXi, and I can reproduce it.
<rharper> blackboxsw: that sure looks like a bug to me w.r.t the metric on eth1
<rharper> I suspect that the ESXi bug is related to the OVF datasource resetting instance-id =(
<Odd_Bloke> rharper: blackboxsw: We discussed yesterday having the new Oracle network code off-by-default for now; I'm thinking I'll name the config option "apply_network_config", following the naming in the Azure DS.  Does that sound reasonable?
<rharper> Odd_Bloke: that sounds reasonable, yes
<kbZ> á( â° â½ â° )á
<blackboxsw> +1 Odd_Bloke.
<blackboxsw> rharper: ok Azure 2 nics Eoan plus route-metric.... again I am not getting any route represented with a route-metric cost of 200. Will probably need to screen share and discuss with you sometime today
<blackboxsw> and maybe it's just that nic 2 is also on the same subnet, so the route is omitted as it's a dup
 * blackboxsw tries switching the vnet to 10.1.0.0/16 instead of 10.0.0.0/16 (which eth0 is also on)
<blackboxsw> hrm, looks like I am just not certain how to setup multiple nics with different routes in azure vms. two nics on separate subnets result in only one interface with routes defined from the dhcp server.
<rharper> blackboxsw: hrm
<rharper> lemme verify on my openstack instances
<rharper> blackboxsw: is that with or without the systemd with classless static route fix ?
<blackboxsw> rharper: right, eoan is without the classless route fix. lemme bump it to -proposed
<blackboxsw> and grab the queued fix for that too
<rharper> no I was hoping without
<blackboxsw> ok
<rharper> I've a bionic instance that's doing it right
<rharper> so I need to see what's different
<rharper> can you import my ssh keys? lp:raharper  ?
<rharper> and let me know where the instance is at ?
<rharper> http://paste.ubuntu.com/p/gkhGkYqQfG/
<Odd_Bloke> rharper: blackboxsw: On reflection, I don't think apply_network_config is the right name; in other places that disables DS-provided network config entirely which wouldn't be the meaning here.  I'm leaning towards apply_imds_network_config.  What do you think?
<rharper> would you update the Azure one as well ? I agree we had some confusion over what it meant
<rharper> Azure would accept both
<rharper> or check both ?
<blackboxsw> rharper: Odd_Bloke +1 on Azure ds_cfg needed to accept both, as apply_network_config on Azure currently means read IMDS and use it.
<blackboxsw> hrm Odd_Bloke which datasource has apply_network_config disabling network config completely? OpenStack? I'd suggest that it is the same meaning in OpenStack as Azure. OpenStack's network_data.json is complex network config for the instance from OpenStack's IMDS. If we ignore it, w/ apply_network_config false, we aren't disabling network completely, just relying on fallback dhcp on primary nic (like azure does)
<Odd_Bloke> I didn't say we disabled network completely, I said we disable _DS-provided_ network.  But, yeah, I see that that's not quite what it's doing on Azure.
<blackboxsw> ... I'm probably misreading sorry.
<Odd_Bloke> (On OpenStack it does just return None from .network_config, which is a full disable.)
<Odd_Bloke> (On Azure it generates its own fallback config with slight modifications to the call.)
<blackboxsw> ahh right, shoot. I thought cloudinit/stages.py did a fallback config if ds.network_config == None. I see now the LOG.info("network config is disabled by %s", src)
<Odd_Bloke> I'm pretty sure stages _does_ do a fallback config.
<Odd_Bloke> Yeah, a DS has to return {'config': False} to disable _all_ networking.
<blackboxsw> looks like true: Init._find_networking_config walks through each potential network source, if ds.network_config == None, it still walks to self.distro.generate_fallback_config()
<Odd_Bloke> Returning None from ds.network_config is just indicating that the DS doesn't have any network configuration to apply.
<blackboxsw> right +1
<blackboxsw> cloudinit/stages.py:line 644 is the catchall if no applicable network config from ds initramfs kernel_cmdline etc
<blackboxsw> ok
<Odd_Bloke> So I think the Azure behaviour and the OpenStack behaviour are quite similar, in that they both result in fallback configuration being emitted.
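[editor's note] A toy sketch of the source walk as described above (illustrative Python, not cloud-init's actual stages.py code): a source returning None is skipped, a source returning {'config': False} disables networking entirely, and if nothing is offered we fall through to the fallback config:

```python
# sources: ordered list of (name, config-or-None) pairs;
# fallback: callable producing a fallback (dhcp-on-primary-nic) config.
def find_network_config(sources, fallback):
    for name, cfg in sources:
        if cfg is None:
            continue              # nothing to offer; keep walking
        if cfg.get('config') is False:
            return None, name     # explicit full disable of networking
        return cfg, name          # first applicable source wins
    # no source offered anything: use the distro fallback config
    return fallback(), 'fallback'
```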
<blackboxsw> in either case, I'm not opposed to a more specific config option name apply_imds_network_config
<blackboxsw> better to avoid vague interpretations as to the config option meaning
<Odd_Bloke> The Oracle behaviour of this flag is different; we will always generate the initramfs configuration (i.e. _not_ fallback), the flag only affects whether you get configuration for your _secondary_ NICs.
<Odd_Bloke> I think this being interaction with an IMDS is a red herring naming-wise, actually.
<Odd_Bloke> Perhaps `configure_secondary_nics`?
<blackboxsw> AHH
<blackboxsw> sure, though in Azure's case apply_network_config == True has the side-effect of being the only option that'll get you secondary_nic config too, because otherwise it's just fallback :)
<blackboxsw> same with OpenStack as it were
<Odd_Bloke> Sure, apply_network_config is a stronger statement than configure_secondary_nics.
<blackboxsw> but it is the most appropriate config option for Oracle... and I suppose we could add that alias in Azure && OpenStack to mean apply_network_config=True if we wanted to grow similar config options for each DS
<Odd_Bloke> I'm not sure that disabling secondary NICs is something we want to enable generically.
<Odd_Bloke> At least, not without some intentional discussion about it.
<blackboxsw> ahh, flipping that on its head right, the converse would not be true in Azure/OpenStack. configure_secondary_nics == False. I get you. agreed
<blackboxsw> rharper: ssh ubuntu@20.186.46.127
<blackboxsw> id imported
<rharper> y
<blackboxsw> I'd like to have you walk me through triage if you would
<blackboxsw> hangout?
<rharper> yes
<Odd_Bloke> rharper: blackboxsw: https://code.launchpad.net/~daniel-thewatkins/cloud-init/+git/cloud-init/+merge/371053 <-- Oracle secondary VNICs (WIP because I need to document the new behaviour, but otherwise ready for review)
<chillysurfer> unfortunately i can't seem to get the emailing to the mailing list working. i've pinged the launchpad channel to see if they have any ideas. but in the meantime, the message i've been trying to send is a question about vendordata getting overwritten by userdata: https://gist.github.com/trstringer/be871f87efca0cd41e7017db0fd31798
<chillysurfer> i dumped a gist there which was the content of the email
<chillysurfer> let me know if you have any thoughts!
<rharper> chillysurfer: sorry about the email trouble
<chillysurfer> rharper: not a problem at all :)
<rharper> https://git.launchpad.net/cloud-init/tree/tests/unittests/test_data.py
<rharper> chillysurfer: I think looking there, that might help get the formatting right
<rharper> I think it covers several of the jsonp merging between user/vendor data
<chillysurfer> rharper: so my jsonp formatting is off?
<rharper> it _looks_ like (and I need to read closer) that user-data might need to include vendor_data: {enabled: True}
<rharper> to merge with vendor data
<chillysurfer> oh wow
<rharper> but I need to look at those examples too
<rharper> because user_data can also disable vendor_data completely
<chillysurfer> yep i see that with the /vendor_data path
<chillysurfer> rharper: and merge_how doesn't apply to vendor data, is that correct? so even if user data and/or vendor data specify merge_how then it still won't be "included"
<chillysurfer> is that a fair statement?
<rharper> it can
<rharper> but the user-data needs to include it as well
<rharper> this is awkward
<chillysurfer> "this is awkward" -> you mean that user-data includes merge_how to take vendor data into account?
<rharper> requiring user-data to specify merge-how in each section
<rharper> that is if you pass in 3 user-data mime parts
<rharper> each one has to say the merge-how
<chillysurfer> ah i see
<rharper> you can't set a _default_ merge-how
<rharper> via cloud-config
<chillysurfer> so for instance if user data specifies runcmd then that section needs to specify a merge_how?
<rharper> I'd like to say: list: append; dict: recursive append; etc
<rharper> yes, in vendor-data and in each user-data
<rharper> which is a real PITA
<chillysurfer> yeah that sounds pretty rough
<rharper> the lxd folks had a more common append and recursive update for dicts; and users could *clear* previously merged data by inserting empty list or things like that
<chillysurfer> but how else could a vendor communicate to a user "hey we want to do this thing, but you need to specify a merge_how in runcmd, bootcmd, etc"
<rharper> I need to read up on their implementation but was thinking that might be nicer for cloud-init
<rharper> documentation
<rharper> which is why it's not great
<chillysurfer> rharper: right i see what you mean. so for lxd the user would have to deliberately clear sections, but for cloud-init it's the opposite... no action is clearing vendor data, and requires action to retain vendor data
<rharper> well, at least I'd like some cloud-config to set the default merge mode
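[editor's note] For reference, the cloud-config merge syntax under discussion looks roughly like this per the cloud-init merging documentation (the runcmd content is illustrative); as noted above, it has to be repeated in each user-data part:

```yaml
#cloud-config
# Ask for appending/recursive merging instead of the default
# replace-on-conflict behaviour when combining with vendor-data.
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - echo 'appended after, not replacing, the vendor-data runcmd'
```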
<metsuke> I'm attempting to use cloud images on ESXi.  Am I able to provide a "Url to seed instance data from" on ESXi and then it will use the host's network to grab the config, or do I need to configure the VM's network first?
<Odd_Bloke> metsuke: o/ I believe cloud-init will attempt to DHCP on the first interface in order to fetch that data, but let me double check.
<Odd_Bloke> metsuke: OK, no, it doesn't do an ephemeral DHCP (which is what I was thinking of), but I _believe_ that it will only attempt to fetch metadata after networking is up in the host.
<Odd_Bloke> metsuke: I'm not very familiar with ESXi or the OVF data source though, so your best bet is probably to try it and see what happens. :)
<metsuke> That makes sense, thank you. I'm trying to make these images as barebones as possible without having to set up things like network configs manually
<Odd_Bloke> cloud-init will configure the first interface to DHCP by default.
<Odd_Bloke> So if you have DHCP set up, then you shouldn't need to touch network config to get the instances up, at least.
<metsuke> Odd_Bloke: indeed, though in my particular case, a specific static IP is required for connectivity
#cloud-init 2019-08-08
<Japje> hey guys, im trying to get debian10 into maas as an image (which i've got working). However, even though cloud-init is enabled it will not run on first boot for some reason
<Japje> as soon as i manually run cloud-init init --local i can restart the network to get it going
<Japje> (the interfaces.d/50-cloud-init.cfg file isnt created, therefore no network)
<Japje> anyone perhaps an idea what could be preventing it from running on boot?
<Japje> systemctl show cloud-config, cloud-final and cloud-init-local services as enabled
<Japje> the image i used is the latest debian10 openstack image
<Japje> hmnz, cloud-init status says disabled, but i have neither a /etc/cloud/cloud-init.disabled nor a commandline in /proc/cmdline that would cause that
<Japje> any other ways to have it disabled?
<Japje> it seems that cloud-init gets disabled when there are no datasources, and it looks like maas throws away the only datasource there is, so i guess i found the cause
<Odd_Bloke> Japje: Glad you tracked it down!  Is there anything else we can help with?
<Japje> neah, i guess im mostly fighting maas for trying to run debian in it
<Japje> nothing thats cloud-inits fault
<Odd_Bloke> Japje: It's cool that you're trying to get that to work!
<Japje> its booting and the network comes up
<Japje> but none of the user_data i push on deploy actually gets in the vm
<Odd_Bloke> It looks like there is a #maas on Freenode, have you tried asking for help in there?  I believe https://discourse.maas.io also gets pretty frequent use, so that might be another good forum to seek assistance?
<Japje> yeah im on the irc channel
<Japje> but irc is very quiet lately
<Japje> so i might check out their discord
<mrtmr> Hi everyone, in openstack/bifrost we can pass a network_data.json file to the server for cloud-init with configdrive. Today I tried to pass a user_data file and I succeeded, with the bifrost-configdrive-dynamic role. But there is a problem with the cloud.cfg file: I must add the resolv_conf module to it, otherwise my user_data does not work at boot time. In my user data there is a basic example for configuring the /etc/resolv.conf file. Do you have any idea how I can re-write the cloud.cfg file?
<Odd_Bloke> mrtmr: o/  What distro are you using?
<mrtmr> in my default cloud.cfg file, the resolv_conf module is not included
<mrtmr> debian 9; it was created with disk-image-builder by bifrost
<mrtmr> if I add this module name by hand to cloud.cfg and re-run again, it executes that module
<Odd_Bloke> Yep, the default Debian cloud.cfg won't include cc_resolv_conf.
<Odd_Bloke> Looking at the documentation for that module (https://cloudinit.readthedocs.io/en/latest/topics/modules.html#resolv-conf) suggests it's only supported for Fedora, RHEL, and SLES.
<Odd_Bloke> It also says: "And, in Ubuntu/Debian it is recommended that DNS be configured via the standard /etc/network/interfaces configuration file."
<mrtmr> okey my bad sry missed that part of it
<Odd_Bloke> Not a problem!
<Odd_Bloke> What is it that you're doing that leads you to want cc_resolv_conf to run?
<mrtmr> cloud-init configures my interfaces but it does not edit my resolv.conf file
<mrtmr> even though my network interfaces are up and running
<Odd_Bloke> Hold on, just launching a Debian instance so I can check if my assumptions are Ubuntu-specific before I respond. :)
<Odd_Bloke> mrtmr: OK, so if you're passing network configuration, I think I would expect the network configuration backend to configure DNS appropriately.
<Odd_Bloke> Which I think should mean that you don't need the cc_resolv_conf module to do it for you.
<Odd_Bloke> Note the "I think"s in there, though. ;)
<mrtmr> is there a way to add cc_resolv_conf module to cloud.cfg
<Odd_Bloke> mrtmr: Well, yes, you can just edit the file, or drop in a cloud.cfg.d snippet.
<Odd_Bloke> mrtmr: Note that you need to replicate the entire list in a drop-in, though.
<Odd_Bloke> Otherwise you'll _only_ run cc_resolv_conf.
<Odd_Bloke> mrtmr: But I don't think you should need cc_resolv_conf to run if you're specifying network_data.json.
<Odd_Bloke> (So it may be better to dig in to why you aren't getting the configuration you want from the network rendering.)
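[editor's note] A hedged sketch of such a drop-in (the filename is hypothetical), honouring the warning above that a drop-in replaces, rather than extends, the module list:

```yaml
# Hypothetical drop-in: /etc/cloud/cloud.cfg.d/99-resolv-conf.cfg
# cloud_config_modules is replaced wholesale by a drop-in, so copy the
# full list from your distro's /etc/cloud/cloud.cfg and append to it;
# a drop-in containing only resolv_conf would run nothing else.
cloud_config_modules:
  # ... the distro's existing modules, copied here verbatim ...
  - resolv_conf
```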
<mrtmr> yeah im specifying network_data.json
<Odd_Bloke> Would you be able to pastebin your network_data.json?
<mrtmr> btw my debian image is created with disk-image-builder and my network_data.json is rendered by bifrost
<Odd_Bloke> mrtmr: Right, I'm trying to work out if my assumptions about what your cloud is doing are correct; there are a lot of ways to deploy/configure OpenStack. :p
<mrtmr> https://gist.github.com/mrtmrcbr/505e335f464203707a367666d1bfccd1
<mrtmr> here is my network_data.json
<Odd_Bloke> OK, I would expect to see 195.226.196.87 configured as your DNS server without cc_resolv_conf; does that not match what you're seeing?
<Odd_Bloke> And, if it doesn't, could you file a bug (https://bugs.launchpad.net/cloud-init/+filebug) with the output of `cloud-init collect-logs` from a failing instance so we can dig in to what is going on?
<mrtmr> i did some changes; should i send these logs from a clean installation?
<Odd_Bloke> Ideally, yes, please.
<mrtmr> Odd_Bloke: this '''cloud-init collect-logs''' command, which release does it come with?
<Odd_Bloke> mrtmr: Ah, that's a question I should have asked earlier: what version of cloud-init are you using?
<mrtmr> my debian image came with cloud-init 0.7.9 by default. disk-image-builder puts it there
<Odd_Bloke> OK, 0.7.9 is pretty old at this point, which would explain why the network config isn't being consumed properly.
<Odd_Bloke> blackboxsw: rharper: https://code.launchpad.net/~daniel-thewatkins/cloud-init/+git/cloud-init/+merge/371090 <-- small MP to make doc builds not hang randomly due to an external dependency
<Odd_Bloke> mrtmr: Are you able to try Debian stable (which has 18.3) and see if you get different behaviour?
<mrtmr> yeah it is disappointing, new releases are not available in the debian repos
<Odd_Bloke> rharper: https://bugs.launchpad.net/bugs/1839061 <-- presumably /etc/ssh/... is root owned, not user owned, so the "7" applies to a different person?
<ubot5> Ubuntu bug 1839061 in cloud-init "Wrong access permissions of authorized keys directory when using root-owned location" [Medium,Triaged]
<Odd_Bloke> mrtmr: Debian does have 18.3, which is ~17 months newer than 0.7.9, at least.
<rharper> Odd_Bloke: it is root owned, but why can sshd read user's '7' but not roots?
<Odd_Bloke> Does it drop permissions down to the SSH'ing user or something?
<mrtmr> okay it is available from the debian stretch repo but i installed with a deb file and i rebooted the machine
<mrtmr> sry - it is not available*
<Odd_Bloke> OK, we'll see if that works; I was suggesting using a Debian stable image, FWIW.
<mrtmr> Odd_Bloke after reboot resolv.conf didn't change
<Odd_Bloke> rharper: I'm now +1 on tribaal's Exoscale change; shall I mark it Approved to land it?
<tribaal> \o/
<tribaal> thanks Odd_Bloke
<Odd_Bloke> rharper: Or, rather, I'm going to mark it Approved unless you have objections. :)
<Odd_Bloke> tribaal: Thank you, and thanks to Mathieu too!
<tribaal> will pass the thanks forward
<rharper> Odd_Bloke: +1
<Odd_Bloke> tribaal: Approved, should be auto-landed Real Soon Now. :)
<tribaal> Odd_Bloke: rharper: nice! Thanks a lot!
<tribaal> blackboxsw: smoser: thanks a lot to you both as well!
<AnhVoMSFT> rharper - any idea how I can wrap an events reporting context manager around this: with EphemeralDHCPv4(fallback_nic):
<AnhVoMSFT> we want to use the event context manager around the dhcp's lease obtain, which is a context manager itself
<AnhVoMSFT> I can create our own context manager and in the __enter__ I can probably call the enter method of EphemeralDHCPv4 and wrap it within the event's reporting context mgr, but want to check if you can think of a better way
<blackboxsw>  AnhVoMSFT I'll peek at that to see what would work. I think we have a second example of that (calling the __enter__ directly within a different context mgr)
<AnhVoMSFT> thanks blackboxsw
<blackboxsw> also AnhVoMSFT I'll send you an email. looking at how best to create a separate subnet on separate nic in Azure that has it's own separate gateway. Mostly to validate this branch behavior: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/370970
<blackboxsw> I couldn't find an easy way in Azure portal to create a separate subnet w/ a unique gateway. Not really sure that that's a frequent use-case on Azure
<AnhVoMSFT> Sure blackboxsw - if I don't know how I can go find the answer
<rharper> AnhVoMSFT: http://hoardedhomelyhints.dietbuddha.com/2014/02/python-aggregating-multiple-context.html; the ExitStack in the comments here looks like what we want,
<AnhVoMSFT> strange, that link gave me "Sorry, the page you were looking for in this blog does not exist."
<blackboxsw> AnhVoMSFT: no worries thought I'd ping in case. it's slightly related to the multi-ip support on the primary nic in Azure
<blackboxsw> hrm shows for me
<blackboxsw> til multi-context mgr
<blackboxsw> from the blog:
<blackboxsw> with contextmanager1, contextmanager2, contextmanager3, contextmanager4:
<blackboxsw>     pass
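For reference, the stdlib form of the aggregation pattern from that blog post is `contextlib.ExitStack`, which rharper points at in the comments. A minimal sketch with a toy resource class (`Resource` and `use_all` are illustrative names, not cloud-init code):

```python
from contextlib import ExitStack


class Resource:
    """Toy context manager standing in for an arbitrary resource."""

    log = []  # records enter/exit order, for illustration only

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        Resource.log.append(("enter", self.name))
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        Resource.log.append(("exit", self.name))
        return False


def use_all(names):
    # Enter every resource on one stack; ExitStack unwinds them in
    # reverse order on exit, even when the count is only known at runtime.
    with ExitStack() as stack:
        return [stack.enter_context(Resource(n)) for n in names]
```

Unlike the comma-separated `with` form quoted above, the number of context managers here does not have to be fixed at write time.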
<AnhVoMSFT> oh there was a semicolon at the end
<AnhVoMSFT> it shows for me now
<AnhVoMSFT> thanks , I will look into it closely
<AnhVoMSFT> it might not work for us though, as I only want to capture the time of obtaining dhcp, (__enter__ call of EphemeralDhcp), not the E2E time of the whole block
<blackboxsw> AnhVoMSFT: also, EphemeralDHCPv4 context manager is doing the same kind of nesting you suggested in obtain_lease/clean_network method using EphemeralIPv4Network.__enter__()
<blackboxsw> so it manages direct calls to EphemeralIPv4Network.__enter__ && __exit__ when needed
<AnhVoMSFT> yeah I looked into that yesterday, but I was hoping that I am missing some cool trick of Python :)
<AnhVoMSFT> I think I'll go with that mechanism of calling the __enter__ method
<rharper> AnhVoMSFT: oh trailing semi-colon
<rharper> sorry
<rharper> http://hoardedhomelyhints.dietbuddha.com/2014/02/python-aggregating-multiple-context.html
<Odd_Bloke> You can nest context managers: with A(): with B(): thing
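A sketch of the wrapper approach AnhVoMSFT describes, timing only the inner context manager's __enter__ (the lease acquisition) rather than the whole block. `TimedEnter` is a hypothetical name, and a real implementation would emit a cloud-init reporting event rather than just storing a duration:

```python
import time


class TimedEnter:
    """Wrap an inner context manager so that only its __enter__ call
    (e.g. EphemeralDHCPv4's DHCP lease acquisition) is measured,
    not the end-to-end time of the whole `with` block."""

    def __init__(self, inner):
        self.inner = inner
        self.enter_duration = None

    def __enter__(self):
        start = time.monotonic()
        result = self.inner.__enter__()  # only this call is timed
        self.enter_duration = time.monotonic() - start
        return result

    def __exit__(self, exc_type, exc_value, traceback):
        # Delegate cleanup untouched to the wrapped context manager.
        return self.inner.__exit__(exc_type, exc_value, traceback)
```

Used as `with TimedEnter(EphemeralDHCPv4(fallback_nic)) as lease:`, the reporting event could be emitted inside __enter__ once the duration is known.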
<Odd_Bloke> rharper: blackboxsw: Do either of you have strong feelings about the default behaviour of host key publication?  It's a no-op on data sources that don't support it, so I think it's fine to keep it that way until we have a data source which introduces support but _doesn't_ want it to be the default.  What do you think?
<Odd_Bloke> (This is in reference to https://code.launchpad.net/~wrigri/cloud-init/+git/cloud-init/+merge/370348)
<blackboxsw> reading now
<rharper> Odd_Bloke: yeah; because I think it's a really great feature and something that DataSources should want to do; I'm generally in favor of enabling by default;
<blackboxsw> sorry I relooked at AWS instance connect functionality to see if it was comparable. ok so they want a callback to publish host keys to their backplane for security of connecting services to their spawned vms. I was wondering if this was like AWS instance connect functionality (not in cloud-init) but I think instance connect for AWS was the other way around (backplane publishing backend service keys to the host).   It
<blackboxsw> seems unique to GCE for the moment, and a noop is a good enough placeholder for the future.
<blackboxsw> good to have the method API defined up in the base DataSource to guide future implementation
#cloud-init 2019-08-09
<Odd_Bloke> GCE host keys branch is now on its way to landing, so I'll cut an eoan upload once it's in.
<Odd_Bloke> blackboxsw: rharper: Can I get a quick eoan upload review: https://code.launchpad.net/~daniel-thewatkins/cloud-init/+git/cloud-init/+merge/371135
<blackboxsw> looking now
<Odd_Bloke> Thanks!
<blackboxsw> will do the upload prep on my end and diff between yours
<rharper> Odd_Bloke: yeah
<Odd_Bloke> (Probably only need one of you.)
<Odd_Bloke> rharper: Thanks!
<Odd_Bloke> Uploaded and accepted.
<rharper> \o/
<Odd_Bloke> rharper: So with the Exoscale DS changes to cloud-init.templates, I don't know if adding that now is the right thing to do, in case we need to do a cherry-pick fix to (e.g.) ubuntu/bionic.
<rharper> Odd_Bloke: right, I guess we need to wait for SRU
<Odd_Bloke> Perhaps this is needlessly conservative, though, because we can always revert it in such a case?
<rharper> I mean, it's only for the daily
<rharper> and yes we could revert, but the daily ppa doesn't get pulled into images or anything, it's just for testing
<Odd_Bloke> If we needed to cherry-pick a fix to bionic, it would no longer just be for the daily PPA though, right?
<rharper> yes, I see what you mean, the release branch itself
<rharper> so I guess that means waiting for the SRU
<rharper> which isn't terrible since we've one coming soon
<Odd_Bloke> Yeah.
<Odd_Bloke> OK, cool, I'll leave it for now then.
<blackboxsw> Odd_Bloke: thanks for the back and forth on https://bugs.launchpad.net/cloud-init/+bug/1839659   I was all for conformity and then a flag day going and cleaning up everything if we decided to go strict
<ubot5> Ubuntu bug 1839659 in cloud-init "cloud-init should stop accepting a plethora of values for true/false configuration" [Low,Confirmed]
#cloud-init 2019-08-10
<marlinc> How can I best mark a systemd service as dependent on cloud-final but still start it at boot?
<IvanKr> Hi! Can someone help me figure out how to run a single cloud-init module without providing user data? When I'm trying to run 'cloud-init single --name cc_growpart', it ends up with 'Failed to fetch your datasource'. Any help will be appreciated!
#cloud-init 2020-08-03
<beantaxi> Hi all. Is this channel for cloud-init development only? Or is it ok for general cloud-init questions
<rharper> beantaxi: always ask away ... someone might be able to help
<beantaxi> Haha thanks! I'm _very_ new to cloud-init, though I'm very happy my EC2 startup actually is following an open standard. Anyway, I've excitedly moved to launch template & user data based startup, since why not just use that instead of learning terraform or what have you.
<rharper> heh, cool.  welcome
<beantaxi> Trouble is, it appears (perhaps misleadingly) that my userdata is not being executed till completion. Almost as though at some point, cloud-init says "ok, you've had long enough" and kills my script and decides to finish booting.
<beantaxi> I'd be very surprised if that's what's happening, so I'm trying to dig in and get some more detail
<beantaxi> For example, my user data is basically a bunch of apt installs, then a few mounts + writes to fstab, then some downloads from S3 and some systemctl enables. But I keep getting these no good instances, because eg in cloud-init-output.log it appears to die somewhere in the middle, eg after my first mount
<rharper> right, typically we look at the cloud-init logs;  if you can get into your system; then cloud-init collect-logs will create a tarball of cloud-init logs and state ..  it will package up /var/log/cloud-init* /run/cloud-init*  and include user-data, so if it's sensitive, you can edit those out and just paste a cloud-init.log;
<rharper> beantaxi: is your script run via runcmd:  in user-data ?
<beantaxi> I'm not sure about runcmd. But everything's in userdata. I have a launch template, where the base image is just EC2's Ubuntu 18.04 Server, and the userdata is my base64 encoded script
<beantaxi> Ultimately I was able to 'fix' my instances, by uploading and running the script by hand, from a sudo -i shell. There only seems to be an issue during startup.
<rharper> ok, so you should be able to find your decoded script in /var/lib/cloud/instance/scripts/
<rharper> I'd first confirm it looks the way you expect decoded;
<beantaxi> I've fired up a new instance, so I can grab the logs with collect-logs as you described. Thanks for that! That sounds useful. And thanks for the decoded script path! That'll be a great next step.
<rharper> second, you can try to re-run it like cloud-init would with:    cloud-init --debug single --name cc_scripts_user --frequency=always ;    cloud-init will call run-parts on that scripts dir;
<beantaxi> Yesssss that sounds perfect
<rharper> and lastly, if you use a #!/bin/bash -x   for your shebang in your script, then you can see the execution tracing output in /var/log/cloud-init-output.txt
<beantaxi> That's the one thing I've actually done from the beginning. is it /var/log/cloud-init-output.txt or .log?
<beantaxi> Backstory - a buddy has started a new job, with runaway k8s issues. k8s for everything. Unsurprisingly nothing works, and no one knows how it's even supposed to work. I told him 'have you looked at cloud-init? I think that's 99% of what you need.' So I'm hoping to demonstrate that (and perhaps get a little contract out of it.)
<rharper>  /var/log/cloud-init-output.log
<beantaxi> Ok good. That's what I've been looking at. It's unclear what its relation is to what AWS makes available in the console for 'Get System Log', but I presume that's some very AWS specific stuff going on.
<rharper> beantaxi: speaking of k8s and cloud-init,  https://bugs.launchpad.net/cloud-init/+bug/1888822
<ubot5> Ubuntu bug 1888822 in cloud-init "cloud-init does not respect declared MIME types in multipart archives" [Critical,Triaged]
<rharper> this was just worked on last week; and it had to do with some k8s bootstrapping of secret-user-data ...  may not be related but figured I'd pass it along in case that was the issue
<beantaxi> Thanks! It was a good read. Among other things, demonstrates people are successfully using cloud-init for much more elaborate scenarios than mine.
<beantaxi> I was little afraid my issue was 'dont use cloud-init for anything over a dozen lines or so; that's not what cloud-init is for'
<rharper> beantaxi: hehe, no there are some very elaborate and long scripts to setup hosts with cloud-init;
<beantaxi> Murhpys Law: I just built a new image, and then launched a new VM from the new image, and both came up flawlessly. And I'd terminated the bad guys so I couldn't run the above scripts. But those are fantastic to have for future use.
<rharper> =)
<beantaxi> Actually in looking at my successful run, I notice I have an rsync in there, to sync a local disk up from a volume, and perhaps that's not really part of 'system startup'.
<beantaxi> Do you guys have a recommendation, on whether to put that in a separate cloud-init step to run on start, or to use systemd, or other?
<rharper> cloud-init will run every boot, but not every cloud-init operation runs every boot;  you can create a script which cloud-init will run every boot, or only once, or once-per-instance;
<rharper> cloud-init can run things quite early ( a boot hook) ; user-scripts/runcmd typically run fairly late (by design, after networking is up and users created, files written, etc)
<rharper> so it really depends on when you need to run the rsync; how often, etc.
<beantaxi> That's actually how I found out about cloud-init. I wanted something to run on every VM start, not just VM creation, and I came across an AWS thing saying I could use a multi-part MIME file etc etc.
<beantaxi> I'm new to systemd as well, so I'm musing if I want to go the multi-part MIME route or the systemd route. I'm happy to know any technical pros/cons if there's more to it than personal taste.
<rharper> cloud-init only runs during boot; so after the bootup is finished, it's not active;  of course with a systemd unit you can start/restart it trivially;  having cloud-init re-run a script is also doable but likely more overhead of spinning up cloud-init to exec a script;  if it's meant to run more frequently than boot; I'd probably use write_files to create a systemd unit with my program being called from that
<beantaxi> Ah - this is the bit where you use cloud-init / cloud-config directives, instead of pure bash
<rharper> Right, write_files, and runcmd, you could use write_files to write out both your unit and the script, and runcmd to invoke the script and the service if you like
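A sketch of the write_files + runcmd combination rharper describes (the file paths, unit name, and rsync command are illustrative, not from the discussion):

```yaml
#cloud-config
write_files:
  - path: /usr/local/bin/sync-volume.sh   # illustrative name
    permissions: '0755'
    content: |
      #!/bin/bash -x
      rsync -a /mnt/volume/ /srv/data/
  - path: /etc/systemd/system/sync-volume.service
    content: |
      [Unit]
      Description=Sync data from attached volume

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/sync-volume.sh
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, start, sync-volume.service]
```

Once the unit file exists, the script can also be re-run after boot with a plain `systemctl start sync-volume.service`, which is the start/restart flexibility rharper mentions.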
<beantaxi> I saw that ... I was a bit hesitant to learn that, instead of just writing bash, largely because I wasn't sure how I'd troubleshoot my cloud-config or see exactly what was going on. Of course I'd get the benefit of any error checking etc -- all the stuff I _should_ be doing but probably am not.
<beantaxi> What's the implementation of write_files etc ... is it all little python functions?
<rharper> almost all of cloud-init is written in python;  the syntax for the user-data input is yaml, we have examples on our docs page, https://cloudinit.readthedocs.io/en/latest/topics/modules.html#write-files
<rharper> for debugging/troubleshooting, we typically use LXD to run a system container with user-data attached to it;  that's faster than launching an image (if you don't have a dev setup with lxd, you can launch an ubuntu instance and use lxd from there)
<beantaxi> That lxd maneuver sounds incredibly helpful. Every time I need to debug an image startup issue I lose half a day, just waiting for VMs to start.
<rharper> alternatively, if you deploy into an instance, you can test your configs with:  cloud-init --debug --file my-cloud-config.cfg single --name cc_write_files --frequency=always;  write the cloud-config that you want to test into the file and then repeatedly call cloud-init single , the --frequency=always means it will always execute that module
<rharper> yeah, lxd part is nice;  we do something like :  lxc init ubuntu-daily:bionic b1;   lxc config set b1 user.user-data "$(cat my-user-data.cfg)"; lxc start b2;
<rharper> s/b2/b1;
<rharper> then you can lxc exec b1 bash;  and run cloud-init status --wait (this blocks until all of cloud-init is done);  and check your results;
<beantaxi> lxd has seemed like black magic to me for quite some time. It's been bugging me, but I've never had a 'way in' to demystify it and actually use it as a productivity tool. This sounds perfect.
<beantaxi> So, I've been musing if cloud-init could be used to deploy containers on separate AWS regions, or even across cloud providers.
<beantaxi> And now, in a LXD youtube I'm watching from 2015, this guy talks about using LXD to migrate cloud-init'd _running containers_ from host to host. Wow.
#cloud-init 2020-08-04
<beantaxi> Can I use lxc config set b1 user.user-data "$(cat my-user-data.cfg)", as @rharper suggested above, but with an existing bash file I use for user-data in an EC2 launch-template? Or do I need to convert it to something more cloud-config friendly
<meena> beantaxi: what does your bash script look like / do, what should it do? is it used to create stuff from the outside, or the inside?
<beantaxi> meena: Typical initialization stuff at the moment ... a lot of apt installs, a few mounts ... also downloads a few things from S3.
<beantaxi> I wrote the script, and others like it, before I'd heard of cloud-init. I like rharpers idea of using lxd to develop init scripts, instead of firing up fully blown EC2 instances all the time, but not if I need to add in a 'convert all existing bash scripts to cloud-config yaml' task at this time.
<rharper> beantaxi: to make a multi-part mime message as your user-data, you can use cloud-init's tools/make-mime.py ;  it's in the cloud-init source repot
<rharper> https://paste.ubuntu.com/p/N9HHZ9rQZ2/
<rharper> s/repot/repo
<beantaxi> rharper: Thanks. I saw a reference to make-mime.py ... I probably could have phrased my question as "can I use my bash-script as is, or will I have to use something like make-mime.py to create a full multipart MIME file." & I couldn't find any docs which said one way or the other.
<rharper> yeah; it's under-documented
<rharper> s/under/un
<beantaxi> haha! well played. make-mime.py seems fairly useful and painless. Btw, fuzzy question - how does cloud-init view supporting pure-bash initialization? I could understand if it was seen as an annoying legacy necessary evil
<Odd_Bloke> beantaxi: It's not considered legacy by any means; it's a totally valid way of specifying user-data, and will continue to be so.
<rharper> beantaxi: I'm not sure I understand what "pure bash initialization" ?  in terms of the contents of the payloads, cloud-init does not have an opinion;
<Odd_Bloke> I took it to mean "passing in a shell script as user-data", but maybe I misunderstood.
<beantaxi> Odd_Bloke: That's exactly right. rharper: I suppose I meant generally; eg if I have issues scripting my startup in bash, will those be seen as valid question, or will I be seen as annoying everyone by stubbornly not switching to cloud-config yaml (apologies if Im using the wrong terms of art)
<rharper> beantaxi: definitely valid;  there are always trade-offs between writing things your self in a script; vs. leaning on the cloud-config modules;  and you can always mix both;  with the multi-part mime archive, you can include any number of shell scripts and #cloud-config files
<beantaxi> rharper: That's definitely the answer that's best for me :) so no arguments here
<beantaxi> Wow guys ... I just went from yesterday's complete ignorance, to just now firing up an lxd Ubuntu 18.04 container, using lxc config set user.user-data with my make-mime'd bash script ... and I appear to have a working system, with my systemd-based web scrapers happily running and downloading?
<beantaxi> Did this all just happen?!?
<Odd_Bloke> ^_^
<beantaxi> Ok, here's something I like to see. It's an error message so that might seem odd. + mkfs -t xfs /dev/xvdf
<beantaxi> Error accessing specified device /dev/xvdf: No such file or directory
<beantaxi> This is on an EC2 instance, where I have attached volumes at /dev/xvdf thru h, so on the host system those devices are valid. But in my lxd container, those appear to be invalid ... which is what I was hoping for but had no idea if that'd work.
<beantaxi> What I wrote is a little unclear ... what I was hoping for was that my container would have no idea of its parent's devices, and that's what I'm seeing. So this is pretty cool.
<rharper> beantaxi: device block manipulation won't be present inside a container;
<rharper> beantaxi: even launching as a VM, the virtualization layer on Ec2 exposes their virtual disks as xvdX (this is a Xen thing);
<blackboxsw_> Odd_Bloke: if you get a chance today I think I need an upstream  +1 review on https://github.com/canonical/cloud-init/pull/516
<blackboxsw_> have an approval from lucas
<beantaxi> rharper: Ah, so _that's_ where the x in xvd* comes from. If I really wanted my container to have access to host devices, it looks like lxc config devices add <something> <something> disk <etc> could allow that
<AnhVoMSFT> @rharper @Odd_Bloke is there any way to tell a systemd service to "start After A, but if A does not exist, start After B"? More specifically, we're dealing with a scenario where network.target isn't reliable enough, so we want the service to start After=NetworkManager-wait-online.target, but if NetworkManager-wait-online.target does not exist, it should start After=network-online.target
<rharper> you can include more than one After, if the target specified does not exists, it's ignored
<rharper> so network.target and network-online.target are two very different things ...   can you just start after network-online.target (which is reached after networkd or network-manager online.targets are reached)?
<rharper> or are you trying to adjust cloud-init.service (which runs after network.target and after the $service-online.target but *before* the network-online.target ?
<rharper> I know there have been issues getting NM-wait-online.target to work in the same spot as systemd-networkd-wait-online.service ;  specifically around when dbus starts and things like that;
<AnhVoMSFT> no, this is for rpc-statd.service and rpc-statd-notify.service. in RHEL these services are configured to start after network-online, which causes a conflict with cloud-init's init phase when it tries to start mount -a, which causes NFS mounts to lock up due to cyclic dependencies
<AnhVoMSFT> @rharper there're more details in this bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1858930
<ubot5> bugzilla.redhat.com bug 1858930 in nfs-utils "nfs mounts will block cloud instances with cloud-init from starting up" [High,New]
<rharper> hrm, I thought mount -a would ignore any of the _netdev mounts;
<rharper> is _netdev not on the option list in fstab for the nfs mounts?
<AnhVoMSFT> no, in the cases I looked at they don't have the _netdev option list
<AnhVoMSFT> should they be on there?
<AnhVoMSFT> note that this isn't an issue for NFSv4 mounts, only NFSv3 (NFSv4 don't require the rpc-statd* services)
<rharper> yes
<rharper> all network mounts require _netdev to prevent mount from bringing up the mount before networking is present
<rharper> AnhVoMSFT: ^
<AnhVoMSFT> Thanks @rharper, let me try this and add that to our documentation. Is this documented in cloud-init doc? If not we can contribute a PR there
<rharper> documented in man (8) mount;  and I believe we mention _netdev  as well but not 100% sure
<rharper> AnhVoMSFT: looks like we could use  a doc/example update for nfs;  and we do have code in cc_mounts which checks if the device is "network" related;  so possible could update the defmounts to append a ,_netdev in the opts for nfs mounts
<AnhVoMSFT> @rharper "update the defmounts to append a ,_netdev in the opts for nfs mounts" : how does this work?
<rharper> cloud-init in cc_mounts updates/generates /etc/fstab for any of the provided mounts;  if one of the devices was: nfshost:/mypath /localmount  ....  we could ensure that the options field will also get a ,_netdev on it;
<rharper> it's a bit tricky since, if a user provided their own mount options, clobbering that might interfere;   I think a good step 1 is to test and then document that for nfs mounts, users should provide mount options, like defaults,_netdev ;
<rharper> later one could see if cloud-init can safely append the _netdev automatically if not present in mounts that are network devices
<rharper> AnhVoMSFT: see cloudinit/config/cc_mounts.py:is_network_device()  ; which is just a regex checking for : in the device name;   using that later on when composing the updates to fstab, we could check for network device mounts if _netdev is in the opts, and if not, append it
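A sketch of that check: the regex mirrors rharper's description of cc_mounts.py's is_network_device (a ':' in the device name), while `ensure_netdev_opt` is a hypothetical helper showing the proposed append, not actual cloud-init code:

```python
import re

# Modeled on rharper's description of cc_mounts.py:is_network_device():
# a network device is one with a ':' in its name, e.g. "nfshost:/mypath".
NETWORK_DEVICE_RE = re.compile(r".+:")


def is_network_device(device):
    return NETWORK_DEVICE_RE.match(device) is not None


def ensure_netdev_opt(device, opts):
    """Append _netdev to a mount's option string for network devices,
    if it is not already present (hypothetical helper)."""
    opt_list = opts.split(",") if opts else []
    if is_network_device(device) and "_netdev" not in opt_list:
        opt_list.append("_netdev")
    return ",".join(opt_list) or "defaults"
```

As rharper notes, clobbering user-provided options is the tricky part; this sketch only ever appends, never removes.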
<AnhVoMSFT> @rharper does mount -a actually ignore _netdev?
<AnhVoMSFT> -a, --all
<AnhVoMSFT>               Mount  all  filesystems  (of  the given types) mentioned in fstab (except for those whose line contains the noauto keyword).  The filesystems are mounted following their order in
<AnhVoMSFT>               fstab.
<AnhVoMSFT> looking at "man mount" it only mentioned that it would only skip those with noauto keyword on it
<rharper> hrm, even if we ignored the nfs mounts; there's nothing to actually mount things;
<rharper> I know I played with this before; it looks to me like on systemd systems, we shouldn't use the mount -a at all, but rather the daemon-reload we have, which will re-parse fstab (and then with _netdev options) those mount units won't run until after networking (and will happen automatically);
<rharper> in the mount man page there's mount -a -O no_netdev ; which would exclude any mount that had the _netdev option set;  then the daemon-reload would create the mount units for the nfs entries ...;  for non-systemd systems; there's nothing to trigger a mount post network coming up though
<rharper> this would be an existing issue that they've already solved (either by not using cloud-init cc_mounts to handle nfs mounts, or appending some final script which runs mount -a again
#cloud-init 2020-08-05
<AnhVoMSFT> @rharper - continuing from our discussion yesterday. Is the best course of action to have cloud-init do a mount -a -O no_netdev to avoid mounting NFS mounts?
<beantaxi_> Where's a good place to ask basic lxc/lxd questions? I've tried #linux-containers but there does not appear to be any activity of any kind there
<beantaxi_> whoops -- I lied. It looks like things picked up there about 3 AM US time
<blackboxsw_> beantaxi_: probably #lxd-dev
<blackboxsw_> beantaxi_: and from the topic on that channel it says support questions over in #lxcontainers
<blackboxsw_> instead of #linux-containers
<jj623> Hello, I was expecting `cloud-init status --wait` to poll until all stages of cloud-init have completed, but it looks like it exits before cloud-final has finished running if a module within cloud-config fails. My current idea is to execute `cloud-init status --wait` in a systemd unit that has an After= dependency on cloud-init.target, but is there
<jj623> a more accepted approach for waiting until cloud-init is done executing all stages?
<jj623> $ cloud-init --version -> /usr/bin/cloud-init 20.2-45-g5f7825e2-0ubuntu1~16.04.1
<blackboxsw_> jj623: that sounds like it would be worth a bug. I want cloud-init status --wait to block until everything is done executing error or not.
<blackboxsw_> please do file a bug so we can keep that approach simple for non systemd units/scripts
<rharper> status --wait most certainly should block until final is done ..
<rharper> blackboxsw_: +1
<blackboxsw_> that said it should be possible to add After=cloud-init.target for systemd units/services... per this earlier blog post  https://ubuntu.com/blog/cloud-init-v-18-2-cli-subcommands
<blackboxsw_> but, really we do need to fix that bug you mention in cloud-init status --wait for your error condition
<blackboxsw_> it'd be worth an FAQ doc topic added  to https://cloudinit.readthedocs.io/en/latest/topics/faq.html on startup scripts waiting on cloud-init as this question is really asked a *lot*
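A minimal sketch of such a unit, per the blog post linked above (the service and script names are illustrative); After=cloud-init.target orders it behind the final cloud-init stage:

```ini
# /etc/systemd/system/myapp.service (illustrative)
[Unit]
Description=Runs only after cloud-init has completely finished
After=cloud-init.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myapp-start.sh

[Install]
WantedBy=multi-user.target
```

Note that After= is pure ordering; it does not pull cloud-init in as a dependency, which is usually what you want since cloud-init is already enabled on cloud images.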
<rharper> blackboxsw_: xenial uses systemd ...
<rharper> so I'm not sure what's going on
<blackboxsw_> rharper: I think the issue as jj623 alluded to, is that status --wait only blocks until any error is seen any stage of /run/cloud-init/status.json
<blackboxsw_> so if we have any error in config:modules stage, status exits as ERROR
<rharper> ah, I see;
<blackboxsw_> here I'm thinking https://github.com/canonical/cloud-init/blob/master/cloudinit/cmd/status.py#L133-L144
<blackboxsw_> in which we bail on the wait if !(STATUS_RUNNING or STATUS_ENABLED_NOT_RUN) https://github.com/canonical/cloud-init/blob/master/cloudinit/cmd/status.py#L57
<rharper> yeah, we'll need a flag;  as existing behavior bails on first error;
<blackboxsw_> so I think we may need to avoid tracking ERROR until DONE is also achieved in the status --wait call
<rharper> --wait --ignore-errors/--no-errors/--stages-only   ...  not sure what color the bikeshed should be
<blackboxsw_> jj623: hiya thanks for the heads up, we agree it's probably a bug with status --wait related to  https://github.com/canonical/cloud-init/blob/master/cloudinit/cmd/status.py#L133-L144. if you get a chance, please file a bug  https://bugs.launchpad.net/cloud-init/+filebug and we can get that fixed
<blackboxsw_> rharper: :) +1 agreed
<jj623> will do, thanks for confirming what the intended behavior is
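A sketch of the behaviour blackboxsw_ proposes: keep polling on an early error and only report it once every stage has run. `read_status` and `wait_for_completion` are hypothetical names standing in for the real parsing of /run/cloud-init/status.json in cloudinit/cmd/status.py:

```python
def wait_for_completion(read_status, sleep, interval=0.25):
    """Block until cloud-init has finished all stages, error or not.

    read_status() returns a (state, finished) pair, where `finished`
    mirrors whether every stage in /run/cloud-init/status.json has run.
    Unlike the current behaviour discussed above, an early "error"
    state does not end the wait; we return it only once all stages
    are actually done.
    """
    while True:
        state, finished = read_status()
        if finished:
            return state
        sleep(interval)
```

A `--wait` implemented this way would still exit non-zero on error, just not before cloud-final has finished.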
#cloud-init 2020-08-06
<otubo> smoser, Hey, do you know if this was merged at some point? https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+ref/fix/1788915-vlan-sysconfig-rendering
<otubo> smoser, looks like we have an issue on RHEL that could benefit from this fix.
<otubo> smoser, I mean, I can't find the exact commit entry on the log, but perhaps it was merged as a different title on github, I don't know
<Zador> Hello creme de la creme ! :)
<Zador> Can we use a password hash to implement Linux instance AD domain join?
<otubo> Odd_Bloke, quick ping for PR 428 to make sure it does not get closed due to inactivity :-) thanks for the reviews!
<Zador> hello everyone !
<Zador> any Guru in the channel ?
<Zador> I want to know if it's possible to use password hash to implement Linux AD integration
<Zador> ?
<ananke> not sure how that's related to cloud-init
<Zador> we need to join the AD domain using admin credentials
<Zador> and I was wondering if it's possible to use password hash to automate and secure the domain join
<Zador> or if you have any suggestion/doc or use case of cloud-init and AD integration
<Zador> thank you ananke !
<ananke> you may want to check in ##linux, or a distro specific channel, there may be tools already present in your distro of choice.
<Zador> alright thanks
<Odd_Bloke> otubo: Thanks for the ping!  I've removed the tag, so it should have another 14 days of grace (and I would hope I'd get to it before then!).
<Odd_Bloke> Zador: ananke is correct: cloud-init doesn't have any specific Active Directory integration.  You'll want to look into how to configure this using your distro's available tooling.  Once you have some scripting that works for it, then you could look into having cloud-init run that by passing it as user-data.
<otubo> Odd_Bloke, thanks :)
<amansi26> blackboxsw_: We have some modules specific to a particular platform; what should the procedure be to contribute a new module to cloud-init? Is there any restriction on doing so? I came across this document https://cloudinit.readthedocs.io/en/latest/topics/hacking.html . Wanted to confirm: is there anything needed to contribute apart from that?
<meena> what *is* HiBee?
<AnhVoMSFT> @Odd_Bloke @rharper Is it possible to disable network configuration of cloud-init from user-data (network: config: disabled)?
<AnhVoMSFT> I guess not, from cloud-init doc: "User-data cannot change an instance's network configuration."
<blackboxsw_> AnhVoMSFT: sorry right, it is either from /etc/cloud/cloud.cfg.d/* file that contains "network:\n config: disabled" or kernel commandline params provided like "network-config=disabled"
<blackboxsw_> also note if providing kernel cmdline params that any network config can be provided directly via kernel cmdline "network-config=<any-base64-encoded-networkYAML>"
<blackboxsw_> per commitish https://github.com/canonical/cloud-init/commit/1d2dfc5d879dc905f440697c2b805c9485dda821
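[Editor's note: a minimal sketch of the kernel-cmdline form blackboxsw_ mentions. The `network-config=` parameter takes base64-encoded network YAML; the interface name and netplan-style config below are illustrative, and `base64 -w0` assumes GNU coreutils.]

```shell
# Sketch: base64-encode network YAML for the kernel command line.
# The eth0/dhcp4 config here is purely illustrative.
cat > net.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF
encoded="$(base64 -w0 net.yaml)"
# boot the instance with: network-config=<that value>
echo "network-config=$encoded"
```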
<blackboxsw_> user-data (cloud-config) is processed too late to disable network config, I believe (so even emitting a write_files entry for /etc/cloud/cloud.cfg.d/disable-net.cfg with the right content will, I *think*, still not disable networking until after a `cloud-init clean --reboot`)
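[Editor's note: for reference, the cloud.cfg.d file described above would look like the following. The filename is arbitrary (anything ending in .cfg in that directory); only the keys matter.]

```yaml
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
# Disables cloud-init's network rendering entirely. This must be
# baked into the image (or written before first boot); it cannot be
# supplied via user-data, as discussed above.
network:
  config: disabled
```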
<tt_> Hello, is it possible to have cloud-init run a script on first boot of a machine on aws without needing to pass it in via user-data? I would ideally like to make an ami with the script already there in the proper place.
<tt_> Is it as simple as dropping the script into scripts/per-instance? https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-instance
<blackboxsw_> tt_: if you want first-boot only, you would drop them in https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-once
<blackboxsw_> if you want them to run each time the platform tells you that this instance is "new" (maybe redeployed from a cloned AMI) then you would use scripts-per-instance
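[Editor's note: a sketch of baking a first-boot-only script into an image, per the per-once suggestion above. The paths are the cloud-init defaults; `ROOT` is an invented convenience variable so you can stage into a mounted image tree instead of a live system, and the script name/contents are placeholders.]

```shell
# Stage a script that cloud-init runs exactly once, on first boot.
ROOT="${ROOT:-.}"   # set ROOT=/ on the target system, or to a mounted image
SCRIPTS="$ROOT/var/lib/cloud/scripts/per-once"
mkdir -p "$SCRIPTS"
cat > "$SCRIPTS/10-firstboot.sh" <<'EOF'
#!/bin/sh
# placeholder payload: replace with your real first-boot setup
echo "first boot setup ran" >> /var/log/firstboot.log
EOF
chmod +x "$SCRIPTS/10-firstboot.sh"   # per-* scripts must be executable
echo "staged: $SCRIPTS/10-firstboot.sh"
```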
<blackboxsw_> Odd_Bloke: I think I got you some good questions/comments on the oracle branch https://github.com/canonical/cloud-init/pull/493#pullrequestreview-462784349
<blackboxsw_> thanks for all the work there
<tt_> blackboxsw_: Awesome, thank you. Would the user-data script example here(https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#using-instance-data) work if I put it in scripts/per-instance or scripts/per-once?
<Odd_Bloke> blackboxsw_: Ack, thanks, will take a look!
<blackboxsw_> tt_: no prob. Hrm, I'll have to check if scripts-per-* actually handles jinja templates for those scripts.
<blackboxsw_> checking now . If not, then your script would likely be able to run `cloud-init query <somekey>` and react to the values in that output
<cut> is cloud-init supposed to stay installed for the lifetime of the vm? or can i have it uninstall itself after firstboot when it sets everything up?
<blackboxsw_> cut: if cloud-init is uninstalled, it can't re-run if the vm is rebooted and instance-data (the metadata provided by the cloud platform) changes across reboot. So your VMs would be missing out on any potential configuration changes that are supposed to affect the system if provided by the platform.
<blackboxsw_> so, it depends: some platforms don't do any configuration across vm boots, others "may"
<cut> thanks
<blackboxsw_> cut, no prob. there are facilities, as mentioned above by tt_, that try to run some things across each instance boot, or each time the instance-id changes for that cloud. So it all depends on what you want your final image to support and which cloud you are running on.
 * blackboxsw_ gets some lunch
<Odd_Bloke> cut: If you remove cloud-init and capture an image from such an instance, then it won't perform its usual first boot functions (which include SSH host key rotation, authorized_keys handling, network rendering, and handling of any user-data passed in): you'll have to handle that all yourself (assuming that the network config/authorized keys in the captured image will even allow you access to the new
<Odd_Bloke> instance).
<blackboxsw_> ahh right, good point Odd_Bloke. much more critical than I implied
<blackboxsw_> tt_: confirmed you cannot provide cloud-config as per-instance/per-boot scripts; they are actually just executable scripts and are not rendered as user-data. So if you wanted to rely on instance-data variables as shown in https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#using-instance-data, you would probably want something like https://paste.ubuntu.com/p/tFDfWN3Wpp/
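[Editor's note: since per-* scripts are not jinja-rendered, a script can instead read instance data at runtime with `cloud-init query`, as suggested above. A minimal sketch; `v1.cloud_name` and `v1.region` are standardized instance-data keys, and the `unknown` fallback is just defensive handling for when the query fails.]

```shell
#!/bin/sh
# Hypothetical per-instance script: scripts under
# /var/lib/cloud/scripts/ run as plain executables, so fetch
# instance data via the cloud-init CLI instead of jinja templating.
cloud="$(cloud-init query v1.cloud_name 2>/dev/null || echo unknown)"
region="$(cloud-init query v1.region 2>/dev/null || echo unknown)"
echo "running on $cloud in $region"
```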
<blackboxsw_> be back in a bit
<Odd_Bloke> blackboxsw_: Thanks for the review, your point about metadata is a really good one.
<Odd_Bloke> I'll address that tomorrow.
<blackboxsw_> good deal Odd_Bloke
#cloud-init 2020-08-07
<cgd> Hi everyone I am trying to troubleshoot an issue. I am hoping that a cloud-init 'Elder' can impart some wisdom!
<cgd> I am trying to install some packages but I get an error: according to the output log file, it failed to connect to the package server for download. I have had this issue more than once; I read it could have something to do with dynamically assigned IP addresses on the network
<cgd> Happy to give log output and yml file
<jeeebz> Hello !
<jeeebz> I'm not in the cloud domain, but someone asked for the cloud-init package. I wrote the spec and built the package for our distro, and I would like to know if there is anything else to do beyond compiling and packaging.
<Odd_Bloke> cgd: What distro are you running?  How did you acquire/create the image?
<jeeebz> I just got an answer: cloud-init is already integrated in the distro, but I didn't find it at first... I spent time for nothing xD The distro is Mageia
<jeeebz> (already integrated in Mageia for 6 years: https://svnweb.mageia.org/packages/cauldron/cloud-init/current/SPECS/cloud-init.spec?view=log )
<Odd_Bloke> jeeebz: Oh no, always frustrating when that happens!  Still, let us know if there's anything else we can help you with. :)
<jeeebz> Odd_Bloke: I've no question about cloud-init now. An ISP provides boxes with embedded OSes,
<jeeebz> and one would like to port Mageia to their box (freebox delta)
<jeeebz> now I'm learning about qcow2 images. Normally, that's done with the cloud-init part.
<jeeebz> But, Odd_Bloke, maybe update the repo to mention that Mageia is also supported?
<Odd_Bloke> jeeebz: Could you run `cloud-init query distro` on a Mageia instance and let me know what it returns?
<jeeebz> Hum... I've no instance, (it is just my desktop OS now). Should I do the whole process, creating a VM etc ?
<jeeebz> Or just installing it would be enough ?
<Odd_Bloke> jeeebz: I wouldn't suggest installing it on a desktop, it might behave unexpectedly.  I've actually answered my own question by looking at the source you just pasted: it looks like Mageia support is patched into cloud-init at package build time in Mageia.  So I don't think we should update the upstream docs, because Mageia isn't supported upstream.  That said, if you have a way of reaching out to Mageia
<Odd_Bloke> folks, we'd be glad to accept a PR which added such support.
<Odd_Bloke> (It's specifically this line: https://svnweb.mageia.org/packages/cauldron/cloud-init/current/SPECS/cloud-init.spec?view=markup#l69 which installs this file: https://svnweb.mageia.org/packages/cauldron/cloud-init/current/SOURCES/mageia.py?revision=553373&view=markup)
<jeeebz> OK, I will write an email to this Mageia member https://svnweb.mageia.org/packages/cauldron/cloud-init/current/SOURCES/mageia.py?revision=553373&view=markup#l10
<Odd_Bloke> blackboxsw_: I've either responded to or addressed all your review comments on https://github.com/canonical/cloud-init/pull/493
<blackboxsw_> Odd_Bloke: thanks for the review do you have a dump of cloud-init query --all available from an oracle instance using your pr?
 * blackboxsw_ can't find my creds and --compartment-id as it's been a long time since I've touched oracle api
<Odd_Bloke> blackboxsw_: Yep, let me quickly pastebin.
<Odd_Bloke> blackboxsw_: https://paste.ubuntu.com/p/hss7cG77Nq/ is what calling the datasource file (i.e. __main__) gives us, and https://paste.ubuntu.com/p/SvKkyg6JFP/ is what `cloud-init query -a` emits.
<Odd_Bloke> I notice that the subplatform there is incorrect, I guess that must be cached from the initial boot of the instance somehow.
<blackboxsw_> Odd_Bloke: I'm +1 on oracle v1
<blackboxsw_> thanks
<blackboxsw_> needs rebase/ci/merge etc
