#cloud-init 2014-06-23
<harmw> smoser: I've got my cubietruck up&running with fedora rawhide
<harmw> it (kernel+uboot) supports kvm/qemu now, so I should be able to launch cirros pretty easily
<harmw> though not backed by openstack, yet
<daxroc> Morning all
<daxroc> How should I be generating the passwd for user-data? I've tried mkpasswd -m sha-256 password but any attempt to log in with the resulting password fails.
<daxroc> To login as the user I need to set a new password and then it works fine.
<smoser> daxroc, you're trying to log in on console ?
<smoser> or on ssh
<smoser> via ssh you likely have to enable password auth also
<smoser> i'd never seen 'mkpasswd' before, but from reading its man page, i'd expect it to work otherwise.
<smoser> you'll need to make sure that whatever distro you're talking to supports shadow entries in sha-256.
<smoser> you can probably iterate faster by just invoking 'adduser' yourself with '--password' and seeing if you can log in then. you should see an item in the log of cloud-init (/var/log/cloud-init.log) showing how it called adduser
<daxroc> smoser appreciate the help, I needed to change for sha-512 and enable password auth
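A minimal cloud-config sketch of what daxroc worked out above (the hash is a placeholder, not a real credential; generate yours with `mkpasswd -m sha-512`; the user name is an arbitrary example):

```yaml
#cloud-config
# Allow password login over ssh (often disabled by default in cloud images).
ssh_pwauth: true
users:
  - name: demo
    # Placeholder sha-512 crypt hash; generate with: mkpasswd -m sha-512
    passwd: $6$saltsalt$HASHED...
    lock_passwd: false
    shell: /bin/bash
```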
#cloud-init 2014-06-24
<kfox1111> is there a way to ensure mounts cloud config section gets executed before a #!/bin/bash script?
#cloud-init 2014-06-25
<harmw> hm, smoser
<harmw> I'm reading some doc about openstack/glance
<harmw> glance image-create --location http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img --name "CirrOS 0.3.2 - Minimalist - 64-bit - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2
<harmw> 'we' should have a cirros-latest for that
<harmw> *perhaps
<smoser> what is cirros-latest ?
<smoser> ah. as in the '--location'.
<harmw> like, the latest stable version
<smoser> well, yes. and you can read this response however you'd like.
<smoser> feel free to assert i'm abusing a monopoly.
<smoser> i'd like to get things using simplestreams.
<harmw> monopoly? :)
<smoser> as it can securely download said image
<harmw> ah, its a python lib?
<smoser> (abusing cirros to further simplestreams)
<smoser> a library and metadata format
<harmw> hmk
<smoser> http://download.cirros-cloud.net/streams/v1/
<smoser> but anyway, i've been hesitant to make an easy "just download cirros" that didn't use that.
<harmw> looks nice
<smoser> the data there is easily mirrorable (sstream-mirror)
<smoser> we have it for cloud-images.ubuntu.com also.
<smoser> and then, there is a tool http://bazaar.launchpad.net/~smoser/simplestreams/example-sync/files
<smoser> that mirrors that into your glance.
<smoser> and "sync"s it easily.
<harmw> so its specific to glance/cloud stuff?
<harmw> the whole simplestreams stuff, that is
<smoser> not really. its just nice data
<harmw> k
<smoser> its data that describes downloads
<harmw> looks nice, yes, json stuff
<harmw> indeed
<smoser> and sstream-sync mirrors that data to your local glance.
<smoser> (knowing that it wants 'disk1.img' and such)
<harmw> yea
<harmw> you've looked at my buildroot branch lately btw?
<smoser> no. i have not. i will. i need to do that.
<smoser> and to get ppc64el merged in too.
<harmw> oeh nice
<smoser> err.. ppc64 big endian
<smoser> i have that building
<smoser> but it doesn't see virtio devices
<smoser> er.. network virtio disk i think works.
<harmw> you happen to know the de facto way of connecting compute nodes to an openstack environment over wan?
<smoser> no. i dont.
<harmw> google to the rescue! :p
<CatKiller> Hi there! I still have some issues with cloud-init. I can now use the correct data source in the image, but it does not seem to be able to access the source.
<CatKiller> However, once the virtual machine has booted (without the cloud-init configuration so using password login)
<CatKiller> I can access the metadataserver fine
<CatKiller> I've pasted a full log there: http://bpaste.net/show/PAx9ZnCbT6N5D2QuR5pU/
<CatKiller> My guess is that it can't reach the metadata server as the NIC is not yet configured (NIC is probably configured afterwards by upstart/systemd). Could that be it?
<CatKiller> If so, is there a way when installing cloud-init to bring up the network interfaces before the VM boots?
<CatKiller> To make this image I installed Ubuntu desktop in a KVM session, and ran "sudo apt-get install cloud-init", changed the data source to "OpenStack" only (this is going to run exclusively on OpenStack) and booted the image in openstack.
#cloud-init 2014-06-26
<CatKiller> Hi there, I'm having some issues with cloud-init, I've built a Ubuntu desktop image with cloud-init and OpenStack as the source, but cloud-init fails to retrieve the metadata from the server as the network isn't configured yet. Any ideas how to get cloud-init to configure the network so that the server can be contacted? Running it after the OS has finished booting (and subsequently once the network is up) works fine.
<robjo> Just noticed that the Azure data source tries to start "walinuxagent". This appears inconsistent to me as the upstream source provides "waagent"
<robjo> shouldn't cloud-init stick to upstream provided names?
<robjo> Also the code is setup to always use the "service" command. While distros switching to systemd have service rewired to do the "right" thing it would be nice to use systemctl where appropriate
<smoser> CatKiller, hi
<smoser> cloud-init runs its network metadata search when all configured 'auto' interfaces are set up in /etc/network/interfaces.
<smoser> i suspect you just need to put 'auto eth0' and config it for dhcp there.
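For reference, a sketch of the /etc/network/interfaces stanza smoser is describing (ifupdown syntax; the interface name eth0 is an assumption):

```
# Mark eth0 'auto' so cloud-init's network job waits for it,
# and configure it via dhcp so the metadata server is reachable.
auto eth0
iface eth0 inet dhcp
```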
<smoser> robjo, i'm not opposed to that, but service is a functional work around.
<smoser> ie, usage of it was by design. 
<smoser> wrt walinuxagent or waagent, i'm not sure. i'm fine for patches to go looking for the right name.
<robjo> smoser: so conditional code for name and command would be OK?
<smoser> sure. i dont really care. 
<smoser> once we got azure functional, i have not looked back at it. and have been happier :)
<robjo> yup, I usually look at this stuff only as long as I have to as well, will work on a patch
<smoser> the worst hack in azure is bouncing the network interface in order to "publish" the hostname.
<smoser> i'm guessing what i have there wont work on suse
<robjo> OK, thanks for the heads up, will take a look at that as well
<smoser> fwiw, the agent start command is configurable
<smoser> via the datasource config
<smoser> (and so is bounce command)
<robjo> Yes I saw the "merge" call but I cannot put all the parts together just yet on how the configuration and everything else connects
<CatKiller> smoser: Oh my god you're right, I know what's happening
<CatKiller> Silly Ubuntu desktop comes with Network Manager
<CatKiller> Or nightmare manager as I like to call it
<CatKiller> So cloud-init must be waiting for the network to be up from upstart, which is probably not really happening with Network Manager
<CatKiller> I'm going to remove it completely (unless there's a better option)
<smoser> CatKiller, if it were me, i might go the route of grabbing a cloud image and apt-get install ubuntu-desktop
<smoser> rather than building your own
<CatKiller> smoser: I thought about this
<CatKiller> smoser: I thought it would be bad
<smoser> CatKiller, its not waiting for anything from network manager
<CatKiller> but I just learned the hard way!
<smoser> cloud-init's job only waits on "static networking"
<CatKiller> I'll actually probably do what you suggested
<smoser> which means "stuff configured 'auto' in /etc/network/interfaces"
<CatKiller> It does make a lot of sense
<CatKiller> Few distros use NetworkManager and no server ones would (which is usually what people start in the cloud)
<CatKiller> smoser: I think I'll go the way you suggested with getting ubuntu-desktop in a cloud image
<CatKiller> But thanks a lot for helping me understand what was going wrong
<smoser> CatKiller, i might do something like:
<smoser>  get-cloud-image
<CatKiller> it's good to know why something didn't work
<smoser>  mount-image-callback my.image -- chroot _MOUNTPOINT_ apt-get install ubuntu-desktop
<smoser> that won't "just work", but it will be pretty close
<smoser> a couple gotchas:
<smoser> a.) you'll have to resize the image to larger than 2G (maybe)
<smoser>   er... 1.4G
<CatKiller> True, didn't think of that
<smoser> b.) you'll have to disable services . i have that code somewhere, let me find it.
<CatKiller> smoser: Thanks, that's great
<smoser> (disable services so they dont start when you 'apt-get install' stuff)
<smoser> CatKiller, https://code.launchpad.net/~smoser/maas/maas-ephemerals-v2
<smoser> the bin/maas-cloudimg2ephemeral is what we do to make the cloud images into "ephemeral" images.
<smoser> the process is much the same
<smoser> but we dont have to grow the disk.
<CatKiller> smoser: Nice!
<CatKiller> So I don't even have to boot the cloud image in KVM to configure it then?
<smoser> CatKiller, well, i do the build process inside a kvm. 
<smoser> but thats neither here nor there.
<smoser> we just do that for safety
<CatKiller> ok but you don't need to actually boot the cloud-image to install packages etc
<smoser> i'd be very open to patches to mount-image-callback for '--grow-first=4G' or something like that.
<smoser> right. we dont boot it.
<CatKiller> (which I had been doing to configure my existing cloud image, I booted it and configured it)
<smoser> we chroot in and 'apt-get install'
<CatKiller> smoser: If I can figure out a way to streamline the growing
<CatKiller> I'll submit a patch
<CatKiller> So your package is similar to "vmbuilder" right?
<CatKiller> except it relies on Ubuntu's preconfigured cloud images instead of building one from scratch from what I can gather
<CatKiller> bbl
<smoser> CatKiller, i really would never suggest to anyone that they build a cloud image from scratch
<smoser> there are tools that do that, and i think that they are silly
<smoser> similarly i'd never suggest to anyone that they build their own linux kernel, python, glibc or php
<smoser> people do that for you, use their work.
<robjo> smoser: tried the config route for Azure and starting the agent
<robjo> then "rm -rf /var/lib/cloud/*"
<robjo> after running cloud-init -d {init, init --local, modules --mode=config} in that order on the command line there is no evidence of an attempt to start the agent?
<robjo> Is this testable live or do I have to run a re-build upload etc. cycle?
<robjo> The config entry looks as follows:
<robjo> datasource:
<robjo>   Azure:
<robjo>     agent_command: ['service', 'waagent', 'start']
<smoser> robjo, it should be testable
<smoser> pastebin /var/log/cloud-init.log ?
<smoser> robjo, 
<smoser> init --local 
<smoser> needs to run first
<smoser> (it clears the state)
<smoser> then init
<smoser> and init should have run and found the azure datasource
<robjo> OK, let me try again
<smoser> you can configure off the other datasources to reduce cruft.
<CatKiller> smoser: Makes total sense, I'll use the cloud image first
<robjo> smoser: http://pastie.org/9326842
<smoser> robjo, /var/log/cloud-init.log should be much more verbose than that.
<smoser> oh. and i see. seed=/var/lib/waagent
<smoser> heres another fun bit of azure, robjo 
<smoser> you get that CDrom that has important data
<smoser> but at some point in a reboot they just yank it from you
<robjo> /var/log/cloud-init.log is empty, I pasted /var/log/cloud-init-output.log
<smoser> robjo, you probably need to restart syslog
<smoser> maybe remove that file and restart syslog.
<smoser> or maybe on suse thats not hooked up right. but that's a bug.
<smoser> if its not.
<smoser> hm..
<smoser> hm.. sorry for being dense.
<smoser> there is definitely more verbose output going somewhere.
<robjo> smoser: http://pastie.org/9326911
<robjo> http://pastie.org/9326914
<robjo> http://pastie.org/9326917
<robjo> http://pastie.org/9326920
<robjo> Output in the terminal from each command, was too large for 1 paste
<smoser> robjo, suse needs 'pastebinit'
<smoser> its amazingly convenient.
<smoser> pastebinit <file>
<smoser> or
<smoser> some-command | pastebinit
<robjo> /var/log/cloud-init.log is still empty even after restarting syslog
<smoser> 2014-06-26 14:39:34,259 - stages.py[DEBUG]: Restored from cache, datasource: DataSourceAzureNet [seed=/var/lib/waagent]
<smoser> it found that, so its not going to go through its search
<smoser> not really sure why it found it, as init --local should have removed that link
<robjo> so where is that cache? so I can get rid off it ;)
<smoser> well its /var/lib/cloud/instance that is a link. 
<smoser> you can purge all of /var/lib/cloud
<robjo> I did, before I ran the commands.....grmbl
<robjo> rm -rf /var/lib/cloud/*
<smoser> are you able to just let me in to poke really quick ?
<robjo> yes, send me your public key
<robjo> rjschwei@suse.com
<smoser> robjo, https://launchpad.net/%7Esmoser/+sshkeys
<smoser> another tool you need is 'ssh-import-id' :)
<robjo> smoser: OK, try this: ssh azuser@sp3-ibs-try4.cloudapp.net
<smoser> robjo, k. i'm in
<smoser> robjo, for logging, cloud-init sends its log to /dev/log
<smoser> which in ubuntu gets sent to /var/log/cloud-init.log because of 
<smoser>  http://paste.ubuntu.com/7706306/
<smoser> so yours is just going to /var/log/messages
<smoser> as its not being captured specifically somewhere
<robjo> OK, can certainly setup the rule, thanks
<smoser> if you just remove the
<smoser>  [ *log_base, *log_syslog ]
<smoser> line in /etc/cloud/cloud.cfg.d/05_logging.cfg
<smoser> then it will go straight to the file
<smoser> so i made that change
<smoser> and now ran
<smoser> cloud-init init --local
<smoser> and
<smoser>  cloud-init init
<smoser> and see the log
<smoser> you can see your 
<smoser>  'service', 'waagent', 'start'
<smoser> the reason for the logging being as it is, is that generally, i wanted to use syslog
<smoser> but early in boot syslog isnt necessarily available
<smoser> so cloud-init tries to write to /dev/log
<smoser> and if that fails it writes to /var/log/cloud-init.log directly
<smoser> then, the next time it comes up (cloud-init init) it probably has /dev/log
<smoser> and it works.
<smoser> note, the file above in that pastebin is a rsyslog format file. i dont know about your syslog (syslog-ng)
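For reference, a hedged sketch of the kind of rsyslog rule Ubuntu ships for this (cloud-init tags its syslog messages with [CLOUDINIT]; the exact directives in the shipped file may differ):

```
# Route cloud-init's tagged syslog messages into their own file,
# then stop processing so they don't also land in /var/log/messages.
:syslogtag, isequal, "[CLOUDINIT]" /var/log/cloud-init.log
& stop
```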
<robjo> OK, makes sense. I'll fix the logging in my image builds. And will just configure the datasource in the config file, then there is no need to have a bunch of ugly search code in the azure datasource code
<robjo> we have syslog-ng in sles 11, rsyslog in openSUSE and rsyslog will be in SLE 12, it's a mess, but getting better :)
<robjo> Thanks, one problem down :)
<smoser> so just one other gotcha there.
<smoser> on our walinuxagent
<smoser> we do not start it by default
<smoser> and we basically configure off all of its function
<smoser> as it overlaps with cloud-init. so we have cloud-init start it only on azure. and neuter it heavily. 
<robjo> I am setting "Provisioning.Enabled=n" in the waagent.conf, that's the info I got from M$
<robjo> so should I or should I not enable the agent to start at boot?
<smoser> i'm trying to remember
<CatKiller> smoser: Does cloud-init (using configuration found on the Ubuntu cloud-image) run unattended security upgrades on boot, or is it something that needs to be defined at the metadataserver level?
<smoser> CatKiller, by default it does not do that.
<smoser> you'd enable that via user-data or some other way
<smoser> you could patch that on when you re-build your images
<CatKiller> smoser: ok, no problem. I'll patch that while rebuilding
<CatKiller> Another thing, what are the "cloud_config_modules" defined in the cloud.cfg config file?
<CatKiller> Are they programs that will run during the config stage
<CatKiller> For instance I wanted to know what "package-update-upgrade-install" was doing but couldn't really tell
<smoser> CatKiller, i'm sorry that documentation for such things sucks.
<smoser> that ends up invoking the 'cc_package_update_upgrade_install.py' file
<smoser> which handles 'apt_update' and 'apt_upgrade'
<smoser> but this is not anywhere well documented.
<CatKiller> smoser: OK, I'll look in that file. Just a quick question though
<CatKiller> all of the modules defined in the cloud.cfg will run, or does that just mean they're available?
<CatKiller> and the metadata will tell which one needs to run
<smoser> they run
<smoser> the user-data can configure which ones run
<smoser> yes.
<smoser> by re-defining that list
<CatKiller> smoser: But if it's not defined in the user-data (we use OpenStack here and the user data is *very* scarce) it'll run
<smoser> what do you mean the user data is very scarce
<smoser> modules generally do the right thing.
<smoser> if you give them config they may act on it, and they have sane defaults otherwise
<CatKiller> smoser: Well the metadata only provides hostname, authorized SSH keys but not much more
<smoser> user-data
<smoser> not meta-data
<CatKiller> ah ok sorry my bad I confused the two
<smoser> user provides user-data when they launch an instance
<smoser> nova boot --user-data=your.user-data.file.txt
<CatKiller> Ah ok I see. We don't provide any specific user data here as far as I know
<smoser> right. so you get the default behavior. which should be perfectly fine.
<CatKiller> OK, sounds good
<CatKiller> Another thing I'm not entirely sure about. In the config there is a "package-update-upgrade-install" module, and in "cc_package_update_upgrade_install.py" the handle function looks for "package_update" and "apt_upgrade" from the configuration
<CatKiller> these are unrelated right?
<smoser> they're synonyms
<smoser> distro-generic and distro-specific.
<CatKiller> So "apt-get update" and "apt-get upgrade" will run on boot (or is that only on *first* boot) with the default cloud-init config?
<smoser> default is off
<smoser> you can turn it on and it will run on first boot
<smoser> that module only runs 'per-instance'. by default.
<smoser> meaning once per instance-id
<CatKiller> But it's in the config file though, so the user-data defines the behavior here?
<smoser> you can change it to run per-always
<smoser> you can put it in config also
<smoser> and the user-data can still override
<CatKiller> ah ok, so here it's in the config but user-data probably overrides.
<CatKiller> Where can I find the user data in the system actually?
<CatKiller> (the default ones)
<smoser> there is no real location for defaults. 
<smoser> most of the stuff is described
<smoser>  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
<smoser> and other files in http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/examples/
<CatKiller> thanks for those files
<CatKiller> but in the specific of cloud.cfg's "package-update-upgrade-install" module, where can I figure out what will happen there?
<CatKiller> I'm not really sure what this clause does
<CatKiller> or where it is configured
<CatKiller> I guess I'm having trouble linking those modules with what will actually happen when they run (their config files etc)
<smoser> CatKiller, by default basically nothing happens there.
<smoser> but it responds to some settings in
<smoser>  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
<smoser> (package_upgrade=true)
<CatKiller> smoser: Ah ok, makes sense
<CatKiller> smoser: And just a quick question so that I can understand this config file better
<CatKiller> where are all these clauses defined then in the code?
<CatKiller> I grepped for "package-update-upgrade-install" without success
<CatKiller> in the cloudinit Python code
<CatKiller> (I wanted to figure out what config clause did what by looking at the source)
<smoser> cloud_config_modules, cloud_init_modules and cloud_final_modules
<smoser> are lists
<smoser> they reference "config modules"
<smoser> config_modules are loaded. they're in the source tree at cloudinit/config/cc_<name-in-list>.py
<smoser> where 'name-in-list' has '-' replaced with '_'
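The naming rule smoser describes can be sketched in a few lines of Python (an illustrative helper, not part of cloud-init itself):

```python
def module_filename(name_in_list):
    """Map a cloud.cfg module-list entry to its source file.

    cloud-init loads each entry in cloud_config_modules (and the
    other module lists) from cloudinit/config/cc_<name>.py, with
    '-' in the list entry replaced by '_'.
    """
    return "cloudinit/config/cc_%s.py" % name_in_list.replace("-", "_")

print(module_filename("package-update-upgrade-install"))
# -> cloudinit/config/cc_package_update_upgrade_install.py
```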
<CatKiller> smoser: Ahhh ok I get it. Sorry you were trying to tell me this earlier and I had completely missed it. I thought the "cc_" file you told me to lookup handled all of the "config modules"
<CatKiller> smoser: It's much much clearer now, thank you very much
<smoser> cool
<CatKiller> smoser: There's one thing that I'm not quite sure about yet
<CatKiller> smoser: Let's imagine I don't want to pass custom "user-data" to the instance, where can I configure a "default" user-data within the image?
<smoser> you cant explicitly provide default user-data.
<smoser> user-data can be cloud-config syntax or other syntaxes.
<smoser> but you can provide any config in /etc/cloud/cloud.cfg.d/<your-file>.cfg
<CatKiller> smoser: Ah great, so I could add a file  /etc/cloud/cloud.cfg.d/my-config.cfg that would contain:
<CatKiller> #cloud-config\npackage_upgrade: true
<CatKiller> for instance
<CatKiller> In that case, that config would be interpreted as a "cloud-config" file and the package_upgrade should run on first boot
<CatKiller> Does that sound right?
<smoser> yes.
<smoser> files in that dir do not need to start with #cloud-config
<smoser> they can only be cloud-config
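A sketch of such a drop-in file, per the exchange above (the filename is an arbitrary example; per smoser, no #cloud-config header is needed for files in this directory):

```yaml
# /etc/cloud/cloud.cfg.d/90_upgrade.cfg  (hypothetical filename)
# Run a package upgrade on first boot of each new instance.
package_upgrade: true
```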
<CatKiller> smoser: Good to know
<CatKiller> smoser: In principle, it seems that I could add the package_upgrade: true clause at the top level of the /etc/cloud/cloud.cfg config file as well, no?
<CatKiller> looking at the cc_package.... script it seems that it'll get the flag from the config(s) files (presumably cloud.cfg included)
<smoser> yeah.
<smoser> config modules know no difference between "builtin" config and user-data
<smoser> its all just config that they act on
<harlowja> smoser hey, do u have any medium size ideas for cloud-init that i can do, need a diversion from the other projects :-P
<smoser> is that medium sized in human terms
<smoser> or in super-human  harlowja terms
<harlowja> lol
<harlowja> harlowja terms i guess :-P
<smoser> the stuff i have listed here is at: 
<smoser>  https://blueprints.launchpad.net/ubuntu/+spec/servercloud-u-cloud-init
<harlowja> Utopic ?
<harlowja> whats that
<smoser> 14.10 Utopic Unicorn
<harlowja> ah
<smoser> went looking for mark's post on utopic
<smoser> and saw:
<smoser>  http://www.markshuttleworth.com/archives/1342
<harlowja> nice
<smoser> http://www.markshuttleworth.com/archives/1363
<smoser> that one is utopic
<harlowja> i should forward that on here, 
<smoser> anyway...
<smoser> for things..
<smoser> the 2 things you might be interested in:
<smoser>  a.) python3
<harlowja> ack, that one again, lol
<harlowja> python3 hasn't gone away yet, lol
<smoser>  b.) more work on ci-tool
<smoser>  http://bazaar.launchpad.net/~smoser/cloud-init/ci-tool/view/head:/ci-tool
<smoser> i'm not sure how i feel about ci-tool
<smoser> oh. the other one... the query stuff.
<smoser> you revved that at some point. 
<smoser> it'd be nice to have 2 json files in /run
<harlowja> smoser ya, good-ole-query stuff
<smoser> i think thats the way i'd go now.
<smoser> just 2 json files
<smoser> one with non-sensitive data
<smoser> and one with sensitive data
<harlowja> right
<smoser> and try to have a sane format for things common to most clouds
<harlowja> :)
<smoser> and allow the datasource to shove other stuff in its own place in the json
<harlowja> right
<harlowja> ci-tool; would that be needed with a more extensive query tool (or maybe they merge?)
<harlowja> into super-ci-query-tool
<smoser> ci-tool 'seed' is the real function of ci-tool
<smoser> 'seed', 'reset', 'set-ds'
<harlowja> oh man, marks page is now showing me 'Error establishing a database connection'
<smoser> :)
<harlowja> smoser sure, ya, the seed stuff is nice to have
<smoser> oh.
<smoser> the other bug... 
<harlowja> ?
<smoser> the network interfaces stuff sucks
<harlowja> :)
<harlowja> yaaaa
<harlowja> do u want to try to see how the netcf stuff works?
<harlowja> i can try messing around there
<harlowja> maybe its 'ready' for primetime
<harlowja> *although we'd probably need both, if netcf isn't available on a given distro
<harlowja> omg, what is my password doin in http://bazaar.launchpad.net/~smoser/cloud-init/ci-tool/view/head:/ci-tool#L51
<harlowja> haha
<smoser> thats your password to? wow. coincidence.
<smoser> too
<harlowja> :)
<harlowja> smoser in 14.10 unicorny no more 2.x python?
<smoser> yeah, the netcf stuff. i've come to needing that elsewhere too.
<smoser> we really need to get openstack networking sorted
<harlowja> :-/
<smoser> to the way that amazon does it.
<smoser> so that you can hotplug a NIC into the system
<harlowja> i believe with neutron u can do that
<smoser> and then the system can hit the metadata service and get the interface config for that.
<harlowja> ya
<smoser> and that interface config needs to be in some format
<smoser> and thus... i was asking you again about netcf
<harlowja> :)
<harlowja> ya, let me see if i can get mark mcclain in here to chat about the openstack neutron networking thing
<smoser> maybe after running xml2json on it
<harlowja> how we can get from here to there
<harlowja> smoser ya, xml == evil
<harlowja> lol
<smoser> it really just isnt suitable for that reason
<harlowja> smoser exactly
<markmcclain1> harlowja: here
<smoser> hi markmcclain1 
<harlowja> markmcclain1 hey, so we were just wondering a little bit about the future of hot-plugging, metadata in openstack, and how cloud-init will help out here
<harlowja> thought u might have some knowledge (that i don't have)
<harlowja> *hotplugging nics
<smoser> you can plug the nics in
<smoser> i have done that.
<smoser> and they do show up
<smoser> all that works.
<harlowja> i know what yahoo is doing here (with config drive, which won't work obviously here)
<smoser> and then actually... in the metadata service i noticed one of them actually in a /etc/network/interfaces file after hotplugging.
<smoser> config drive will die
<smoser> and that will be ok
<harlowja> :)
<harlowja> ya, i'll have to fight the people here on that one to make it die (if possible)
<smoser> i'm not too hung up on it.
<smoser> mostly people hated the metadata service because it didn't work.
<smoser> or the networking never worked to get the instance to it.
<smoser> but now i think that those problems are less comon
<smoser> but anyway..
<smoser> i hotplugged a nic
<harlowja> k
<markmcclain1> yeah so with hotplugging dhcp should work on that interface
<smoser> and then an entry in /etc/network/interfaces style file was in the interfaces location in the metadata service
<smoser> but it was out of order :)
<smoser> ie, the new nic was eth0 and the orig was eth1
<smoser> so its just silly.
<smoser> thus the need for some better format for describing it
<harlowja> and then enter xml netcf format :-P
<harlowja> *xml (cough)
<smoser> yeah, i most certainly would make major mistakes if i invented my own language.
<smoser> but really would not be interested in putting xml into a stack where it doesn't otherwise exist. 
<harlowja> :)
<harlowja> xml is da best
<harlowja> markmcclain1 has there been any thought on moving to a more agnostic format in the community for this
<harlowja> not saying it has to be https://fedorahosted.org/netcf/ (but something similar in json or other would be nice)
<markmcclain1> I haven't heard too much yet about how to fix these issues
<harlowja> k
<harlowja> smoser so the mission, if u choose to accept it, is to fix this issue, this message will self-destruct in 5 seconds
<harlowja> boom
<smoser> yeah. i'd like to do it. it generally needs doing i think
<harlowja> agreed
<harlowja> smoser would this be a cloud-init thing at that point, or something else?
<harlowja> something else that listens for hotplugs and then does stuff
<harlowja> network-cloud-init (or something)
<smoser> i think i'd keep it in cloud-init. but try to make it separatable
<harlowja> k
<smoser> basically, its udev hooks, code to get the data, and then code to act on it.
<harlowja> ya
<harlowja> any idea how that works for freebsd ?
<harlowja> harmw activate, lol
<smoser> and cloud-init would get the original datasource, and then the udev hooks would say "ok, i just got an interface, what cloud should i get data from!"
<smoser> and cloud-init would have configured that.
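The udev-hook idea smoser sketches might look something like this rule (a purely hypothetical sketch; the hook script path and name are invented, not part of cloud-init):

```
# When a NIC is hotplugged, hand its kernel name (%k) to a hypothetical
# hook that asks the datasource for that interface's config and applies it.
ACTION=="add", SUBSYSTEM=="net", RUN+="/usr/local/sbin/cloud-init-net-hook %k"
```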
<harlowja> right, makes sense
<harlowja> smoser do u want to do a joint writeup to the openstack-dev ML, i can draft something
<harlowja> and we can start this discussion there
<harlowja> and then see about how to make this really happen
<harlowja> fix the brokenness that exists for this in too many places (nova has parts of this code, neutron seems like it has others)
<harlowja> smoser sound ok with u boss
<harlowja> it just requires like cleanup in lots of places, which would be nice to just do
<harlowja> like nova has a metadata part, so does neutron, so this would have to be made in both of them?
<harlowja> and config-drive basically doesn't get this, idk
<harlowja> or does nova rewrite the config-drive when a hotplug occurs
<smoser> harlowja, well, it could re-populate it for reboot i guess.
<smoser> but theres really no other way for the guest to "free" it.
<harmw> harlowja_away: sup
<harlowja> harmw nadda was just discussing how more dynamic network configuration can be done
<harmw> you know fbsd is still ancient in many ways, right :p
<harlowja> :)
#cloud-init 2014-06-27
<CatKiller> smoser: Hi, I had a quick question about mount-image-callback: Are the commands provided with args executed from inside the chroot? If running "apt-get install" will it run using the apt-get binary from inside the image, or the "host"'s apt-get?
<CatKiller> Because I'd like to modify a 14.04 cloud image on an older 10.04 system (or 12.04)
<smoser> hosts.
<smoser> if you want chroot. then chroot
<smoser> mount-image-callback .... -- chroot _MOUNTPOINT_ apt-get update
<CatKiller> smoser: And do you think that would work then (running apt-get from inside a chroot with a different host's version?)
<utlemming> CatKiller: yeah, that works unless you need kernel modules from the host
<CatKiller> utlemming: Ah yes good point, didn't think about that. Here I'd be trying to install ubuntu-desktop within a ubuntu-cloudimage (server)
<utlemming> CatKiller: that works just fine, I do that all the time
<CatKiller> smoser, utlemming: Thanks a lot for the help
<CatKiller> smoser: Another thing, I'm going to implement resizing the image, but I was wondering whether it would not make more sense to create a separate image resize script instead of forking mount-image-callback as you initially suggested
<smoser> the reason it would fit in mount-image-callback is that in order to resize, you will need to set up the qemu-nbd device
<smoser> and re-implement a lot of that.
<CatKiller> smoser: Very true. Then mount-image-callback it is :)
<CatKiller> I'll submit a patch when I'm done provided my boss gives the go ahead.
<CatKiller> I will try and keep it really nice and clean like your own scripts
<smoser> CatKiller, thanks. i think i'd just try to do it as both a '--resize' flag
<smoser> and also a 'mount-image-callback <resize>'
<CatKiller> The second being like a command you pass to mount-image-callback?
<smoser> CatKiller, well, i was thinking mount-image-callback would handle that specifically
<smoser> kind of like a subcommand of mic
<smoser> harlowja_away, https://review.openstack.org/#/c/103193/
<CatKiller> smoser, ok, I'll let you know when I'm done for a quick review
<smoser> k
#cloud-init 2015-06-22
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<mtl1> Hi. I'm trying to make Fedora available on my stack. For it to work, I need cloud-init to output a few things to the log that openstack sees during bootup instead of or in addition to the journal log. Anyone have any ideas of how that might be done? The #cloud-config commands I'm using are working, I just need the output from them. With Ubuntu and CentOS, that all happens out of the box. Thanks.
<smoser> mtl1, http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/873 perhaps.
<smoser> it seems like something like that.
<mtl1> smoser: thanks. I'll take a look.
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: HACKING.rst: mention signing the contribors agreement  https://review.openstack.org/194340
#cloud-init 2015-06-23
<smatzek> Forgive me if this was covered previously.  Based on what I see for reviews and some looking at this IRC history, it appears cloud-init v2 is being developed in the http://git.openstack.org/cgit/stackforge/cloud-init/ community and the 0.7.x versions are developed and will continue to be maintained in Bazaar?
<smoser> Odd_Bloke, what does I AM A CONTRIBOR mean ?
<smoser> smatzek, your statement above is generally correct.
<kwadronaut> that's a contraband supplier.
<smoser> :)
<smoser> CONTRIBUTOR  . https://review.openstack.org/#/c/194340/1//COMMIT_MSG
<kwadronaut> i had seen it, contemplated signing the cla for a pullrequest, but thought it was too much
<kwadronaut> uhm wait, is the cloud-init cla now the same as the openstack one?
<smoser> no.
<smoser> not the same.
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: HACKING.rst: mention signing the contribors agreement  https://review.openstack.org/194340
<Odd_Bloke> smoser: s/contribor/contributor/ :)
<harlowja> i don't think i signed the 'contribors' agreement :(
<harlowja> shit
<harlowja> i might get fired
<harlowja> lol
<harlowja> :-P
 * harlowja stops poking fun at smoser 
<smoser> harlowja, i think you did.
<harlowja> lol
<smoser> its nice that its ordered by first name
<smoser> that always makes sense
<smoser> Odd_Bloke, you can sign if you want, but your work is covered as an employee of canonical.  i.e., they already own copyright for your work, you dont have to tell them they do :)
<harlowja> your life blood is theirs
<smoser> ook. so on this..
<smoser> https://review.openstack.org/#/c/194340/1
<smoser> Odd_Bloke, wants HACKING.rst to be in doc / rendered somehow.
<smoser> which i think is a good idea.
<smoser> he suggested
<smoser>  06/22/15 16:55:49 <Odd_Bloke> You could link it in to the documentation directory and include it in the TOC and it would be generated as part of that...
<smoser> but i suspect the words "link" and "windows" don't go together.
<smoser> any other suggestions ?
<harlowja> https://raw.githubusercontent.com/openstack/oslo.versionedobjects/master/doc/source/contributing.rst
<harlowja> smoser * do something like ^
<spandhe> hey smoser!
<smoser> spandhe, hey.
<smoser> harlowja, cool. thanks.
<harlowja> np
<spandhe> smoser: hey.. I have a qn on MAAS regarding omshell.. anyone I can talk to about that?
<smoser> oh my. well, #maas probably better than here.
<smoser> i'm guessing that is 1.5 ?
<smoser> 1.7 is almost in trusty now (via SRU) . and i dont think omshell is relevant anymore.  i dont think.
<spandhe> smoser: thanks..
<Odd_Bloke> smoser: I probably signed it at some point in the past because of bzr contributions anyway. :p
<smoser> Odd_Bloke, still there?
<smoser> you made tox require > wily version
<smoser> can we get by with version in trusty ? i'd certainly like for trusty to work
<smoser> but at a minimum i'd like wily to
<Odd_Bloke> smoser: Maybe; I'll check.
<Odd_Bloke> I think I bumped the version to support something that we were doing, but I might have ended up pulling that out because of a bug.
<smoser> Odd_Bloke, also, where would i get doc8 ?
<Odd_Bloke> smoser: https://pypi.python.org/pypi/doc8
<smoser> bah. sorry.
<smoser> i just had to re-create the .tox for that.
<smoser> maybe you did it to avoid this error:
<smoser> http://paste.ubuntu.com/11764395/
<openstackgerrit> Daniel Watkins proposed stackforge/cloud-init: Move required tox version down to 1.6.  https://review.openstack.org/194814
<Odd_Bloke> smoser: ^
<smoser> does that work ?
<Odd_Bloke> smoser: Yeah, I was trying to do something clever when I was developing the coverage stuff (which needed the newer version).
<Odd_Bloke> But there was a bug, so I ripped it out.
<Odd_Bloke> But forgot to undo the required version bump.
<smoser> how come i see the dll error above ?
<Odd_Bloke> I'm not sure, I thought that was fixed in master.
<Odd_Bloke> (I'm seeing it as well)
<smoser> do i need to google for that dll and download it and put it in c:\windows\system ?
<smoser> :)
 * Odd_Bloke sharpens his knife.
<smoser> do i have to be Administrator to do that ?
<Odd_Bloke> smoser: Probably.
<smoser> http://paste.ubuntu.com/11764447/
<smoser> that is needed on trusty with 1.6
<Odd_Bloke> smoser: https://review.openstack.org/#/c/194814/1/tox.ini
<Odd_Bloke> I'm not really sure why the doc build is failing, it's something to do with pbr I think.
<Odd_Bloke> I'm sprinting at the moment, so I can't really look at it right now.
<smoser> oh. sorry. you already did that :)
<smoser> thanks Odd_Bloke
<Odd_Bloke> :)
<Odd_Bloke> No excuse to not +2 it now, you wrote the same code. ;)
<smoser> done
<openstackgerrit> Merged stackforge/cloud-init: Move required tox version down to 1.6.  https://review.openstack.org/194814
#cloud-init 2015-06-24
<smoser> claudiupopa, lets just do this here then.
<smoser> i owe you some stuff and let me do that now.
<smoser> the thing i found yesterday was that trunk was not running 'tox' for me. complaining about windows dlls
<claudiupopa> Okay, don't know what's with my sound.
<claudiupopa> cloud-init v2 trunk?
<smoser> yeah
<smoser> i'll get a pastebin
<smoser> Odd_Bloke, was seeing it too.
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<smoser> claudiupopa, http://paste.ubuntu.com/11767934/
<claudiupopa> does it still reproduce?
<smoser> yeah. that is trunk on my system right now.
<smoser> ImportError: cannot import name 'windll'
<smoser> which isnt terribly surprising :)
<claudiupopa> Yeah, will take a look, thanks. ;-)
<claudiupopa> So.. back to business. The openstack stuff should be finally done, after Joshua's final comments.
<claudiupopa> I didn't have time to work on other things yet, had some issues with cloudbase-init which needed to be fixed.
<claudiupopa> Did you have time to work on the executable part?
<smoser> no. because i'm not very productive.
<smoser> i will look at your openstack stuff now.
<claudiupopa> the crash makes sense. sphinx-autodoc tries to import the modules in order to build the documentation.
<claudiupopa> but windll exists only on windows.
<claudiupopa> Either we can ignore windows-specific files, but that's hard, since they can be all over the place, or we can write something that does this statically.
<claudiupopa> without needing to import the module.
<smoser> its not something we can easily fudge either.
<smoser> ie, we can't just make a load_os_specific('windll', 'windows')
<smoser> that would just return garbage on linux. because you use stuff from it right away
<smoser>  windll.kernel32.GetLastError
<claudiupopa> Right, that's the case.
<smoser> we could probably manage it.
<smoser> but if this is just going to be an endless path it doesn't sound like fun
<claudiupopa> so doing something statically makes the most sense.
<claudiupopa> I'll try to see if I can come up with a sphinx plugin.
<claudiupopa> Which operates on ast instead of objects.
<smatzek> this is a complete hack but it may work as a temporary solution, something like this in kernel32.py: if os.name == 'posix':
<smatzek>     import mock
<smatzek>     sys.modules['ctypes.windll'] = mock.MagicMock()
<smatzek> I've temporarily done the same in reverse to mock out Linux/posix only python modules to run things on Windows
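The hack smatzek sketches above can be written out as a runnable variant. The posix check and the attribute-patching approach are illustrative, not the exact code anyone committed: on non-Windows platforms `ctypes` simply has no `windll`, so patching a MagicMock in lets sphinx-autodoc import Windows-only modules without crashing.

```python
import ctypes
import os
from unittest import mock

# On non-Windows platforms ctypes has no 'windll' attribute, so
# sphinx-autodoc dies importing any module that touches it. Patching a
# MagicMock in (a variant of smatzek's sys.modules trick) lets the
# import succeed; attribute chains like windll.kernel32.GetLastError
# just produce more mocks instead of raising ImportError.
if os.name == 'posix':
    ctypes.windll = mock.MagicMock()

from ctypes import windll           # now importable everywhere
err = windll.kernel32.GetLastError  # a mock on posix, the real call on Windows
```

As the chat notes, this is a temporary workaround; a static (ast-based) documentation approach avoids importing platform-specific modules at all.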
<smoser> claudiupopa, on 'ast' ?
<claudiupopa> abstract syntax trees
<smoser> will it be an endless path ?
<smoser> ie, if we mock those names to None on non-windows, would that be an endless amount of ongoing work
<claudiupopa> yeah, that definitely could be.
<patcable> stupid question: do i have to run the brpm scripts on a rhel/centos machine when building cloud-init?
<patcable> wasnt sure if there was a way to just run it on my ubuntu dev box as well. figure i just need to make stuff work over in RH land.
<smoser> patcable, ?
<smoser> the brpm is just a way to get you an rpm. that would hopefully work.
<smoser> on ubuntu you can get a deb with bdeb
<patcable> smoser: right-- bddeb works great on an ubuntu box, brpm complains about not having some centos packages around. I suspect that I probably need to build the rpms on a centos box, but I wanted to check
<smoser> ah. i understand the question now
<smoser> yeah, you need to run that on centos. if you got it to run on ubuntu, patches would probably be accepted, but generally i'd expect to build it on centos.
<patcable> that's fine, just wanted to make sure I wasn't missing anything
<patcable> thanks!
<patcable> we've been hacking to see if we can add support in for handling a user config that has been encrypted, to sort of make a "trusted" cloud init. it's very young right now, though it does work
<smoser> patcable, neat.
<patcable> yeah. Looking forward to writing about it/proposing patches when we have something that actually works for real :)
<smoser> claudiupopa, still there?
<smoser> what is _METADATA_NETWORK_KEY = "content_path"
<claudiupopa> content_path is an entry from under network_config, with a path for a debian interfaces file.
<claudiupopa> http://sprunge.us/KDRX something along these lines
<smoser> claudiupopa, ok. i have some very tiny comments.
<claudiupopa> sure thing
<smoser> i think i'm fine to pull it because i've taken so long, and we can fix them later if you want.
<claudiupopa> if it's not too time consuming, I could fix them right away.
<smoser> http://paste.ubuntu.com/11768400/
<smoser> claudiupopa, ^
<smoser> metadata_network_key was just confusing to me since its not network specific in any way. its more "referenced_payload_key" or something.
<claudiupopa> cool, thanks!
<smoser> i did not test that regex. its probably wrong
<smoser> i just figure if we're looking for YYYY-MM-DD, we might as well check for that explicitly
<smoser> rather than 2339595-2939395995595-12224
<claudiupopa> Yeah, makes sense. :D
<smoser> i'd even say you can limit it to a '2' in the first digit, realizing that future humans will laugh hysterically at my short-sightedness in the year 3000
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<claudiupopa> smoser: should be fixed. I preferred to leave http_client.CONFLICT as is for now.
<smoser> how come ? it just seems weird to me to have to go elsewhere for that little thing.
<smoser> i'm not strongly opposed just curious
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<claudiupopa> yeah, I guess you're right, url_helper should have already what we need.
<smoser> claudiupopa, VERSION_REGEX.match(version)
<smoser> that will currently match 2015-05-23344555666
<smoser> this is completely nit picky, i do accept :)
<claudiupopa> right, I was using .search.
<smoser>  r'^\d{4}-\d{2}-\d{2}$'
<smoser> that does the right thing
<claudiupopa> yeah, fixing right now.
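The search-versus-match distinction and the anchored pattern from the exchange above can be checked directly; `re.match` only anchors at the start of the string, so the trailing `$` is what rejects the extra digits:

```python
import re

# The anchored pattern smoser suggests: exactly YYYY-MM-DD, nothing more.
VERSION_REGEX = re.compile(r'^\d{4}-\d{2}-\d{2}$')

assert VERSION_REGEX.match('2015-05-23')
# Without the trailing '$' the next string would slip through, since
# match() anchors only at the start; with it, trailing digits fail.
assert VERSION_REGEX.match('2015-05-23344555666') is None
assert VERSION_REGEX.match('2339595-2939395995595-12224') is None
```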
<openstackgerrit> Claudiu Popa proposed stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<smoser> woot
<claudiupopa> finally.
<claudiupopa> :D
<claudiupopa> now we need a couple of config modules and that main executable and we're set for the ci. ;-)
<smoser> thank you claudiupopa
<openstackgerrit> Merged stackforge/cloud-init: Add the data source base classes and the HTTP OpenStack implementation  https://review.openstack.org/188327
<smoser> so i'm trying to solve
<smoser>  http://stackoverflow.com/questions/23663070/setup-https-proxy-for-pip-install-command-within-tox-environment
<smoser> in a windows friendly way.
<smoser> i tried
<smoser> tried http://paste.ubuntu.com/11768965/
<harlowja> what windows
<harlowja> *whats windows
<harlowja> :-P
<smoser> but tox 1.6 (in Ubuntu 14.04) doesn't want to substitute {toxinidir} for me.
<smoser> any suggestions there?
<smoser> remember httpretty doesn't work so well when http_proxy is set.
<smoser> hm.. wonder if maybe we could use no_proxy..
<Odd_Bloke> smoser: Could you paste the tox.ini you're having problems with?
<smoser> http://paste.ubuntu.com/11768965/
<smoser> it works as expected on wily
<smoser> basic issue that i was wanting to solve is this:
<smoser>  http://paste.ubuntu.com/11769000/
<Odd_Bloke> smoser: What do you see on trusty?
<smoser> it tries to execute the string {toxinidir}/tools/foo
<smoser> just no substitution
<Odd_Bloke> Try doing "changedir = {toxinidir}" and then just "./tools/..." for the command.
<smoser> the paste above shows that tox doesnt run if http_proxy / https_proxy is set, but i need it set when i build in this environment (serverstack has no access to intertubes other than via proxy)
<Odd_Bloke> Right, so you just want the proxy set for the pip call.
<smoser> its funny, cause it says "default: {toxinidir}" but that is not the case either on 1.6
<smoser> Odd_Bloke, it doesnt do it.
<Odd_Bloke> smoser: Doesn't do what?
<smoser> doc says "when executing the test command".
<smoser> it doesnt change the dir.
<smoser> cwd at that point is the .tox/<toxenv>/
<Odd_Bloke> smoser: Hmm, sounds like we might need to bump the tox version after all. :/
<smoser> or i can just live with having to hack the full path in that case.
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: tox: disable proxies when running nosetests  https://review.openstack.org/195294
<smoser> Odd_Bloke, ^ theres a solution there if you want to review
<Odd_Bloke> smoser: Just did.
<Odd_Bloke> Looks good apart from some missing words in a comment. :)
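A sketch of the idea behind the review above, with a hypothetical wrapper (the actual merged change may differ): pip, run earlier by tox, keeps the proxy so serverstack can reach the outside world, while the test command is launched with the proxy variables stripped, since httpretty's socket faking misbehaves when http_proxy is set.

```python
import os
import subprocess
import sys

# Hypothetical no-proxy runner in the spirit of review 195294.
PROXY_VARS = ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY')

def run_without_proxy(cmd):
    """Run cmd in a copy of the environment minus any proxy settings."""
    env = {k: v for k, v in os.environ.items() if k not in PROXY_VARS}
    return subprocess.call(cmd, env=env)

if __name__ == '__main__' and len(sys.argv) > 1:
    # e.g. python no-proxy-run.py nosetests tests/
    sys.exit(run_without_proxy(sys.argv[1:]))
```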
#cloud-init 2015-06-25
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: tox: disable proxies when running nosetests  https://review.openstack.org/195294
<smoser> claudiupopa, are you there ?
<smoser> can you tell me if https://review.openstack.org/#/c/195294/1 works on windows ?
<claudiupopa> yeah, seems to work.
<smoser> can you +1 ? or +2  if you want.
<claudiupopa> done.
* smoser changed the topic of #cloud-init to: cloud-init || cloud-init 2.0 reviews: http://bit.ly/cloudinit-reviews
<smoser> claudiupopa, so why did that not merge ?
<smoser> does it need a jenkins run, and if so why didnt it run
<claudiupopa> http://status.openstack.org/zuul/ seems to be a pretty big merge queue.
<openstackgerrit> Merged stackforge/cloud-init: HACKING.rst: mention signing the contribors agreement  https://review.openstack.org/194340
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: tools/tox-venv: support running other than ./tools/tox-venv  https://review.openstack.org/195631
<openstackgerrit> Merged stackforge/cloud-init: tox: disable proxies when running nosetests  https://review.openstack.org/195294
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: tools/tox-venv: support running other than ./tools/tox-venv  https://review.openstack.org/195631
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: Clean up stale auto-generated autodoc files.  https://review.openstack.org/191139
<Odd_Bloke> smoser is on a rampage.
<smoser> harlowja, around ?
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: Bring over the 'safeyaml' from bzr  https://review.openstack.org/170252
<smoser> claudiupopa, if you want, you requested tests there, i added some ^
<smoser> or Odd_Bloke if you want to review that.
<openstackgerrit> Merged stackforge/cloud-init: Clean up stale auto-generated autodoc files.  https://review.openstack.org/191139
<harlowja> smoser suppp
<smoser> harlowja, i was gonna ask about how to add tests to your existing change id
<smoser> its really weird.. i just --amend
<smoser> and git review
<smoser> and my patchset replaced yours
<harlowja> hmmm, lol
<smoser> which is weird in my opinion
<harlowja> :)
<harlowja> touching my patchesets, how dare u
<harlowja> lol
<harlowja> so what u did is one way, the other way is to make a dependent review
<harlowja> where u make a new patchset (using the initial one as the base) and then do git-review -R
<harlowja> *which makes that new review have a dependency on the first
<harlowja> otherwise ya, the patchsets sorta just get added ontop
<smoser> yeah. i think the way i did it makes more sense here.
<smoser> as the dependent review would then have both of them in the git log
<smoser> but yours wasnt acceptable without the tests
<harlowja> :-P
<smoser> hm..
<smoser> what to do here.
<smoser> looking at the templater
<smoser> and tests for it.
<harlowja> up to u, how much of it u want to keep :-P
<harlowja> smoser one option, remove the basic_rendering support, leave jinja2
<harlowja> that might help reduce the amount of stuff to test (and meh, do we need that other way anymore?)
<harlowja> otherwise, include a bunch of test-templates.txt, ensure they get expanded to expected-out-test-templates.txt or whatever
<harlowja> and blow-up if not
<smoser> i think just if its not declared, then we use builtin
<smoser> its fine for some things.
<harlowja> k
 * harlowja doesn't have any attachment to it :-P
<harlowja> my precious!
<harlowja> lol
<smoser> harlowja, i have this right now:
<smoser>  http://paste.ubuntu.com/11774661/
<harlowja> seems fair to me
<smoser> the value in it is that we have *some* builtin renderer
<harlowja> agreed, which probably suits 90% of the cases anyway
<harlowja> except for weird i-need-to-have-for-loops-in-my-templates cases and such
<smoser> well. yeah, i think as we start offering $datasource.hostname
<smoser> and such, the builtin will probably fall apart
<Odd_Bloke> Why do we need a builtin renderer at all?
<smoser> i'm wondering if you were right though, and jinja should be the default
<harlowja> as long as jinja2 doesn't die like cheetah, lol
<smoser> i dont know, Odd_Bloke
<Odd_Bloke> It's more stuff for our users to learn, and more code for us to maintain.
<smoser> its only really for backwards compat.
<Odd_Bloke> Backwards compatibility is probably an argument for it though.
<Odd_Bloke> ^_^
<smoser> so lets say things..
<harlowja> should we support cheetah to then :-P
<harlowja> for backwards compat ;)
<smoser> the thing it comes down to is selecting a default
<Odd_Bloke> /kick harlowja NO
<harlowja> lol
<harlowja> but but but
<harlowja> ha
<smoser> any template can declare what it needs to be rendered with
<smoser> but if there is no header ('##template: jinja2')
<smoser> then what do we do?
<smoser> what i did just right there was fall back to the builtin
<smoser> if we say instead "undefined is jinja2"
<smoser> then in the far future world of jinja2-is-dead
<harlowja> jinja3 ftw
<smoser> then we are in the position of basically *having* to break someone
<smoser> as we wont be able to render their undeclared thing
<Odd_Bloke> Jinja2 is very unlikely to go anywhere.
<harlowja> jinja3 ftw
<harlowja> lol
<Odd_Bloke> And if it does, we just vendor it in and warn people that it's deprecated.
<smoser> meh
<smoser> i think i like basically requiring the declaration
<harlowja> ya, we can do that to
<smoser> you either declare what you want, where we can then definitively render it or not
<smoser> or you dont
<smoser> and you get harlowja's crappy builtin renderer
<harlowja> da best ever!
<harlowja> lol
<smoser> i think i'll add an alias for 'basic' from 'harlowja-crap'
<Odd_Bloke> I think it would be good to warn people against using harlowja's crappy builtin renderer.
<harlowja> u have to rename it 'crappy_built_in_renderer'
<Odd_Bloke> Something in the log or somesuch.
<smoser> Odd_Bloke, why? we can maintain that.
<smoser> its not hard.
<Odd_Bloke> I guess.
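The declare-or-fall-back policy being settled on above can be sketched as follows. The names and the exact header syntax (`##template: jinja2` is quoted from the chat) are illustrative, not the shipped implementation:

```python
import re

# Sketch of the policy discussed above: a template may declare its
# renderer in a first-line header such as '##template: jinja2'; with
# no header we fall back to the builtin 'basic' renderer rather than
# guessing, so an explicit declaration can always be rendered (or
# definitively rejected) while undeclared templates keep working.
HEADER_RE = re.compile(r'^##\s*template:\s*(\w+)\s*$', re.IGNORECASE)

def detect_renderer(text, default='basic'):
    first_line, _, rest = text.partition('\n')
    match = HEADER_RE.match(first_line)
    if match:
        return match.group(1).lower(), rest
    return default, text

assert detect_renderer('##template: jinja2\nhi {{ name }}') == ('jinja2', 'hi {{ name }}')
assert detect_renderer('hi $name') == ('basic', 'hi $name')
```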
 * harlowja surely will miss cheetah though, lol
<harlowja> we could also default to https://github.com/Yelp/yelp_cheetah
<harlowja> lol
<smoser> harlowja, we can bring it back.
<smoser> just for old times sake
<harlowja> :-P
<smoser> We really recommend that you don't choose cheetah for new projects, and certainly not this hacked fork
<harlowja> lol
<smoser> i think i'm going to add things like that to all my code from now on
<harlowja> don't listen to them, jeez
<harlowja> lol
<harlowja> :)
<smoser> "i really recommend you dont use this."
<harlowja> :)
<smoser> instead, maybe go outside and get some exercise!
<harlowja> isn't that what all those licenses are for, lol
<harlowja> thought they did the same thing, use at your own risk, its just opensource mannnn
<harlowja> lol
<harlowja> *aka u can't sue me ...
<harlowja>  https://github.com/stackforge/cloud-init/blob/master/LICENSE-Apache2.0#L144 (that section, lol)
<smoser> alright. i almost have tests for templater whoo hoo
<harlowja> woooo
<harlowja> u da man
<smoser> the key to improving test coverage is reducing lines of code
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: Bring over the 'templater' from bzr  https://review.openstack.org/170257
* smoser changed the topic of #cloud-init to: cloud-init || cloud-init 2.0 reviews: http://bit.ly/cloudinit-reviews
<harlowja> :)
<harlowja> i still see lines of code there
<harlowja> its not 0
<harlowja> lol
<smoser> what i didn't say was one of the things i did was join lines.
<smoser> because 2 lines of 30 characters is 2 lines.
<harlowja> :)
<smoser> but one line of 60 chars is only 1
<smoser> :)
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: Bring over the 'templater' from bzr  https://review.openstack.org/170257
<smoser> harlowja, you want to review your code there + my tests ?
<harlowja> :)
<harlowja> sureeeee
<harlowja> oh man, whoever wrote that code is super
<harlowja> lol
<harlowja> smoser ok added a comment
<harlowja> i think the fixtures test library can replace that custom tempdir
<harlowja> commonly used in other parts of openstack (since the author works there)
<smoser> harlowja, i dont want to just grab dependencies.
<smoser> i dont like writing code just because either
<harlowja> this would only be a test dep
<smoser> yeah, thats not so bad.
<harlowja> :-P
<smoser> harlowja, if you want to add fixtures usage thats ok.
<harlowja> k
<harlowja> i can after that goes in
<harlowja> ok with u
<harlowja> i'm cool with that
<harlowja> bb
<smoser> but can we make sure it works with trusty
<harlowja> sure,
<harlowja> afaik the package exists there
<smoser> its 0.3 in trusty
<harlowja> ok, thats pretty old, but should be ok for this i think
#cloud-init 2015-06-26
<openstackgerrit> Joshua Harlow proposed stackforge/cloud-init: Bring over the 'templater' from bzr  https://review.openstack.org/170257
<harlowja> smoser ok updated with fixtures usage
<openstackgerrit> Joshua Harlow proposed stackforge/cloud-init: Expose api response properties and cache buffer decoding  https://review.openstack.org/195800
<erhudy> good morning, i'm having a bit of an issue with some openstack vendordata that's clobbering /etc/cloud/cloud.cfg and breaking things when launching instances from ubuntu cloud images
<smoser> erhudy, example ?
<smoser> i'd consider it a bad idea for vendordata to clobber /etc/cloud/cloud.cfg
<erhudy> i've been digging into it more and it seems like something is happening with the vendordata that's causing freakouts in user_data.py
<erhudy> it writes out the raw data to /var/lib/cloud/instance/vendor-data.txt
<erhudy> and then when it goes to write it out in mime multipart format to vendor-data.txt.i
<erhudy> something along the call stack silently malfunctions and it never writes that file out and other things seems to short-circuit
<erhudy> it's really weird
<erhudy> not familiar with cloud-init as a whole but i'm guessing the whole thing is wrapped in some sort of exception handler that could be silently consuming some exception that's getting thrown
<erhudy> specifically in user_data.py, in convert_string() where it tries to evaluate data[0:4096].lower(), at that point it seems like execution just stops and it unwinds the entire call stack all the way back to the top
<erhudy> i pickled out data and then loaded it in python and it looks like TypeError: unhashable type is getting thrown and silently eaten
<erhudy> that was ambiguous, i pickled out the data variable and loaded it in the REPL
<erhudy> basically it looks like util.decomp_gzip() is returning a dict and things below it expect it to be returning a string
<erhudy> actually, raw_data is already a dict, so util.decomp_gzip() doesn't do anything with it
<erhudy> this is with 0.7.5 which is what canonical is still including, it looks like the code i'm looking at has changed slightly in 0.7.7
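The silent failure erhudy traced is consistent with slicing a dict: a minimal reproduction, with an illustrative payload (the real vendor-data content is not shown in the chat), is below. A slice object is unhashable, so the dict lookup raises TypeError exactly at the `data[0:4096].lower()` step described.

```python
# Minimal reproduction of the failure mode described above: vendor-data
# arrives as a dict (already-parsed JSON) where a string was expected,
# and slicing the dict raises TypeError because the slice is treated as
# an (unhashable) dict key.
raw_data = {"cloud-init": "#cloud-config\n"}   # illustrative payload

try:
    raw_data[0:4096].lower()
    message = None
except TypeError as exc:
    message = str(exc)

assert message and 'unhashable' in message
```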
<smoser> erhudy, please open a bug.
<smoser> if it is non-sensitive, if you can give the content of vendor_data that'd be wonderful.
<erhudy> i think it was fixed in 0.7.7
<smoser> erhudy, open a bug and we can get it fixed in 14.04 images.
<smoser> via SRU to 14.04.
<smoser> harlowja, so with safe_yaml just 'yaml.load'... any reason to not just abandon that at the moment ?
<harlowja> as long as people use 'yaml.safe_load' and not yaml.load
<harlowja> let me tweak it a little
<smoser> here. let me push first
<openstackgerrit> Scott Moser proposed stackforge/cloud-init: Bring over the 'safeyaml' from bzr  https://review.openstack.org/170252
<smoser> there. thats what i did. basically just used safe_load and assert that loading python should fail
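The safe_load behaviour smoser describes testing can be demonstrated like this, assuming PyYAML; the exact assertions in the merged change may differ:

```python
import yaml

# Plain data loads the same either way ...
assert yaml.safe_load("a: [1, 2]") == {'a': [1, 2]}

# ... but safe_load refuses documents that would construct arbitrary
# Python objects (which pre-5.x yaml.load happily instantiates). The
# error raised is a subclass of yaml.YAMLError, which is why catching
# YAMLError covers this case too.
try:
    yaml.safe_load("!!python/object/apply:os.getcwd []")
    blocked = False
except yaml.YAMLError:
    blocked = True
assert blocked
```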
<smoser> things i need to fix:
<harlowja> oh man, u to fast
<harlowja> haha
<smoser> so i want to test fixtures with the fixtures on 14.04
<smoser> and basically want to put in some way to test "will this work on 14.04"
<smoser> even build/test
<erhudy> smoser: will do
<erhudy> smoser: i actually opened a canonical case already asking for cloud-init to get updated to 0.7.7 in UCI, but i'll file a separate bug on cloud-init
<smoser> erhudy, ok. thank you.
<smoser> we wont update 14.04 -> 0.7.7, but we can/will SRU the fixes.
<openstackgerrit> Joshua Harlow proposed stackforge/cloud-init: Bring over the 'safeyaml' from bzr  https://review.openstack.org/170252
<harlowja> smoser ok did some other additions
<harlowja> sucked over the logging module from taskflow, which sets up a new less than DEBUG level, and a nullhandler and all that crap
<smoser> thank you for the YAMLError
<harlowja> :-P
<smoser> what is shell ?
<smoser> i think your load_file is 'util', no ?
<harlowja> idk, i guess common shell stuff
<harlowja> unsure, ha, i'll put it either place :-P
<harlowja> shell-like stuff in shell?
<harlowja> or util, or both, idk
<smoser> k. but shell, i think i copied the meaning from the python-<openstacktool>
<smoser> the intent was that would be where the main was
<harlowja> ah
<harlowja> ok
<harlowja> gotcha
 * harlowja moving
<smoser> moving as in houses ?
<smoser> as in you're moving to michigan now?
<smoser> leaving that west coast nonsense.
<harlowja> lol
 * harlowja is coming to stay with u man
<harlowja> in your basement
<harlowja> *we can share the basement*
<smoser> sweet. party on wayne.
<openstackgerrit> Joshua Harlow proposed stackforge/cloud-init: Bring over the 'safeyaml' from bzr  https://review.openstack.org/170252
<harlowja> party on gerrit
<harlowja> lol
<erhudy> filed https://bugs.launchpad.net/cloud-init/+bug/1469260 for you smoser, let me know if you need/care about tickets in salesforce
<smoser> erhudy, i'll mostly let them take care of it. they may or may not bother me.
<smoser> but thank you.
<erhudy> okay, i referenced the LP ticket in the other one
<erhudy> thanks for your time
<harlowja> tickets in salesforce?
<smoser> canonical support
<smoser> if you have a support contract it goes up that way
<smoser> harlowja, i think we want to ditch requests.
<harlowja> oh man
<harlowja> ha
<smoser> https://bugs.launchpad.net/simplestreams/+bug/1461181
<smoser> quite possibly my code that was leaking.
<harlowja> u got alot of openfiles
<smoser> (not explicitly requests, but we couldn't reproduce it)
<harlowja> open less files?
<harlowja> 1 file per day, keeps the doctor away
<smoser> it was leaking file descriptors.
<smoser> many more than 1 per day
<smoser> :)
<harlowja> 2 files per day keeps the doctor away
<harlowja> lots of people use requests (all of openstack) so it makes me wonder...
<smoser> hm.. maybe it is just mine then.
<smoser> but for cloud-init i dont really think we gained a lot from it.
<harlowja> agreed, we probably don't
<harlowja> except for ssl support that works
<smoser> urllib3 has that.
<harlowja> ya, pretty sure requests uses urllib3
<smoser> right.
<harlowja> and vendorizes it :-/
<harlowja> https://github.com/kennethreitz/requests/tree/master/requests/packages/urllib3/
<harlowja> which afaik most packagers un-vendorize, lol
<smoser> oh nice.
<smoser> i didnt know that.
<smoser> wow
<harlowja> :-P
<larsks> smoser: what decides between DataSourceConfigDrive vs DataSourceConfigDriveNet?
<larsks> I want cloud-init (0.7.x where x < 6) to consider injected network information on openstack, but it checks for 'self.dsmode = "local"', and with DataSourceConfigDriveNet self.dsmode is always 'net' (and this check ignores any user-set dsmode).
<larsks> Or harlowja ? ^^^^
<smoser> when it runs.
<smoser> larsks, the idea is that config drive net would run in 'cloud-init init --local'
<smoser> and *should* have the opportunity there to apply networking config
<larsks> smoser: but it won't, if self.dsmode == 'net'.
<larsks> ...which appears to always be the case when it checks before trying to run on_first_boot(), which is what seems to handle network config...
<larsks> I tried forcing datasource_list: [ ConfigDrive ], but I still end up with dsmode == 'net'..
<larsks> (I am looking in particular at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceConfigDrive.py#L119)
<larsks> smoser: Oh wait, I get it; the call with --local changes self.dsmode. Okay.
<larsks> So, let's see what happens in that case...
#cloud-init 2015-06-27
<harlowja> smoser https://review.openstack.org/#/c/191791
<harlowja> well thats interesting...
<smoser> that is interesting.
<smoser> amazon has user data modification when instance is shut down
<smoser> but cloud-init has never really done anything about it.
<smoser> harlowja,  i met Uma, right ?
<harlowja> smoser maybe, not sure
<harlowja> actually, probably not
<smoser> hm.. i think i met your manager not in vancouver, but somewhere.
<harlowja> ya, that was a different manager, that manager got sucked away to some other internal project
<harlowja> do to her prior area of expertise...
<harlowja> *due to
<smoser> ah.
<harlowja> maybe or may not be releated to http://www.computerworld.com/article/2475864/search/on-a-fast-break--yahoo-throws-a-curveball.html
<harlowja> i disavow any knowledge of anything, lol
#cloud-init 2015-06-28
<subway> Any recommendations on how I might go about templating cloud-config variables at instance boot-time?
<subway> For instance, generating a hostname that contains the instance-id
<subway> I've been accomplishing this by setting the hostname in a user-data script sent down in addition to the cloud-config I generate, but it seems this has introduced a race condition -- other services may start before cloud-init has executed my user-data script to update the hostname.
<subway> When setting hostname directly via cloud-config's hostname module, the race condition appears not to exist (though I haven't actually looked at the code, and it could be pure luck that the hostname is getting set before upstart starts other services).
#cloud-init 2016-06-27
<smoser> harlowja, around ?
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1596690
<harlowja> yo yo
<harlowja> around, in all day meeting about ironic
<harlowja> so may be on/off
<harlowja> whats this azure thing
<harlowja> lo
<smoser>  http://paste.ubuntu.com/17990192/
<smoser> is that sane ?
<smoser> that obj.pkl thing is apita
<smoser> a pita
<smoser> but when i load it from disk, i can't be guaranteed that it has a dsmode, as the dsmode was getting set in the __init__
<smoser> but restoring doesn't call __init__
<harlowja> hmmmm, stupid pickle
<smoser> so that change there *does* stop the stack trace... as it then means the dsmode is gotten from the superclass.
<harlowja> is there anyway to plugin to __setstate__ and ensure a default is set?
<harlowja> https://docs.python.org/2/library/pickle.html#object.__setstate__
<smoser> maybe..
<smoser> so the patch there works, but wasn't sure if it might mean getting the super class's value all the time
<smoser> i wouldnt think so, but need to check.
<smoser> it seems to be ok.. the __setstate__ seems useful
<smoser> i think maybe look at that later.
<smoser> i want to get to ditching the pkl'd object at some point
<smoser> and have some better serialized object... less complex that is restored from
<smoser> harlowja, could you sanity check http://paste.ubuntu.com/17991115/ ?
<smoser> it seems to be right for me... tested that un-pickled objects that did not have the dsmode attribute now have it, and those that *did* have it keep their old value.
<harlowja> ya, i think that will be ok
<smoser> i guess the thing i'm really asking about is the difference between __init__ doing a self.dsmode =
<smoser> and having dsmode as a class attribute
<harlowja> ya, one is just setting it at class load time as a default
<harlowja> which i guess is fine for this, until we get rid of the obj.pkl junk
<harlowja> and move to something like to_json or something
<smoser> right. ok thank you.
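The __setstate__ hook harlowja links above can be sketched in a minimal standalone example (class name, attribute, and default value here are made up for illustration, not the actual cloud-init DataSource):

```python
import pickle

class DataSource:
    def __init__(self):
        self.dsmode = "net"   # set in __init__, so old pickles may lack it

    def __setstate__(self, state):
        # pickle restores attributes without calling __init__; fill in a
        # default for anything missing from older pickled objects
        self.__dict__.update(state)
        self.__dict__.setdefault("dsmode", "net")

ds = DataSource()
ds.seed = "ec2"                      # some other persisted attribute
del ds.dsmode                        # simulate a pickle from before dsmode existed
restored = pickle.loads(pickle.dumps(ds))
print(restored.dsmode)               # default filled in: "net"
```

This matches the behavior being tested here: unpickled objects that never had dsmode get the default, while objects that did have it keep their old value.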
#cloud-init 2016-06-28
<Wulf> Good Morning
<Wulf> Wasn't there support for fetching EC2 meta data in cloud-init?
<Wulf> I believe I had seen such files somewhere in /var in running instances. But can't find them right now
#cloud-init 2016-06-29
<Odd_Bloke> Wulf: cloud-init does fetch and use EC2 metadata; what specifically are you looking to do?
<Wulf> Odd_Bloke: how can I access this metadata from a userscript (x-shellscript mime part)?
<Wulf> Odd_Bloke: what I'm looking for is information about the hard drives. I need to get a list of all instance store drives and temp ebs drives attached on launch.  Other ebs drives must be excluded.
<Odd_Bloke> Wulf: You could poke around in /var/lib/cloud to see if it's available to you there; if not, then you can always query the metadata server yourself.
<Wulf> Odd_Bloke: well, it's not available there
<Odd_Bloke> Wulf: I don't think cloud-init provides any particular way of doing this, though.
<Odd_Bloke> Wulf: That's the only place cloud-init fetches EC2 metadata from, so you may be out of luck in that case. :(
<Wulf> Odd_Bloke: no, I mean it's not available from /var/lib/cloud
<Wulf> But I remember having seen it there. Some time ago.
<Odd_Bloke> Wulf: Oh, OK; in that case, you'll need to query the metadata server yourself. :)
<Wulf> Odd_Bloke: I could do that. But I don't want to reinvent the wheel
<Odd_Bloke> Wulf: If it isn't in /var/lib/cloud/ you aren't reinventing the wheel. :)
<Wulf> Odd_Bloke: but I know that cloud-init has some code for reading the meta data. Just want to know how to use it.
<Wulf> The wheel exists, it's right in front of me. But somehow I can't get it to spin
<Odd_Bloke> Wulf: You could try using the Python functions in http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/ec2_utils.py
<Odd_Bloke> Wulf: But I don't know that that's considered a public interface, so don't necessarily expect it to be stable.
<Odd_Bloke> To extend your analogy, you can try using that wheel, but don't be surprised if the car it's attached to drives off and runs over your fingers. ;)
<sfriesel> this metaphor went too far
<Odd_Bloke> s/extend/over-extend/
<Wulf> Odd_Bloke: I'm wondering where the car came from. Not every wheel is attached to a car.
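For reference, querying the EC2 metadata service directly (as Odd_Bloke suggests) is only a few lines; the link-local base URL and the block-device-mapping path are the documented EC2 metadata endpoints, while the helper names here are made up:

```python
import urllib.request

# Documented EC2 metadata service base path (link-local address);
# this is the same endpoint cloud-init's ec2_utils walks.
BASE = "http://169.254.169.254/latest/meta-data/"

def md_url(path=""):
    # build the URL for a single metadata key
    return BASE + path

def get_metadata(path=""):
    # only works from inside an EC2 instance
    with urllib.request.urlopen(md_url(path), timeout=5) as resp:
        return resp.read().decode()

# e.g. walk the block-device mapping Wulf is after:
# for dev in get_metadata("block-device-mapping/").splitlines():
#     print(dev, "->", get_metadata("block-device-mapping/" + dev))
```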
<Guest20454> smoser: sorry for super late update (was on PTO for ~2 weeks), latest cloud-init works fine for me. Thanks!
<mgagne> smoser: that was me ^
<harlowja> no PTO allowed
<harlowja> lol
<smoser> mgagne, horay!
<mgagne> tyvm all for your work! much appreciated
<smoser> and that should be soon in xenial
<smoser> (its in -proposed)
<raylu> i have a cloud-config file. is there a way to get cloudinit to stop everything if a command fails?
<raylu> (as in not run the rest of the commands - something like sh's set -e
<raylu> )
<harlowja> raylu not afaik
<raylu> :(
#cloud-init 2016-06-30
<holser_> smoser: Hi
<smoser> holser_, hey.
<holser_> Iâve submitted 2 reviews for mcollective https://code.launchpad.net/~sgolovatiuk
<holser_> smoser: your feedback is welcomed
<smoser> six doesn't cover BytesIO ?
<smoser> holser_,
<smoser> $ for p in python2 python3; do echo $p; $p -c 'from six import BytesIO; print(BytesIO)'; done
<smoser> python2
<smoser> StringIO.StringIO
<smoser> python3
<smoser> <class '_io.BytesIO'>
<smoser> smoser@milhouse:~/src/cirros/trunk$
<smoser> seems to work
<smoser> but i'm guessing you tried that
<holser_> yeah I tried
<smoser> must be older version of six
<smoser> that you have
<smoser> what a pita
<holser_> I use the default version in 16.04
<smoser> its got to be in there.
<holser_> fresh 16.04
<holser_> anyway, I gave the exact steps in bug
<holser_> so you can try them with your six
<smoser> python3 on 16.04 most certainly can do: from six import BytesIO
<holser_> I can do that way
<holser_> without try:
<holser_> just from six import BytesIO
<smoser> see if that works on python2 though
<smoser> its possible configobj in python2 will not like that it gets bytes
<waldi> possible. why do you want that?
<smoser> why do i want python2?
<waldi> wait. python2 does not have bytes
<smoser> well, some python2 do
<waldi> >>> bytes
<waldi> <type 'str'>
<waldi> so no, no bytes, only str
<smoser> right. ok.
<smoser> could you add unit test for this ?
<smoser> you can rip the section out of 'handle' that does:
<smoser>  if 'conf' in mcollective_cfg:
<smoser>  ....
<smoser> and just put that in a function that has 2 inputs.
<smoser>  config and output file
<smoser> then unit test that the file gets what you expected
<smoser> or even that just returns a string (which you can pass to util.write_file())
<smoser> i think if you re-factor that code a bit, it will be easier for you to do the other mp too.
<smoser> oh. i see that the method i mentioned would have other things too, PUBCERT_FILE and such.
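The refactor smoser is describing, pulling the config rendering out of 'handle' into a pure function that returns a string, might look roughly like this (hypothetical function name and key format, not the real cc_mcollective code):

```python
def render_mcollective_config(conf):
    # pure function: the 'conf' mapping from cloud-config in, file
    # content out; no filesystem involved, so a unit test can just
    # assert on the returned string
    lines = ["%s = %s" % (key, value) for key, value in sorted(conf.items())]
    return "\n".join(lines) + "\n"

content = render_mcollective_config({"plugin.activemq.pool.size": "1",
                                     "loglevel": "info"})
# the caller would then pass 'content' to util.write_file(...)
```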
<Wulf> Hoping to get a more satisfying answer today:  I'm using cloud-init on aws ec2. I've seen in the source code, that retrieving ec2 meta data is supported. I would like to access the meta data from a userdata script written in python. Is it possible to access it without having to reinvent the wheel?
<smoser> Wulf,
<smoser>  from cloudinit import ec2_utils
<smoser> ec2_utils.get_instance_metadata()
<Wulf> smoser: and that is considered a public interface?
<smoser> that will work for 14.04 and 16.04
<Wulf> Good enough for me, thank you!
<Wulf> Now I only need to find out why I'm getting wrong data from AWS :-)
<Wulf> Odd_Bloke: and thanks to you too
<harlowja> btw, how far along is the networkd stuff for supporting systems?
<holser_> smoser: Iâve updated review, added unit test
<copumpkin> does cloud-init ignore everything before #cloud-config in ec2 userdata?
<copumpkin> I see a lot of people describe it as #cloud-config needs to be on the first line
<harlowja> arg, why, when i go to rebuild a cloud-init rpm and look at what the redhat rpm has in it, do i always find little surprises that never seemed to get upstreamed
<harlowja> (where surprises == patches)
#cloud-init 2016-07-01
 * harlowja uploading a few other patches that redhat folks never seem to upstream smoser 
<harlowja> as i try to rebuild a good rhel-useable package that isn't totally different from what exists in rhel
<Wulf> copumpkin: why don't you try it? :)
#cloud-init 2017-06-26
<smoser> bah
<smoser> how do i un-delete https://trello.com/c/BDxqarbu
<smoser> or un-archive
<smoser> "send to board". obvious
<smoser> http://help.trello.com/article/795-archiving-and-deleting-cards
<dpb1> smoser: after you update the bug for #1697545, let's talk about the open MPs on #169189
<dpb1> #1691489
<smoser> k
<powersj> smoser: let me know when you have a new test case for http://pad.lv/1687712, then we can wrap up SRU testing
<ubot5`> Launchpad bug 1687712 in cloud-init "cc_disk_setup: fs_setup with cmd doesn't work" [Medium,Confirmed]
<smoser> powersj, still working on it. :-(
<powersj> ok
<smoser> powersj, updated https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1687712
<ubot5`> Ubuntu bug 1687712 in cloud-init "cc_disk_setup: fs_setup with cmd doesn't work" [Medium,Confirmed]
<powersj> smoser: thx! I'll give it a shot shortly
<harlowja> smoser 'cloud-init summit '
<harlowja> woa
<harlowja> lol
<harlowja> *woah
<smoser> the big time
<powersj> smoser: still running into issues with your updated SRU template, mount -a fails due to wrong fs type, bad option, bad superblock on /dev/sdc1
<powersj> cloud config: https://paste.ubuntu.com/24958336/
<powersj> log: https://paste.ubuntu.com/24958331/
<powersj> woops wrong cloud-config, that's without cmd one sec
<powersj> this one: https://paste.ubuntu.com/24958364/
<smoser> powersj, ok. so that i guess is expected. :-(
<smoser> i was testing on openstack.
<smoser> config preference goes
<smoser> system (files in /etc/cloud)
<smoser> datasource (DataSourceAzure 'BUILTIN_CLOUD_CONFIG')
<smoser> user-data
<smoser> you provided via 'system', which then got trumped in the fs_setup section by azure.
<powersj> ah
<smoser> :-(
<smoser> i think that you would get the desired behavior if you did it via user-data
<smoser> (openstack datasource does not provide fs_setup config)
<powersj> hmm ok I know how to first launch an instance with a config, but not afterwards. Let me look around.
<smoser> well, if you launch with the config
<smoser> it will fail the first time
<smoser> (thats the bug)
<smoser> then you can
<powersj> right
<smoser> rm -Rf /var/lib/cloud /var/log/cloud-init*
<smoser> and reboot
<smoser> and it should dtrt
<powersj> after enabling proposed?
<smoser> (well, after upgrading
<smoser> yeah)
<powersj> ok
<powersj> alright I'll go that route
<powersj> I forget about the preferences hierarchy
<smoser> well, azure is the only datasource that does that
<rharper> =(
<rharper> I fought that and lost
<smoser> rharper, http://paste.ubuntu.com/24958427/
<rharper> that's just renaming
<rharper> since it runs at Local as well ?
<smoser> i think thats probably why your network config was running twice. i'd have to see a log.
<smoser> only runs at local
<rharper> oh
<rharper> I didn't think we could do that
<smoser> the big difference there is line 30
<rharper> I thought we always need to run at net time when attempting to consume metadata
<rharper> smoser: I had a separate class with just enough to run local but the class gets picked after local, and when net ran, it didn't have the right class, so instead I changed the Net class to have network_config, and then let it run in both local and net;  the assumption was that AzureNet could *not* run at local time (ie, the metadata it reads may not be available if networking is *not* up, which is the case for init --local)
<smoser> i'll poke some more
<smoser> why did i not get any errors on that.
<rharper> I don't know
<rharper> I didn't get errors when Net didn't do anything
<rharper> so, it's possible my assumption is not correct anymore
<rharper> in which case, I'm all for running to completion (consuming metadata) at local time
<smoser> well this is what i commented in the MP about
<rharper> smoser: some quick reading that made me cautious (agent_command requiring bouncing network, get_metadata_from_fabric, which calls into the WALA shim)
<smoser> at least i thought i did
<rharper> reading the Shim object, it appears to do a dhcp and read data out of it
<rharper> that implies some level of networking;  I would then expect that in local mode we can write config, and then at init net; attempt to bring networking up and read the lease file, etc.
<smoser> well, net doesn't bring up networking
<smoser> its already up
<rharper> right
<smoser> the azure datasource is a  mess in that it may actually *bounce* the interface
<smoser> my comments last week and an inline comment in 'get_data'
<rharper> sure; for this first round, I was hoping to change as little of the DS as possible to make things work;
<smoser> i think you've changed that a bunch now. but we'll have to do something like that i think
<rharper> do something like what?
<smoser> 2 options
<smoser> either get_data handles doing a dhcp and then appropriate tear down
<smoser> or get_data just does local things and 'activate' does the remainder
<smoser> the second is less invasive at the moment
<powersj> smoser: rharper: dpb1: SRU verification complete.
<dpb1> woohoo!
<dpb1> someone do a 21 gun salute
<rharper> smoser: vs the partial get_data I have now ?
<smoser> \o.
<smoser> rharper, i'm looking at f31c8a02f75fcb1aeafe665dc6b240a0ff542a05
<smoser> is that right?
<rharper> git top  hash ?
<smoser> yes
<rharper> I've pushed two commits to fix up your items of spelling and such
<smoser> " the partial get_data I have now" confused me. as i think get_data is most of everything
<rharper> but otherwise that's the right hash
<rharper> do you want to do a hangout ?
<smoser> probably should
#cloud-init 2017-06-27
<dpb1_> smoser: I thought the best time was 1700 UTC
<smoser> i converted wrong when i said that.
<dpb1> smoser: ah
<smoser> 10:00 US/Mountain == Mon, 10 Jul 2017 10:00:00 -0600 == Mon, 10 Jul 2017 16:00:00 +0000
<smoser> at least per GNU date
<dpb1> smoser: because we are in GMT
<smoser> which i trust more than myself
<dpb1> err
<dpb1> because we are in DST
<dpb1> stupid dst
<rharper> lol
<dpb1> and... gnu date.
<dpb1> amazing
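The same conversion GNU date is doing can be double-checked in Python (zoneinfo is stdlib from 3.9 on; "America/Denver" stands in for US/Mountain):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 10:00 US/Mountain on 2017-07-10 is MDT (UTC-6), so 16:00 UTC
local = datetime(2017, 7, 10, 10, 0, tzinfo=ZoneInfo("America/Denver"))
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.strftime("%Y-%m-%d %H:%M %Z"))  # 2017-07-10 16:00 UTC
```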
<smoser> rharper, control capture fails.
<rharper> generically ?
<smoser> well, capture succeeds, launch fails
<rharper> = /
<smoser>  http://paste.ubuntu.com/24964022/
<smoser> paulmey, ^
<smoser> is that known to you?
<smoser> basically i launched an instance in the portal, then click 'capture'. then launch an instance from it and it is not reachable from network , and console log above.
<smoser> note, I did ignore the suggestion that i should run some 'walinuxagent -deprovision' command.
<smoser> next is to see if that "fixes" it.
<smoser> that was 17.04 that i launched.
<larsks> smoser: aaaaaaaaah, my inbox asplode with launchpad...
<powersj> lol
<dpb1> yay!!!!
<dpb1> I mean, not yay for your inbox... :)
<smoser> larsks, how come ?
<smoser> did mine ?
<powersj> smoser: sru was released, so all the emails about this bug was fixed just came in
<larsks> Yeah, that ^^^^^ :)
<smoser> ah.
<smoser> hm..
<smoser> paulmey, around ?
<smoser> here is log on an instance where i *did* run 'sudo waagent -deprovision+user' before capture
<smoser> http://paste.ubuntu.com/24964566/
<smoser> rcj, or Odd_Bloke ?
<smoser> have you ever done this ?
<smoser> launch an instance in azure, hit 'capture' and then launch a new instance from that one ?
<smoser> my new instances seem to be firewalled off
<Odd_Bloke> smoser: I have, but I've generally just `rm -rf /var/lib/{cloud,waagent}`d.
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/326373
<smoser> rharper, have to fix unit tests, but that worked for 'clean' instance.
<rharper> k
<smoser> but unit tests:
<smoser>  FAILED (errors=38, failures=1)
<rharper> erg
<smoser> all just because we're doing less in get_data()
<rharper> well, there was some left over stuff
<rharper> smoser: reviewing now
<smoser> rharper, "we should set this (line 497 -- blacklist = ['mlx4_core']) as a default in the builtins ds configuration I think."
<smoser> not sure what you meant there.
<rharper> smoser: in the top of the azure DS, there are defaults in the built-in ds config
<rharper> I was suggesting that this not be part of the network_config attribute, but a property of the DS itself
<smoser> i think at the moment we dont have any use case for it being a different value...
<smoser> no reason for the user to configure it, right?
<rharper> sure
<smoser> rharper, ok. i think that tests will pass and that i've addressed your comments.
<smoser> i will update the commit message here.
<rharper> k
<rharper> re-reviewing
<rharper> smoser: looks good; do we have that in a ppa?  I can run that through some instance testing;  do you have the instructions for doing the "capture, update cloud-init, boot new instance" ?
<smoser> rharper, you can just build from trunk there. i can upload to a ppa, but you can build just as quick.
<smoser> and for "capture"...
<smoser> I just
<smoser> a.)launch instance from UI
<rharper> smoser: sure; just checking if you already had one built
<smoser> b.) ssh in , dpkg -i cloud-init , rm -Rf /var/lib/cloud /var/lib/waagent /var/log/cloud-init
<smoser> c.) 'capture' on the azure portal
<smoser> d.) launch an instance from that image in the portal
<rharper> smoser: is it ok to ignore that 'wala deprovision' warning on capture in the portal? or is that what the rm on the /var/lib/waagent is doing?
<smoser> rm -Rf /var/lib/cloud /var/log/cloud-init /var/lib/walinuxagent
<smoser> is sufficient to do that
<smoser> you're welcome to run the command they provide
<rharper> np
#cloud-init 2017-06-28
<dpb1> powersj: https://jenkins.ubuntu.com/server/job/cloud-init-ci/20/consoleText too many values to unpack?
<powersj> hmm a merge on ubuntu/xenial
<powersj> let me see what the state of the integration tests is there
<smoser> powersj, yeah, was just about to ask about that
<smoser> if i cherry picked your integration fixes
<smoser> would that do it ?
<powersj> Yes it would. It is due to the new test framework handling error exceptions from pylxd which is missing from the old version
<smoser> ok. i'm going to attempt to pull those along
<smoser> powersj, bah
<smoser> so i got those commits. and didn't fix.
<smoser> so i'm going to drop them again and ignore the failure.
<smoser> :-(
<powersj> ok
<powersj> same failure though?
<smoser> i grabbed the commits, but they're not in the tree
<smoser> yeah
<smoser> https://jenkins.ubuntu.com/server/job/cloud-init-ci/21/console
<smoser> they're not in the tree, they're in quilt patches in debian/patches
<powersj> ah
<smoser> so you'd have to 'quilt push' in order to run the integration tests with them there.
<powersj> hmm should I keep the integration tests as a part of CI then?
<smoser> yes
<smoser> and msot of the time this would work.
<powersj> ok as long as you are fine with those odd ball cases :)
<smoser> powersj, i wonder...
<smoser> i dont have a sufficiently old lxd anywhere
<powersj> ?
<smoser> could you easily run integration tests on the xenial deb in https://launchpad.net/~smoser/+archive/ubuntu/cloud-init-dev/+files/cloud-init_0.7.9-153-g16a7302f-0ubuntu1~16.04.2~ppa3_all.deb
<powersj> we could; clone master to get latest tests, and pass --deb option to integration tests to put that deb into the images.
<powersj> this however, assumes we don't run into a mismatch of tests and features/functionality
<dpb1> getting the tests executed will be step 1
<dpb1> instead of failure to launch
<smoser> powersj, yeah. i was just asking if you could do it locally
<powersj> smoser: running...
<smoser> i think it will run
<powersj> oh it is
<powersj> collecting output for tests now
<powersj> tests passed
<powersj> https://paste.ubuntu.com/24975100/
<powersj> hmm CI doesn't seem to be triggering at the moment now...
<dpb1> powersj: attach those results to the MP?
<powersj> dpb1: done
<dpb1> thx
<powersj> ok CI is back, sorry about that folks
<smoser> [ubuntu/xenial-proposed] cloud-init 0.7.9-153-g16a7302f-0ubuntu1~16.04.2 (Waiting for approval)
<smoser> [ubuntu/yakkety-proposed] cloud-init 0.7.9-153-g16a7302f-0ubuntu1~16.10.2 (Waiting for approval)
<dpb1> woop dee doo
<smoser> [ubuntu/zesty-proposed] cloud-init 0.7.9-153-g16a7302f-0ubuntu1~17.04.2 (Waiting for approval)
<smoser> night night
<dpb1> smoser: cya
<dpb1> rharper: this should be fix released? https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1693939 (for cloud-init project)
<ubot5> Ubuntu bug 1693939 in cloud-init (Ubuntu Zesty) "Switch Azure detection to use chassis_asset_tag" [Medium,Confirmed]
<rharper> dpb1: hrm
 * rharper checks in zesty daily
<dpb1> well
<rharper> dpb1: no, I don't think so
<dpb1> I mean, the cloud-init project bug
<dpb1> er
<dpb1> task
<rharper> 0.7.9-113 is in zesty, so ideally we'd SRU to Zesty
<dpb1> not the source package task
<dpb1> ok
<dpb1> that confuses me
<dpb1> why
<dpb1> ?
<rharper> we don't  strictly have to update zesty
<rharper> but it's not nice to have 16.04 up to date but the current release have issues
<dpb1> why is zesty involved in the cloud-init project task though
<dpb1> there is a separate source package task on there
<rharper> well, we upload to each of the current release (devel + any currently supported releases)
<rharper> possibly as cloud-init has a branch per release to handle the packaging ?
<rharper> I don't know for sure
<dpb1> ok
<dpb1> the release process stuff is a bit hazy to me
<rharper> I'm not too keen on the lp bits, but I do know that we have a branch per release for packaging changes
<dpb1> in our source tree (outside of the ubuntu distro)
<rharper> yes
<rharper> git
<dpb1> k
<dpb1> that makes sense
<dpb1> I'll ask scott when I next think of it.
<rharper> ok
#cloud-init 2017-06-29
<larsks> smoser: you around yet?
<larsks> When you are: Someone had a system on which /etc/resolv.conf was mode 0444, causing cloud-init to fail with a traceback.  It would be better if we logged an error and continued instead.  Should this be in util.write_file itself, or should this be "on the edge", e.g., have explicit try/except blocks for particular files?
<smoser> larsks, here.
<smoser> you're asking if write_file should have a "on failure just warn" option ?
<smoser> or "on failure call this function with the exception" ?
<smoser> this sort of stuff is hard though...
<larsks> Or if we should even default to that.  The problem with tracebacks in cloud-init is that it can cause cloud-init to exit before it has done things like configure credentials, making it impossible to log into a system.
<smoser> if cloud-init can't write your /etc/resolv.conf (and it thinks it should), then stuff is quite likely not going to work all that well
<smoser> if all we do is just warn, then no one will ever file a bug
<larsks> Well, in this case, it was just resolv.conf that was a problem.  Everything else would have worked just fine.
<larsks> I agree that fundamentally the bug in this case is somewhere else (who knows where, though).
<larsks> But I wonder if we should handle this situation with more aplomb.
 * smoser brings up dictionary.com
<larsks> s/with more aplomb/more gracefully/
<smoser> :)
<smoser> self-confidence or assurance, especially when in a demanding situation.
<smoser> larsks, but yeah, i agree there is a general problem.
<smoser> something goes wrong, and user can't get in to help tell you what
<smoser> the stack traces at least do go to the console i hope. but that is often times not enough.
<smoser> i'm interested in ideas
<smoser> i dont want to sound like i'm trying to block you fixing an issue.
<larsks> Don't worry, you're not.  I'm honestly not sure what the best choice is.  I feel like a fix that is specific to resolv.conf is the wrong way to go, but I'm not sure that a blanket "always warn never traceback" policy is any better.
<larsks> smoser: what about this (this is against 0.7.9): http://chunk.io/f/e65bff7039714d498dbb6c683fa0edce
<larsks> That would allow network configuration to fail w/o bombing out completely which...might be better? I dunno.  I give up for now :)
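A sketch of the "warn and continue" option being debated, assuming a simplified signature (the real cloudinit.util.write_file takes different arguments; the on_error parameter is made up for illustration):

```python
import logging
import os

LOG = logging.getLogger("cloudinit.util")

def write_file(path, content, mode=0o644, on_error="raise"):
    # With on_error="warn", log the failure and continue instead of
    # raising, so one unwritable file can't abort the whole run.
    try:
        with open(path, "w") as fp:
            fp.write(content)
        os.chmod(path, mode)
    except OSError as e:
        if on_error == "warn":
            LOG.warning("failed to write %s: %s", path, e)
            return False
        raise
    return True
```

The downside smoser raises still applies: if the failure is merely warned about, nobody ever files the bug.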
<dpb1> smoser, rharper: FYI: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1701297
<ubot5> Ubuntu bug 1701297 in cloud-init "NTP reload failure (causing deployment failures with MAAS)" [Undecided,Incomplete]
<rharper> =/
<rharper> there's not a lot going on
<rharper> dpb1: one wonders why they don't run a -proposed test
<dpb1> it's almost like we should all be doing proposed testing!
<rharper> no way
<dpb1> let's all send that round the echo chamber
<rharper> too much work
<powersj> rharper: smoser "2017-06-29 15:29:06,765 - util.py[WARNING]: failed read of /sys/class/dmi/id/product_serial"
<rharper> I'd rather wait till it breaks in the archive
<powersj> that was on lxc ubuntu-daily:xenial updated to version in proposed after cleaning out /var/lib
<smoser> powersj, thanks. unfortunate.
<powersj> cloud-init is still running
<powersj> but here are logs
<powersj> https://paste.ubuntu.com/24982063/
<powersj> https://paste.ubuntu.com/24982065/
<powersj> should I try without proposed enabled?
<powersj> just to make sure this isn't my own system
<powersj> smoser: I ran with the version in updates, blew away /var/lib/cloud, and same thing happened. Here is a full log: https://paste.ubuntu.com/24982084/
<smoser> powersj, :-(
<powersj> sounds like a new bug?
<smoser> i dont see it in 0.7.9-153-g16a7302f-0ubuntu1~16.04.2
<smoser> i used lxc-proposed-snapshot
<smoser> this is in a lxc, right ?
<powersj> yeah
<smoser> i just tried
<smoser>  lxc launch ubuntu-daily:xenial x1t
<smoser>  .. wait ..
<powersj> https://paste.ubuntu.com/24982658/
<powersj> something like ^
<smoser>  enter, enable proposed update ...
<smoser>  reboot
<smoser> oh
<smoser> right
<smoser> yeah
<smoser> whew
<smoser> so you cleaned out /var/lib/cloud/seed/
<smoser> which is where lxc put its datasource
<smoser> so... this time there was no datasource
<powersj> ahhhh
<smoser> and it tried everything looking for one.
<powersj> yeah that makes sense why it took so long to complete then
<smoser> https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bin/do-reboot
<smoser> do-reboot has 'clean' option. which cleans in a lxc-friendly way
<powersj> will that work on other clouds as well?
<smoser> yeah
<powersj> sweet
<smoser> rm -Rf /var/lib/cloud works everywhere except for lxc (and Softlayer, which does really weird things)
<powersj> heh
<smoser> the seed dir really should be elsewhere.
<smoser> hindsight
<powersj> smoser: rharper: tested a few clouds and put pastebins of logs into the card
<powersj> Basically updated to proposed + rebooted then got logs; then cleared out /var/lib/cloud + rebooted then got logs;
<powersj> I didn't see anything out of the ordinary in any of the logs
<smoser> right. no WARN string
<smoser> i just opened https://bugs.launchpad.net/cloud-init/+bug/1701325
<ubot5> Ubuntu bug 1701325 in cloud-init (Ubuntu) "attempt to read dmi data can cause warning and stacktrace in logs in a container." [Undecided,New]
<smoser> and will fix that, but that is artful (and trunk) only at this point
<powersj> thx
<powersj> the only areas I haven't done is openstack, I have to find my creds
<powersj> and I did no-cloud via uvt-kvm, so does that count for kvm and no-cloud?
<smoser> powersj, yes. that was good. thank you.
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/326546
<smoser> powersj, ^
<powersj> smoser: awesome, I'll give it a shot when I get back, need to run to Dr real quick
<smoser> kind of weird
<smoser> az group delete --name=foo
<smoser> does not require a --location
<smoser> but az group create *does*
<smoser> group names are unique across regions
<paulmey> ola
<paulmey> az group location is the region that will be the primary master of that group
<paulmey> i.e. where it is administered
<paulmey> not sure if that makes sense
<smoser> right. it does make sense.
<smoser> i didn't realize at first that group names were "global", and thought that vm create *required* a --location.  so i was mostly just confused.
<smoser> i'm sorted now.
<smoser> paulmey, is there a way to launch (with 'az') by "sku" ?
<smoser> i think there is some magic that picks the most recent version of a sku in the portal
<smoser> and i'd like to launch with 17.10-DAILY
<smoser> since there is no obvious way for me to get the urn of "latest".
<smoser> https://bugs.launchpad.net/cloud-images/+bug/1701062
<ubot5> Ubuntu bug 1701062 in cloud-images "azure stream data is not useful for 'arm' (Azure Resource Manager) mode" [Medium,Confirmed]
<paulmey> do `az vm image list --publisher Canonical --all -o table` to see all available images
<paulmey> then, use the urn in the --image parameter for `az vm create`
<paulmey> instead of a version, you can specify :latest, which will use latest
<paulmey> so something like `az vm create -n testvm -g $rg --image Canonical:UbuntuServer:17.10-DAILY:latest -l westus2`
<paulmey> the portal is 'curated' with nice pictures and stuff... it's a separate process... don't ask. :-)
<smoser> so a urn is not really a 'urn'
<smoser> (i assume that is "universal resource name")
<smoser> but that is a very helpful answer, thanks.
<smoser> so that works for the cli
<smoser> it'd be nice for *me* at least if the portal allowed you to specifically provide it with a urn like that.
<smoser> its kind of unfortunate that our sku for lts contains the string lts
<smoser> that  makes launching things much more difficult than necessary
<smoser> UbuntuServer:16.04-DAILY-LTS:latest
<smoser> but
<smoser> UbuntuServer:16.10-DAILY:latest
<paulmey> I can see that... You can talk to the cloud image team to see what their reasons were
<paulmey> and yes, it's more like a colon-separated-value thing... but anyways... it's quite useful
<paulmey> I think they added -LTS in the sku to indicate to cli users that it's an lts version
<paulmey> it would be nice if we had some way to define aliases for images/skus...
<paulmey> but that's not on the horizon...
<smoser> it doesnt really seem very "helpful". other than to someone who is looking at a list of skus, *and* doesn't know which ubuntu releases are lts.
<smoser> ie, thats kind of "low tech". but they made it more difficult for someone trying to be smarter.
<smoser> anyway. oh well.
<smoser> other request that might have been lost up above was for the portal to let me put a urn in for the image... rather than just letting me search.
<smoser> maybe thats possible and i just dont know.
<paulmey> I don't think there is a standard form in the portal, but you could do something like https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-linux
<paulmey> I think that particular template is a bit outdated, but the 'deploy' link in the readme.md opens it up in the portal
<paulmey> you can put your own values in the parameters section or just leave it user-editable...
<paulmey> is that close to what you're looking for?
<smoser> hm..
<smoser> thats neat
<smoser> thanks paulmey
<paulmey> np
<smoser> rharper, powersj
<smoser> https://code.launchpad.net/~cloud-init-dev/+recipe/cloud-init-daily-xenial
<smoser> and friends (-yakkety, -zesty) should be good now.
<smoser> and daily archive builds should not pick up the jsonschema dependency
<powersj> smoser: what change caused that?
<smoser> powersj, https://git.launchpad.net/cloud-init/commit/?h=ubuntu/xenial&id=0debd0ab3104812e92f6927c91d38fbeec768d98
<powersj> ah
<rharper> smoser: cool
#cloud-init 2017-06-30
<powersj> dpb1: those failures are due to a conversation smoser and I had earlier around where lxc keeps their datasource
<powersj> you can use his https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bin/do-reboot
<dpb1> powersj: thx
<Wulf> Hello!
<Wulf> Is it possible to specify a script (or URL with a script) that will execute and provide the actual configuration to cloud-init?
<Putti> Wulf, most likely: I have seen many VPS providers allowing the users to customize the cloud-init configuration. I can look into the code if you want.
<Putti> Wulf, maybe these help you: http://cloudinit.readthedocs.io/en/latest/topics/capabilities.html and http://cloudinit.readthedocs.io/en/latest/topics/format.html#include-file
<Wulf> Putti: Not really what I'm looking for. I can specify the full configuration (or a URL) via EC2 user data. But I need different configuration for each of my machines. So I thought I could write a script which supplies dynamic configuration for cloud-init.
<Putti> Wulf, mhm, which configuration are you referring to, if not the one that can be set with user data (as that is dynamic)?
<Wulf> Putti: I don't want to put the full cloud-init configuration into the userdata.
<Wulf> Putti: I want to specify a URL (same for all instances). cloud-init shall download the URL and execute it as a script that returns the real cloud-init config
<Putti> ok, I don't know how to help with that. But maybe now with this information someone else could?
<Wulf> I guess I could add a part-handler and my script as a new part. But then how can my script inject additional generated parts into cloud-init?
<larsks> Wulf: I don't think it can. But you can have additional parts that are urls pointing to remote data...
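A pattern that fits what Wulf describes (a hedged sketch, not something stated in the log): keep the user-data identical for every instance, just an `#include <url>`, and have that URL point at a small service that renders per-instance cloud-config. The function name `render_config` and the example host are made up for illustration:

```python
# Sketch only: every instance gets the same two-line user-data,
#   #include
#   http://config.example.invalid/cloud-config
# and the service behind that URL returns a different cloud-config
# document per caller. render_config and the host are hypothetical.

def render_config(instance_id: str) -> str:
    """Render a #cloud-config document tailored to one instance."""
    lines = [
        "#cloud-config",
        "hostname: %s" % instance_id,
        "runcmd:",
        "  - [sh, -c, 'echo provisioned %s']" % instance_id,
    ]
    return "\n".join(lines)
```

The service would key each response off something it can derive from the request, e.g. the source IP or an inventory lookup, so cloud-init itself never needs a part-handler that injects new parts.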
#cloud-init 2018-06-25
<agentnoel> Hi there, I'm looking to run some scripts per-instance, and pass them environment variables via user-data.
<agentnoel> This is in the context of AWS EC2
<agentnoel> I've found boothooks can set environment variables early in the process, but the syntax is not completely user friendly. If I supply a "bash script" as user data, it is easier to read, but executes too late in the process.
<agentnoel> Do you have any suggestions for injecting environment variables early in the cloud-init process?
<agentnoel> Also, the scripts are "baked" into the Amazon Machine Image at /var/lib/cloud/scripts/per-instance/. Is this the correct location? (I couldn't get the vendor scripts folder working).
<agentnoel> Thank you :-)
<blackboxsw> rharper: so, would you expect if cloud-init didn't rename interfaces that we'd avoid the wait-online  timeout?
<rharper> blackboxsw: well, I would say no, but I also don't know why we're not matching any interface to configure either
<blackboxsw> rharper: I'm wondering if the dual rename is introducing a race in networkd wait-online. like the data gets cached at some point post cold-plug rename and pre cloud-init rename
<rharper> no, rename happens before networkd starts
<rharper> it's 1) boot 2) generate which creates /run/systemd/network/*.{link,network} files 3) cold-plug (which fires on .link files) 4) cloud-init local
<smoser> rharper: i dont think order of 2 and 3 is guaranteed
<rharper> yes
<rharper> it is
<rharper> systemd generators run way before any units are processed
<smoser> ? cloud-init's invocation of generate ?
<rharper> no
<rharper> generators
<rharper> netplan gets called as a generator by systemd itself
<rharper> this is reboot, so we already have an existing /etc/netplan/*.yaml file
<smoser> ah. yeah.
<blackboxsw> as it stands, operations look like the following: cold-plug renames eth1 -> rename3 because of our existing /run/systemd/network/10-netplan-eth0.link file which contains a Name=eth0 and matching mac. But since azure presents an existing eth0 we fall back to rename3.
<blackboxsw> then cloud-init does two  renames, eth0 -> cirename0 and rename3 -> eth0;
<rharper> oh, interesting
<rharper> moves the "new" eth0 out of the way; and then pushes the right eth0 into position
<blackboxsw> yeah cloud-init's is a little smarter, move the existing out of the way
<rharper> so, this sounds just fine
<rharper> w.r.t getting the "right" eth0 in place
<rharper> which means it won't be optional
<rharper> we'll wait for it and config matches;  so why the stall then on wait-online ?
<blackboxsw> right, but networkd-wait-online may have been waiting for the orig eth0 device to come online (by what logic I'm uncertain). if new rename3 ->eth0 is moved into eth0 place prior to a udev rule saying online maybe that's why we timeout?
<rharper> networkd-wait-online can't run until cloud-init-local has finished, we block the network.target
<blackboxsw> hrm ok... <drums-fingers-on-desk>
<blackboxsw> btw, order of the nics is properly rendered by azure metadata, I can see the ips and macs get pushed to the proper index in the 'network'->'interface' key
<rharper> blackboxsw: so I don't know how much we can play with it and look at the console but if possible, getting a networkctl status --all dump prior to invoking systemd-networkd-wait-online;  I'd typically do this with an extra ExecStart=/bin/sh -x -c 'cmd1 here' in the networkd.service file
<rharper> the alternative is to recreate in say qemu where we can manually swap the mac addrs on the underlying nics to trigger the scenario and still have a serial console to get in and debug
#cloud-init 2018-06-27
<blargh> does cloud-init support including external files (e.g. to avoid storing ssh keys in user-data)?
<smoser> blargh: yes
<smoser> #include <url>
<smoser> #include-once <url>
<smoser> the latter form can be an expiring url; it will only ever be read once.
<blargh> smoser: thank you!
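For completeness, several such parts (an include list plus a literal cloud-config, say) can be combined into a single MIME multipart user-data blob; a minimal stdlib sketch, where the URL is a placeholder (`text/cloud-config` and `text/x-include-url` are the subtypes cloud-init documents for these formats):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_user_data(parts):
    """Combine (content, mime_subtype) pairs into multipart user-data,
    e.g. ('#include\\nhttps://keys.example.invalid', 'x-include-url')."""
    combined = MIMEMultipart()
    for content, subtype in parts:
        combined.attach(MIMEText(content, subtype))
    return combined.as_string()
```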
<blackboxsw> rharper: smoser, thinking about Azure network_from_imds upgrade path. pre-upgrade deployments would have generated fallback config (with blacklisted mlx drivers). after upgrade azure could regen network on next boot and the network_config will have a delta from the original fallback_config versus config from imds_metadata. Do we want the upgraded instance to retain fallback_config?
<rharper> blackboxsw: did you find that mlx interfaces are present in the metadata?  I didn't think that they were
<blackboxsw> basic use case (1 nic) we'd see no change in 50-cloud-init.yaml just that the file would have an updated timestamp. in 2nic instances though 50-cloud-init.yaml would contain definitions for both interfaces after upgrade
<blackboxsw> rharper: nope, that blacklist works fine. no mlx interfaces listed (but the cached network_metadata on the instance contains unused driver information (hv_netvsc)).
<blackboxsw> and mlx interfaces aren't present in metadata
<rharper> right
<blackboxsw> what differs is imds_metadata contains ip addrs etc which aren't present in the pickled DS config on disk. so I see a delta in that case. do we regen network in that case from known fallback config
<rharper> so ignore the mlx bit;  the scenario is bionic, upgrade, now a crawl of metadata could return a network_config with multiple nic configs;  should running instances get this post upgrade?
<blackboxsw> yeah exactly, that's a situation that could be a potential fail if they've already configured nic2
<rharper> I think yes since it's a "bug" that we don't handle dhcp on secondary nics; that's getting fixed; so on upgrade to imds rendered network config, we'd want to retain that ability
<rharper> well, manual configuration override isn't something we support today AFAIK, the azure default is "dhcp" on all nics;  one has to change stuff in the image to make that happen and keep it working (like turning off the hotplug hook script for eth1-9 etc)
<blackboxsw> yeah that makes sense. ohh right, yeah per cpc image we were trying to support dhcp on all nics
<blackboxsw> ok, so the way I have things for azure is this:   attempt to get imds_network_md if present, use it all and regen, if absent, use fallback non-mlx
<smoser> so... its interesting.
<smoser> rharper is implying that the "dhcp on all nics" that is present in official ubuntu images allows us to have more freedom with respect to taking on new behavior as long as that new behavior is similar to the old.
<smoser> there are some issues with that though
<smoser> a.) cloud-init and ubuntu do not make any requirements that they are running "officially provided cpc images"
<smoser> so for sure people could be using cloud-init on ubuntu elsewhere and our changes in behavior could break them.
<smoser> b.) users could very well have adjusted to the mechanisms that are in that image to configure networking and disabled them or learned to work with them.
<smoser> since the new behavior would be different (not exactly the same) it could definitely break *those* users too
<smoser> the absolute safest thing is: upgrade does not change behavior
<smoser> er.... safest is: upgrade does not change behavior, and new instances of ubuntu release still behave as they always did.
<smoser> second is: new instances can change behavior, upgrades do not
<smoser> least "safe": upgrade or new instance get change in behavior, sorry user.
<blackboxsw> in that case, the only way cloud-init can be certain it owned a fallback config generation would be to inspect ds.network_config for version:1 containing the 'params'->'driver' key present.
<blackboxsw> because only then would cloud-init have owned the network configuration generated (as no users would have provided that unused params:driver key)
<blackboxsw> I might add a helper method DataSourceAzure._is_fallback_config we can chat about when we review the azure-cold-plug branch
<blackboxsw> thanks
<smoser> i'm not entirely opposed to saying that cloud-init takes over networking configuration on azure
<smoser> i'm just saying that the arguments for "we can do that" were not sufficient
<blackboxsw> no worries, just wanted to know if I should account for that behavior
<blackboxsw> and I probably should. we could do what we have for openstack which would be to allow for someone to disable the network config option.
#cloud-init 2018-06-28
<ybaumy> i need help with the OVF datasource http://pastebin.centos.org/876866/
<ybaumy> smoser: you are here?
<ybaumy> anyone ever used that OVF provider?
<BloqueNegro> hi :)
<BloqueNegro> i want to use cloud init on ubuntu 16.04 to enable a second interface on a newly created instance
<BloqueNegro> any tips how to do that? everything i tried needs another reboot (e.g. using write_files to write to interfaces.d) or is ignored completely (e.g. a config file from the example i wrote to /etc/cloud/cloud.cfg.d/)
<BloqueNegro> it drives me crazy
<smoser> ybaumy: are you trying to run on a cloud somewhere or just providing data locally ?
<ybaumy> smoser: cannot get OVF to mount an iso at all. i am now switching to NoCloud datasource which at least mounts /dev/sr0 and finds a user-data and meta-data file... but nothing happens
<smoser> ybaumy: i'd suggest using NoCloud rather than ovf
<smoser> but i'd be surprised if doc/sources/ovf/README does not work
<smoser> and similar for nocloud
<smoser>  https://asciinema.org/a/132013
<ybaumy> 1. im not using ubuntu .. im using centos 2. like in the paste url earlier, it just doesn't seem to recognize the OVF in datasource_list at all
<ybaumy> ok that cloud-localds i havent done
<ybaumy> and im using vmware
<smoser> ybaumy: well... a.) try the copr repo to get newer cloud-init
<smoser> b.) if it fails, run 'cloud-init collect-logs' and file a bug
<smoser> ybaumy: i'd also kind of expect the openstack centos image to work with nocloud identically to how the asciicinema demo for ubuntu did
<smoser> something from http://cloud.centos.org/centos/7/images/
<ybaumy> smoser: thanks will try
<ybaumy> works!!!!
<ybaumy> i had that seedfrom: None in there
<ybaumy> and removed it to see what happens
<ybaumy> that was the problem
<smoser> ybaumy: so the centos image and nocloud "just worked" ?
<smoser> i should make a ascii...spelling-thing of that too
<ybaumy> no i have created a custom ovf image in vcloud director
<ybaumy> i used copr cloud-init latest version
<smoser> oh. ok. good.
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000 is up for the azure case. I've not pushed logic to keep original fallback network_config  yet
<blackboxsw> smoser: rharper ping when you have about 10 mins to talk upgrade path azure
<blackboxsw> I added scenarios to the bottom of the doc https://hackmd.io/aODzXfa_TOikNtYBLt8erA?both
<smoser> blackboxsw: i'm here.
<blackboxsw> smoser: joining
<rharper> blackboxsw: smoser still going ?
<blackboxsw> yep join in on the fun
<rharper> k
<smoser> blackboxsw or rharper https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711
<smoser> or powersj
<smoser> that will go a long way to "fix"ing our gpg related errors
<blackboxsw> couple comments on that branch https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711
<blackboxsw> looks good thogh
<blackboxsw> *though, take 'em or leave 'em as far as my suggestions
<smoser> blackboxsw: https://hackmd.io/jlq3C4qbSgurZ_DZ5GTiuw
<smoser> i updated the top hunk of that to use log2dch --hackmd
<smoser> blackboxsw: in your suggestion of less try/except...
<smoser> hard to read in line..
<blackboxsw> ahh excellent on hackmd output
<smoser> but i think that you'd have to pass in a "retries=(1)" in order to get a single run
<smoser> in curtin's subp we do basically:
<smoser>   subp() # first time
<smoser>   for trynum, naplen in enumerate(...):
<smoser>     subp other tries
<smoser> i wanted to avoid the two 'subp' calls
<blackboxsw> ohh shoot, right smoser yeah I only thought through the retries part, forgot the initial :/
<smoser> yeah. so that is why it is as it is.
<blackboxsw> yeah oops, +1
<blackboxsw> was thinking about appending to retries when enumerating, but that's just adding complexity in another place. :/
<smoser> the way its done here is actually a way the read_url logic could have been done
<smoser> the exception_cb stuff.
<smoser> your iterator can be anything, and decide to exit based on other things
<smoser> although the context of the exception is important there.
<blackboxsw> smoser: that makes more sense, I was just thinking a bit too simplistically (heh, and incorrectly) there
<smoser> blackboxsw: i think at one point you suggested dropping the minion integration test
<smoser> is that right ?
<smoser> because all it does is verify we write files that the unit tests already cover
<smoser> i'm looking at
<smoser> https://bugs.launchpad.net/cloud-init/+bug/1778737
<ubot5> Ubuntu bug 1778737 in cloud-init "salt-minion test needs fixing" [Undecided,New]
<blackboxsw> smoser: yeah I mentioned that it really doesn't do much more than unit tests and validate that the minion package was installed.
<blackboxsw> we could tweak it to install salt server on the instance so that we'd actually have something to test (full integration there)
<smoser> the easiest way to fix that specific issue is to just rid ourselves of that test :)
<smoser> i'm not opposed to installing server and configuring minion to talk to localhost
<blackboxsw> smoser: you could drop it for now, I'll file a bug for a real integration test and can assign it to me to resolve
<blackboxsw> or you can assign the bug to me now and I can get it post this SRU. it'd be nice to not have a timing issue affecting CI, I didn't see how frequent the failure was
<blackboxsw> reading the bug
<blackboxsw> yeah I've seen those tracebacks frequently via journalctl. But was able to get a working minion talking to a non-local server fairly easily. it wouldn't be too hard to get that server setup locally to avoid a couple of the issues with trying to look up a salt hostname etc.
<blackboxsw> it didn't involve too much config, if we can write_files  to seed the expected client key
<blackboxsw> ... in the server config
<blackboxsw> smoser: yeah, I'm all for dropping the existing salt-minion test for the moment. it really doesn't do much. and can easily be referenced whenever we get to a better integration test for it
<smoser> blackboxsw: ok. removal on its way
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719
<blackboxsw> approved https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719
<blackboxsw> BTW, having a datasource express different scoped update_event lists it reacts to for 'network' vs 'storage' is making update_metadata and clear_cached_data logic a bit complex, as we now have to determine what cached attributes to clear across an update_metadata refresh. it'll be interesting how this solution shakes out.
<rharper> blackboxsw: hrm, AFAICT there are separate things;  1) what does the cached metadata look like (this should always match what we fetched from the service  2) what does the ds/cloud-init do when (1) is complete and ds is configured to apply the update ?
<blackboxsw> +1 on part 1.    for part 2, right, we need to (today) also set ds._network_config back to UNSET or None to ensure that the updated cached metadata gets propagated to ds.network_config, instead of the cached value there too
<rharper> hrm
<blackboxsw> ... and we need to make sure get_data doesn't arbitrarily clear_cache_data on the _network_config attr by default
<blackboxsw> because that would ensure any subsequent call to ds.network_config would get the 'new' metadata
<rharper> well, I think we talked about decoupling the fetch of new data
<rharper> from updating the ds object attributes
<blackboxsw> correct there too, crawl_data(read)  vs get_data (persist). but in this case get_data doesn't persist the _network_config attribute, that is done within each ds.network_config call.
<blackboxsw> so if ds.update-metadata is called with EventType.BOOT and ds 'network' scope wants to react to that event we perform the following:
<blackboxsw> 1. get_data (which calls crawl_data)  clears the generic ds.cached_attr_defaults and persists new values to them.  2. clear ds._network_config so the next call to ds.network_config (note no underbar) will generate the network config from fresh metadata.
<blackboxsw> if we don't clear ds._network_config on a datasource, then we expect that this datasource would continue to present cached original ds.network_config
<blackboxsw> I should have a diff here locally that I can push in the next few minutes to better explain
<rharper> yes, I see what you're saying;  we may need to let the network_config property be a bit more complex and ask the ds for other states; maybe it could choose whether to render the current value versus re-rendering based on whether the refresh indicated new settings (and we have an event flag saying we need to update)
<rharper> I much prefer the network_config property handler deal with clearing rather than other parts of the object resetting it underneath
<rharper> but let's see what you have in a diff and go from there
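A toy model of the caching question being debated (names mirror the conversation; this is not the real cloud-init DataSource class, and the generated config is a stand-in):

```python
class DataSource:
    """Sketch: network_config is computed lazily and cached; only an
    explicit refresh invalidates it, per rharper's preference that the
    property path owns the cache rather than other code resetting it."""
    UNSET = object()

    def __init__(self):
        self.metadata = None
        self._network_config = self.UNSET

    @property
    def network_config(self):
        if self._network_config is self.UNSET:
            # Stand-in for real config generation from crawled metadata.
            self._network_config = {"version": 2, "source": self.metadata}
        return self._network_config

    def update_metadata(self, new_metadata, clear_network=False):
        """Refresh cached metadata; optionally invalidate network config
        so the next ds.network_config read regenerates it."""
        self.metadata = new_metadata
        if clear_network:
            self._network_config = self.UNSET
```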
<smoser> this is interesting https://jenkins.ubuntu.com/server/job/cloud-init-ci/
<smoser> the "average stage time" of the maas compat test either went up recently or is affected a bunch by the 700ms fail times
<smoser> (it thinks average is 2m 3s, while reality seems more like 3m)
<blackboxsw> ok finally pushed the changes to https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000
#cloud-init 2018-06-29
<smoser> blackboxsw: around ?
<smoser> https://github.com/CanonicalLtd/uss-tableflip/blob/master/scripts/sru-attach-jenkins
<smoser> i inadvertently failed to pass '--dry-run' on that, and attached artful results to
<smoser>  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1777912
<ubot5> Ubuntu bug 1777912 in cloud-init (Ubuntu) "sru cloud-init (18.2-4-g05926e48-0ubuntu1) to (18.3-0ubuntu1)" [Undecided,New]
<smoser> telling you for 2 reasons
<smoser> a.) i can do the same for b and x
<smoser> b.) to let you know of that.
<smoser> hm.. but it seems that the attach failed. file is corrupted.
<blackboxsw> nice smoser on the tooling, excellent, that'll come in handy
<smoser> blackboxsw: i think we might as well run it and attach the things.
<smoser> i'll delete the existing
<smoser> if you use python2 it works
<smoser> https://bugs.launchpad.net/ubuntu/+source/python-launchpadlib/+bug/1425575
<ubot5> Ubuntu bug 1425575 in python-launchpadlib (Ubuntu) "[py3] corrupts binary bug attachments" [Undecided,Confirmed]
<blackboxsw> Good deal
<blackboxsw> smoser: rharper https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000 description updated and comments added
<smoser> blackboxsw: bio ?
 * smoser will be back in a bit to hit submit... need a bio blurb from you . i added a place for it on hackmd
<blackboxsw> ok back up
<blackboxsw> hi I see time's about up..
<blackboxsw> I'm in hangout smoser
<blackboxsw> and you have to leave
<smoser> oh. yeah, i just hit 'submit'
<smoser> and i'm going afk
<blackboxsw> have a good one thank
<blackboxsw> you
<blackboxsw> will hit submit on the 2nd talk by EOD
#cloud-init 2020-06-22
<tribaal> mruffell: as far as I can see it's currently undergoing testing, so it should be relatively soon (you can help by enabling -proposed and checking that it works for your use-case, too!)
<Odd_Bloke> mruffell: Yep, tribaal is correct, we're currently performing validation; I would expect that would be complete this week, but blackboxsw would have a better perspective.
<blackboxsw> correct, mruffell tribaal Odd_Bloke this SRU was a big one to verify as we hadn't SRU'd for 4 months, so lots to verify. We are tracking and adding logs to this process bug  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1881018  and need to knock out the rest of the TODOs. expectation is this week if we don't hit a significant regression in testing.
<ubot5> Ubuntu bug 1881018 in cloud-init (Ubuntu) "sru cloud-init (19.4.33 to 20.2-45) Xenial, Bionic, Eoan and Focal" [Undecided,In progress]
<AnhVoMSFT> @blackboxsw it would be useful to keep a list of what can be automated during your manual verification and have a list of tasks to automate them. If there is anything we can help from the Azure side of things please let us know.
<blackboxsw> AnhVoMSFT: agreed, this sru has been painful. In our retro I believe we will be trying to instrument a new approach to spread the wealth of some of this verification cost by potentially looking at requesting that new features have some integration test coverage under tests/cloud_tests.
<blackboxsw> AnhVoMSFT: I was aware at one point you may have an internal CI running cloud-init's azure cloud_tests, is that still true?
<blackboxsw> and if so, I think it would be excellent if there is an ability to kick off a CI run on Azure's side which added the (xenial|bionic|focal)-proposed pocket and run that test suite https://cloudinit.readthedocs.io/en/latest/topics/debugging.html#manual-sru-verification-procedure.
<blackboxsw> it would be effective to have a successful CI run from any cloud vendor running tox -e citest -- --ppa ppa:cloud-init-dev/proposed
<blackboxsw> that would exercise the cloud-init version under current SRU verification and ensure no feature regression.
<blackboxsw> rharper: do you think this week you'll have a chance for SRU verification tests for either https://git.launchpad.net/cloud-init/commit/?id=46cf23c2  or https://git.launchpad.net/cloud-init/commit/?id=723e2bc1
<blackboxsw> otherwise I think we'll probably grab them in a day or two
<blackboxsw> I think we/I volunteered you for them last week when you thought there might be time, no pressure either way, we are just closing in on release verification
<AnhVoMSFT> @blackboxsw I will look into it. If not I can look into integrating it
<rharper> blackboxsw: yeah, I can get those done
<blackboxsw> excellent rharper thank you sir
<blackboxsw> AnhVoMSFT: also good to hear. I'll get an rtd update together on manual SRU testing too, to suggest the tox -e citest -- --ppa ppa:cloud-init-dev/proposed run for additional SRU verification tasks for a given cloud
<meena> oh nooo… github is dead, or at least it doesn't resolve anymore…
<blackboxsw> lucasmoura: responded on the azure sru verification results PR. thanks. https://github.com/cloud-init/ubuntu-sru/pull/126
<blackboxsw> I think we need to improve this process before our next SRU (which I suppose is in about 2 weeks ;/)
<blackboxsw> so we should be able to trim this verification process a bit so it doesn't sink our ship for 2.5 weeks
<blackboxsw> and addressed comments on consolidated SRU verification scripts https://github.com/cloud-init/ubuntu-sru/pull/128
#cloud-init 2020-06-23
<otubo> Odd_Bloke, quick question on PR 428: Would you like a unittest for kernel_version() function? We already have tests for swap file.
<otubo> Odd_Bloke, oh I see you specified you want both :-) Nevermind.
<Odd_Bloke> :)
<otubo> Odd_Bloke, but I'm not sure I can test the kernel_version function without issuing os.uname() again. I would have to hard-code a kernel version assuming I know the actual kernel version in where the test will run.
<Odd_Bloke> otubo: I noticed this when reviewing your generator PR: https://github.com/canonical/cloud-init/pull/452
<otubo> Odd_Bloke, Oh, didn't see you already had this PR! Apologies :-)
<Odd_Bloke> otubo: Oh, no, I noticed that one of the other systemd files was incorrect when I was grepping around to confirm that your change matched the rest of the templates.  Your fix already landed, this is a different file. :)
<Odd_Bloke> otubo: Regarding testing kernel_version(), I was thinking that you'd mock out os.uname() and test that if you return a few different valid kernel versions (ideally formatted how we would find across a few different distros) it produces the expected tuple.
<otubo> Odd_Bloke, oh nice one, indeed.
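otubo's concern is addressed by patching os.uname, along these lines (a sketch: the real kernel_version lives in cloud-init, and the stand-in below assumes it returns the first two release components as an int tuple):

```python
import os
from types import SimpleNamespace
from unittest import mock

def kernel_version():
    """Stand-in for the function under test (assumed behavior: first two
    numeric components of the uname release, as an int tuple)."""
    return tuple(map(int, os.uname().release.split(".")[:2]))

def test_kernel_version_across_distro_release_styles():
    cases = [
        ("4.15.0-112-generic", (4, 15)),      # Ubuntu-style release
        ("3.10.0-1127.el7.x86_64", (3, 10)),  # CentOS-style release
    ]
    for release, expected in cases:
        # Mock out os.uname so the test never depends on the host kernel.
        fake = SimpleNamespace(release=release)
        with mock.patch("os.uname", return_value=fake):
            assert kernel_version() == expected
```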
<Odd_Bloke> Thanks for the review!
<Odd_Bloke> Oh, huh, I thought I had a committer +1 on https://github.com/canonical/cloud-init/pull/391 but I don't; could someone give me a +1 so I can hit the green button?
<meena> Odd_Bloke: i could but its not worth much
<Odd_Bloke> :)
<Odd_Bloke> Similarly, just looking for a committer +1 here: https://github.com/canonical/cloud-init/pull/452
<otubo> Odd_Bloke, I don't think I'm a committer, am I?
<Odd_Bloke> Nope, that's Chad, Ryan, Scott and myself.
<powersj> ahem
<powersj> Odd_Bloke, +1 on 452
<Odd_Bloke> powersj: Thanks!
<powersj> +1 on 391
<Odd_Bloke> Thanks!
<meena> otubo: Odd_Bloke tried to give me reviewer permissions before, and it didn't pan out
<meena> my â is still grey
<Odd_Bloke> meena: I gave you enough permissions that we can _request_ reviews from you, we still want to have a committer's +1 even on BSD-specific changes.
<meena> aaaaaaahh
<blackboxsw> sorry lucasmoura just pushed that missing release -> UBUNTU_RELEASE commit to the sru test refactor https://github.com/cloud-init/ubuntu-sru/pull/128 and merged your RandomPassword validatrion
<blackboxsw> validation rather
<falcojr> whats the point of these "ssh "${SSHOPTS[@]}" ubuntu@$VM_IP -- grep Trace /var/log/cloud-init.log" lines in the manual verification?
<falcojr> there's no trace in the logs so the script is failing
<blackboxsw> falcojr: generally with set -ex in the script we should have made that line "ssh "${SSHOPTS[@]}" ubuntu@$VM_IP -- ! grep Trace /var/log/cloud-init.log"
<blackboxsw> sorry if I switched the script to set -ex, I should have adapted those lines to expect no matches
<blackboxsw> or fail quickly if they hit it
<blackboxsw> note the bang "!" above
<falcojr> gotcha. I saw some with a bang and some without, so was wondering
 * blackboxsw looks over the refactor cloud tests and fixes that throughout
<blackboxsw> gce and azure scripts it seems
<rharper> blackboxsw: btw, this swap: auto for swapfile is a PITA, all of my lxd deployments use ZFS and ZFS does not support swapfiles ... so having to go the VM route;  lxd vms IIRC don't have cloud-images for Xenial (right?) so that'll have to be a one-off to get a xenial cloud-image up; should have a PR up for that today
<blackboxsw> gce/softlayer I mean, though I fixed softlayer in the PR https://github.com/cloud-init/ubuntu-sru/pull/130
<Odd_Bloke> I think we've figured out that you can launch xenial VMs but the lxd agent won't work without the HWE kernel so you can't use any of the `lxc` commands to interact with it.
<Odd_Bloke> Someone else please confirm though, we've been learning a lot over the past couple of days so I might have got mixed up. :p
<blackboxsw> rharper: sorry, right so falcojr was using ssh into the xenial lxc container instead of lxc exec specifically for xenial https://github.com/cloud-init/ubuntu-sru/pull/129
<blackboxsw> strange thing rharper is that paride was still able to lxc launch ubuntu-daily:xenial --vm  --profile <vm_profile> yesterday I thought (and I know I was able to do the same last week, but can no longer do that today)
<rharper> well, I think I'm going to add an ext4 loop device as a storage pool to lxd
<blackboxsw> rharper: trusty via lxc launch --vm is an official "not gonna do it" as far as images. Xenial has some support, and bionic+ is good to go
<rharper> then file-based swaps will work and I can just use the normal lxc path
<falcojr> @rharper I tested some other swap srus
<falcojr> I can take this one too if that's easier
<rharper> I've got the test written; just a matter of running  it on all of the releases
<falcojr> kk
<blackboxsw> falcojr: pushed the "! grep Trace" commit  into sru/refactor-manual-clouds
<blackboxsw> o/  sparkiegeek
<sparkiegeek> \o blackboxsw
<sparkiegeek> I'm trying to understand how MAAS needs to send user-data to cloud-init for the machines it boots. AFAICT MAAS is currently base64 encoding a script, and sending it as application/octet-stream but I might be misunderstanding
 * blackboxsw is peeking through maas code again to see how cloud-config userdata is specified directly to cloud-init.  Generally I believe providing any cloud-init userdata types should be acceptable and interpreted correctly by cloud-init https://cloudinit.readthedocs.io/en/latest/topics/format.html
<blackboxsw> In maas case, I know that cloud-init gets some user-data, or at least vendor data, through a curtin config passthrough that maas provides (in order to make sure curtin and cloud-init know how to talk back to the maas proper log/event endpoints). But I don't recall exactly how maas passes cloud-init user-data specifically (whether through curtin config or the maas datasource).
<blackboxsw> ahh user-data for maas comes from the maas seed url. to the instance.
<blackboxsw> so generally, if that user-data exposed to the instance is a supported cloud-init user-data format  (link above). Cloud-init should be able to properly interpret it. I see in the MAAS datasource, that cloud-init will attempt to decode the content of user-data url if present. It does expect user-data to be binary encoded content.
<blackboxsw> https://github.com/canonical/cloud-init/blob/master/cloudinit/sources/DataSourceMAAS.py#L22-L27
<blackboxsw> sparkiegeek: not sure if that is what you were looking for as far as context
<sparkiegeek> blackboxsw: thanks, that's plenty to chew over
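The prefix detection mentioned above (cloud-init keying off the first line of decoded user-data) can be sketched as follows; the mapping is illustrative, not cloud-init's actual handler registry:

```python
def classify_user_data(blob: bytes) -> str:
    """Rough sketch: recognize a user-data payload by its first line,
    after decoding the raw bytes a datasource (e.g. MAAS) hands over."""
    text = blob.decode("utf-8", errors="replace").lstrip()
    prefixes = {
        "#!": "user-script",
        "#cloud-config": "cloud-config",
        "#include": "include-url-list",
        "#cloud-boothook": "boothook",
        "#part-handler": "part-handler",
    }
    # Check longer prefixes first so "#include" wins over "#!"-style checks.
    for prefix, kind in sorted(prefixes.items(), key=lambda kv: -len(kv[0])):
        if text.startswith(prefix):
            return kind
    return "unknown"
```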
 * blackboxsw merged ubuntu-sru verification branch https://github.com/cloud-init/ubuntu-sru/pull/128 will adapt/iterate on the softlayer results.
<blackboxsw> falcojr: I think I handled your initial review comments (and probably expect there may be a couple of extra things you run across in your GCE test run)
<rharper> sparkiegeek: you may also be interested in the curtin handling of maas provided cloud-config, https://git.launchpad.net/curtin/tree/curtin/commands/curthooks.py#n1344
<rharper> fun fact:  eoan, bionic, and xenial never had the bug around swap: {size: auto};  it was a regression introduced with the fix for fallocate, on Jan 23; 6603706eec1c39d9d591c8ffa0ef7171b74d84d6 ; it only regressed focal;  blackboxsw  so, for SRU verification is it OK to only verify focal ?
<blackboxsw> good pt. rharper yes focal only
<blackboxsw> given we never introduced the bug in the first place.... thank goooodness for not SRUing for 6 months ;)
<blackboxsw> ohh wait carry that cost forward :/
<rharper> hehe
<rharper> I spent too long trying to figure out why I couldn't recreate on eoan, thinking it was lxd vm related; turns out not a bug in 19.4-33
<rharper> blackboxsw: https://github.com/cloud-init/ubuntu-sru/pull/132/files
<blackboxsw> on it rharper
<rharper> you also mentioned, https://git.launchpad.net/cloud-init/commit/?id=723e2bc1  (nfs style paths) ;  is that still open/needed ?
<blackboxsw> rharper: yep in the hidden trello board, still has your gravatar on it :)
<blackboxsw> falcojr: you didn't grab the above commit for verification did you? ^
<falcojr> blackboxsw rharper , no I have not grabbed that one
<rharper> blackboxsw: falcojr: ok, I'll take that one next
<blackboxsw> lucasmoura: a followup change request on https://github.com/cloud-init/ubuntu-sru/pull/122#pullrequestreview-436136291
<blackboxsw> falcojr: you are on gce next for SRU verification right? and you don't plan on oracle?
<blackboxsw> if not oracle , then I'll grab it (and re-find my credentials)
<lucasmoura> blackboxsw, ack
<falcojr> blackboxsw yes, I'm doing GCE (should have it done soon)...I could jump or Oracle too
<blackboxsw> falcojr: I'll wrap up SRU reviews and ping you to when I'm done to see where we are both at.
<falcojr> sounds good
<meena> Odd_Bloke: how do i take on one of those cloudinit/net refactor bugs?
<meena> never mind, i'm too tired. lol
<blackboxsw> heh meena, just assign yourself when interested
<blackboxsw> meena: Odd_Bloke was planning on filing bugs for each aspect of the net refactor and we can have devs assign themselves to each bug and mark it in progress
<blackboxsw> as they carve out the items
<blackboxsw> I'm on https://github.com/cloud-init/ubuntu-sru/pull/133 falcojr for gce verification. minor request for comment on https://github.com/cloud-init/ubuntu-sru/pull/129 to resolve and we can land that
<falcojr> Sounds good
<blackboxsw> lucasmoura: https://github.com/cloud-init/ubuntu-sru/pull/131#discussion_r444497107 I'm not sure what you are asking. should I add the content of sru-vars.template to the verification output for softlayer ?
<blackboxsw> or something else
<lucasmoura> blackboxsw, no no. It is just that in the other cloud providers' tests we include the shell script that generated the output in the report as well, like this for example https://github.com/cloud-init/ubuntu-sru/blob/master/manual/ec2-sru-20.2.45.txt#L1
<blackboxsw> ahh ahh, I see. yes I will put that in there. an oversight thanks
<blackboxsw> I thought you were specifically asking about including sru-vars.template content alone
<blackboxsw> didn't realize I left it out
<blackboxsw> all of it
#cloud-init 2020-06-24
<Odd_Bloke> rharper: Did you see my reply to your comment: https://github.com/canonical/cloud-init/pull/453/files#r444872750 ?
<Odd_Bloke> (That's going to conflict with the next networking PR I want to put up, so I'd like to resolve it sooner rather than later, if possible. :)
<rharper> Odd_Bloke: sorry, not yet;  today's crazy at the house; I'll look at it shortly
<meena> Odd_Bloke: i think i'll need to see a sample PR doing one of the functions before i can sensibly contribute…
<meena> i expect that starting mid-august when my daughter's in child-care, so i hope i'll have more… brain left
<Odd_Bloke> meena: I'm not proposing it yet (because it conflicts with the PR I'm discussing with Ryan), but I've pushed up an example that you can take a look at: https://github.com/OddBloke/cloud-init/compare/net...net3
<Odd_Bloke> meena: Obviously that hasn't been reviewed at all, so I would still hold off on using it as a template for now.
<meena> Odd_Bloke: i'm surprised that is_physical isn't used anywhere in cloudinit/net/__init__.py https://github.com/OddBloke/cloud-init/compare/net...net3#diff-ce643f3a459f3fe8a4a910905190bf77
<meena> also, wow, this is… https://github.com/OddBloke/cloud-init/compare/net...net3#diff-a708ab28fe2a7127c07c0800289b6bd4R68-R76 wow.
<meena> Odd_Bloke: so, basically, after the first thing goes thru, most consumers will have a distro parameter where it's needed and for those that don't, we can use that first PR as example.
<rharper> Odd_Bloke: commented on the PR and the bug
<meena> rharper: which ones?
<rharper> https://github.com/canonical/cloud-init/pull/453/files#r444872750
<meena> ah yeah, i keep thinking that all important PRs / issues are the ones in which i'm "participating" in on github
<rharper> hehe, it doesn't help that there are a lot of threads on this refactor;  thank you for working on it
<Odd_Bloke> rharper: Thanks!  Would you be able to give me a +1 there too?
<rharper> oh, right,
<rharper> yes
<Odd_Bloke> Thanks!
<meena> rharper: if by working on it, make Odd_Bloke work on it, then, yes, you're perfectly welcome.
<meena> if i had managed to make him do this a year or two ago, we could've been long done already
<blackboxsw> falcojr: or lucasmoura for SRU review if you get a chance https://github.com/cloud-init/ubuntu-sru/pull/134. either one of you
<blackboxsw> it's netplan vs eni prioritization on x b e f
<blackboxsw> and I'm hitting SRU review queue
<blackboxsw> falcojr: and lucasmoura and if you approve any of the PRs in ubuntu-sru or qa-scripts or uss-tableflip repos, I think you have commit rights, so feel free to squash merge that stuff in github UI if you want.
<blackboxsw> thanks for all the reviews
<blackboxsw> man all those merges feel good though
<Odd_Bloke> meena: OK, you can do some work and review https://github.com/canonical/cloud-init/pull/457 ;)
<Odd_Bloke> (If you have time, ofc!)
<meena> i have time, i'll checkout your pr, and see if i can contribute a BSD function too ;)
<Odd_Bloke> Thanks! :)
<meena> Odd_Bloke: somebody in #freebsd helped me come up with a way to list all the physical interfaces: http://ix.io/2q5e
<meena> if you're thinking, isn't this the worst thing i've ever seen, well, yeah, this is unix, baby!
<chillysurfer> i remember last year somebody from upstream showing me scripts that are used for sru validation on cloud providers? i think it was on github
<chillysurfer> anybody know where that could be?
<meena> chillysurfer: sru validation is happening rn
<meena> chillysurfer: https://github.com/cloud-init/ubuntu-sru/ ?
<blackboxsw> thanks meena, and that's the right repo
<blackboxsw> I think we are wrapping up the last cloud SRU validations now. expectation is to finally publish tomorrow
<rharper> blackboxsw: I didn't yet get to the nfs mount spec verification;  did that get covered?
<blackboxsw> rharper: not yet, but if need be we can grab it.
<blackboxsw> I think the last remaining issue is oracle SRU, an email ping to VMware, and the nfs
<rharper> blackboxsw: I can work on it but I don't want to hold you up if you needed it done by tonight for tomorrow release
<chillysurfer> awesome thanks!
<blackboxsw> ok lucasmoura done https://github.com/cloud-init/ubuntu-sru/pull/134 with re-run of netplan vs eni test
<lucasmoura> blackboxsw, approved
<blackboxsw> rharper: we are still waiting on a CDO qa run to compete. job is queued, but that and Oracle SRU verification are last remaining bits
<blackboxsw> complete even
<rharper> ok
<rharper> blackboxsw: hrm; a quick test with nfs and the default cloud images don't have an nfs client installed, so mounting something with a colon for the remote system is complicated locally; what do we want to do? The original bug mentions EC2 nfs (which is type efs or something and maybe the ec2 image has correct client package support in it)
 * blackboxsw is reading up on efs setup
<blackboxsw> in aws
 * blackboxsw is looking at https://docs.aws.amazon.com/efs/latest/ug/wt1-create-efs-resources.html
<blackboxsw> looks like their docs are for non-ubuntu (yum install nfs-utils) etc. so image doesn't have utils baked in
<rharper> *sigh*
<rharper> ok, so looks like it's reboots and cloud-init clean
<rharper> I can do that
<rharper> what a PITA
<blackboxsw> yeah, pita. rharper we are going to talk about and float a new SRU verification process to the mailing list I think after a retrospective on this round of fun.
<blackboxsw> so watch for that email/proposal folks because we want to reduce the cost of this effort if we can without sacrificing quality.
<rharper> indeed
#cloud-init 2020-06-25
<hipolito> Hi, I'm trying the nocloud config drive example from the docs, but it doesn't work at all. Am I missing something?
<hipolito> sadly I can't log into the image, it doesn't have any user/pw
<meena> hipolito: let's start with a silly question: does your cloud support NoCloud?
<hipolito> meena: I'm trying locally on virtualbox, attaching the seed iso
<hipolito> I'm following the example from here: https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
<hipolito> the only thing I can see on the console is "source ...NoCloud failed"
<meena> hipolito:   okay, so we can dig into logs then
<hipolito> meena: sadly the image doesn't have any user or password, I can't log in without cloud-init injecting ssh keys
<meena> oh. ooohhhh
<hipolito> is there a kernel parameter on cloud-init so it prints more logs to standard output? I think systemd will print it out to the console
<meena> that i don't know, and I'm only on my phone so looking thru the source code is a bit tricky
<hipolito> no worries, I'll figure a way to log in somehow
<Odd_Bloke> meena: :D
<meena> there's been an update from #NetBSD, and it won't work on OpenBSD either, and won't cover all edge cases (on freebsd?) but we could use that simpler version
<meena> our OpenBSD support is very spotty right now anyway
<meena> We should probably move ifconfig from netinfo(?)
<meena> or is it just duplicated in netinfo?
<Odd_Bloke> meena: We don't need a single implementation that works for all BSDs; we can add {Free,Open,Net,Dragonfly}BSDNetworking subclasses and have specific implementations in each of those.
<meena> basically, ifconfig -C is needed everywhere and can live in BSD base ass
<meena> "what instrument do you play?" - "the base ass"
<meena> i wonder if there are any circumstances under which `ifconfig -C` output would change over the runtime of a machine… like if we load different kernel drivers
<rharper> Odd_Bloke: blackboxsw:  on the nfs mount bug, https://bugs.launchpad.net/cloud-init/+bug/1870370 ;  We never captured a log with an error message.  I've tested bionic and focal images from daily; and fstab always has the correct entry present;  the messages mentioned are present in the cloud-init log, but they do not prevent the entry from being added to fstab;  the stacktrace related to the call to mount -a is due to the lack of an
<rharper> nfs client ;  once one installs nfs-common; the remote mount succeeds;  so; this AFAICT this was never an actual bug (in practice);
<ubot5> Ubuntu bug 1870370 in cloud-init "cloud-init doesn't support NFS in mounts" [Undecided,Fix released]
<blackboxsw> ahh geez
<blackboxsw> so rharper was the bug really that if we detect is_network_device(path) cloud-init should be installing nfs-utils?
<rharper> blackboxsw: well, I don't know;  the original submitter thought that the issue was the error message of 'ignoring entry'
<rharper> and maybe there still is a bug w.r.t saying we ignore mount entries and *still* put them in fstab
<rharper> At this point, AFAICT, there never was an issue with using NFS mounts, other than the error message it displayed ... the reason the mount -a fails is the missing nfs client;  the bug fix applied now does not emit the error message and nfs entries are considered "sanitized";
<rharper> for nfs, we could install the client; prior to running mount -a ...  not sure for other remote filesystems ; doing such a thing is a non-trivial feature;  and thus far, I suspect users have been rolling their own image (or doing everything in their own runcmd to handle client installs and updates to fstab
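A hedged sketch of what that could look like in user-data (package name assumes Ubuntu/Debian; `bootcmd` runs in an earlier boot stage than the mounts module, though relying on networking/apt that early is an assumption):

```yaml
#cloud-config
# Sketch only: install the NFS client before the mounts module runs
# `mount -a`, so the remote mount can succeed on first boot.
bootcmd:
 - [ cloud-init-per, once, nfs-client, apt-get, install, -qy, nfs-common ]
mounts:
 - [ "server:/export", /mnt/nfs, nfs, "defaults,nofail", "0", "0" ]
```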
<rharper> w.r.t the ec2 efs;  not clear to me what client is needed in the image for efs;
<rharper> w.r.t the SRU;  I can verify that the error message is no longer emitted ...  and there's no regression in cloud-init behavior; so I don't think this disrupts the SRU at all; but the bug/fix is misleading in that it never was the barrier to enabling nfs mounts on first boot
<rharper> blackboxsw: let me know how you want to proceed
<blackboxsw> agreed, rharper . I think that is a sound approach for now. We'll ping the submitter on the bug and ask for confirmation at their convenience
<blackboxsw> we can have them re-open the bug if they hit it again and fill in more details (as well as cloud-init logs)
<blackboxsw> I think we can proceed with your verification of no regression in current behavior
<rharper> blackboxsw: alright, I'll just verify the error message is not present any more
<meena> i dunno folks, everybody should know what kind of environment they're putting an image into. if you're gonna need nfs, and you don't install nfs-utils into the image before putting it to use, then i don't know what to say
<meena> (and since i don't even know what ec2 efs is, I'm not gonna say anything about that)
<meena> we could check if the fs is supported, and error out 🤷
<haderach> Hello! How to connect a local instance of cloud? I got the shell in the machine and instance id.
<meena> hrmâ¦ checking if an fs is supported is probably as complex as installing support for it, aaaaand, highly distro specific.
<haderach> Is running in Ubuntu 20.04
<falcojr> blackboxsw : in doing the Oracle SRU, I'm seeing two tracebacks
<falcojr> https://paste.ubuntu.com/p/tms3rBfsW3/
<falcojr> it looks like in older SRUs, two tracebacks were found, but we didn't detail them, so it's probably the same thing, but curious if you know what they are
<falcojr> this is for bionic
<falcojr> also /etc/netplan is empty which wasn't the case for older SRUs
<falcojr> but it's empty when I launch the instance, and empty after I installed proposed and reboot
<blackboxsw> falcojr: the cloudinit.url_helper.UrlError: 404 Client Error: Not Found for url: http://169.254.169.254/latest/meta-data/ is known on Oracle, older series don't use the proper src/cloud-init/cloudinit/sources/DataSourceOracle.py   This was a cloudimage feature that we need to resolve with CPC team internally at some point. So that is known, the openstack datasource hits urls that Oracle IMDS doesn't actually
<blackboxsw> support.
<blackboxsw> the other trace is related to network already being up when trying to run EphemeralDHCP, which I think is ok because that means the network is already active because of iscsi root on Oracle during initial  datasource detection time in local timeframe. We probably aren't going to resolve this for Oracle specifically in the OpenStack datasource, because oracle should be using DataSourceOracle which checks for
<blackboxsw> iscsi_root first https://github.com/canonical/cloud-init/blob/master/cloudinit/sources/DataSourceOracle.py#L194-L198
<blackboxsw> falcojr: does Oracle have a focal series image yet?
<falcojr> yes
<falcojr> I didn't have issues with the focal image
<blackboxsw> did it detect Oracle datasource vs OpenStack
<falcojr> ```
<falcojr> {
<falcojr>  "v1": {
<falcojr>   "datasource": "DataSourceOracle",
<falcojr>   "errors": []
<falcojr>  }
<falcojr> }
<falcojr> ```
<falcojr> lol...my brain can't keep two chat systems separate now
<falcojr> any idea about the missing /etc/netplan config?
<blackboxsw> falcojr: that's something concerning I think.   this is bionic right? grep renderer /var/log/cloud-init.log
<falcojr> returns nothing
<blackboxsw> yet all of the other fetches of  http://169.254.169.254/latest/ are working and `ip addr show` list valid active addresses on the network interfaces
<falcojr> yes, I can pull down metadata and reach out to the internet...ip a shows 10.0.0.24 on ens3 and loopback
 * blackboxsw relooks at the last oracle SRU runs. it may be worth putting up the full in progress PR . generally older Oracle SRUs did render /etc/netplan/50-cloud-init.yaml so something didn't fire on t
<blackboxsw> the instance.
<falcojr> alright, I'll put up the full text in the PR and we can take a look there
<falcojr> thanks
<blackboxsw> probably want to attach the logs too. Yeah sorry. and as you mentioned this is probably generally a known condition (as each oracle SRU had 2 tracebacks in logs)
<blackboxsw> just the netplan file not being present seems amiss
<blackboxsw> also may want to grep  'Writing to /etc/net' /var/log/cloud-init.log to see if it rendered /etc/network/interfaces or /etc/netplan etc
 * blackboxsw has to step away for kid lunch prep for a few
<meena> blackboxsw: o/~
<rharper> blackboxsw: https://github.com/cloud-init/ubuntu-sru/pull/135
<meena> Odd_Bloke: i'm trying to contribute is_physical for BSD to your PR, and it would need to pull in get_interfaces_by_mac() (the underlying function that the BSDs use for get_interfaces() / get_devicelist())
<Odd_Bloke> meena: Are you able to call `self.get_interfaces_by_mac()`?
<meena> Odd_Bloke: hrm, so, the "problem" is that get_interfaces_by_mac() is a lot more output and parsing, and we only need it on OpenBSD, since ifconfig -l is all we want, but OpenBSD doesn't have that.
<meena> i think i should probably start with FreeBSD (and NetBSD) and then do openbsd (or let someone who cares about OpenBSD lol have a go)
<falcojr> blackboxsw PR here: https://github.com/cloud-init/ubuntu-sru/pull/136
<Odd_Bloke> meena: Hmm, I'm not sure I'm following along, I'm afraid.
<meena> Odd_Bloke: ifconfig -l output:
<meena> meena@fbsd12-1:~ % ifconfig -l
<meena> vtnet0 lo0
<Odd_Bloke> meena: Bear in mind that we can have a `BSDNetworking.is_physical` and an `OpenBSDNetworking.is_physical` (and a PR which just has the latter raise NotImplementedError would be perfectly acceptable: it's still an improvement).
<meena> or, or an actual server: vtnet0 lo0 bridge0 vnet0:1 vnet0:2
<meena> aye.
<Odd_Bloke> meena: (And, in fact, separate PRs for the two separate implementations would be much easier to review, too. :)
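The subclass layout discussed above could look roughly like this (class names and the `is_physical` heuristic are illustrative, not cloud-init's actual hierarchy):

```python
import subprocess

class BSDNetworking:
    def get_devicelist(self):
        # `ifconfig -l` (FreeBSD/NetBSD) prints interface names on one line.
        out = subprocess.check_output(["ifconfig", "-l"], text=True)
        return out.split()

    def is_physical(self, devname: str) -> bool:
        raise NotImplementedError

class FreeBSDNetworking(BSDNetworking):
    def is_physical(self, devname: str) -> bool:
        # Toy heuristic: treat common virtual-interface prefixes as non-physical.
        return not devname.startswith(("lo", "bridge", "vnet", "tap", "epair"))

class OpenBSDNetworking(BSDNetworking):
    # No `ifconfig -l` on OpenBSD; leaving is_physical to raise
    # NotImplementedError is an acceptable first step, per the discussion.
    pass
```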
<blackboxsw> thanks gents for the PRs
<blackboxsw> falcojr: ahh oracle has disabled networking :)  2020-06-25 18:58:05,467 - stages.py[DEBUG]: network config disabled by system_cfg
<blackboxsw> sooo, yes that would be expected that cloud-init doesn't emit /etc/netplan :)
<blackboxsw> falcojr: and I believe that is due to the fact that network config is setup for iscsi root
<meena> Odd_Bloke: aye.
<blackboxsw> falcojr: to confirm that the machine is iscsi root you can python3 -c 'from cloudinit.net.cmdline import read_initramfs_config; print(read_initramfs_config())'
<blackboxsw> that should give you iscsi_root network configuration I believe
<falcojr> None
<blackboxsw> falcojr: at least that's what the Oracle proper datasource uses to confirm netcfg is up. also worth confirming that focal on Oracle doesn't emit the log "network config disabled by system_cfg"
<blackboxsw> but I think that's implied (the focal log check)  because you said focal rendered /etc/netplan/*cloud-init.yaml
<falcojr> right
<falcojr> so should I just update the procedure with a different comment as to why I'm || true there, remove the logs, and call it a day?
<blackboxsw> falcojr: I think so. if we have to sort more we can do it in review.
<blackboxsw> and it looks like we'll have a little time on this because solutionsQA verification run still isn't started, it has 4 CI jobs queued in front of it (which may take 8 hrs each).
<falcojr> sounds good
<blackboxsw> so we are in the camp of waiting  on CI approval for cloud-init SRU until that solutionsQA test run is actually executed.
<blackboxsw> despite being queued a couple days ago
<blackboxsw> falcojr: paride finally have a cloud-config fix that avoids the lxc console <VM> interaction to fix lxd on launch
<blackboxsw> https://paste.ubuntu.com/p/54WcQWrn4H/
<blackboxsw> I might even be able to simplify more
<blackboxsw> by adding that vendor data to the vm profile
<blackboxsw> rharper: too ^
<blackboxsw> sorry will wrap up your remaining sru PRs today
<blackboxsw> yep profile https://paste.ubuntu.com/p/pxtbd4fjph/
<falcojr> great!
<blackboxsw> ok lucasmoura I'm going to rework the ua-client PR for vm support
<lucasmoura> blackboxsw, ack. I have reviewed it this afternoon, but I just had a couple of minor comments
<rharper> blackboxsw: lemme look
 * rharper has been rather annoyed at lxd --vm 
<rharper> also, super not happy about the lxd agent not working in ubuntu-daily;$release ; and then the images:ubuntu/$release/cloud  which is not an official cloud image, but something else;  also has no ssh server installed.
 * rharper finish mini rant 
<rharper> blackboxsw: I see, your comment in the second paste is most helpful; ISTR there was some issue with the reboot needed due to difficulty wrangling systemd units starting soon enough
<blackboxsw> rharper: I think it's just that the systemd units we add don't start properly without the reboot
<rharper> yes
<blackboxsw> the install.sh run comments about avoiding the reboot. but it failed when I tried
<blackboxsw> I'm following this https://discuss.linuxcontainers.org/t/running-virtual-machines-with-lxd-4-0/7519
 * blackboxsw has to head on an errand for a few
<rharper> nice
<rharper> blackboxsw: , the install says you can skip the reboot; To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent
#cloud-init 2020-06-26
<fangwen> hello everyone
<fangwen> I want to commit my code, but the CLA check always fails; who can help me? thanks!
<Odd_Bloke> Hey folks, I'm trying to run the integration tests locally to test a refactor, and I'm running into this error (even in trunk): https://paste.ubuntu.com/p/5z364gWqVG/
<Odd_Bloke> My command line is `tox -e citest -- run --verbose --preserve-data --data-dir results --os-name xenial --test modules/ntp.yaml --preserve-instance`.
<Odd_Bloke> (I'm trying a focal run right now.)
<Odd_Bloke> Any ideas what might be going on?
<compufreak> Is this what the kernel cmdline should look like for nocloud-net? `linuxefi /vmlinuz-3.10.0-1062.4.3.el7.x86_64 root=/dev/mapper/centos_centos--15388761-root ro crashkernel=auto rd.lvm.lv=centos_centos-15388761/root rd.lvm.lv=centos_centos-15388761/swap rhgb quiet LANG=en_US.UTF-8 ds=nocloud-net;s=http://hyperv01:5000`
<compufreak> It doesn't seem to be working unless there's something else I need to do with cloud-init besides just installing it (it is running but not using nocloud-net)
<Odd_Bloke> compufreak: Are you able to access the instance?  If so, a pastebin of /var/log/cloud-init.log would be really handy.
<compufreak> & log https://pastebin.com/qnkpKFdV
<Odd_Bloke> Hah.
<compufreak> The datasource seems to work `[root@vmcent77template log]# curl http://hyperv01:5000/meta-data/hostname; echo ""vmcent77cloud-init-test`
<compufreak> & full grub.cfg https://pastebin.com/Wv08n2v1
<Odd_Bloke> compufreak: To confirm: does the DS configuration show up in /proc/cmdline?
<compufreak> oof # cat /proc/cmdlineBOOT_IMAGE=/vmlinuz-3.10.0-1127.13.1.el7.x86_64 root=/dev/mapper/centos_centos--15388761-root ro crashkernel=auto rd.lvm.lv=centos_centos-15388761/root rd.lvm.lv=centos_centos-15388761/swap rhgb quiet LANG=en_US.UTF-8 ds=nocloud-net
<compufreak> does it need quotes?
<Odd_Bloke> I'm not sure, I'm afraid.
<Odd_Bloke> But I would try backslash-escaping the semicolon, perhaps?
<falcojr> blackboxsw: that pastebin from yesterday for lxd VMs on xenial, where does "/var/lib/cloud/scripts/per-once/setup-lxc.sh" come from?
<compufreak> `grubby --update-kernel=ALL --args='ds=nocloud-net\\;s=http://hyperv01:5000'` so it needed double-escaped
<Odd_Bloke> compufreak: And you're seeing it work now?
<compufreak> it was also missing a trailing / on the url. It's sort of working--it pulls meta-data and user-data. I think I have a problem with my user data tho, ha
<Odd_Bloke> Ah yeah, I've seen people being caught out by the trailing / thing before; it's there to enable e.g. s=http://example.com/my- to fetch http://example.com/my-metadata and .../my-user-data.
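The trailing-slash behaviour can be sketched like this (`seed_urls` is an illustrative helper, not cloud-init's actual function; the point is that the seed is a plain string prefix, not a directory join):

```python
def seed_urls(seed: str) -> list:
    # NoCloud appends the file names directly onto the s= value.
    return [seed + name for name in ("meta-data", "user-data")]

# With the "my-" prefix example from above, the prefix still works;
# without a trailing '/', "http://hyperv01:5000" + "meta-data" would
# yield "http://hyperv01:5000meta-data", which is why the '/' matters.
urls = seed_urls("http://example.com/my-")
```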
<compufreak> yup, last problem was powershell defaults to utf-16le but my python meta-data server was reading as utf-8...
<blackboxsw> falcojr: ahh that is the write_files b64 content in #cloud-config in the vm profile      - path: /var/lib/cloud/scripts/per-once/setup-lxc.sh
<blackboxsw> falcojr: so that decoded script is this: https://paste.ubuntu.com/p/fgTpfhC2Sf/
<blackboxsw> I just ran it through "base64 myscript.sh"  and dropped that into content of the user.vendor-data config provided
<blackboxsw> so the lxc profile provides the user.vendor-data #cloud-config lines 6-22 https://paste.ubuntu.com/p/pxtbd4fjph/ , which still sort of allows us to provide normal user-data cloud-config if we want (though user-data would override the write_files content or power_state
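The base64-into-write_files step described above can be sketched in Python (the script body here is a placeholder; the path comes from the paste, and the exact profile wiring is an assumption, though `write_files` with `encoding: b64` is a real cloud-init feature):

```python
import base64

# Placeholder script body; the real one lives in the paste above.
script = "#!/bin/sh\necho setup-lxc placeholder\n"
b64 = base64.b64encode(script.encode()).decode()

# Hypothetical user.vendor-data #cloud-config mirroring the profile snippet.
vendor_data = f"""#cloud-config
write_files:
- path: /var/lib/cloud/scripts/per-once/setup-lxc.sh
  encoding: b64
  permissions: '0755'
  content: {b64}
"""
```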
<blackboxsw> rharper: per "To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent"   yeah I tried that yesterday without the reboot and systemd blew up and didn't like that for either service directly run
<blackboxsw> directly started
<blackboxsw> I'll try again today with a fresh cup of coffee and see if I can avoid the costly reboot
<rharper> blackboxsw: hrm, I've had two experiences
<rharper> first, I skipped the reboot and manually ran them; all was fine;  then I adjusted my script to just start these services as it mentions;  and it rebooted the instance anhow
<rharper> anyhow
<rharper> wasn't sure if that was the agent's doing or something else.
<blackboxsw> though today it looks like I need to resolve some zpool storage issues https://paste.ubuntu.com/p/vptTmFH9Zf/
<blackboxsw> rharper: I think I saw that automatic reboot while restarting the service manually as well.
<rharper> ok
<blackboxsw> I mistakenly thought it was my cloud-config hitting the power_state: mode: reboot
<rharper> yeah, I'm testing without that
<rharper> blackboxsw: this is what I've ended up with;  https://paste.ubuntu.com/p/Rb9qRBq5nN/
<blackboxsw> rharper: and that also reboots the system right?
<rharper> yeah
<blackboxsw> at least that's what I keep seeing when I run systemctl start lxd-agent-9p lxd-agent
<rharper> it's part of the agent I think
<blackboxsw> ok yeah
<blackboxsw> wfm
<rharper> =(
<blackboxsw> yeah costly
<rharper> I mean for other systems sure; for cloud-init it's annoying to see the two boots;
<blackboxsw> though rharper were you testing on bionic
<rharper> we don't have to use the the agent though; instead you can query lxd for the IP;
<blackboxsw> or xenial. my xenial agent hasn't come back up
<rharper> blackboxsw: yeah, the focal image has the agent in it
<rharper> bionic that works fine, I can test on xenial
<blackboxsw> yeah I think I still run into the issue on xenial with that approach. but adjusting my profile
<blackboxsw> once cloud-init SRU actually publishes, we can use jinja template in the vm profile
<blackboxsw> and only add the vendor data on bionic/xenial
<rharper> oh, right, needs to be vendor data
 * rharper fixes local profile 
<rharper> blackboxsw: what's the lxd key for that user.vendor-data ?
<blackboxsw> yep
<rharper> ok
<blackboxsw> rharper: also I think there's a bug to stock lxd --vm
<blackboxsw> vendor-data and user-data both default to "#cloud-config\n"
<blackboxsw> which causes simple tracebacks
<blackboxsw> should be "#cloud-config\n{}\n"
<blackboxsw> otherwise cloud-init balks at trying to pop None
<blackboxsw> cloud-init bug I suppose to better handle non case
<blackboxsw> None case
<blackboxsw> so if you are editing a profile, might as well set user.vendor-data and user-data to "#cloud-config\n{}" to avoid that trace in logs
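The empty-mapping point can be demonstrated directly, assuming PyYAML is available (cloud-init parses cloud-config as YAML; this is a toy illustration, not cloud-init's code path):

```python
import yaml  # PyYAML, assumed available

# A bare "#cloud-config\n" document is all comment, so YAML parses it to
# None; downstream code that expects a dict (e.g. cfg.pop(...)) then
# fails. Adding "{}" yields an empty mapping instead.
bare = yaml.safe_load("#cloud-config\n")
fixed = yaml.safe_load("#cloud-config\n{}\n")
assert bare is None
assert fixed == {}
```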
<rharper> blackboxsw: did you file an issue ?
<rharper> I've seen that as well
<blackboxsw> +1 rharper I'll file it now.
<blackboxsw> btw, your script is +1 for me on xenial
<blackboxsw> reboot was triggered
<blackboxsw> by the lxd-agent service I think. (not a cloud-config setting)
<rharper> looks like on xenial the virtio-vsock module isn't present/loaded
<rharper> # /run/lxd_config/9p/lxd-agent
<rharper> Error: Failed to listen on vsock: listen vsock: open /dev/vsock: no such file or directory
<rharper> blackboxsw: yeah, I think the agent on stock Xenial is not going to happen until vhost_vsock module is built for 4.4 ...   if you wanted to follow up with the kernel team; they could confirm whether or not they can have virtio_vsock enabled for 4.4 (it's likely not going to be backported) ...
<Odd_Bloke> blackboxsw: https://github.com/canonical/cloud-init/pull/460 <-- oops, missed a test case in my Travis testing, so we need to revert part of my previous change
<blackboxsw> rharper: https://github.com/lxc/lxd/issues/7587
<blackboxsw> merged Odd_Bloke if something else crops up, we can continue to iterate
<Odd_Bloke> Thanks!
<blackboxsw> community-notice: So all manual validation for cloud-init SRU 20.2.45 is complete; we are awaiting an automated 7+ hour test run for cloud-init by our solutionsQA department against various customer OpenStack datasource configurations. That run is expected to complete early next week. We are in the sit-and-wait part of SRU verification
<blackboxsw> a great many thanks all those who participated so far in SRU verification for cloud-init
<blackboxsw> will update and publish as soon as we see a green light from solutionsQA
<blackboxsw> expectation is probably Mon/Tuesday.
<blackboxsw> ... and upstream has a policy of not releasing on Fridays anyway
<rharper> blackboxsw: thanks
<rharper> I suspect stgraber has closed it already =P
<blackboxsw> heh
<blackboxsw> it's good bug mgmt responsiveness ;)
<blackboxsw> already assigned ;)
<blackboxsw> ahh right rharper I forgot that we need packages: [linux-generic-hwe-16.04] on xenial cloud-config so this is what I'm using on xenial https://paste.ubuntu.com/p/27T6HPmKt6/
<blackboxsw> at least for testing
<rharper> blackboxsw: that's not built into the image
<rharper> you install and reboot?
<blackboxsw> rharper: yeah on xenial only :/
<rharper> boo
<rharper> you should file a bug/issue with kernel and cloud-images
<blackboxsw> very much and all those kernel pkgs take a long time to install
<blackboxsw> yes will do
<rharper> for lxd at least and see if they can push vhost_Vsock into 4.4 or have something lxd vm specific if they want the agent to work out of the box
<rharper> it's really a rotten experience coming from containers
<rharper> the sneaky (imho) trick of building their own custom cloud images on images:ubuntu/$release/cloud  isn't great; now one doesn't know why the former works but ubuntu:$release does not.
<blackboxsw> "out of the box" is a bit of a stretch too, given the profiles we need to setup for vms
<rharper> we could suggest that lxd update to have a default vm profile with reasonable things enabled ...
<blackboxsw> rharper: I'm going to startup conversation in lxc-dev and see where it goes
<rharper> k
