#cloud-init 2014-03-31
<smoser> jclift, you could just read /var/lib/cloud/instance/user-data
<smoser> it wont have any meta-data tags, but will have the user-data.
<smoser> and user-data.i is mime-multipart "pre-processed" (as in has consumed #include for you if you were interested in that).
<jclift> smoser: Thanks.  Looked into that yesterday, but the way Rackspace does stuff, metadata doesn't get into userdata.
<jclift> Figured out a way of doing it though.
<jclift> smoser: Does this seem terrible? https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/snippets/metadata_retriever.py
<jclift> Saved that to git for a bit after figuring it out yesterday.  Investigating potentially alternative approaches instead atm.
<smoser> jclift, you're right. metadata isn't going to get into user-data.
<smoser> a feature that i've wanted to add in the past is 'cloud-init query'
<smoser> either a cmdline tool, or an explicit promise of how you can load data from the cloud in a datasource-agnostic way.
<smoser> ie, i think the answer might just be to dump json into /run somewhere.
<smoser> but we dont have that now. so what you have isn't horrendous.
<jclift> Yeah.  Could be useful.
<jclift> Thanks. :)
<smoser> and i doubt i would have even bothered suggesting reading the 'system_info'['paths']
<smoser> rather than just loading /var/lib/cloud/instance/obj.pkl
<jclift> Heh, was more of a "just in case", since I have 0 idea how things are structured on non-CentOS systems.
<jclift> And Gluster Community has people on all different kinds of OS's that may try it out
<smoser> yeah. its better in that sense.
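The obj.pkl shortcut smoser mentions can be sketched in a few lines; the path is the one from the chat, and treating the unpickled object's attributes as stable is an assumption, since the cached datasource is an internal detail, not an API:

```python
import pickle

def load_cached_datasource(path="/var/lib/cloud/instance/obj.pkl"):
    """Load cloud-init's cached datasource object.

    The attributes on the returned object vary between cloud-init
    versions, so this is a convenience for inspection, not a contract.
    """
    with open(path, "rb") as fp:
        return pickle.load(fp)
```
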
<smoser> hm..
<smoser> so one thing you could do, is not use metadata
<smoser> but just user-data.
<smoser> is there a reason for meta-data ?
<jclift> Yeah, there's a 2K limit for user data, and I want to keep them separate
<jclift> Also, I don't know how to do the multi-part stuff programmatically yet
<smoser> the limit should be i think 16k, and it can be compressed.
<jclift> It's definitely 2K with rackspace cloud
<smoser> well, it can at least be compressed.
<smoser> 2k is silly small
<jclift> It's even documented as 10K in some places in source... but in practice their API rejects anything over 2K
<jclift> Yeah, agreed
<smoser> (and you can use '#include')
<jclift> Yep
<jclift> Discovered that yesterday ;)
<smoser> but #include increases complexity
<jclift> eg https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/remote_centos6.cfg
<jclift> Yeah
<smoser> multipart is really easy to do programmatically, though.
<smoser> you just throw whatever "parts" you want into a yaml list
<jclift> Ahhh cool.
<jclift> k
<smoser> (and yaml == json)
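Building a multipart user-data blob programmatically can be sketched with the stdlib email package (the part contents below are invented for illustration; cloud-init dispatches each part by its MIME subtype, mirroring the "startswith" header it would otherwise look for):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Illustrative parts only: a cloud-config document and a shell script.
parts = [
    ("#cloud-config\npackages:\n - htop\n", "cloud-config"),
    ("#!/bin/sh\necho hello\n", "x-shellscript"),
]

combined = MIMEMultipart()
for content, subtype in parts:
    # MIMEText gives each part a text/<subtype> Content-Type header.
    combined.attach(MIMEText(content, subtype))

# The serialized message is what you hand to the cloud as user-data.
user_data = combined.as_string()
```
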
<jclift> Sure
<jclift> I need to include a config file (eg as above), plus a script file to run afterwards (aka runcmd)
<jclift> Tried listing both under an #include, but only the config file gets executed
<smoser> ?
<jclift> It's on my ToDo list to figure out later on today wtf is going wrong there
<smoser> that should work.
<smoser> oh.
<smoser> it should work.
<smoser> #include
<smoser> http://url/1
<smoser> http://url/2
<smoser> you just have to get the "startswith" right.
<smoser> ie, '#cloud-config' or '#!'
<smoser> and cloud-init should do the right thing
<smoser> anyway, other than being config-drive-specific i dont think what you have there is horrendous
<jclift> Yeah, that's what I tried yesterday. The config bit worked (first url), the 2nd didn't.  But I haven't looked through the cloud-init log to figure out why, even though I could see the 2nd URL was pulled down into user-data.i (or similar name, this is from memory)
<jclift> I'll look into it later on today, it's probably something simple. :)
<jclift> smoser: Does this seem like a valid startswith? https://forge.gluster.org/glusterfs-rackspace-regression-tester/glusterfs-rackspace-regression-tester/blobs/script_refactoring/regression_test.sh
<jclift> eg bash
<jclift> Probably nothing bash specific in there, so changing to /bin/sh instead would likely work straight off
<jclift> Meh, I'll investigate later.  Other things to finish first.
<jclift> :)
<smoser> it should, yeah, '#!' should be good enough.
<smoser> jclift, regarding "obvious"
<smoser> maybe that /usr/bin/bash doesn't exist ?
<jclift> Hmmm, completely possible
<jclift> I'll check in a sec.  Filling forms for other stuff atm.
<smoser> also, one thing i'd suggest, is to set "cloud-init-output"
<smoser> output: {all: '| tee -a /var/log/cloud-init-output.log'}
<smoser> that is the default in trunk now, but is extremely useful.
<smoser> that way any output of subprocesses of cloud-init goes there.
<jclift> Useful tip.  I'll look into it shortly. :)
<jclift> smoser: Interestingly, there's no flag file written by kernel updates for CentOS/RHEL/etc.
<jclift> smoser: However, there's a yum "reboot_suggested" flag that gets written to yum metadata
<jclift> Yum is written in Python
<jclift> And various parts of yum can be imported for use in python
<jclift> There's an example of something called PackageKit using it here: https://gitorious.org/packagekit/packagekit/source/945faa959f00e27d419517116c37e960d6093f56:backends/yum/yumBackend.py
<jclift> In theory, it might be possible to just do something like "from yum.update_md import UpdateMetadata", and get it working from there
 * jclift will experiment a bit, but my Python is very noobie level so you might be able to just glance at it and tell what to do. ;)
<jclift> So, after speaking with the guys in the #yum channel, it looks like the reboot_suggested flag isn't that widely used.  So, may not actually be reliable.
<jclift> They suggested just checking the version of the running kernel vs the latest installed one, and rebooting if they're different
<jclift> I'll see if I can whip up a suitable patch to do that in a bit.  Kind of brain faded atm tho :/
<smoser> jclift, that sounds fine. 
<smoser> its non-trivial to determine "latest kernel" though.
<smoser> i dont know how one is supposed to do that. you'll probably have to use yum to compare versions.
<jclift> They pointed me towards a recent yum command addition that does it.
<jclift> smoser: I'll hunt that down and see if it's feasible to copy.  Yum being written in python too, etc.
<jclift> May not be tonight though.  Kind of needing a break atm. :)
<smoser> oh, its not reasonable to copy.
<smoser> you'd want to use the library. iirc rpm's "which version is greater" is massive spaghetti.
<jclift> Damn.  This is apparently a new yum command, in recent Fedoras.  I haven't yet looked into it.
<jclift> You could be completely right though.
<jclift> Guess I'll be finding out soon. ;)
#cloud-init 2014-04-01
<jclift> Script failure was caused by /usr/bin/bash not existing.  /me sighs
<jclift> Well, that's fixed at least ;)
<cds> does anyone know how i can force cloud-init to only run once?
<cds> i pass nocloud data as a cdrom during first boot, everything works fine
<cds> reboot without it and it falls back to the no data source and resets all my changes
<smoser> cds, 2 options:
<smoser> a.) don't detach the cdrom
<smoser> b.) set manual_cache_clean: True
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
<cds> cool, thanks
<cds> is setting the kernel param ds=nocloud acceptable?
<cds> i am just using this vm on a local machine
<smoser> well, how did you get the kernel and initramfs?
<cds> let me back up, i just want to boot ubuntu cloud once, have cloud-init configure it via nocloud-data
<cds> then not worry about passing it anything cloud-init related
<cds> changing ds=nocloud in /etc/default/grub after the initial configuration worked ok
<smoser> ah. ok. that doesn't sound terrible. 
<smoser> probably other ways to do it, but i'd have to think. yours doesn't sound terrible.
<cds> cool, thanks :)
<jclift> smoser: Btw, for the RHEL/Fedora/CentOS thought on "how to tell if a reboot is needed", it might not be that hard to tell for kernel changes
<jclift> eg just retrieve + store the version info strings prior to running yum, retrieve them again afterwards, if they compare differently, voila, new kernel installed.  No need to even understand the version number breakdown inside, just "if it's different, reboot".
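jclift's before/after comparison needs no rpm version-ordering logic at all; a minimal sketch, where the rpm query format is an assumption and the yum run itself is left out:

```python
import subprocess

def installed_kernels():
    # Assumed query: list the release strings of installed kernel packages.
    out = subprocess.check_output(
        ["rpm", "-q", "kernel", "--qf", "%{VERSION}-%{RELEASE}.%{ARCH}\n"])
    return out.decode().splitlines()

def reboot_suggested(before, after):
    # "if it's different, reboot": no version parsing, just set inequality.
    return set(before) != set(after)

# Usage sketch: snapshot before running yum, compare afterwards.
#   before = installed_kernels()
#   ... run the yum update ...
#   if reboot_suggested(before, installed_kernels()): reboot
```
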
#cloud-init 2014-04-02
<msbrown> Hey, have an oddball situation. I've taken a server installed via maas (d-i install), and moved it off of maas to be a stand-alone server. What's the best way to disable the cloud-init boot sequence?
<smoser> msbrown, you can probably 
<smoser> dpkg-reconfigure cloud-init
<smoser> and just select the None datasource
<msbrown> thx
<smoser> if that doesn't work, then i'd just put some data in /var/lib/cloud/seed/nocloud
<smoser> so that it would think it found *that* datasource all the time.
<msbrown> I'll try the first method
<msbrown> well, that makes the boot faster, you get some ugly messaging but as long as you know to ignore it....
<msbrown> (whatever you do, *don't* just delete all the cloud* entries from /etc/init :-)
<SpamapS> smoser: question
<SpamapS> smoser: oh and HI
<SpamapS> smoser: so, cloud-config sets the hostname right?
<SpamapS> smoser: I'm considering that we need to delay runlevel 2 until after cloud-config finishes
<SpamapS> smoser: thoughts on that?
<SpamapS> smoser: we're having problems where things don't always get the same hostname on reboot because they start faster the second time for whatever reason.. just before cc_set_hostname runs
<smoser> ubuntu ?
<smoser> SpamapS, ?
<SpamapS> smoser: yeah
<SpamapS> smoser: this is with baremetal hardware, but using EC2 metadata.. so.. yeah.. we're on crack a bit. :)
<smoser> cloud_init_modules should set hostname per default config
<smoser> which should execute 
<smoser> start on mounted MOUNTPOINT=/ and stopped cloud-init-nonet
<smoser> which almost certainly should block the starting of runlevel 2
<SpamapS> smoser: so cloud-config doesn't run the cc_* modules?
<smoser> see cloud.cfg, SpamapS 
<smoser> they run at different stages per that config
<SpamapS> smoser: AH
<SpamapS> misunderstanding resolved
<SpamapS> smoser: ok.. so hm.. why are we seeing our hostname different on two successive boots.. argh
<SpamapS> first boot we get no .novalocal
<SpamapS> or we do, then second we don't get it.. I can't remember now
<smoser> SpamapS, based on very little info, i guess its related to
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/__init__.py#L169
<SpamapS> smoser: we do have local-hostname
<SpamapS> smoser: but good thinking :)
<harlowja> SpamapS who are u, lol
<harlowja> no cloudinit for u
<harlowja> lol
<SpamapS> harlowja: i can haz hostname plzzzzz
<harlowja> SpamapS ha
<harlowja> SpamapS let me know if u figured it out, can help if u don't
<SpamapS> harlowja: I don't think our problem is cloud-init
<SpamapS> something else subtle going on
<harlowja> k
<harlowja> stop doing those bad things then
<harlowja> lol
<SpamapS> am I the only one that thinks lol looks like a ghost raising its arms and chasing somebody from right to left?
<SpamapS> lol==---...
<harlowja> :)
<harlowja> run away!
<SpamapS> ..---===/o/
<harlowja> ha
<SpamapS> well now it looks like bennie hill
<harlowja> SpamapS do u know if anyone from HP is going to http://www.meetup.com/openstack/events/172017412/ ?
<SpamapS> just need doors and gratuitous un-garmenting
<harlowja> "OpenStack at Mega-scale"  (whatever that means)
<SpamapS> harlowja: I hope so
<harlowja> ya, i'm not sure i want to go yet, lol
<SpamapS> harlowja: but most of our actual dev/ops who would be able to contribute are in "not bay area"
<harlowja> megascale sounds to much like marketing blah blah
<SpamapS> gigascale would be better
<SpamapS> terascale
<SpamapS> Have a competing meetup next door. "OpenStack at petascale"
<harlowja> :)
<harlowja> +2
<harlowja> mine bigger than yours meetup
<harlowja> *maybe shouldn't call it that*
<SpamapS> might attract the wrong crowd
<harlowja> :)
<SpamapS> lol===---...
<harlowja> ha
<harlowja> ..---===/o/
<harlowja> ha
<SpamapS> o/` [ kazoo music ] o/`
<harlowja> ^ did the ghost just pass through a kazoo?
<harlowja> cause ghosts can do that afik
<harlowja> *afaik
<lipinski> How do I get a VMs instance id via cloud init?  e.g., the one that is in openstack meta-data.json?
<harlowja> cat /var/lib/cloud/data/instance-id
<harlowja> *thats one way
<lipinski> harlowja: that's not the instance ID I need.  I think that's the EC2 one?  I need the one that is available via Openstack metadata
<lipinski> e.g., curl 169.254.169.254/openstack/latest/meta_data.json (uuid)
<harlowja> ah, which cloud-init are u using
<lipinski> My problem is that using the 169.254.169.254 ties me to the network-based metadata service.  I would like to use Cloud-init so that I am abstracted from metadata via network vs cloud-drive
<lipinski> one sec
<harlowja> yup, the newest cloud-init has that abstraction
<lipinski> 0.7.2-2.el6.noarch
<harlowja> ya, that one only knows how to fetch the EC2 metadata/userdata from 169
<harlowja> which is why u see the ec2 id
<lipinski> Is there a newer Cloud-Init that would give me what I need?
<lipinski> (hopefully in a RHEL6 RPM form.. :) )
<harlowja> ya, 0.7.5 would have what u need, although u'll need to build your own rpm afaik
<harlowja> which isn't that hard to do since the cloud-init code has that
<harlowja> i don't think rhel has published that version yet
<lipinski> harlowja: thanks a bunch.  So, the UUID/VMID that I'm looking for would then be accessible via /var/lib/cloud/data/instance-id ?  Or somewhere else?
<harlowja> that'd be the place
<lipinski> fantastic.  thanks again.
<lipinski> harlowja: sorry to bother again.  0.7.4-2 have what I need, or do I need 0.7.5?
<harlowja> i think 0.7.5
<harlowja> which was just released afaik
<harlowja> the other option is u can just add a script into user-data that fetches that information lipinski (basically running `curl 169.254.169.254/openstack/latest/meta_data.json`)
<lipinski> harlowja: thanks.  Yes - I may do that as a stop-gap measure until we can get to 0.7.5.
<harlowja> right
<harlowja> shouldn't be too bad, could even do something simple in python to get it
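That "something simple in python" stop-gap could look roughly like this (the URL is the one quoted above; urllib.request stands in for the era-appropriate urllib2, and the timeout is arbitrary):

```python
import json
from urllib.request import urlopen  # RHEL6-era Python 2 would use urllib2

METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

def uuid_from_metadata(raw):
    # meta_data.json is a JSON object; its "uuid" key is the instance id.
    return json.loads(raw)["uuid"]

def fetch_instance_uuid(url=METADATA_URL):
    # Network fetch; only works from inside an OpenStack instance.
    return uuid_from_metadata(urlopen(url, timeout=10).read())
```
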
<harlowja> although u should be able to checkout cloud-init and run 'make rpm' and a rpm should pop out
<lipinski> I didn't want to get into the "business" of interfacing with metadata services.  Prefer Cloud-Init do the lifting for me and that way I get cloud-drive+metadata support without any change on my side :)
<harlowja> yup
<lipinski> I've never downloaded cloud-init before (since we get RPMs).  
<lipinski> Thanks again.
<harlowja> np
#cloud-init 2014-04-03
<smoser> harlowja, lipinski thats another request for "cloud-init query" 
<smoser> for 0.7.6 we should get that back in one way or another.
<smoser> 2 json files. one with privileged info (user-data), one with public data.
<harlowja> harlowja thats true, forgot about all that ;)
<smoser> harlowja, are you talking to yourself again?
<harlowja> oops
<harlowja> that was supposed to be smoser not myself, lol
<harlowja> about the cloud-init query stuff :)
<harlowja> harlowja u so awesome
<harlowja> harlowja i know
<harlowja> lol
 * smoser can't look at lol any more without thinking about running away from ghosts
<harlowja> ;)
<smoser> l0l <--- cyclops running away from a ghost.
<harlowja> all SpamapS fault
<harlowja> :)
<harlowja> smoser how'd http://itsfoss.com/facebook-to-buy-ubuntu-for-3-billion/ go over @ canoincal, lol
<harlowja> hope canonical folks were in on the joke
<smoser> joke?
<smoser> carp.
<smoser> i probably need to return all that jewelry i bought.
<harlowja> def
<harlowja> and cars
<harlowja> u might want to return all that
<smoser> yeah. that really sucks.
<smoser> cause i look DOPE in all that bling
<harlowja> ;)
<harlowja> ya, next time wait till the deal is verified before gold plating all your teeth
#cloud-init 2014-04-04
<SpamapS> smoser: http://download.cirros-cloud.net/streams/v1/index.json.gpg <-- whose key is that?
<SpamapS> smoser: doesn't mean much to sign things if the public key isn't itself in the web of trust
<SpamapS> smoser: anyway, I need to know where I can fetch that key so I can use cirros w/o wondering if I've been MITM'd
<harlowja> my guess SpamapS is thats smoser key, but not sure
<harlowja> since smoser is mr.cirros
<SpamapS> it is not
<SpamapS> harlowja: that key is unknown.. not in any key servers.
<harlowja> hmmm
<harlowja> not my key
<SpamapS> Ideally smoser would sign it, and upload the signed public key
<harlowja> agreed
<SpamapS> Actually ideally several people would sign it, whoever has root on the box that builds those really.
<SpamapS> But smoser has enough sigs.. his key is good enough for me. :)
<harlowja> :-P
#cloud-init 2014-04-06
<jeffspeff> i'm trying to find documentation on how to use cloud-init to change a computers name and add it to an active directory domain. if it's relevant, i'm using windows 7 guest vm's and ovirt
<jclift> jeffspeff: As a thought, it might also be worth asking in the oVirt channel(s) too
 * jclift doesn't know enough about cloud-init to help out with your question here though
<jclift> active directory <-- no idea
<jclift> Changing the non-active directory hostname can be done using the "hostname: foo" and "fqdn: foo.example.org" cloud-init statements
<jclift> Not sure if that's relevant though :)
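The two statements jclift mentions, as a minimal cloud-config sketch (the names are placeholders):

```yaml
#cloud-config
hostname: foo
fqdn: foo.example.org
```
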
#cloud-init 2015-03-30
<harmw> smoser: I'd say https://bugs.launchpad.net/cirros/+bug/1273159 can be closed, agreed?
<smoser> what are our defaults now ?
<harmw> we didn't change them afaik, just offer a way of changing them through /etc/sysconfig
<harmw> but to answer your question, I don't know
<harmw> TIMEOUT=60
<harmw> (from /etc/default/udhcpc)
<smoser> i really think he's wrong
<smoser> i just dont know how that file would have been respected
<smoser> 'udhcpc_opts'
<harmw> oh, heh, I didn't think much of it since we offer at least 'some' way to mess with timeouts and stuff
<harmw> :)
<smoser> we dont really offer a way to do --retries though
<smoser> do we ?
<harmw> ah no, crap
<smoser> we should add that. seems easy enough
<harmw> indeed
<smoser> k. i have to go away now. sorry.
<harmw> just c/p the TIMEOUT part
 * harlowja back btw!
<harlowja> i survived :-P
<harlowja> ^ with all body parts intact
<harlowja> lol
<harmw> lol
<harlowja> except for that finger
<harlowja> ha, j/k
<smoser> who says you need 7 fingers anyway, harlowja 
<harlowja> ha
<harlowja> ya, the vestigial ones aren't so useful
<harlowja> so its ok if i lose a couple
<harmw> https://github.com/pellaeon/bsd-cloudinit
<harmw> hehe
<harmw> forked from cloudbase-init
#cloud-init 2015-03-31
<fish_> hi
<fish_> I'm building a ubuntu AMI from scratch and use cloud-init. when finished building the AMI, /var/lib/upstart is there but after I boot it's gone and I don't have any upstart logs. wondering if this might be related to cloud-init
<Odd_Bloke> fish_: Do you mean /var/log?
<fish_> Odd_Bloke: ehh sorry, yes sure /var/log/upstart
<Odd_Bloke> fish_: I would be surprised if cloud-init were doing anything to /var/log/upstart, but smoser could probably tell you more.
<fish_> Odd_Bloke: hrm okay, any ideas what might cause it beside cloud-init?
<fish_> I found this: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/990102 - unfortunately (and a great example of why this is important) it doesn't mention the commit where this was fixed
<Odd_Bloke> fish_: Well, that wasn't a cloud-init bug, it was a problem with the cloud images.
<Odd_Bloke> fish_: Which was fixed ~3 years ago. :)
<fish_> Odd_Bloke: I know, I just ask here because cloud-init is the only thing I *expect* to change stuff on the system on first boot :)
<fish_> but yes, there is probably something else tampering with /var/log on first? boot
<Odd_Bloke> fish_: Are you sure /var/log/upstart is actually included in your image?
<fish_> Odd_Bloke: I know the bug is old, but I suspect it's the same root cause as in my case (they tried the same - building AMIs)
<fish_> Odd_Bloke: yes, I double checked that
<Odd_Bloke> fish_: Out of interest, why not build your image based on the images at cloud-images.ubuntu.com?
<fish_> Odd_Bloke: well, I want to actually build the images (vs starting a instance and snapshotting) - I actually wrote a few words about why (+how) here: http://5pi.de/2015/03/13/building-aws-amis-from-scratch/ but tl;dr
<fish_> I want the AMIs small and a clean separation between run and built-time
<Odd_Bloke> fish_: Right, but why not take the images from there and modify them by mounting them?
<fish_> Odd_Bloke: that's an option.. debootstrap seemed more straightforward. do you think there are advantages in using the cloud-images directly? also worried about security updates because I want the amis to be pretty much immutable (well, at least I don't want to run config management etc on top to keep things up to date)
<fish_> but I just realize that I'm wrong, the resulting image indeed has no /var/log/upstart.. now I'm confused since I added a 'mkdir /var/log/upstart' to my build process which failed because it was there already.. looks like something during the build removes it
<Odd_Bloke> fish_: Well, using the Ubuntu images would (probably) have saved you from this bug, for example.
<Odd_Bloke> And you presumably have the same problem with security updates whoever is building the images?
<Odd_Bloke> In your modification process, an "apt-get -y update; apt-get -y upgrade" would pull in any security updates the same as installing from scratch.
<fish_> hrm yes that's true.. well, I simply used debootstrap because it seemed like the right tool to use. but yeah, it's definitely a good option to use the cloud-images, will consider that
<smoser> fish_, generally speaking, i really dont think you should build your own images.
<smoser> any more than i think you should build your own kernel, or your own python or eglibc.
<smoser> you're certainly welcome to do it, and tools are available to do so.  but doing so means you get to re-discover bugs which are fixed.
<smoser> the process that I would recommend, is
<smoser>  * download cloud image
<smoser>  * mount-image-callback --system-mounts --system-resolvconf $IMAGE -- chroot _MOUNTPOINT bash -s < your-update-script
<smoser> and 'your-update-script' does things like: 
<smoser>  apt-get update
<smoser>  apt-get install foo
<smoser>  apt-get clean
<smoser> the above does require root, which is less than ideal, but 
<smoser> a.) if you don't trust the ubuntu images, you're kind of SOL anyway
<smoser> b.) you can just run that in a VM to alleviate potential mount based attacks.
<fish_> smoser: yes, the more I think about that the more it makes sense. for now, things are working and I'm about to roll that out, but in the next iteration I'll definitely look into that
<fish_> I use an intermediate "base" image to avoid running a fresh debootstrap for every update anyways, so it should be fairly easy to use a cloud-image instead
<smoser> fwiw, the maas-images build process does essentially the above.
<harlowja_> claudiupopa i think we might have to remove the channel from https://review.openstack.org/#/c/169293/
<harlowja_> its gonna be hard to get an operator in this channel without kicking everyone out (which nobody has the permission to do)
<harlowja_> so might have to just skip that part
<harlowja_> years ago i think we all forgot to setup this channel with an operator and its pretty hard to do it post-creation
<harlowja_> *afaik*
<smoser> harlowja_, why didn't you ever add 'tar' to the write_files. and http:// 
<smoser> s/you/me/
<harlowja_> hmmm
<harlowja_> or was that me?
<harlowja_> idk
<harlowja_> haha
<smoser> ie, would be nice to have write_files either read content from a url
<smoser> or read a tarball from url and extract it into a target dir.
 * harlowja_ looking
<smoser> http://paste.ubuntu.com/10691441/
<smoser> that is what i have, but having those 2 big blobs as 'path: http://' would have been nice.
<smoser> and then also:
<smoser>  path: http://
<smoser>  format: tar
<smoser>  extract-dir: /writable/user-data/cloud-init
<smoser> or something
<harlowja_> hmmm, ya why did we do that, ha
<harlowja_> msg: ":::::: Hi Mom :::::::"
<harlowja_> lol
<harlowja_> hmmmm
<harlowja_> seems like we should just do that... (allow url stufF)
<harlowja_> and tar
<harlowja_> guess maybe we just didn't think of it ?
<smoser> yeah, i think we just didnt. 
<smoser> the inline makes sense.
<smoser> as you may not have networking at that point.
<smoser> but if you do have networking, then http:// makes good sense.
<smoser> and tar is just a nice archive format :)
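Put together, the syntax being kicked around here might look like the following. This is purely hypothetical — it mirrors smoser's sketch of a proposed feature, not anything the write_files module supported at the time:

```yaml
#cloud-config
write_files:
  # Inline content (works even before networking is up):
  - path: /etc/motd
    content: "hello\n"
  # Proposed: fetch a tarball over http and extract it into a target dir.
  - path: http://example.com/bundle.tar
    format: tar
    extract-dir: /writable/user-data/cloud-init
```
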
<harlowja_> :-P
<harlowja_> shall i code that up or u want to?
<harlowja_> pop out the codes
<harlowja_> lol
<harlowja_> smoser ^
<harlowja_> smoser https://code.launchpad.net/~harlowja/cloud-init/write-files-fetch-from-somewhere/+merge/254816
<harlowja_> ok thats part 1 (no tar)
<harlowja_> should be mostly ok, ha
<smoser> python3... 
<harlowja_> ya, durn it
<smoser> pre_content comes as binary... pretty sure (or we want to make sure it is)
<smoser> and then we want to write content without conversion
<harlowja_> will make sure
<harlowja_> load_tfile_or_url does that i guess
<smoser> tfile loads text
<smoser> we want blobs of unadulterated raw data
<harlowja_> kk, let me check here
<harlowja_> stupid stff
<harlowja_> lol
<harlowja_> *stuff
 * harlowja_ goes and builds up my 3.4 venv
<smoser> harlowja_, see why i said i hate pypi
<harlowja_> lol
<harlowja_> ya
<smoser> oh darn, some dude's cable modem is out
<smoser> pypi fail
<harlowja_> lol
<harlowja_> ok dokie; added some tests and stuff; seems to work as expected
<harlowja_> at least under basic tests
<harlowja_> smoser check that out if u want
<smoser> harlowja_, thanks.
<harlowja_> np
<Odd_Bloke> smoser: Am I right in thinking that vendor-data simply isn't a thing in the version of cloud-init in precise?
<smoser> right . not at all.
<smoser> Odd_Bloke, i'd sponsor an sru
<harlowja_> precise is 12.04 right?
<Odd_Bloke> harlowja_: Yeah.
<harlowja_> k, can't ever remember the codenames, lol
<tmclaugh[work]> I want to set hostname to the aws instance ID.  Is there a var available for doing that or do I have to use some commands during bootcmd to handle this?
#cloud-init 2015-04-01
<harlowja> smoser https://review.openstack.org/#/c/166201/ fyi
<harlowja> useful to look at/over
<Odd_Bloke> smoser: Is there any reason for having documentation in /doc that isn't in /doc/rtd/... (and therefore rendered on ReadTheDocs)?
<Odd_Bloke> smoser: Does http://bazaar.launchpad.net/~daniel-thewatkins/cloud-init/openstack-vendor-data-doc/revision/1089 look sensible to you?
<Odd_Bloke> smoser: (I'm waiting for confirmation from the partner that triggered this investigation that we've solved their problem before opening a MP)
<smoser> Odd_Bloke, i think i'd like to have something like:
<smoser>  http://bazaar.launchpad.net/~daniel-thewatkins/cloud-init/openstack-vendor-data-doc/revision/1089
<smoser> for all vendor-data consumption
<smoser> i think we can do this in a backwards-compatible mode too if we do it right.
<smoser> hm.. well i think we want to do this well.
<smoser> sorry.
<smoser> bad link above.
<smoser> which is confusing
<smoser> https://etherpad.openstack.org/p/cloud-init-vendor-data
<smoser> i'd like to have that basic policy in place for all vendor data consumption i think.
<ktdreyer> hi folks, I'm wondering about the "#cloud-config" text that I see at the top of cloud-init YAML files. Isn't that a comment in YAML syntax? But cloud-init relies upon its presence?
<Odd_Bloke> smoser: OpenStack falls over if vendor-data is just a string; I didn't try with a list.
<Odd_Bloke> smoser: But I wouldn't be surprised if OpenStack requires a JSON object.
<smoser> Odd_Bloke, well, it does.
<smoser> but a string is valid json
<smoser> watch:
<smoser> $ python -c 'import sys, json; print(json.dumps(sys.argv[1], indent=1))' "Hi Odd_Bloke"
<Odd_Bloke> A string is valid JSON; it's not a valid JSON object.
<smoser> watch
<Odd_Bloke> (As a JSON object is defined as a mapping)
<smoser> http://paste.ubuntu.com/10718416/
<smoser> you can argue if you want that python's json module is behaving incorrectly
<smoser> but that same 'loads' is how openstack is going to check if its valid json
<smoser> $ python /tmp/go.py
<smoser> contents: "Hi Mom"
<smoser> data: Hi Mom
<smoser> ktdreyer, yes. cloud-init depends on it in some contexts.
<Odd_Bloke> smoser: "foo" is valid JSON, but it's not a JSON object.  A JSON object is a mapping ({"foo": "bar", ...}), and is one of the valid root objects in valid JSON.
<Odd_Bloke> As are strings.
<smoser> in /etc/cloud/cloud.cfg.d , it probably assumes a default of cloud-config (which is sane).
<Odd_Bloke> So I think we're talking past one another here. :)
<smoser> ktdreyer, but in other cases, it doesn't know what the content of the file is... how to interpret it. so it relies on either data provided to it about the content (such as mime-type) or the content starting with a string.
<smoser> well, you said openstack will fail if vendor-data is just a string
<smoser> but i dont think it would
<Odd_Bloke> If it assumes that it will get a mapping and looks for specific keys it might; I'm trying to get back on to that OpenStack cloud to try again.
<smoser> i wrote it.
<smoser> its a blob
<smoser> it makes sense.
<smoser> its a contract / data from "vendor" to "guest"
<smoser> openstack is not involved.
<Odd_Bloke> Ack; I'll try and work out what I was doing wrong.
<smoser> Odd_Bloke, i'm not saying that we should use a string.
<smoser> i think the dict or list makes more sense.
<smoser> and cloud-init should take the 'cloud-init' entry in that.
<smoser> that is what we should make work, and what we should recommend to vendors.
<Odd_Bloke> Yeah; that does work.
<smoser> Odd_Bloke, fwiw, this whole "string as an object" conversation, I had almost identically with JayF when i first wrote that etherpad.
<smoser> :)
<Odd_Bloke> smoser: http://json.org/ says that strings categorically are not JSON objects. But JSON objects are not the only valid JSON root type.
<Odd_Bloke> So we're just in violent agreement here. :p
<smoser> well, just to be snarky, and add more violent agreement
<smoser> $ python -c 'import json, sys; print(isinstance(json.loads(sys.argv[1]), object))' '"foo"'
<smoser> True
<smoser> :)
<Odd_Bloke> :D
<smoser> openstack expects the VendorData class (which defaults to that thing that reads data from a json file)
<smoser> to return data
<smoser> it then 'json.dumps()' that to a file-like thing that it shows in the metadata service.
<smoser> the VendorData class could return anything, but openstack is going to force that thing to be 'json.dumps'-able.
<smoser> which seems sane. as it labels it 'vendordata.json'
<Odd_Bloke> Yeah; I think the problem might have been that I just put a regular cloud-config file in there, rather than a JSON document with a string as the root object.
<smoser> right.
<smoser> err..
<smoser> "right."
<Odd_Bloke> Yeah.
<Odd_Bloke> And JSON strings can't have literal newlines.
<Odd_Bloke> So it looks like "#cloud-config\npackages:\n- htop\n".
<Odd_Bloke> Uh, plus a space in there.
<smoser> so what you want is this:
<smoser> cloud_init_data = {'packages': ['htop']}
<smoser> with open("file.json", "w") as fp:
<smoser>     fp.write('\n'.join(["#cloud-config", json.dumps(cloud_init_data)]))
<smoser> something like that.
<smoser> basically, let tools write it for you
<Odd_Bloke> smoser: So have something like http://paste.ubuntu.com/10718633/ as the vendor-data OpenStack reads?
<smoser> hm.. wait no. sorry . here.
<smoser> http://paste.ubuntu.com/10718667/
<smoser> Odd_Bloke, ^
<smoser> either of those output formats (list or string) should work for cloud-init in vivid. and are in line with https://etherpad.openstack.org/p/cloud-init-vendor-data
<smoser> and i'm saying we should make trusty act like vivid.
<Odd_Bloke> smoser: http://paste.ubuntu.com/10718677/ worked on trusty and vivid.
<smoser> Odd_Bloke, oh.
<smoser> that is odd.
<smoser> i dont know how that works on trusty
<smoser> i didn't think it looked in cloud-init
<Odd_Bloke> smoser: Yep, it does; line 150 of DataSourceOpenStack.py.
<JayF> smoser: Very happy to see cloud-init v2 hit stackforge
<smoser> i pointed out, specifically... it is not a openstack project.
<smoser> we just want free infrastructure :)
<smoser> utlemming, i updated the lp:ubuntu/trusty branch
<smoser> by 'dget <link-to-dsc>' (from https://launchpad.net/ubuntu/+source/cloud-init)
<utlemming> smoser: ack
<smoser> and then 'bzr import-dsc' from that dsc
<smoser> and bzr push
<smoser> please make sure you update that when you upload
<utlemming> smoser: sure, sorry about that
<smoser> precise-proposed, trusty, and utopic are now all up to date with archive.
<harmw> what would be a fairly decent architecture for a small cloud deployment having compute nodes on multiple locations, each having their own ISP link for connecting with $world and using an ipsec tunnel with $home where keystone lives?
<harlowja> claudiupopa whats your openstack email; https://review.openstack.org/#/admin/groups/665,members is up
<harlowja> smoser ^
<harlowja> u are now cloud-init core
<harlowja> feel free to add others
<harlowja> also https://review.openstack.org/#/admin/groups/666,members
<harlowja> 666 omg
<harlowja> lol
<smoser> ok thanks.
<harlowja> np
<claudiupopa> cpopa@cloudbasesolutions.com
<claudiupopa> So it's finally up.
<JayF> 666 the number of the cloud
<harlowja> JayF ha
<harlowja> alexpilotti whats your email on gerrit?
<harlowja> autofill-in not working
<alexpilotti> harlowja: apilotti@cloudbasesolutions.com
<harlowja> k
<alexpilotti> autofill has some strange logic
<harlowja> k, u are now coreeeee
<harlowja> lol
<alexpilotti> yeiiii tx :-)
<alexpilotti> I found myself swearing a bit about the way it accepts naming ordering and spelling
<harlowja> ok first review
<harlowja> lol
<harlowja> https://review.openstack.org/169854
<alexpilotti> dâoh!
<harlowja> use all your magic core powers on that one, lol
<harlowja> guess we can see if the automated-ci works with that review also
<harlowja> or if its broken (or something else)
<alexpilotti> quick double check w the format we have in cloudbase-init's .gitreview
<harlowja> thx
 * harlowja copied it from taskflow one, ha
<alexpilotti> should I +2a or do we leave to smoser the pleasure of his first +a? :-) 
<harlowja> let's see if the CI breaks first
<harlowja> it might be somewhat busted; not sure, ha
<harlowja> first commits usually expose that :-P
<alexpilotti> yep, we learned it the hard way
<alexpilotti> as the first 2-3 post-stackforge patches in cbs-init can testify
<harlowja> ya; looks like its busted, lol
<harlowja> https://jenkins04.openstack.org/job/gate-cloud-init-pep8/1/
<harlowja> ...
<harlowja> i'll get all that stuff fixed
<harlowja> maybe should turn off the docs build
<harlowja> *for now*
<alexpilotti> ohâ¦ well time to bring the joys of pep8 to cloud-init
<JayF> I never start a new project with CI
<harlowja> :)
<JayF> set it up with noop jobs, then setup things one at a time
<JayF> you are braver than me :P
<kwadronaut> s/pep8/pdp11/g
<harlowja> JayF ha
<harlowja> JayF ya; it usually just causes these first commits to be busted
<harlowja> then it all goes ok
<alexpilotti> docs: https://github.com/stackforge/cloudbase-init/commit/23ddd33fa4a35040f9ab2f84efb54d7244232b8b
<alexpilotti> an empty doc/source/conf.py should do
<harlowja> kk
<harlowja> cool
<alexpilotti> and setup.cfg
<harlowja> cools
<harlowja> thx
<alexpilotti> [build_sphinx]
<alexpilotti> source-dir = doc/source
<alexpilotti> do we have a .testr.conf?
<harlowja> will get all that going
<claudiub_> I would also suggest updating HACKING.md, since the new method of contributing will be through gerrit. That would mean doing a git review instead, when the commit is ready.
<harlowja> alexpilotti although seems like we are just gonna stick to nose for a little so i guess no testr.conf needed
<harlowja> nose imho is just simpler :-P
<harlowja> testr meh
<JayF> harlowja: so existing cloud-init stays locked up in bzr? How long until this becomes the "new" cloud-init?
<harlowja> i defer to others on that one JayF :-P
<harlowja> i just work here, haha
<harlowja> *when its ready?
<harlowja> :)
<JayF> when it's ready is code for never :P
<harlowja> only if u are the type that doesn't commit to shit
<harlowja> lol
<harlowja> i don't think we are :-P
<JayF> russell_h: ^ Doesn't look like it's close though.
<smoser> ugh... whats the mP ?
<harlowja> smoser i got it under control boss
<harlowja> lol
<smoser> good job.
<alexpilotti> claudiub_: +1
<alexpilotti> harlowja: k!
<smoser> oh yeah
<smoser> i had that locally :)
<jetole> Hey guys. I have a question about cloudbase-init and this is the closest room I could find so I will ask here hoping either someone can help or someone can direct me to the right room...
<jetole> When I start a instance, neither the hard drive is being extended nor is the instance being activated (it has a valid MAK key). The logs say very little. 6 lines regarding attempts to get meta data via 169.254.169.254 but I have not added any metadata line to cloudbase-init.conf. Can anyone help me figure out why the plugins I specified aren't running and the meta data lines are being queried when they are not included in the conf?
<alexpilotti> jetole: best place for this question is ask.cloudbase.it
<alexpilotti> jetole: can you please post also a copy of your cloudbase-init.log and cloudbase-init-unattend.log?
<alexpilotti> jetole: also: which Windows version are you using?
<jetole> alexpilotti: 2k8r2 standard 
<jetole> alexpilotti: should I have configured cloudbase-init-unattend.conf as well as cloudbase-init.conf ?
<alexpilotti> jetole: if you used the installer, they come preconfigured
<jetole> I'm thinking maybe I should have since I'm going the generalize OOB sysprep 
<alexpilotti> jetole: also a copy of your config files will help in troubleshooting
<jetole> some is but not necessarily how I prefer, for example I don't want to add a user or query the meta servers but I do want to activate windows which can manually be done since the correct MAK key is there and the hard drive doesn't grow to fill the instance size
<jetole> OK
<jetole> Will do
<jetole> thanks
<harlowja> ok i think https://review.openstack.org/#/c/169854/ should fix all of it
<harlowja> maybe not the pep8 ones though
<harlowja> let's see how far that gets
<alexpilotti> harlowja: requirements might need sphinx>=1.1.2,<1.1.999
<harlowja> kk
<claudiub_> harlowja: I have left you a comment on that patchset regarding pep8
<harlowja> claudiub_ thx
<harlowja> looking
<harlowja> alexpilotti i put sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 into test-requirements
<harlowja> it seemed to work in a venv
<alexpilotti> pep8: we should do our homework and clean up the entire project
<harlowja> when i ran tox -edocs
<harlowja> alexpilotti ya
<harlowja> ok, updated
<harlowja> let's see
 * harlowja prefers rst vs markdown also :-P
<harlowja> since all that damn sphinx stuff is in rst (might as well just use it all the places)
<JayF> harlowja: I have a very recent patchset enabling doc builds on a project if you wanted a template
<harlowja> JayF i think this should work; tox -edoc worked for me, let's see if it gets farther
<harlowja> then we can make it better
<JayF> That's fine, was just offering :)
<harlowja> cools
<harlowja> thx
<alexpilotti> harlowja: tests dont seem to run
<harlowja> hmmm
<harlowja> Ran 0 tests in 0.006s
<harlowja> OK
<harlowja> seems to run something :-P
<harlowja> although 0 of something
<harlowja> ha
<harlowja> $ tox -epy27
<harlowja> likely cause there aren't any?
<alexpilotti> harlowja: https://github.com/stackforge/cloudbase-init/blob/23ddd33fa4a35040f9ab2f84efb54d7244232b8b/.testr.conf
<alexpilotti> and testrepository in requirements.txt
<harlowja> ya; i think we are sticking with nose + nosetests though for the time being
<alexpilotti> not sure if we can get nose to do this
<harlowja> we can switch later if we care
<alexpilotti> in theory it should be transparent
<harlowja> ya; so thats why later switch i think is ok
<harlowja> u can commit that one :-P
<harlowja> once this goes in
<alexpilotti> the fact that we dont have tests is also a good reason why tests dont run :-D
<alexpilotti> as in: https://github.com/stackforge/cloud-init/tree/master/cloudinit/tests
<harlowja> ;)
<harlowja> yup
<alexpilotti> I was quite surprised by seeing a green Python 3.4 jenkins run at the first run :-)
<harlowja> woot
<harlowja> ha
<alexpilotti> added https://review.openstack.org/#/c/169880/
<alexpilotti> harlowja: hey docs are happy
<harlowja> ya; nearly there alexpilotti 
<harlowja> just a few hacking tweaks/exclusions
<alexpilotti> we âonlyâ have pep8 left
<harlowja> ok, think latests update should do it
<harlowja> let's see
<alexpilotti> hacking.rst: what about removing the names and adding a reference to: https://review.openstack.org/#/admin/groups/665,members 
<harlowja> sureee
<alexpilotti> this way we dont have to change the file anytime somedy gets added/removed from the core group :-)
<harlowja> ya
<harlowja> alexpilotti updated
<alexpilotti> merci
<harlowja> hopefully that makes it all happy and good to go
<alexpilotti> does it make sense to keep:
<alexpilotti> .. _Scott Moser: https://launchpad.net/~smoser
<alexpilotti> .. _Joshua Harlow: https://launchpad.net/~harlowja 
<alexpilotti> in the references since they are not accessed?
<alexpilotti> I mean they are not referenced
<smoser> referenced ?
<alexpilotti> https://review.openstack.org/#/c/169854/9/HACKING.rst
<alexpilotti> instead of the name list we have: 	* `Core reviewers/maintainers`_
<alexpilotti> and at the bottom we have:
<alexpilotti> .. _Core reviewers/maintainers: https://review.openstack.org/#/admin/groups/665,members
<alexpilotti> .. _Scott Moser: https://launchpad.net/~smoser
<alexpilotti> .. _Joshua Harlow: https://launchpad.net/~harlowja
<alexpilotti> .. _IRC: irc://chat.freenode.net/cloud-init
<alexpilotti> .. _freenode: http://freenode.net/ 
<alexpilotti> I mean, we can add everybody's launchpad profile there, but I dont see it particularly useful in a dynamic group
<smoser> sure. drop that.
<alexpilotti> tx. developerâs anti data-duplication professional bias :-)
<harlowja> alexpilotti i already dropped it :-P
<harlowja> refresh to patch 10
<harlowja> ;)
<harlowja> too slow, haha
<alexpilotti> :-)
<jetole> alexpilotti: http://ask.cloudbase.it/question/401/issues-with-meta-activate-serial-and-hdd-extend/ :-)
<alexpilotti> jetole: ok tx!
<alexpilotti> me or one of my colleagues will look at it asap
<jetole> I think I should have used a different blockquote method but I don't post to stackexchange often enough and couldn't recall the syntax 
<alexpilotti> for logs and long text I usually paste on paste.openstack.org and paste the link in the post
<alexpilotti> but yours are not very long so it works inline as well :-)
<jetole> I can do the same and add a ref link under the block quote
<jetole> ... or nevermind 
<jetole> ;-)
<alexpilotti> jetole: on what cloud are you running this instance?
<alexpilotti> plugins are meant to be executed when metadata is present
<jetole> alexpilotti: I'm testing on a local qemu-kvm on workstation but I had also uploaded it to openstack and had the same results. I don't need any data from the meta servers at all so I feel it should be fine locally 
<jetole> well on openstack icehouse, I have the same results 
<alexpilotti> you can surely reduce the list of plugins
<alexpilotti> on icehouse it should definitely pick up metadata from 169.254.169.254 or configdrive
<jetole> I just updated the post
<jetole> from the line @ Update near the bottom 
<alexpilotti> the unattend part is used only to save one reboot when you set the hostname
<jetole> OK. I don't have the logs from icehouse and it will take me a while to get them since I removed the test image and would have to re-upload. I'm using ceph for the stack so uploading raw images only which take a while 
<jetole> should I do that and get the logs 
<jetole> ?
<alexpilotti> yeah, that would be best
<jetole> Ok. It's only the logs though. I know the results are the same 
<alexpilotti> we can add a âNilâ metadata provider for your Qemu case 
<jetole> sorry, can you clarify what unattend is used for? I didn't find a good ref on the difference between what the two config files are used for online 
<alexpilotti> when the image is sysprepped, it uses a file called unattend.xml (rough equivalent of a linux preseed.cfg / ks.cfg)
<alexpilotti> this contains steps executed during the early first boot, during a phase called "specialize"
<jetole> I'm familiar with that 
<alexpilotti> we do a first execution of cloudbase-init at this stage
<alexpilotti> to run only a limited set of plugins: NTP, MTU and hostname
<jetole> I've used unattend, preseed and occasionally kickstart before but the cloudbase-init-unattend. What does that do differently 
<jetole> er, I mean that file specifically 
<alexpilotti> the hostname one requires a reboot
<jetole> so the stuff that has to be done prior to the reboot goes into cloudbase-init-unattend?
<alexpilotti> yep!
<alexpilotti> in short, itâs just the anme of the configuration file used by cloudbase-init when executed during specialize
<alexpilotti> there are quite some limitations on what can be executed during that phase, in particular WMI is not available
<alexpilotti> in your case, given the requirements you posted, it can be skipped 
<jetole> ok
<jetole> waiting on the upload now. It's at 90% (gbit between wkstn and stack) but then there's the save from local host to ce
<jetole> - to ceph (I think via glance)
<harlowja> alexpilotti smoser claudiupopa https://review.openstack.org/#/c/169854/
<harlowja> we are good to go
<harlowja> approve at will
<smoser> hooray. tests pass.
<smoser> :)
<alexpilotti> yeii
<harlowja> smoser yaaaa; 0 tests pass 0 times, lol
<alexpilotti> smoser: would you fancy +2a it this patch? :-)
<smoser> on the above ?
<alexpilotti> https://review.openstack.org/#/c/169854/
<alexpilotti> yep
<smoser> reviewing
<harlowja> slacking again apparently, lol
<smoser> why is it HACKING.rst ?
<harlowja> idk
<harlowja> lol
<harlowja> seems common to what i've seen
<harlowja> https://github.com/openstack/nova/ (HACKING.rst)
<harlowja> ...
<smoser> oh. ok. i didn't realize github would render .rst
<smoser> thought .md only
<harlowja> ya
<harlowja> it handles both
<harlowja> and since rst is what sphinx also uses; i like just 1 format
<jetole> Something else I wanted to ask, is there a (relatively) easy way for me to create a local metadata server for qemu/kvm for testing images when building them?
<jetole> doesn't have to be on the same machine, etc since the KVM images share the LAN adapter of the host 
<smoser> so the reason i had testenv:docs is so tha tyou could build docs without the rest of the stuff
<smoser> due to my general hatred of pypi
<harlowja> maybe just get used to running `python setup.py build_sphinx` then :-P
<harlowja> its a few more characters, lol
<harlowja> but u can do it
<harlowja> lol
<harlowja> i believe in u
<harlowja> we can switch it back; i think its ok if u really want smoser 
<harlowja> https://github.com/openstack/glance/blob/master/tox.ini#L40 seems to be the common practice
<harlowja> although 	deps = {[testenv]deps} i think is fine either way
<harlowja> so not sure it matters
<harlowja> u sorta need the deps to run that comand anyway
<smoser> so you switched it just to have less tox env ?
<smoser> ie, less stuff in .tox/
<smoser> right?
<harlowja> well to install the right dependencies
<harlowja> sphinx will crap out if u start including code from cloud-init into docs...
<smoser> no.. i had it i thought in its own testenv.
<harlowja> ya, well [testenv:docs] will do that
<harlowja> afaik that will make a .docs venv
<smoser> ah. but the .docs will have all of the requirements in it.
<harlowja> smoser the openstack folks mirror pypi afaik so the mirror issues should be less of an issue
<harlowja> smoser right
<harlowja> and if we have docs that include code snippets; thats probably what we want
<harlowja> so those code snippers can be tested as well (which can be done)
<harlowja> *snippets
<smoser> stop making rational arguments for your side
<smoser> its annoying
<smoser> :)
<smoser> so the code snippets is a good point. and i'm fine with that.
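For reference, the arrangement being argued over (a dedicated docs env that reuses the main test deps, in the style of the glance tox.ini linked above) is roughly this sketch; the env and option names here are the conventional tox ones, not copied from the actual patch:

```ini
[testenv:docs]
# reuse the full test deps so sphinx can import the code it documents
deps = {[testenv]deps}
commands = python setup.py build_sphinx
```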
<smoser> harlowja, and we officially dropped 2.6 ?
<harlowja> ha
<smoser> did you mean to remove that ?
<harlowja> well it seems like the openstack CI isn't testing it
<harlowja> sooo
<harlowja> i'm ok with that for now
<harlowja> we'll see :)
<smoser> isn't that rhel 6 ? didn't you tell me that ?
<harlowja> ya
<harlowja> it is
<harlowja> i can add back later
<harlowja> will see
<smoser> and they dont have a python3 either.
<harlowja> gate-cloud-init-python34 
<harlowja> ?
<harlowja> thats running 34
<smoser> ok. 
<harlowja> but gate-cloud-init-python26 not there right now
<smoser> k. good enough
<harlowja> so just chopped 26 from the rest of it for now
<harlowja> someone at yahoo will notice that and probably say, whats up
<harlowja> but hasn't happened yet :-P
<harlowja> also removed 'Environment :: OpenStack'
<harlowja> since cloud-init isn't really 'openstack'
<harlowja> ...
<harlowja> its just cloud...
<smoser> yeah, i'm good with that.
<harlowja> cools
<harlowja> merging it then (+a)
<harlowja> and merged
<harlowja> alexpilotti guess u'll have to rebase
<harlowja> https://github.com/stackforge/cloud-init should be all good to code on now :-P
<harlowja> world peace to come later, lol
<alexpilotti> heh
<harlowja> smoser do we want to drop https://github.com/cloud-init/cloud-init ?
<harlowja> or link it or mirror it or something?
<harlowja> https://review.openstack.org/169904 for those that are bored, ha
<smoser> is there any infrastructure to magically keep those in sync ?
<harlowja> smoser unsure
<harlowja> probably :-/
<jetole> alexpilotti: I just posted the updates you asked for via http://ask.cloudbase.it/question/401/issues-with-meta-activate-serial-and-hdd-extend/ - It looks to me like python is not finding the hostname module. I've never done python on Windows and wouldn't know where to look for that plus you may know if it's more than it appears from experience :-)
<claudiupopa> jetole: plugins=cloudbaseinit.plugins.windows.sethostname.SetHostNamePlugin,
<claudiupopa> That's actually plugins=cloudbaseinit.plugins.common.sethostname.SetHostNamePlugin,
<claudiupopa> s/windows/common/ ;-)
<claudiupopa> The unattend conf had it right though.
<jetole> OK. I actually copied the plugins lines from the stackforge readme
<jetole> Is there a more current one?
<claudiupopa> Yep.
<claudiupopa> Just updated it today.
<jetole> on stackforge?
<claudiupopa> Yeah.
<claudiupopa> In fact, oops, sethostname is still wrong.
<jetole> ok...
<jetole> s/windows/common/ <- is that for all plugins?
<claudiupopa> No, some of them are windows specific.
<claudiupopa> The rest are nicely abstracted so that they work on any platform.
<jetole> claudiupopa: can you check my whole plugins line there and let me know which other ones I should adjust, please?
<claudiupopa> Sure.
<jetole> thank you
<claudiupopa> networkconfig is common, as well as mtu.
<claudiupopa> As well as sethostname.
<jetole> claudiupopa: extendvolumes and activate windows still use "windows"?
<claudiupopa> Yes, they're windows specific.
<claudiupopa> Except extendvolumes, which is somewhat a leaky abstraction right now.
<claudiupopa> https://github.com/stackforge/cloudbase-init/blob/master/cloudbaseinit/plugins/common/factory.py#L23
<claudiupopa> This should give you a fair idea of what is what.
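Pieced together from claudiupopa's corrections, a fixed `plugins` line in cloudbase-init.conf would look something like the sketch below. Only the sethostname path appears verbatim above; the other class paths are assumptions based on the module names mentioned in the conversation, so check factory.py for the authoritative list:

```ini
[DEFAULT]
# "common" plugins are cross-platform; extendvolumes and licensing are windows-specific
plugins=cloudbaseinit.plugins.common.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.common.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.common.mtu.MTUPlugin,cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin,cloudbaseinit.plugins.windows.licensing.WindowsLicensingPlugin
```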
<jetole> If I just reboot this instance on the stack, will that test the changes I made to cloudbase-init.conf or do I have to re-sysprep?
<claudiupopa> You'll have to resysprep. Or delete a key from the registry.
<claudiupopa> There's a key where cloudbaseinit logs the status of the executed plugins.
<jetole> do you have the path?
<jetole> quicker to test and then I can re-sysprep on my workstation and re-upload once I know I have a working config
<claudiupopa> Should be under 'SOFTWARE\\Cloudbase Solutions\\Cloudbase-Init\\'
<claudiupopa> HKLM.
<harlowja> windows registry
<harlowja> eck
<harlowja> runs away
<jetole> harlowja: I don't have a \\HKLM\SOFTWARE\Cloudbase Solution\
<harlowja> i defer to others here
<jetole> Do you think the import error may have caused an exit before it was written 
<harlowja> really will run away ha
 * harlowja running to food
<harlowja> lol
<jetole> ;-/
<jetole> :-/
 * jetole does the reboot 
<claudiupopa> jetole: if you don't have any, that's good, no plugin was executed then, so doing a reboot should do it.
<jetole> Something else I wanted to ask about. I use a app which provides the initial password, in plain text, in json ['meta']['initialpass'] via http://169.254.169.254/openstack/2012-08-10/meta_data.json. Can I set a user password via this data?
<jetole> I had to write some python code to use this in Linux as I could not find a way to do it natively via cloud-init and a few members of this room (weeks ago) confirmed it could not be done natively 
<claudiupopa> jetole: in cloudbaseinit it needs to be in ['meta']['admin_pass'].
<jetole> so that's a no
<claudiupopa> Yes, basically.
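The lookup claudiupopa describes can be sketched like this; the sample document below is hypothetical, and on a real instance you'd fetch meta_data.json from the 169.254.169.254 URL mentioned earlier:

```python
import json

# Hypothetical payload, shaped like the OpenStack 2012-08-10 meta_data.json
# discussed above (keys other than meta/admin_pass are made up).
raw = json.dumps({
    "uuid": "instance-uuid",
    "meta": {"admin_pass": "s3cret", "initialpass": "ignored-by-cloudbase-init"},
})

meta_data = json.loads(raw)
# cloudbase-init reads the password from meta/admin_pass, not meta/initialpass:
admin_pass = meta_data.get("meta", {}).get("admin_pass")
print(admin_pass)  # s3cret
```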
<jetole> it's ok
<jetole> I am getting a terminal prompt for a password via sysprep now and will port the python later 
<jetole> claudiupopa: Here's the updated console output: http://paste.openstack.org/show/Jf99AeXArfzL3INP4QRi/
<jetole> I am seeing the hostname is set but the volumes are not extending and windows is not being activated. It has a valid MAK key and if I click on activate windows then it activates just fine but cloudbase-init isn't extending hdd or activating
<jetole> whoa
<jetole> wait a second 
<jetole> the hdd now appears to be extended 
<jetole> and windows is activated 
 * jetole does a happy dance 
<jetole> claudiupopa: If I test this on a local kvm-qemu without the 169.254.169.254 meta data, will this run or fail @ hdd-extend and activate?
<claudiupopa> jetole: you mean without any metadata service?
<jetole> right. This is just for testing on local workstation before uploading to the stack
<claudiupopa> Then it will definitely fail. cloudbase-init tries to load the metadata before starting any plugin.
<claudiupopa> If it finds no available metadata service, it fails.
<jetole> OK
<jetole> any recommendation for how I could run a easy to deploy testing meta server on my local net 
<claudiupopa> Probably we could add some sort of local for-testing service.
<jetole> actually I have hit the meta server wall on linux / cloud-init too 
<jetole> would be better if there is a for testing meta framework 
<claudiupopa> In the mean time, I guess you could "mock" a metadata service, by starting a local Python / Ruby / Perl server which exports the expected api.
<claudiupopa> And then just setting metadata_base_url to point to your mocked service.
<claudiupopa> metadata_base_url in config file I mean.
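The mock claudiupopa suggests can be tiny with Python's built-in HTTP server. A sketch that serves only the one meta_data.json path with a made-up payload (the real metadata service exposes more endpoints than this); once running, `metadata_base_url` would point at the server's address:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

META = {"uuid": "test-instance", "meta": {"admin_pass": "s3cret"}}

class MockMetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/openstack/2012-08-10/meta_data.json":
            body = json.dumps(META).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

def start_mock_server():
    # port 0 asks the OS for a free port; read it back from server_port
    server = HTTPServer(("127.0.0.1", 0), MockMetadataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    import urllib.request
    server = start_mock_server()
    url = "http://127.0.0.1:%d/openstack/2012-08-10/meta_data.json" % server.server_port
    with urllib.request.urlopen(url) as resp:
        print(json.load(resp)["meta"]["admin_pass"])  # s3cret
    server.shutdown()
```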
<alexpilotti> claudiupopa: we could also just provide a DummyMetadata class
<alexpilotti> which implements the base class w/o doing anything else
<jetole> claudiupopa, do you want to add a answer to my issue @ http://ask.cloudbase.it/question/401/issues-with-meta-activate-serial-and-hdd-extend/ regarding s/windows/common/ ? I was about to but I wanted to reference you for credit and don't know how since I hardly ever use this service 
<jetole> alexpilotti and claudiupopa: How do you guys deal with building images? Do you upload every test image to the stack? Isn't that time consuming?
<alexpilotti> jetole: itâs ok, itâd be great if you could answer and mark it as an answer :-)
<alexpilotti> about images: they get cached on compute nodes, so after the first boot it's fast even if they are relatively big
<jetole> alexpilotti: no I know but I mean when you're building images like, for example, right now I'm building the redeployable win2k8r2 image for our stack and I need the meta data to run it which I don't have locally so right now I am doing a lot of dev, upload, test, debug, upload, test, Q&A with you guys (would have been lost without it), upload, test and for me it's pretty time consuming
<jetole> I mean if there is no dummy meta server than there still has to be a better way than this 
<alexpilotti> we use these Powershell cmdlets to automate the process: https://github.com/cloudbase/windows-openstack-imaging-tools
<alexpilotti> itâs fully automated, but it takes 1-2 hours per image
<alexpilotti> the Windows updates are the time consuming part
<alexpilotti> if you dont care about updates, it can be fully done offline
<jetole> not bad. I'm on Linux workstation but I could always rdp. There's a bunch of Linux options too for automated builds but for build and test, with the whole upload thing, it just seemed to me like there would have been a less time consuming way. For example, I mean if it wasn't for the meta-data, I could just poweroff the system after a sysprep, qemu-img a backing file for qcow2, boot it up and test in a matter of minutes 
<jetole> yeah I did the updates on the pre-rollout but that was a one-off task
<jetole> for dev & QA, I'm having to re-deploy for every test but oh well
<jetole> just hoped the pros had a better path 
<jetole> Forgot to mention I'm using qcow2 locally for snapshots and backing_files to expedite dev but the stack is using ceph and qcow2 on ceph isn't a good idea so I also have the qemu-img convert time before each upload
#cloud-init 2015-04-02
<Odd_Bloke> Reading backlog, I really feel I should note that I see more Hash sum mismatches from Ubuntu mirrors than I do any sort of problem with PyPI. :p
<ndonegan> jetole: Using something like https://gist.github.com/smoser/1278651 on local image build services.
<ndonegan> Have edited it slightly so it can read in the userdata it serves from a local file, but that's about it.
<smoser> ndonegan, patches are welcome to that :)
<ndonegan> smoser: Will see if I can put what I've done up as a proper project on github.
<ndonegan> It's even setup to install cleanly onto a Centos 6 box and just work.
<ndonegan> (as rpm that is)
<smoser> ndonegan, nice. for cloud-init 2 , i want to have sort of a set of datasource mimickers that we can easily test against.
<ndonegan> We've purposely disabled all data-sources except for EC2, and None which is just setup to report if EC2 fails.
<smoser> ndonegan, nice.
<smoser> you shouldnt have to disable other services.
<smoser> other sources
<ndonegan> We have a security team who'd prefer it to be strictly defined where the data is coming from ;)
<ndonegan> And we had some interesting issues with Config Drive.
<ndonegan> (Although some of that was due to faulty deployment)
<smoser> that makes sense.
<smoser> so going forward, i'd like for nocloud to fit your bill
<smoser> would there be somethign we could do that would make it work better for you ?
<smoser> harmw, i just /join'ed #cirros
<harmw> no shit
<jetole> ndonegan: that's cool
<harlowja> alright https://review.openstack.org/#/c/170242/ seems useful; is that ok to just clone over smoser  or do i need to do something else (license wise?)
<smoser> we have to get the license header set right
<harlowja> k
<harlowja> should i just set it to the apache one?
<harlowja> or do we need to contact 'juerg.haefliger@hp.com' (the other editor i think of that file)
<smoser> no we dont.
<smoser> well, i dont :)
<smoser> because i'm acting as canonical. and juerg signed CLA, so canonical has right to do that.
<smoser> hold on
<harlowja> k
<harlowja> holding horses
<smoser> harlowja, https://review.openstack.org/170249
<smoser> take that one, then change header to match
<harlowja> k
<harlowja> done
<harlowja> alright, let's see; what else should we take over
<harlowja> https://review.openstack.org/#/c/170252/ (the yaml stuff)
<harlowja> and maybe the templating stuff
<harlowja> those seem like generally useful
<smoser> so .. wrt pulling stuff over. 
<smoser> i think largely lets wait a bit.
<smoser> 2 things i am afraid of from just pulling stuff without thinking
<smoser> a.) LOG verbosity
<smoser> b.) string translation looseness
<smoser> string.decode() + translation.encode() + looseness
<harlowja> k
<harlowja> don't fear josh is here!
<harlowja> lol
<harlowja> ok, https://review.openstack.org/170257 (template stuff) and safeyaml and url helping
<harlowja> all i move over
<harlowja> all seem generally like we'll need them anyway...
 * harlowja killed cheetah though
<harlowja> sorry cheetah
<smoser> poor cheetah
<harlowja> not many tears are shed i think
<harlowja> ha
<smoser> what do you think about ditching requests.
<harlowja> for?
<smoser> it generally doesn't seem useful. urllib2 or urllib3 maybe ?
<smoser> we went to requests for https sanity.
<harlowja> i had urllib2
<harlowja> urllib3 maybe
<smoser> but i know that urllib3 does the right thing in python3
<harlowja> *hate urllib2
<smoser> i dont know that requests really buys us anything.
<harlowja> sure
<harlowja> urllib3 powers requests though; so idk
<smoser> right
<harlowja> looks like we could just use it though
<harlowja> i guess i'd be ok with it, urllib3 seems fine
<harlowja> nothing too crazy like urllib2, lol
<smoser> look at https://github.com/stackforge/cloud-init
<smoser> why does that say "http://openstack.org" ?
<harlowja> :-/
<harlowja> unsure
<harlowja> my guess infra folks put that there?
<smoser> i really want to be careful about dependencies.
<harlowja> sure
<harlowja> do u want to jump on #openstack-infra and ask why? thats there
<smoser> sure
<harlowja> "Cross-platform instance initialization http://openstack.org" -> "Cross-platform instance initialization"...
<harlowja> https://github.com/stackforge/anvil also has that; and a few others
<harlowja> so i guess its just a common template or something
<harlowja> although weird
<harlowja> smoser the mordred using it was a joke, apparently he was complaining about it recently...
<harlowja> smoser http://paste.ubuntu.com/10726145/
<harlowja> blah blah, lol
<harlowja> https://review.openstack.org/#/c/165914/ ...
<harlowja> stupid stuff, lol
<smoser> :)
<harlowja> hehe, let's see here
<harlowja> http://paste.ubuntu.com/10726170/ ...
<harlowja> smoser ^
<harlowja> more fun fun
<harlowja> lol
<harlowja> i'm pretty sure its because hp cloud is using 0.6.3 (which isn't fully functional i think with newer openstacks)
<harlowja> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-03-19.log 
<harlowja> search '2015-03-19T17:14:19'
<harlowja> and read from there, ha
<harlowja> then i don't need to paste 
<harlowja> lol
<smoser> odd. 
<harlowja> i'm not sure what they made instead :-/
<harlowja> let's see what he says, ha
<smoser> i dont know why you'd mount the config drive and leave it mounted. that doesnt seem to make much sense.
<harlowja> ya, u got me :-/
<harlowja> bb food; smoser  u got it covered :-P
<harlowja> will read in a little
<harlowja> fun fun
<harlowja> lol
<smoser> thanks. 
<smoser> :)
<smoser> i didn't need such discussion.
<smoser> i want to make cloud-init not suck for sure.
<harlowja> smoser agreed; but some of these statements from all those people are all conflated
<harlowja> and putting blame in the wrong place....
<harlowja> packaging sucks, its all your fault
<harlowja> i packaged my images in weird ways, its all your fault...
<harlowja> blah blah
#cloud-init 2015-04-03
<smoser> harlowja_away, yeah.  a lot of it amounts to "i'll just build everything on my own, and then it will be done right." which is perfectly fine with me, if you want to do that. it turns out, though, its kind of expensive in the long run.
<tennis> hi.  I'm getting this on one of my aws instances, and I'm not sure where to begin to debug it.  Any ideas anyone? https://gist.github.com/anonymous/eb9fd02646e6a760ed34
<tennis> For some reason, the url_helper.py cannot find '169.254.169.254' for metadata.
<xerxas> hello everyone ! I'm using the chef module to install ... chef. I would like to use omnibus packages and specify the versions I want to install but can't find a way to do it. Any idea ? 
<harlowja> smoser agreed
<kwaping> when using an #include url, are there any variables available that can be used to identify the host that's phoning home?
<kwaping> (hi harlowja)
<harlowja> kwaping i assume u are oliver :-P
<harlowja> or something else, ha
<harlowja> not alien?
<kwaping> oh, duh, yes
<harlowja> hi oliver!
<harlowja> i see u
<harlowja> lol
<kwaping> sorry, this is my obscure public name, long story
<harlowja> smoser just want to introduce u to another y! (oliver aka kwaping ) he's been doing some stuff with cloud-init on baremetal (simialr to maas i guess) annnnnd will be helping out :)
<harlowja> sooo hiiii kwaping 
<harlowja> :-P
<harlowja> kwaping loves the baremetal
<harlowja> lol
<kwaping> I work with harlowja, don't hold that against me :P
<harlowja> lol
<kwaping> anyway, my ideal end goal is to include cloud-init in a ramdisk used for installing the final OS, and I want that ramdisk version to phone home and grab a dynamic config for disk setup (especially hardware RAID config), based on the individual host that's making the request
<harlowja> smoser that sounds similar to maas and maybe other stuff i think right :-P
<harlowja> i guess the smartos folks are doing similar things, idk
<harlowja> sooo kwaping  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/user_data.py#L204 is the code that does all the include stuffs
<harlowja> i didn't think the url has any params that get filled in
<harlowja> at least it doesn't right now
<kwaping> ok thanks, checking
<harlowja> basically the userdata gets read, and then output as X mime segments
<harlowja> these mime (mail) segments are then later analyzed to do things
<harlowja> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/user_data.py#L89 is where the fun begins
<harlowja> blob there being the starting 'user data blob'
<kwaping> fascinating that #include-once really means "fetch and cache"
<harlowja> gotta do the 'once' part somehow :-P
<harlowja> since #include stuff is recursively expandable
<harlowja> #includes urls having content with #includes ....
<kwaping> does it also get run only once, or is it "fetch once, run every time"? (still reading code)
<harlowja> so this is just the code that does that extraction; its typically called once
<harlowja> but depends on what u mean by 'run everytime'
<harlowja> all this does is download the userdata and such into a multipart mime message
<harlowja> which then gets processed later
<kwaping> ah ok, so include-once can load files that are run once or run lots
<harlowja> how it gets processed later is dependent on what the processing is
<kwaping> content and fetch method are decoupled
<harlowja> ya
<harlowja> this is just the fetch
<kwaping> cool thanks
<kwaping> that helps
<harlowja> np
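The fetch-then-process flow harlowja describes (user-data parts wrapped into a MIME multipart message, classified by their leading line) can be sketched roughly like this; the type mapping is a simplified, illustrative subset of what `cloudinit/user_data.py` actually handles:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def subtype_for(content):
    """Guess a cloud-init part type from the leading line (illustrative subset)."""
    if content.startswith("#cloud-config"):
        return "cloud-config"
    if content.startswith("#!"):
        return "x-shellscript"
    return "plain"

def to_multipart(parts):
    """Wrap each user-data part in a MIME segment, as the fetch stage does."""
    outer = MIMEMultipart()
    for content in parts:
        outer.attach(MIMEText(content, subtype_for(content)))
    return outer

mp = to_multipart(["#cloud-config\nruncmd: [ls]", "#!/bin/sh\necho hi"])
```

The resulting multipart message is what later stages pick apart and hand to the individual handlers.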
<harlowja> http://cloudinit.readthedocs.org/en/latest/topics/dir_layout.html is also useful to look at/understand kwaping 
<harlowja> user-data.txt.i (is the multipart pieces)...
<harlowja> 'user-data.txt' is the unprocessed/initial stuff
<kwaping> yeah I've been all over that site lately, but still digesting contents
<harlowja> k
<harlowja> smoser wanted to eventually i think destroy the usage of mime multipart stuff; maybe in the next version of cloud-init
<harlowja> its somewhat confusing to use mime messages for this, ha
<harlowja> but thats how it goes :-P
<kwaping> yeah I was going to say something about that, but figured there was a good use case that I was missing
<kwaping> config by email
<harlowja> smoser can explain, i forget the reason back in the day
<harlowja> probably something along the line of 'python has real good mime support and such, lets use it'...
<harlowja> ^ predates me, ha
<kwaping> funny how major decisions get made for reasons like that :)
<harlowja> :)
<kwaping> ok I'm going to head out since smoser is afk
<kwaping> we can continue the witty banter elsewhere
<harlowja> ha
<harlowja> all we do here is witty banter :-P
<harlowja> thats a major part of my day, lol
<harlowja> hmmm, will have to get him to idle in here vs leave, will work on that, lol
#cloud-init 2016-04-04
<smoser> harmw, around ?
<smoser> was looking to merge https://code.launchpad.net/~andatche/cloud-init/freebsd-improvements/+merge/278427
<smoser> and interested in your thoughts on 'freebsd' rather than 'beastie'
<Odd_Bloke> smoser: For GCE, they've asked us to set the MTU on network interfaces; is there a way in the new cloud-init networking world that we can say "apply this setting to all network interfaces"?
<Odd_Bloke> (rharper: ^)
<rharper> Odd_Bloke: no;  we don't have global tags
<Odd_Bloke> rharper: Is that a "not currently" or a "that doesn't fit the model"?
<smoser> it'd have to be done on the interfaces, there is no global settings.
<rharper> Odd_Bloke: certainly not currently;  we've not discussed a global apply
<smoser> it does seem to be an odd ask though
<smoser> "always use mtu 1400"
<rharper> heh, openstack does that with ovs
<smoser> where as really it is device (connection) specific
<Odd_Bloke> Well, if all the network interfaces are going to be virtualised the same, then it doesn't seem super-unreasonable.
<Odd_Bloke> It does look like we could do it by editing dhclient.conf though.
<Odd_Bloke> I _really_ wish there was a dhclient.conf.d.
<smoser> Odd_Bloke, this is settable in dhcp
<smoser>  option interface-mtu 9000
<Odd_Bloke> Yeah, I'm building a test image now, I'll see what MTU it gets by default.
<smoser> there were bugs where cirros wasnt respecting that
<smoser> http://www.microhowto.info/howto/change_the_mtu_of_a_network_interface_using_dhcp.html
<smoser> if our dhclient is not applying that setting, then that is something we can/should fix. i think we hit this with openstack.
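The DHCP option smoser points at can be requested from the client side; a minimal dhclient.conf fragment might look like this (the exact default request list varies by distro, so this is illustrative only):

```
# /etc/dhcp/dhclient.conf -- ask the DHCP server for option 26 (interface-mtu)
# so the client applies whatever MTU the server advertises.
request subnet-mask, broadcast-address, routers,
        domain-name-servers, interface-mtu;
```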
<Odd_Bloke> Looking at a leases file on an existing instance, GCE's DHCP server is sending it over.
<Odd_Bloke> So hopefully this is all going to Just Work (TM). :p
<smoser> well what does it show, Odd_Bloke
<smoser> ifconfig should show the mtu
<Odd_Bloke> smoser: Oh, right, because cloud-init is wiping away our hard-coded file?
<Odd_Bloke> So, yes, it looks like eth0 got the right MTU. \o/
<Odd_Bloke> (I thought we were still using our hard-coded file)
<smoser> Odd_Bloke, right. i opened a bug for you on that
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1563487
<Odd_Bloke> smoser: Yep, that's what I'm looking at now.
<smoser> ah. and you're assigned :)
<Odd_Bloke> smoser: ^_^
<Odd_Bloke> Yeah, so I had forgotten that the new cloud-init was wiping away our hard-coding, so I didn't think I was looking at an instance where the networking code was doing its thing. :)
<Odd_Bloke> But I was.
<Odd_Bloke> Next step: enable predictable interface names and see what happens
<smoser> it "should work", and cloud-init should actually keep you getting the eth0 ones.
<smoser> you shouldn't really need to re-build an image to see immediate fallout
<smoser> just remove /etc/udev/rules.d/* and /etc/systemd/network/* and /var/lib/cloud/* and reboot
<Odd_Bloke> "keep you getting the eth0 ones"?
<Odd_Bloke> s/you/you from/ ?
<smoser> no. you should get the eth0
<smoser> but reliably
<Odd_Bloke> Oh, OK.
<smoser> in that cloud-init writes rules that enforce the macs -> name
<smoser> based on the first time that they were seen
<smoser> so, basically its as if the old udev persistent naming that automatically wrote rules
<smoser> (without the blacklisting of "virtual" nics)
<Odd_Bloke> smoser: So I've booted and I'm seeing ens4.
<Odd_Bloke> (Which is fine, but doesn't match up with what you said before, so I'm checking that something isn't off)
<cbolt-> i have an issue where i am injecting the network-interfaces into the metadata (using nocloud datasource) and the network interfaces aren't up yet before the runcmd section in the user-data. any suggestions on how to deal with this? i am setting up routes via runcmd and the network not being up yet is problematic.
<sputnik13> is there any requirement for an ISO configdrive other than the name for the volume?
<sputnik13> created an ISO with my ssh key like described here https://github.com/kelseyhightower/coreos-vmware-tutorial
<sputnik13> and ran a ubuntu cloud image under qemu with qemu-system-x86_64 -cdrom <iso> <cloud.img>
<sputnik13> and the vm comes up and says it can't find any keys
<sputnik13> but logs do show that the ISO is found and mounted
<sputnik13> and openstack/latest/user_data is read
 * sputnik13 is stumped
<cbolt> what does your userdata contain?
<sputnik13> http://pastebin.com/64EE0VSD
<sputnik13> essentially
<Odd_Bloke> smoser: We're seeing some problems in the way DataSourceConfigDrive handles network_config in precise (and trying to determine if the cloud we're working with are doing something wrong); are there any gotchas that I should be aware of?
<smoser> Odd_Bloke, its probably racy
<smoser> yeah, it is. i'm pretty sure
<Odd_Bloke> smoser: But it's known to work?
<smoser> maybe for some things.
<smoser> basically have to know more of whats going on there.
<smoser> and would have to think
<Odd_Bloke> smoser: Because it looks to me like read_config_drive_dir_v2 will take the network_config content file and store it in metadata["network_config"] but then the code that should write it out look in metadata["network-interfaces"] and metadata["interfaces"].
<smoser> you're looking at precise code or trunk code ?
<cbolt> smoser: when you set network interfaces in metadata, does cloud-init wait until they come up before running the userdata?
<Odd_Bloke> smoser: precise code.
<Odd_Bloke> smoser: (Line 397 is the read; line 206 is the write, if you're following along at home)
<Odd_Bloke> smoser: I'm EOD'ing, so I'll bug you more about this tomorrow. ^_^
<smoser> cbolt, you're asking about config drive ?
<cbolt> nocloud
<smoser> on xenial,  yes.
<smoser> it should do the right thing
<larsks> sputnik13: fyi, using the ubuntu wily cloud image, using this user-data (http://chunk.io/f/f9ce25f474f04fd683847ad92234ea99) allowed me to log in to the 'ubuntu' user sans password without any errors.
<bdx> smoser: any idea if cloud-init will support puppet4 (puppet-agent)?
<bdx> oooh, nm, just found the bug for it
<smoser> waldi, around ?
<sputnik13> larsks did you do that on openstack or straight qemu with an ISO?
<larsks> That was qemu + an iso (although I am using the nocloud format for the iso, rather than the openstack format)
<larsks> sputnik13: ^^^
<larsks> sputnik13: specifically, it was this iso image: http://chunk.io/f/ca6ca61dcf29402395c415a3e395e419
<sputnik13> larsks: thanks, would you mind sharing the qemu options you used?
<sputnik13> I know this all *should* work, I'm stumped as to what I might be missing
<larsks> well, I'm booting with 'virsh', so that was: virt-install -n citest --disk vol=default/ubuntu-wily-x86_64.qcow2,bus=virtio --import --vnc --noautoconsole --disk path=/path/to/config.iso,device=cdrom -w network=default -r 1024
<sputnik13> larsks: cool, thank you
<sputnik13> larsks: that iso works with the image I had...  so it seems like your image is using the "NoCloud" option described here http://cloudinit.readthedocs.org/en/latest/topics/datasources.html
<sputnik13> hmm, I wonder why the config-2 thing doesn't work
<larsks> sputnik13: that's correct; that what I meant when I said I was using the nocloud format.  Sorry for not linking.  I have previously used the openstack config-drive format with success, but it's been a while.
<rharper> smoser: http://paste.ubuntu.com/15621102/
<rharper> ubuntu user not in /etc/passwd, so the ssh key add failed (xenial cloud image from 2016-04-03 )
<sputnik13> larsks no worries, thanks for the hints, I was banging my head against the wall for a couple days
<ybathia> harlowja: hi, is the admin_pass feature of cloudinit broken? I see the admin_pass value that I pass during nova boot in metadata.json
<ybathia> but it does not seem to work
<harlowja> ybathia i don't think cloud-init does admin_pass stuff
<harlowja> that's more nova file injection
<harlowja> from what i remember
<spandhe> harlowja: hello :)
<spandhe> harlowja: we see admin_pass in metadata.json. I also see https://github.com/openstack/cloud-init/blob/d0f880277b152ecf82aa2e15a281ffc02ff5eaac/cloudinit/sources/openstack/base.py#L113 . but its not being called from anywhere
<harlowja> spandhe  ya, probably look at the 0.7.x branch
<spandhe> harlowja: the method is not there on that branch
<harlowja> then, ya, i guessed its not used by cloud-init
<harlowja> nova afaik injected that password into the disk(s)
#cloud-init 2016-04-05
<kalx> Hi all. I recently made a new AMI on AWS after running through linux system updates. I noticed the new one has some severe lag after startup.
<kalx> Per logs, it seems the 'config_apt-pipelining' module is taking 4-5 mins to execute now. Anyone run into similar issues before?
<kalx> Does anyone know what the apt-pipelining config module does? Is it purely just disabling apt-pipelining? or does it do anything else?
<kalx> Checked code, it just writes a config file to apt config dir to disable/enabling apt pipelining, nothing else
<kalx> so likely something else causing the issue actually that just happens to block that config module from finishing
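For reference, the effect kalx found in the code amounts to a one-line apt configuration drop-in, roughly like the following (the path and comment here are illustrative, not copied from the source):

```
// /etc/apt/apt.conf.d/90cloud-init-pipelining (illustrative path)
Acquire::http::Pipeline-Depth "0";
```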
<Odd_Bloke> rharper: smoser: What clouds do you know of that use network_config?
<smoser> Odd_Bloke, it works on openstack with config drive and nocloud currently.
<smoser> and the 'fallback' will be in place on others.
<smoser> Odd_Bloke currently its in place only for local data sources (not requiring network)
<smoser> the next step is to add it for data sources that require a network. example EC2 or Openstack Metadata
<Odd_Bloke> smoser: Sorry, I should have explained what I was looking for more fully: I'm trying to track down a problem with precise's ConfigDrive handling of network_config, and I'm looking for a place where the OpenStack configuration is known-good.
<Odd_Bloke> smoser: Because I don't want to consider fixing it in precise if it turns out I'm just looking at a funky OpenStack configuration. :p
<smoser> oh. that.
<smoser> Odd_Bloke, what is it that you're looking at? is it openstack metadata service ? or config drive?
<Odd_Bloke> smoser: Config drive.
<smoser> ok, so that has a shot at working, but even then i think that probably on precise the cloud-init local job doesn't fully block networking from coming up.
<Odd_Bloke> smoser: (I'm hoping that we'll be able to convince the partner to just not have precise in the region they're seeing this issue, but want to be sure of all the facts before pushing for that :)
<Odd_Bloke> Because it EOLs in a year anyway, and this is a new region, etc.
<smoser> and thus if it doesn't block networking coming up, then best case we ifdown something and then ifup it back up.
<smoser> the new stuff is better, in that we block networking from coming up.
<Odd_Bloke> smoser: Well, looking at it, I'm not sure it _does_ work at all.  It looks like configuration is put in to keys that aren't later read from; but I want to confirm that I'm not just dealing with a weird OpenStack configuration that cloud-init mishandles.
<Odd_Bloke> smoser: (This was totally refactored by trusty, and that works fine)
<smoser> oh.
<Odd_Bloke> So I want a cloud which does network_config "properly" so I can just validate my finding of brokenness. :)
<smoser> Odd_Bloke, and you want that to work with precise
<Odd_Bloke> smoser: Well, once we know where we're at, we can go and talk to the partner about whether it's worth making it work.
<smoser> Odd_Bloke, ok. quickly reading that..
<smoser> i think that what is there is support for config drive v1
<smoser> which is probably not alive in any openstack cloud
<smoser> config drive v2 is what you'd probably see anywhere.
<smoser> v2 came probably 3 years ago at least
<rharper> smoser: did you see my ping re: xenial cloud-image not getting user ubuntu installed, which breaks when we add keys ?
<smoser> yikes.
<smoser> no. i didnt.
<rharper> <rharper> smoser: http://paste.ubuntu.com/15621102/
<rharper> <rharper> ubuntu user not in /etc/passwd, so the ssh key add failed (xenial cloud image from 2016-04-03 )
<rharper> running a synced curtin vmtest should trip it
<smoser> hm..
<rharper> (ie new enough xenial cloud image)
<smoser>  cat /etc/cloud/build.info
<smoser> build_name: server
<smoser> serial: 20160403-141429
<smoser> that works in lxc at least.
<smoser> lxd
<rharper> you're not useradding ubuntu ?
<smoser> (ie, just launched an instance here and there is a ubuntu user)
<rharper> where is that normally added? (default users/groups) ?
<smoser> its part of config (/etc/cloud/cloud.cfg)
<rharper> right. cloud-init does;
<rharper> also, there is another one related to booting an image a second time; http://paste.ubuntu.com/15630937/
<smoser> rharper, so what happened is you failed to get the datasource
<smoser> and you used the fallback datasource
<smoser> which just generates ssh keys
<rharper> heh, *I didn't* fail
<smoser> and there is apparently a bug in that where it creates a directory rather than symlinking
<smoser> :)
<rharper> well, maybe I'm speaking too soon
<rharper> always chance for a PEBKAC
<rharper> we're providing the normal seed via iso
<smoser> well, it is probably failing to find a source.
<smoser> if you can get a console log it will probably mention that
<smoser> fallback datasource BAD THINGS TO COME or something like that
<rharper> cloud-init.log or ?
<smoser> cloud-init.log should have WARN in it
<rharper> yeah
<rharper> hard to know what to look for
<rharper> why would it fail ?
<rharper> to find the iso ?
<rharper> that's after it actually loads and reads seed from /dev/vdc
<rharper> smoser: which datasource should our seed.img in our curtin tests show up under ? (NoCloud) right ?
<smoser> yeah.
<rharper> http://paste.ubuntu.com/15631292/
<smoser> rharper, ok. i'll take a look in 10 minutes; trying to finish something up for matsubara
<rharper> wouldn't the fallback seed still read and use /etc/cloud/cloud.cfg (which the default users get installed?)
<rharper> smoser: thanks
<smoser> yeah.. i'm not sure why the warn about the user.
<rharper> smoser: so, another reason curtin needs to disable cloud-init network;  nic name races;  cloud-init emits the systemd link stuff, it clashed with the udev rules we wrote and now I got a rename5-eth2 in there
<smoser> hm..
<smoser> it shouldn't clash though.
<smoser> as 70-persistent should always be favored
<rharper> well
<rharper> it's not =/
<rharper> the fallback code decided that my eth2 would be a nice eth0 link
<rharper> then I suppose it got a rename event, and raced (and 70 won)
<rharper> but not before ifup had run and setup some link information in the kernel
<smoser> ok. i'll start poking
 * rharper is trying again with networking disabled 
<rharper> that fixed my test run
<rharper> it appears that the disturbance of networking takes cloud-init down some other path that fails to use the local datasource
<smoser> oh. yeah.
<smoser> it does.
<smoser> thats why you're seeing the DataSourceNone
<smoser> because networking never comes up.
<rharper> that seems odd; especially for a nocloud ds
<rharper> but I'm sure I'm missing something
<smoser> rharper, what curtin tests were failing for you ?
<smoser> rharper, ^ i just ran successfully on diglett all tests except for one trusty one (which i think failed due to io load with --processes=-1)
<smoser> ie: http://paste.ubuntu.com/15634350/
<rharper> smoser: it's a new one I'm adding for vlan stuff
<smoser> well, pfft.
<rharper> in particular, it's due to the RTNETLINK stuff
<rharper> smoser: I think the more general concern is that in the case that network fails, cloud-init fails even when nocloud datasource is present
<rharper> and networking is failing due to cloud-init networking (in my case, emitting the systemd link file is the direct cause)
<rharper> so before I add the vlan case which triggers this 100% of the time
<rharper> I'm adding the code to emit network: config: disabled in curtin target
<smoser> well, boot failed because cloud-init wanted networking (as do other things in boot).  they wait until networking is available (network.target)
<smoser> and there was no network.target reached, so it failed.
<smoser> i dont think its related to systemd link file
<smoser>  see /lib/udev/rules.d/80-net-setup-link.rules
<smoser> .link files are only paid attention to if NAME==""
<smoser> and your 70-persistent.... would have set NAME=
<rharper> yes
<rharper> it's related
<rharper> the link file forced a iface rename of eth2 to eth0
<rharper> and it's racing with existing data in the routing table
<smoser> how ?
<rharper> I don't fully understand the sequence
<rharper> but when the vlan config goes to ip set link up on the interface for the vlan
<rharper> it finds an existing route
<rharper> and fails
<rharper> in the routing table, I see things like eth2-rename
<rharper> and if I disable cloud-init networking, the eni is perfectly fine and solid
<rharper> I'll post the branch in a minute
<rharper> if you'd like to debug it more
<rharper> smoser: https://code.launchpad.net/~raharper/curtin/trunk.test-vlan/+merge/291023  ;  if you remove the bit in curtin/net/__init__.py where I now write a config to disable cloud-init networking, the XenialTestNetworkVlan testcase will fail and you can boot the install disk to see what's going on
#cloud-init 2016-04-06
<smoser> Odd_Bloke, https://code.launchpad.net/~daniel-thewatkins/cloud-init/lp1460715/+merge/274897
<smoser> where are we on that ?
<Odd_Bloke> smoser: Urgh, lost track of that.
<Odd_Bloke> smoser: In calls until my EOD, will catch up on it tomorrow morning.
<smoser> Odd_Bloke, ok. well, please take alook tomorrow and we can get it into xenial still i think.
<harlowja> smoser what's the status on network_data.json support?
<harlowja> is that fully supported in 0.7.7?
<smoser> fully for ubuntu at this point.
<harlowja> k
<smoser> and /you can add the centos support right now!
<smoser> :)
<harlowja> :)
<harlowja> i just might!
<harlowja> so there!
<harlowja> lol
#cloud-init 2016-04-07
<Petar_> Hi, I`m need help for cloud-init ?
#cloud-init 2016-04-09
<harmw> smoser: regarding beastie vs freebsd, I'm fine with the latter :)
<goose_> hey guys, is it possible to run cloud-init with a #cloud-config file using "cloud-init --file ~/myfile init" currently it runs but not every definition in the file especially the runcmd definition
#cloud-init 2016-04-10
<jfcastro> hi all! how can I configure cloud-init to using a device as swap?
<jfcastro> I saw that openstack do it but I'm trying to do manually without luck :(
<jfcastro> thanks in advance ;)
#cloud-init 2017-04-03
<smoser> jgrimm, yeah. there are some others. we didn't know that tiny url expired.
<smoser> i dont really know what to do. i guess we could make our own ... just add a 'urls' file or something.
<smoser> and then references into it.
<jgrimm> smoser, yeah.. or just use the long URLs ugly as they may be
<smoser> well, pep8 cries
<smoser> or flake8
<jgrimm> oh, ewh
<smoser> thats why.
<jgrimm> tho that should be able to be worked around, no?
<smoser> maybe.
<smoser> i'd be open to that if you figured out how to do so. make it be ok with long lines that consisted only of a url or something.
<smoser> the other problem with shorteners is that most do not let you update the target.
<jgrimm> smoser, yeah, though that's a smallish issue as you'd just update the docs with the new short url
<jgrimm> but i'll look and see if i can escape it enough to make flake8/pep8 happy and be done with it
<smoser> rharper, or larsks if you would review https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/321709 i'd appreciate it.
<smoser> if not, i'm probably just going to pull l)
<rharper> smoser: boo =(  we're never going to win the battle if we keep merging these;  you're too nice to downstream =)
<smoser> well, larsks +1'd upstream, and some others too. maybe some traction
<smoser> https://review.openstack.org/#/c/400883
<smoser> unfortunately, my latest change seems to FAIL more.
<smoser> oh duh
<smoser> there. https://review.openstack.org/400883/5 should be better now.
<rharper> I wish review.openstack had a .diff or .patch link like github
<smoser> got to be one in those 674 buttons
<rharper> lol
<smoser> really weird
<smoser> i can get a .base64 file. or a .zip file of the diff!
<smoser> but not just the diff
<rharper> haha
<smoser> rharper, so... if you click 'gitweb' link on the commit at https://review.openstack.org/#/c/400883
<smoser> takes you https://review.openstack.org/gitweb?p=openstack/nova.git;a=commitdiff;h=0892c10388b35bb3eb2e2ce5b93b31dae5c9e0a3
<smoser> which has links you were after. ('raw', 'patch' and such)
<rharper> hah, so escape gerrit
<rharper> then sanity
<rharper> smoser: thanks for finding the sanity button
<rharper> even the gitweb view itself is already enough
<smoser> yeah, but the gitweb view can't get you a zip file of the .patch file ;)
<rharper> only a raw patch which HTTP likely will compress for you for free
<rharper> smoser: would it make sense to put 'phy' in VIF_TYPE_PHY ?
<rharper> since the rest are enumerated
<rharper> maybe there is already a VIF_TYPE_PHY instead of 'phy'
<smoser> there is not
<smoser> but yeh, i considered that. but its not really a 'vif type'.
<smoser> so, i'll let it be. i pinged a nova core, hopefully get a +1 out of that.
<rharper> y
<smoser> larsks, what do you know about ovh ?
<smoser> vbah
<smoser> not ovh.
<smoser> ovirt.
<larsks> Not much beyond "manage-y thing for libvirt".
<larsks> But I can probably find someone who knows more about ovirt.
<smoser> ok. so someone told me that they were having trouble with ubuntu images in ovirt
<smoser> and they gave me access to a vm
<smoser> and it seems it's using a config-drive-like thing (openstack config drive)
<smoser> but the config drive only has 'latest/'
<larsks> Would that ever have worked?
<smoser> well, it will.
<smoser>  http://paste.ubuntu.com/24308380/
<smoser> thats the contents.
<larsks> Okay.  And briefly looking at sources/helpers/openstack.py we should find the metadata with just a latest/ directory, right?
<smoser> yeah, it will. but i'd like to file a bug against ovirt/rhev/whatever-is-doing-that that they should not do that.
<smoser> as reading something called 'latest' is kind of never going to really work, and i wish it didnt work :)
<larsks> I can do that.  Is there a specific problem it's causing, or is this just proactive?
<larsks> Also, what version of ovirt is this?
<smoser> larsks, no. its not a specific problem. i should probably put a WARN in or something that says deprecated on reading that.
<smoser> it just seems so not really supported, and they very easily could have made the directory named 2014-11-04 or whatever it was.
<rharper> smoser: in the past, you mentioned to me that latest was just a pointer, that parsers of the metadata support specific versions/releases of openstack metadata;  latest is a moving target versus the various dated releases
<smoser> well, on config drive its a copy. but yeah. its a moving thing.
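The dated-version-over-'latest' preference smoser and rharper describe can be sketched like this; the version strings and function name here are illustrative, not cloud-init's actual list:

```python
import os

# Known metadata format versions this consumer understands, newest first
# (illustrative values -- cloud-init keeps its own supported list).
KNOWN_VERSIONS = ["2013-10-17", "2012-08-10"]

def pick_metadata_version(root):
    """Prefer a dated version directory; fall back to the moving 'latest'."""
    present = set(os.listdir(root))
    for version in KNOWN_VERSIONS:
        if version in present:
            return version
    # 'latest' works, but its contents can change under you between releases
    return "latest" if "latest" in present else None
```

A provider that only ships `latest/` forces every consumer down the fallback path, which is exactly the fragility being complained about.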
#cloud-init 2017-04-04
<Guest71437> Hello all, I want to know if runcmd works with get_file function in HEAT template?
#cloud-init 2017-04-05
<syed_> Hey guys, I am a committer for apache-cloudstack and wanted to contribute a change to the CloudStackDataSource in cloud-init
<syed_> I am signing for a contributor licence agreement and it is asking for a canonical project manager
<syed_> who am I supposed to add there?
<syed_> ok so I put  "Don't know" in there and it let me continue
<jgrimm> syed_, sorry, just noticed your question,  you can put Scott Moser (smoser) or Jon Grimm (jgrimm) if you find out its needed
<jgrimm> \o/ for cloudstack contributions, would be great to have someone tune the codebase up for cloudstack
<smoser> jgrimm, i'll update HACKING.rst as i'm guessing that would have sufficed to answer syed_'s question
<jgrimm> smoser, cool
<syed_> jgrimm: regarding tuning for Cloudstack, what needs to be done?
<jgrimm> syed_, I just don't think anyone's particularly tested it out lately.. code is constantly in development so great to have someone go make sure its all working and open bugs and even fix
<syed_> jgrimm: I see. We've been using it pretty much everyday. I found that it doesn't work for newer CentOS templates so was planning to open a PR
<jgrimm> gotcha.  thanks for jumping in to help
<syed_> I can take a more thorough look with folks here. Happy to contribute!
<jgrimm> Excellent!
<smoser> jgrimm, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/322021
<jgrimm> smoser, I'd just put Scott Moser (commented as such)
<jgrimm> your email and irc nick area already there. its consistent
<smoser> ok.
<syed_> Thanks for the help guys. The documentation is very well written. Opening a PR was a breeze
<syed_> https://code.launchpad.net/~syed1/cloud-init/+git/cloud-init/+merge/322024
<smoser> syed_, only comment.... change your Description to be git style
<smoser> Summary
<smoser> <blank line>
<smoser> Body Paragraph
<jgrimm> smoser, and i think closes this bug https://bugs.launchpad.net/cloud-init/+bug/1368747
<smoser> syed_, ^ you agree with that ?
<smoser> if so, please put a LP: #1368747 in the last line of commit message (or description)
<madorn> hello
<madorn> is anyone here
<nacc> madorn: yes
<syed_> smoser:  sorry was out for lunch. Yes I'll change that
<syed_> smoser: Done.
#cloud-init 2017-04-06
<smoser> rharper, fyi, shadowusers really sucks.
<smoser> s/shadowusers/extrausers/
<smoser>  http://pad.lv/1679765
<smoser> bug 1679777
<smoser> bug 1679765
<smoser> i thought i had amup here.
<rharper> the first bug is a coreutils issue; I believe the --extrausers was introduced in some of those tools; I guess not all of them
<rharper> the second issue is related to cloud-init's concept of 'system-user';   that does need to be addressed;
<smoser> generally agree.
<smoser> the first bug is just a half-done implementation
<smoser> but so is the second.
<smoser> there are well defined ways that you can iterate users or get information about users.
<smoser> python uses such ways
<smoser> and they dont work with 'extrausers'
<rharper> it may end up being the same bug 'incomplete extrausers implementation'
<rharper> I suspect it's broken without cloud-init; ie, if you snap create-user with an assertion, the same issues remain w.r.t user manipulation/configuration/enumeration
<rharper> we should likely sync with foundations
<smoser> well, yes. cloud-init *does* have the ability to know that for users it created.
<smoser> but "what is the home directory for bob". should not require "is bob an extra user?"
<rharper> correct, which to me is a bug in the getent failure;  I think if --extrausers is available in the tools, the the default search should try both paths (/etc/passwd , /var/lib/extrausers/passwd)
<rharper> etc
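The dual-path lookup rharper is arguing for (check /etc/passwd, then the extrausers database) is straightforward to sketch; the paths are parameterized here so the sketch is testable, and the function name is hypothetical:

```python
def find_user(name, paths=("/etc/passwd", "/var/lib/extrausers/passwd")):
    """Look up a user by name across both passwd databases (sketch)."""
    fields = ("name", "passwd", "uid", "gid", "gecos", "home", "shell")
    for path in paths:
        try:
            with open(path) as fp:
                for line in fp:
                    parts = line.rstrip("\n").split(":")
                    if parts and parts[0] == name:
                        return dict(zip(fields, parts))
        except FileNotFoundError:
            continue  # e.g. extrausers db absent on a classic system
    return None
```

This is the behaviour they want from getent/the tools themselves, so callers like "what is the home directory for bob" never need to know which database bob lives in.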
<harlowja> smoser hey, for the reporting stuff in cloud-init, what's a common usage of it?
<smoser> to do the bidding of harlowja
<harlowja> some folks at godaddy are trying to gather more insight into how long modules in cloud-init run for and ...
<smoser> but it was added for maas
<harlowja> and this reporting stuff looks designed for doing that (sort of)
<smoser> maas wanted status
<harlowja> ya, makes sense
<harlowja> sounds similar
<smoser> but yeah. the reporting stuff is doing that too
<harlowja> cool beans
<smoser> and rharper has stuff that reads a journal
<harlowja> ?
<harlowja> out loud?
<smoser> and timestamps things
<harlowja> cool cool
<smoser> it reads harlowja's journal
<harlowja> my diaryyyyy
<harlowja> noooo
<smoser> "Dear diary, today I met a new friend"
<harlowja> his name was bob
<smoser> "I think i love cloud-init"
<harlowja> lol
<harlowja> the things i write about smoser ...
<harlowja> don't read those
<harlowja> lol
<smoser> rharper, where do we see cloud-init annotate ?
<rharper> https://lists.ubuntu.com/archives/ubuntu-server/2016-October/007419.html
<rharper> https://code.launchpad.net/~raharper/+git/cloudinit-analyze
<harlowja> hmmm
<harlowja> interesting
<harlowja> cool, thx
<harlowja> forwarding some of this
<rharper> also, it'd be highly useful if we wrapped each subp with a reporting event
<smoser> not the journal entries i hope
<rharper> that'd give us per exec data
<rharper> smoser: lol
<harlowja> whats this 'ubuntu-server' ml
<harlowja> :-P
<rharper> I also sent it to the cloud-init ML
<rharper> which is hosted on launchpad
<harlowja> oh
 * harlowja shuts up now
<harlowja> lol
<erick3k> hi
<erick3k> can someone help me debug a timeout
<erick3k> please
<erick3k> https://paste.fedoraproject.org/paste/VifD~R6n8t78gj-m-Iv4CV5M1UNdIGYhyRLivL9gydE=
<smoser> erick3k, at very least you'll need to paste /var/log/cloud-init.log and also journalctl output
<erick3k> ty
<erick3k> https://paste.fedoraproject.org/paste/7rV6y2UBLmcJ77KPWP4PjF5M1UNdIGYhyRLivL9gydE=/raw
<erick3k> that is /var/log/cloud-init.log (although there is nothing other than a warning that config drive was unplugged)
<erick3k> and journalctl output says invalid argument
<erick3k> can someone help me when you get a chance, https://paste.fedoraproject.org/paste/6-WoDBw8~Pw9QtOd2KV~lF5M1UNdIGYhyRLivL9gydE=
<erick3k> getting big timeouts
<nacc> erick3k: which line?
<erick3k> nacc i dont really care about the warnings if they are not causing the timeout nor do i care that it can't print to the console but i am guessing this is the big timeout
<erick3k> Cloud-init v. 0.7.5 finished at Thu, 06 Apr 2017 22:16:28 +0000. Datasource DataSourceConfigDrive [local,ver=2][source=/dev/sr1].  Up 311.33 seconds
<erick3k> that is 5 mins
<erick3k> i guess?
<nacc> are these from multiple installs? why is cloud-init 'finished' more than once
<nacc> also it's a bit odd that the timezone appears to jump in the logs?
<nacc> 2017-04-06 18:16:28,208
<nacc> Thu, 06 Apr 2017 22:16:28 +0000.
<nacc> lines 120 and 121
<erick3k> ummm maybe i didn't clear the log
<erick3k> yes its weird and i can't find why those huge timeouts
<nacc> starts at Thu, 06 Apr 2017 22:11:22
<erick3k> like the vm takes after is up, like 20 mins before it reboots (runcmd)
<nacc> ends at Thu, 06 Apr 2017 22:16:28 +0000
<nacc> erick3k: which log is this?
<nacc> 2017-04-06 18:11:26,935 - cloud-init[WARNING]: Stdout, stderr changing to (>> /var/log/cloud-init-output.log, >> /var/log/cloud-init-output.log)
<erick3k> yes
<nacc> Cloud-init v. 0.7.5 running 'init' at Thu, 06 Apr 2017 22:16:21 +0000. Up 305.12 seconds.
<nacc> there is the 5 minute gap
<erick3k> changes cloud-init.log to cloud-init-output.log
<nacc> (i think)
<erick3k> not sure why, am using centos 7 cloud image
<erick3k> and why that gap? how can i tune that timeout thingy
<erick3k> any ideas nacc?
<nacc> erick3k: so that output is from cloud-init.log, erick3k ?
<erick3k> there is nothing on cloud-init.log
<nacc> erick3k: i'm not sure why you think it's a timeout?
<erick3k> well what could it be doing for 15 minutes?
<nacc> erick3k: there is a delay, but that doesn't necessarily equate to a timeout, unless you have some other data?
<nacc> 5 minutes, i thought?
<erick3k> na
<erick3k> once the machine is up and ready for login
<erick3k> last command in runcmd: -reboot is run like 15 - 20 minutes later
<nacc> erick3k: but i can only go off what's in the logs
<nacc> erick3k: you might need smoser or rharper for this
<erick3k> yes he tried but busy
<erick3k> thats all there is on logs, cloud-init.log is changed to cloud-init-output.log
<erick3k> so is same file
<nacc> erick3k: https://lists.ubuntu.com/archives/ubuntu-server/2016-October/007419.html
<nacc> erick3k: i wonder if that profiler would help
<erick3k> he's saying 13% improvement
<nacc> erick3k: by making changes based upon profiling
<nacc> erick3k: my point was the profiler might tell you where all your time is being spent, if it's cloud-init
<erick3k> i have the machines in raid 0 with two NVME intel latest pci ssd, so you can imagine how fast things are
<erick3k> it boots in like 3 seconds so 15 minutes is just beyond ridiculous amount of time to have an instance ready
<erick3k> you think it could be a timeout on the os?
<erick3k> *OS
<nacc> erick3k: i really don't know -- was hoping it'd be something obvious, but i don't know enough about cloud-init or cento
<nacc> *centos
<erick3k> kind of same happens in ubuntu 14
<erick3k> takes a long time
<erick3k> now that you tell me might be OS timeout for network gonna check that first
<erick3k> is there a way to have the network not even look for dhcp while turning on until cloudinit injects the network?
<nacc> erick3k: i assume that would be centos specific
<erick3k> all
<erick3k> ubuntu / centos
<erick3k> i seal the vm with this in interfaces
<erick3k> DEVICE=eth0
<erick3k> TYPE=Ethernet
<erick3k> BOOTPROTO=dhcp
<erick3k> ONBOOT=yes
<erick3k> what if i delete that?
<erick3k> can cloud-init still inject?
<erick3k> YES SR
<erick3k> THAT WAS THE TIMEOUT
<erick3k> FINALLY
<erick3k> hehe
<erick3k> now instance ready in 6 seconds
<erick3k> thats how its done :)
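The fix erick3k lands on amounts to sealing the image with no DHCP stanza on eth0, so the OS never sits through a DHCP timeout before cloud-init injects the real network config. A sketch of the sealed file, written to a temp dir so it is runnable anywhere (on a real CentOS image the path would be /etc/sysconfig/network-scripts/ifcfg-eth0):

```python
# Seal the image so eth0 does not attempt DHCP at boot; cloud-init then
# writes the real network config itself. A temp dir stands in for
# /etc/sysconfig/network-scripts on a real image.
import os
import tempfile

sealed = {
    "DEVICE": "eth0",
    "TYPE": "Ethernet",
    "BOOTPROTO": "none",   # was "dhcp" -- the source of the boot delay
    "ONBOOT": "yes",
}
path = os.path.join(tempfile.mkdtemp(), "ifcfg-eth0")
with open(path, "w") as f:
    f.write("".join("%s=%s\n" % kv for kv in sealed.items()))

print(open(path).read())
```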
#cloud-init 2017-04-07
<larsks> smoser: when you're around, I have a question for you re: vendor-data.  I am having a hard time reconciling the information in http://cloudinit.readthedocs.io/en/latest/topics/vendordata.html with the discussion in https://bugs.launchpad.net/cloud-init/+bug/1469260. They seem to describe different syntaxes for the vendor-data blob.
#cloud-init 2018-04-02
<blackboxsw> look out, It's that time again.
<blackboxsw> #startmeeting bi-weekly status meeting
<meetingology> Meeting started Mon Apr  2 16:05:50 2018 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<blackboxsw> Welcome to the post-Easter episode of cloud-init's status meeting 🐰
<blackboxsw> Today's meeting will probably be light as we are fairly light on attendees given various holiday schedules
<rharper> o/
<rharper> nice rabbit ears
<blackboxsw> heya! As always, we'll go through recent changes, in progress work and ~30 minutes of office hours
<blackboxsw> feel free to interject and ask questions at any time.
<blackboxsw> #topic Recent Changes
<blackboxsw> Here's a brief run down of what we have committed to master in the last couple weeks
<blackboxsw> - Support for setting hostname from metadata prior to network bringup.
<blackboxsw>   This fixes vsphere multi-vm deployments all coming up with the same
<blackboxsw>   'ubuntu' hostname. [LP: #1746455](http://pad.lv/1746455)
<blackboxsw> - Support initramfs iscsi root so network devices aren't disconnected
<blackboxsw>   before shutdown
<blackboxsw> - Added cloud-config module `cc_snap` which enables loading snap
<blackboxsw>   assertions, configuring snapd and installing snap packages on Ubuntu.
<ubot5> Launchpad bug 1746455 in cloud-init "cloud-init vSphere cloud provider DHCP unique hostname issue" [High,Fix released]
<blackboxsw>   Deprecated `cc_snappy` and `cc_snap_config` modules.
<blackboxsw> - Make salt minion work on FreeBSD (Dominic Schlegel)
<blackboxsw>   [LP:#1721503](http://pad.lv/1721503)
<blackboxsw> - Simplify compound conditionals (Rémy Léone)
<ubot5> Launchpad bug 1721503 in cloud-init "salt module not able to be used on FreeBSD" [Medium,Fix released]
<blackboxsw> - Change some list creation and population to literals (Rémy Léone)
<blackboxsw> - Add puppet 4 support configurable in `cc_puppet` module (Romanos
<blackboxsw>   Skiadas)
<blackboxsw> - Fix datasource Azure `get_hostname` function for hostname bounce
<blackboxsw>   (Douglas Jordan) [LP:#1754495](http://pad.lv/1754495)
<blackboxsw> - OpenNebula datasource now uses network config v2 to support IPv6
<ubot5> Launchpad bug 1755965 in cloud-init (Ubuntu) "duplicate for #1754495 util.subp regression: no longer accept commands as string" [Critical,Fix released]
<blackboxsw>   config (Akihiko Ota)
<blackboxsw> - Add Hetzner Cloud datasource support (Markus Schade)
<blackboxsw> The highlights of this work that will affect various clouds:   hostname setting before network bringup, in cloud-init's init-local stage.
<blackboxsw> so if your cloud's metadata provides hostname information (per your instance creation) that hostname gets set before any potential dhcp discovery on the instance. This is a big win for Azure and may allow us to avoid/deprecate some of the hostname_bounce functionality
<blackboxsw> which was baked in to re-dhcp in order to publish updated hostname information to DDNS
<blackboxsw> We also have landed support for two new clouds: Hetzner Cloud and IBMCloud. A big thanks to Markus Schade for the Hetzner work there and smoser for the IBMCloud datasource
<blackboxsw> do3meli (Dominic Schlegel) has also been on a blitz fixing and updating a lot of FreeBSD support in cloud-init tip so thank you sir for that work as well.
<blackboxsw> We've just also landed some zfs resize support by rharper as well that should be making its way into your friendly neighborhood Ubuntu Bionic series in a cloud near you
<blackboxsw> anything else I'm missing on rharper or powersj ?
<blackboxsw> ahh hold the phone
<rharper> blackboxsw: well, not my zfs-resize
<rharper> but I do have some fixes for it
<rharper>  https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+ref/fix/cc_resizefs_on_zfs_root
<blackboxsw> We officially released cloud-init 18.2  in master. There is an 18.2 tag in the repo for folks wanting to take an early cut of it.
<rharper> our ci-test backend normally runs with zfs, it's not right now so it missed a couple edge cases that we need to handle
<blackboxsw> Per cloud-init 18.2 here is an email sent to the cloud-init mailing list describing the details: https://lists.launchpad.net/cloud-init/msg00145.html
<blackboxsw> #link https://lists.launchpad.net/cloud-init/msg00145.html
<blackboxsw> #link https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+ref/fix/cc_resizefs_on_zfs_root
<blackboxsw> #topic In-progress Development
<blackboxsw> The upstream team has released 18.2 to Bionic as of last week, and we started an Ubuntu SRU process into Xenial and Artful.
<blackboxsw> We expect the 18.2 to be present in Xenial and Artful within 2 weeks in your cloud, so if you are waiting on a feature, it won't be very long.
<blackboxsw> Also in-progress are some of rharper's zfs fixes, and some exception callback cleanup that will affect Azure, EC2, OpenStack and  Scaleway clouds.
<blackboxsw> #link  https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+ref/fix/cc_resizefs_on_zfs_root
<blackboxsw> #link https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342007
<blackboxsw> And we are doing our part to finally purge net-tools dependencies from cloud-init (in favor of iproute2)
<blackboxsw> #link https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342428
<rharper> blackboxsw: I responded to your ip -6 route q from last week, did you see that ?
<blackboxsw> rharper: haven't yet, but I'll grab those comments today for sure (I think I missed some of your earlier review comments)
<rharper> ok
<rharper> the tl;dr for that one is that you want this: ip -6 route list table all
<blackboxsw> ahh excellent, I was wondering why we were missing content for local routes etc
<rharper> right
<blackboxsw> thanks
<rharper> np
<blackboxsw> also, on our continuous integration front , powersj  has put up a branch that I'd like to see us land with some ssh improvements
<blackboxsw> #link https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/342010
<powersj> :) yep
<blackboxsw> any other in-progress work worth noting?
<blackboxsw> Interested parties can always track our public trello board for a glimpse of what we are working on
<blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
<blackboxsw> #topic Office Hours (next ~30 minutes)
<blackboxsw> We'll all have eyes glued to the screen for the next 30 minutes for rants, feature discussion and bug work.
<blackboxsw> With that, the floor is open for any topics. Thanks for tuning in.
<blackboxsw> My day today will be Ubuntu SRU(stable release update)-related, so I'm getting on rharper's zfs branch now and then running a couple manual tests on ec2/azure/openstack
<rharper> +1
<rharper> oh, the ntp-spec update is ready for review and testing
<rharper> https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438
<blackboxsw> ahh +1 we want that in too
<blackboxsw> #link https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438
<blackboxsw> Alrighty, happy spring break all.
<blackboxsw> Next meeting will be two weeks from today.
<blackboxsw> powersj: rharper 4/16 look good for folks?
<powersj> +1 from me
* blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 4/16 16:00 UTC | cloud-init 18.2 released (03/28/2018)
<blackboxsw> #endmeeting
<meetingology> Meeting ended Mon Apr  2 17:03:58 2018 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2018/cloud-init.2018-04-02-16.05.moin.txt
<blackboxsw> rharper: done with https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/342467
<blackboxsw> minor nit
<rharper> k
<rharper> blackboxsw: added
<rharper> pushed update
<rharper> blackboxsw: ci says yes, are you going to run the lander or should I for the merge ?
<blackboxsw> rharper: yep, yep, I'll run the lander.  I was just getting through your ntp one
<rharper> cool
<blackboxsw> couple doc nits (on all schema dedent) I'll just post that now. How do I ascertain on timesyncd that the proper settings were honored, systemctl status systemd-timesyncd is not really too helpful
<blackboxsw> default timesync service detection looks to work fine on ubuntu
<blackboxsw> just going though a couple of tests
<blackboxsw> landed https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/342467
<rharper> blackboxsw: we *can't* really
<rharper> so the ci-test checks that we wrote the conf file correctly
<blackboxsw> hrm, also when I have ntp installed, should I expect timesyncd to also report that it is configured and running?
<rharper> via journalctl -u systemd-timesyncd.service, if you know the ips, it shows you stuff, but there is no *direct* way to confirm configuration
<blackboxsw>    Active: active (running) since Mon 2018-04-02 19:34:37 UTC; 2min 15s ago
<rharper> so, in containers, no
<rharper> ntp is special in that it just continues even if it can't modify host clock (you'll see a failure due to missing CAP_SYS_TIME)
<rharper> but, timesyncd and chrony both have a ConditionContainer!
<rharper> which prevents them from actually starting in a container
<rharper>  % journalctl -o short-precise -u systemd-timesyncd
<rharper> -- Logs begin at Thu 2018-03-15 18:44:40 CDT, end at Mon 2018-04-02 14:38:12 CDT. --
<rharper> Mar 20 16:07:20.165833 neipa systemd-timesyncd[1962]: Timed out waiting for reply from [2001:67c:1560:8003::c8]:123 (ntp.ubuntu.com).
<blackboxsw> I'm on ec2 instances which is xen right.
<rharper> ah, that should work fine
<blackboxsw> https://www.irccloud.com/pastebin/rZTvr122/
<rharper> yeah, vms are OK
<rharper> containers which share the same kernel (and time) don't need to sync themselves
<blackboxsw> yeah, I just forgot if there was a conflict issue w/ timesyncd or ntp where one or the other realizes that another client is running and falls over
<rharper> vms maintain their own clock offset due to how time is kept (in registers)
<rharper> so, xenial only, timesyncd won't stop if ntp or any other client is installed
<blackboxsw> ok yeah I'm on xenial and seeing that
<rharper> in bionic, there is a conflict in the timesyncd config which forces timesyncd to stop if ntp/chrony is installed
<blackboxsw> thanks for the context. had forgotten
<blackboxsw> rharper: another round on ntp https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438
<blackboxsw> I'm almost there.
<blackboxsw> but I'll probably have a few more comments
<rharper> sure
<rharper> blackboxsw: ok, I already pushed the rebase to drop the zfs, and I *think* all of the doc changes;  working on the rest of your comments
<rharper> blackboxsw: maybe you can help, the decode_text() is subtly different than decode_binary(); note we're testing if the input is a six.binary_type and if so decode, whereas the other case tests if it's a string-type and converts;
<blackboxsw> s/check_exec/check_exe/ throughout
<blackboxsw> oops, I may have misread those functions. looking again
<rharper> blackboxsw: generally this was to work around the fact that if we util.load_file() on our chrony template, we get a UTF-8 char that's not decodeable without specifying the encoding value, py27 needed this before we call the jinja template render code
<blackboxsw> hrm, not specifically hitting the problem just on load_file but you are talking during template render we hit this issue?
<blackboxsw> load_file in py27 on all template files worked for me without the additional decode_text. and I see you folded that into the detect_template call; will check
<blackboxsw> rharper: dpb1 I know I'm not going to be able to wrap up the ntp spec branch reviews today and I'd like to get both that and powersj branch in before we re-upload to bionic and start a refresh on the SRU into xenial and artful. So, can we wait to post a bionic upload on zfs fixes until tomorrow?
<blackboxsw> rharper: I'm looking to you for "criticality" on zfs resize how much do we want to do two uploads to bionic today and tomorrow?
 * dpb1 parses
<rharper> blackboxsw: tox -e py27 blows up without the decode_text wrapper when rendering from the "real" templates in tree
<dpb1> blackboxsw: rharper, I mean, we should put all those in the upload to bionic, why waste cycles?
<dpb1> tomorrow sounds like a better target
<blackboxsw> right, I'm for waiting on uploads to bionic until we get everything we want
<dpb1> on one hand, there is no reason to wait
<dpb1> it's like "committing" your source code
<rharper> right, push what we want (or can) to master today, and then prepare an upload tomorrow
<dpb1> however
<dpb1> in this case, we don't have the upload rights at work
<blackboxsw> right, committing the source code is easy, but upload rights being the 'hard' at the moment
<dpb1> so, let's just be a bit judicious
<blackboxsw> :)
<blackboxsw> ok thx
<blackboxsw> will continue on ntp spec branch review/assessment
<dpb1> ok
<dpb1> yes, lining it up for tomorrow: +1
<blackboxsw> instead of changing gears for upload
<blackboxsw> tomorrow it is
<blackboxsw> ok rharper I think I'm done on ntp config comments. a couple more timeconsuming changes requested for ntp config validation.
<rharper> sure
#cloud-init 2018-04-03
<Beret> blackboxsw, seen this - https://www.dropbox.com/s/tqao6tbumfb0vbh/Screenshot%202018-04-03%2010.47.30.png?dl=0 before?
<dpb1> Beret: if this is NUCs, typically I see that when disks start going bad?
<Beret> it is
<Beret> I figured
<Beret> ok
<blackboxsw> I haven't yet. was looking over other bugs saw andreas hit a bug related to that failure path, but it was zfs
<hexorg> Hello all
<hexorg> I can't seem to find a particular answer for cloud init - can I ask it here?
<hexorg> Is there a way to tell write_files module to wait until the users module is finished?
<blackboxsw> hexorg: ask away questions/discussion is always welcome in this channel
<blackboxsw> if someone doesn't know now, maybe others will be able later
<hexorg> thanks :)
<hexorg> I'm trying to write_files into a newly created user folder, but cloud-init seems to run write_files before users
<hexorg> as a result, write files fails with no such directory
<blackboxsw> hexorg, generally,  ordering of module sequence  is not user- (#cloud-config) configurable, but the order in which modules are run is defined in /etc/cloud/cloud.cfg for each stage cloud_init_modules(run at init stage), cloud_config_modules: run in modules-config stage, cloud_final_modules: run last
<blackboxsw> let me see if I can answer the specific question or if I have to pass.
<blkadder> Yeah I have run into that... Hacky way of dealing with it was to to write files to a temp directory then move them into place after user is created.
<hexorg> Yeah ok. Just making sure I'm not missing some more direct way
<blackboxsw> hexorg: right, write-files is run in init stage which happens in the init-network stage of cloud-init boot per http://cloudinit.readthedocs.io/en/latest/topics/boot.html. And  - users-groups
<blackboxsw>  lives by default at the end of that list because it might depend on files written by write_files.
<blackboxsw> you might also be able to run your user creation logic in runcmd which runs in the cloud-init final stage (after both write_files and user creation modules)
<blackboxsw> which I think is what blkadder is referring to
<hexorg> Understandable. Thanks!
<blkadder> blackboxsw, Yep.
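blkadder's workaround can be simulated in a few lines: write_files targets a staging directory that always exists, then a late runcmd step (which runs in the final stage, after users-groups) moves the file into the new user's home. Temp dirs stand in for the real paths here, and "bob" and both paths are illustrative, not from the log.

```python
# Simulate the staging workaround: write_files drops the payload in a
# staging dir during the init stage (home does not exist yet), and a
# runcmd step in the final stage moves it into the freshly created home.
import os
import shutil
import tempfile

stage = tempfile.mkdtemp()                    # stand-in for e.g. /var/tmp/stage
home = os.path.join(tempfile.mkdtemp(), "bob")  # stand-in for /home/bob

# init stage: write_files runs; the user's home does not exist yet
staged = os.path.join(stage, "app.conf")
with open(staged, "w") as f:
    f.write("example payload\n")

# final stage: users-groups has created the home by now; runcmd moves it
os.makedirs(home)
shutil.move(staged, os.path.join(home, "app.conf"))
print(open(os.path.join(home, "app.conf")).read().strip())
```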
<rharper> blackboxsw: almost done with your fixes
<blackboxsw> +1 rharper, I've got to get my branch in shape for dropping ifconfig
 * blackboxsw tests the last branch for SRU now
<blackboxsw> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342007
<rharper> platform: lxd encountered error: 'Operation' object has no attribute 'description'
<rharper> blackboxsw: powersj thoughts?
<rharper> oh, I think that's the parsing of the cloud-init result/status json files
<rharper> that likely happens if it's not yet booted
<blackboxsw> meh rharper I'm going to reject exception_cb  branch, the raising of exceptions in principle makes sense, but the logic checking for exc.code doesn't seem to behave as expected on 404s (the httpcode isn't attached to the UrlError raised)
<rharper> ok
<blackboxsw> so, I think more rework is needed there, and we'll need discussion on it
<rharper> I read that code multiple times, but really needed either more unittests or integration tests to validate exactly what behaviors we want
<rharper> I think we should have a series of unittests that cover the various expected behavior paths we need, and then run this against that
<blackboxsw> yep, and the existing code I believe actually doesn't work right
<blackboxsw> even before the rewrite. or the previous rewrite
<blackboxsw> not a critical issue (as it'll ultimately retry more than it is supposed to) which costs time, not functionality
<blackboxsw> but yeah something smells a bit there
<blackboxsw> yep confirmed, that exception_cb refactor needs work, I confirmed that even the implementation smoser took was inconsistent. after we SRU I'll put up a branch which adds unit test coverage to examine proper exception raising behavior from readurl.
<blackboxsw> we can pay this "risk" cost on next SRU.
<blackboxsw> in terms of having to retest on the affected clouds
<blackboxsw> to make sure there isn't a regression
<blackboxsw> ok I'm putting up bionic merge proposal now
<blackboxsw> rharper: here's the proposal for syncing tip to bionic https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342605
<blackboxsw> rharper: I'm putting together the SRU for xenial and artful now (should have the same content bump)
<blackboxsw> xenial SRU: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342606
<blackboxsw> rharper: artful SRU too https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342608
<rharper> blackboxsw: ok, reviewing
<blackboxsw> there are the three release candidates
<blackboxsw> thanks
 * blackboxsw ran the new-upstream-snapshot from qa-scripts https://github.com/cloud-init/qa-scripts/blob/master/scripts/new-upstream-snapshot
<blackboxsw> meh my comments on smoser's branch are as follows rharper  https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/342007
<bjonnh> Hi, I'm trying to disable ipv6 in an lxd container (ubuntu), I got cloud-init to write the sysctl for it
<blackboxsw> sorry for the thrashing. only one minor diff is required in his implementation, but I'd feel better if we got some good unit test coverage on the function
<bjonnh> but I can't get it to restart systemd
<dpb1> bjonnh: does lxd have a setting to do that?
<bjonnh> dpb1: NO…
<bjonnh> and they say "just set the sysctl"
<bjonnh> ipv6 is disabled on host
<nacc_> bjonnh: restart systemd?
<bjonnh> but containers still get an ipv6 link-local
<bjonnh> nacc_: sorry restart a systemd service
<nacc_> bjonnh: ah ok :)
<blackboxsw> https://github.com/lxc/lxd/issues/3333
<bjonnh> oh
<bjonnh> - [systemctl, restart, systemd-sysctl]
<bjonnh> I think I had to put "" around systemd-sysctl
<blackboxsw> see alberto's comment about disabling ipv6 in containers
<blackboxsw> if that helps
<blackboxsw> lxc network set lxdbr0 ipv6.address none
<bjonnh> is it possible to start something really early in cloud-init with my user conf?
<bjonnh> blackboxsw: I can't do that becauseÂ I'm using my own bridge
<dpb1> yes, what blackboxsw said
<bjonnh> (that doesn't have ipv6…)
<bjonnh> so lxc complains that it cannot manage my device
<dpb1> bjonnh: you want this globally or per container?
<bjonnh> globally
<bjonnh> I don't have anything ipv6 here
<rharper> blackboxsw: approved push to ubuntu/devel, I got the same you did
<dpb1> bjonnh: so, can't you just reconfigure your bridge to not have anything ipv6?
<bjonnh> dpb1: that's my point… It doesn'…
<blackboxsw> rharper: will push to ubuntu/devel and see if we can get an upload there.
<dpb1> bjonnh: sorry, don't follow that one
<rharper> I don't think you can disable the kernel ipv6 setting from within an unpriv container
<bjonnh> the host has ipv6 disabled
<rharper> with what setting? are you ignore RA s ?
<bjonnh> net.ipv6.conf.vlanbr2.disable_ipv6 = 1
<bjonnh> should I do
<bjonnh> net.ipv6.conf.vlanbr2.accept_ra = 0
<bjonnh> too?
<rharper> yes
<bjonnh> oh
<rharper> that will prevent any RAs from showing up on your interfaces
 * dpb1 hesitates to ask why the need to disable ipv6 on this host :)
<rharper> When this value is changed from 0 to 1 (IPv6 is being disabled),
<rharper> 	it will dynamically delete all address on the given interface.
<rharper> I suspect that at the time it's set, it drops addrs, but if you accept RAs then new ones can come in
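Taken together, the two settings rharper and bjonnh arrive at look like this as a sysctl.d-style fragment (interface name vlanbr2 as in the log). A temp file stands in for the real path, which would be something like the assumed /etc/sysctl.d/90-no-ipv6.conf, applied with `sysctl --system` or by restarting systemd-sysctl.

```python
# The two sysctls discussed above as a sysctl.d-style fragment:
# disable_ipv6 drops existing addresses on the interface, and
# accept_ra=0 keeps router advertisements from adding new ones.
import os
import tempfile

fragment = (
    "net.ipv6.conf.vlanbr2.disable_ipv6 = 1\n"
    "net.ipv6.conf.vlanbr2.accept_ra = 0\n"
)
path = os.path.join(tempfile.mkdtemp(), "90-no-ipv6.conf")
with open(path, "w") as f:
    f.write(fragment)
print(open(path).read())
```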
<bjonnh> dpb1: because I have nothing ipv6 and the update of packages throws me:  Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1562::16). - connect (101: Network is unreachable) [IP: 2001:67c:1562::16 80]
<bjonnh> and waits for a second then switch to the ipv6
<bjonnh> ipv4 sorry
<dpb1> bjonnh: yet the router is advertising it?
<bjonnh> maybe dnsmasq is doing something on its side
<bjonnh> …
<dpb1> that wouldn't totally shock me :/
<blackboxsw> powersj: pylxd issue we've seen before? https://pastebin.ubuntu.com/p/VZHHpHD4WN/
<blackboxsw> per ci build https://jenkins.ubuntu.com/server/job/cloud-init-ci/968/console
<powersj> blackboxsw: yes usually that's when pylxd is out of sync with lxd
<powersj> :\ so hopefully that isn't the case
<blackboxsw> possible given the release push on friday I suppose
<rharper> blackboxsw: what's the xenial and artful sru bug numbers ?
<rharper> new-upstream-snapshot said something to me about that
<rharper> hrm, your xenial branch didn't have the changelog update ?
<rharper> nor the artful one
<blackboxsw> rharper: the SRU bug is https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1759406
<ubot5`> Ubuntu bug 1759406 in cloud-init (Ubuntu) "sru cloud-init (17.2-35-gf576b2a2-0ubuntu1~16.04.1 update to 18.2-0ubuntu1~16.04.1)" [Medium,Confirmed]
<blackboxsw> hrm, checking xenial
<rharper> blackboxsw: shouldn't we see a changelog diff between your branch and origin/ubuntu/xenial ?
<rharper> like we did for ubuntu/devel ?
<blackboxsw> rharper: line 100 of the visual diff at https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342608
 * rharper is blind
<rharper> yes
<rharper> confirmed blind
 * rharper continues
<bjonnh> |  eth0  | True | fe80::216:3eff:fed9:8c65/64 |  still gets a link-local
<bjonnh> router has ipv6 fully disabled
<blackboxsw> the changelog diff between xenial and devel debian/changelogs should only have minor diffs in package version numbers & maybe the new-upstream-snapshot which hasn't been removed from bionic
 * blackboxsw is still fumbling around with why pylxd is complaining (as I thought we pinned it in tox integration-requirements.txt)
<blackboxsw> bjonnh: hrm, I'm not quite sure on the ipv6 container at the moment, maybe somebody else has a clue there.
<powersj> blackboxsw: pylxd probably didn't change, but lxd could have
<bjonnh> so I'm able to disable it but this happens after the package upgrade
<bjonnh> that cloud-init does
<powersj> blackboxsw: ah yes... lxd 3.0 is now installed
<powersj> that happened yesterday
<powersj> blackboxsw: https://github.com/lxc/pylxd/issues/284
<dpb1> bjonnh: honestly, I'd ask about this in #lxcontainers.  it's weird to me that you are having to try to workaround this in cloud-init on each instance
<dpb1> bjonnh: you should do it on the host
<rharper> blackboxsw: xenial is the same, though you put xenial-proposed in the release ? is that what we normally do ?
<bjonnh> well I'm using a bridge
<bjonnh> so it is an instance by instance problem
<rharper> blackboxsw: same for artful
<bjonnh> (it is not the lxd bridge, it is a bridge over a vlan on its specific subnet)
<rharper> bjonnh: and your link-local ipv6 interrupts apt ?
<bjonnh> inside the instances yes
<bjonnh> it slows them down
<bjonnh> it does it only during startup
<bjonnh> after that I'm able to set the required sysctls
<bjonnh> so it stop allowing ipv6
<rharper> I'm surprised, I've no ipv6 available here but I wouldn't think the ipv6 addr for the archive is reachable via the link-local, wouldn't think it would try
<bjonnh> me neither…
<bjonnh> I've never seen that…
<rharper> nor I
<rharper> so something is special about this setup I think
<blackboxsw> rharper: for each release xenial and artful we should run dch --release -D artful-proposed or xenial-proposed for the debian/changelog to match the former released stream in debian/changelog
<blackboxsw> at least as the final step prior to the upload
<rharper> blackboxsw: ok, I wasn't sure
<blackboxsw> so I have always dch --release -D artful-proposed     or xenial-proposed instead of UNRELEASED
<rharper> I do the dch release
<rharper> it was whether it should have -proposed or not
<rharper> it seems (to me) strange to put a pocket value into the changelog when it's going to get copied over into the archive
<blackboxsw> yeah we decided in  changelog we want to leave it all as -proposed to indicate when we started performing SRUs for a given stream
<rharper> but maybe there's some backend magic that fiddles that value in the change log
<rharper> ok
<blackboxsw> because any changelog entries before the first SRU would have the base 'xenial|artful'
<rharper> yeah, makes sense
<blackboxsw> yeah nothing seems to fiddle with it post-release: https://pastebin.ubuntu.com/p/b2639pyZtx/  that's from apt-get changelog on xenial
<blackboxsw> xenial-proposed still listed in there
<blackboxsw> was wondering whether it'd get scrubbed
<rharper> hehe
 * rharper relocates back home
<blackboxsw> yeah that pylxd traceback started happening between Apr 2, 2018 8:27 PM and  Apr 3, 2018 7:11 PM
<blackboxsw> and looks like it affects rharper's ntp branch too
<blackboxsw> ok so I feel good this isn't related to the branches I put up against ubuntu/devel|artful|xenial
<blackboxsw> but need to fix ci
<blackboxsw> trying to reproduce the problem locally
<powersj> blackboxsw: you can also hop on the CI box
<blackboxsw> it'll be faster.... locally on my xenial box, no error. trying on my other box now not seeing it either
<blackboxsw> will do
 * blackboxsw digs up the doc
<powersj> blackboxsw: lxd version?
<blackboxsw> 2.0.11
<powersj> need 3.0 ;)
<blackboxsw> yep need the snap looks like
<powersj> or bionic ;)
<blackboxsw> snap == faster path to the failure I'm expecting
<blackboxsw> :O0
<powersj> yeah
<powersj> much faster
<rharper> powersj: blackboxsw: is ci back up now? we did some backend storage work for lxd
<powersj> rharper: it is, but it appears we did get a lxd 3.0 upgrade last night
<blackboxsw> powersj: rharper I can't ssh as ubuntu to ci
<powersj> blackboxsw: jenkins@
<rharper> blackboxsw: I'll import you to ubuntu as well
<rharper> blackboxsw: your lp name ?
<rharper> I don't see you in either key files
<blackboxsw> ssh-import-id chad.smith
<rharper> ok, in as ubuntu
<blackboxsw> thx
 * blackboxsw can take over the world now
<blackboxsw> thx
<rharper> and jenkins
<rharper> what's the pylxd trace back ?
<blackboxsw> rharper: https://pastebin.ubuntu.com/p/VZHHpHD4WN/
<rharper> so, lxd pushed 3.0 into the stable branch ?
<rharper> that doesn't seem right
<powersj> snap info lxd
<rharper> so I thought that was related to the cloud-init result.json but that's really pylxd ?
<powersj> yep
<rharper> we can switch to 2.0 track
<rharper> For the LXD snap, 3 tracks are provided:
<rharper> latest (latest LXD feature release, currently 3.0)
<rharper> 2.0 (previous LTS release)
<rharper> 3.0 (current LTS release)
<powersj> I'm fine with moving to 2.0 temporarily, especially if you are trying to get a release out
<rharper> if they're not going to release pylxd in step with the base, then we need to run  behind tip
<rharper> and dpb1 I'd like to raise this as an issue with the lxd team
<rharper> we continually get broken every single time they change
<powersj> well... pylxd did get updated to fix things
<powersj> we just hard code the version
<rharper> but not *before*
<rharper> the release
<rharper> it should block a release
<powersj> it was updated before
<powersj> month ago or so
<rharper> so, then I'm confused
<rharper> oh, it's not packaged with lxd ?
<powersj> correct
<rharper> but there is a dependency there
<rharper> that's still crappy
<powersj> it is
<powersj> frustrating even
<rharper> I suspect this is one of those snap not-yet-solved thingys
<powersj> smoser and I chatted about getting rid of pylxd at sprint
<rharper> it's supposed to be stand alone ?
<rharper> yeah
<dpb1> yes, that ^
<rharper> dpb1: that said, openstack has to have this problem as well
<rharper> they're not going to switch to a cli anytime soon
<rharper> one should be able to express dependencies between snaps, or the snap (lxd) would need to provide the pylxd bindings in the snap
<rharper> powersj: so we switch back to 2.0 channel or can we bump the pylxd or do we have to change the ci call ?
<powersj> either a) switch back to 2.0 to fix things quickly and move on or b) bump pylxd (which we will have to do eventually anyway)
<dpb1> yes please, let's focus on the practical.  we can corner stgraber at the sprint
 * blackboxsw  just reproduced the issue on jenkins workspace
<blackboxsw> ok
<powersj> I'd prefer to update tox.ini to use a newer pylxd
<powersj> that way we keep using lxd 3.0 and move on
<blackboxsw> +1
<powersj> and can talk about this at later date
<blackboxsw> I'm updating now to test
<blackboxsw> hrm just updating to tip of github/lxc/lxd isn't cutting  it . lemme do a tox -r -e to make sure it actually pulled in latest
<powersj> yeah good idea to blow away .tox
<powersj> or do that
<rharper> urg
<rharper> one more commit to master =P
<blackboxsw> yeah will have to respin on that
<blackboxsw> so tomorrow for SRU
<blackboxsw> I'll have the branch queued and landed in tip tonight with powersj blessing, then we can do the dance on bionic artful xenial tomorrow
<blackboxsw> note grabbing tip of pylxd hits another traceback that'll need a tiny tweak to integration tests :)
<powersj> paste?
<blackboxsw> enroute
<blackboxsw> heh version 3 :)
<blackboxsw> https://pastebin.ubuntu.com/p/8WtSTRKCTh/
<powersj> ooooo yes
<powersj> the logging
<blackboxsw> there'll never be a v. 3 :)
<powersj> well the issue we were having with v1 and v2 should be fixed in v3
<powersj> which is why that is there
<powersj> it has to do with console logging with the lxd snap
<powersj> wow smoser and I messed up there :)
<rharper> oi
<rharper> are we sure we don't want to just revert to 2.0
<blackboxsw> str has no attribute startwith
<rharper> and sort this lxd/pylxd/ci mess out later ?
<rharper> that's a typo
<blackboxsw> :)
<blackboxsw> a tiny little s
<rharper> unless blackboxsw typo'd irc
<blackboxsw> nope official committed typo
<rharper> =/
<rharper> how'd flake8 not get that ?
<blackboxsw> good pt
<rharper> or pylint
<blackboxsw> flake8 look at cloud_tests?
<blackboxsw> or ignore that dir
<powersj> it should look at tests
<blackboxsw> nope both flake and pylint look at tests, right
<rharper>  cloudinit/ tests/ tools/
<powersj> which has unit and cloud tests
<rharper> what file ?
<blackboxsw> tox.ini
<powersj> failure was in tests/cloud_tests/platforms/lxd/instance.py", line 213, in _has_proper_console_support
<blackboxsw> next failure: powersj: rharper: https://pastebin.ubuntu.com/p/nykVTpBQrh/
<rharper> I found it, I  meant the startwith
<rharper> it's in instance.py
<rharper> my flake8 says something about local variable e
<rharper> so maybe it's a lint issue
<blackboxsw> to speed up iterations, I'm running tox -r -e citest -- run --verbose --os-name xenial --test modules/apt_configure_sources_list.yaml --platform lxd
<powersj> lxc is not operational on torkoal
<powersj> $ lxc list
<powersj> Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
<rharper> ok, bbiab
<blackboxsw> well that could cause problems ;)
<rharper> shocking that flake8 and pylint don't care
<dpb1> powersj: group/permissions errors?
<powersj> hmm that socket file doesn't exist
<powersj> blackboxsw: try now
<powersj> fwiw looked at https://github.com/lxc/lxd/issues/4245
<blackboxsw> powersj: yep good find
<blackboxsw> ... runs fine now with tox and cloud_test patch
<blackboxsw> getting patch together
<blackboxsw> http://paste.ubuntu.com/p/sVNf2nKCnS/
<blackboxsw> setting pin now
<blackboxsw> ok pin works http://paste.ubuntu.com/p/DQ799wxc4K/
<powersj> +1
<blackboxsw> powersj: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342617
<dpb1> what did you do?
<dpb1> to fix it?
<blackboxsw> powersj: fixed the world. I just changed pinned version and fixed cloud_tests
<blackboxsw> powersj: did you sudo snap refresh lxd ?
<blackboxsw> per that issue?
<powersj> blackboxsw: I did a sudo snap refresh lxd and a sudo snap restart lxd
<powersj> lxc list sat there for 2mins and then the world worked
<powersj> blackboxsw: you left in a debug statement
<powersj> blackboxsw: in cloudinit/url_helper.py
<blackboxsw> bah powersj I had uncommitted changes in that branch that I pulled in unknowingly... repushing in 2 mins
<blackboxsw> powersj: force pushed. https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342617
<powersj> +1'ed
<rharper> so no idea why lint or flake didn't find it ?
<blackboxsw> thanks powersj, yeah just awaiting completion of https://jenkins.ubuntu.com/server/job/cloud-init-ci/970/
<rharper> so strange
<blackboxsw> and looks good
<powersj> blackboxsw: yea ship it
<blackboxsw> ok landed
<blackboxsw> will repropose branches xenial|artful|devel tonight
<blackboxsw> but need to make some dinner at the moment
<rharper> something about our .pylintrc in cloud-init blocks it
<rharper> if I put a simple test into a different dir, then I get
<rharper> Module test
<rharper> E:  2,31: Instance of 'str' has no 'startwith' member (no-member)
<powersj> interesting since we specifically allow errors
<rharper> yeah, haven't tracked down the line yet
 * blackboxsw repushed https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342605
<blackboxsw> for bionic release
<rharper> no, that's not it
<rharper> something else in the structure; I removed .pylintrc and didn't find it either
<rharper> hrm, so, info is a dict (load_yaml), then we have two gets, which return a value from the dict, which it cannot know
<rharper> so, if you str(dver) in there
<rharper> then pylint finds it
<blackboxsw> just force pushed xenial and artful branches
<blackboxsw> need to await CI on them
<blackboxsw> -> dinner
<rharper> but, our .pylintrc still isn't happy with that
<rharper> oh man
<rharper> pylint just does a regex on the source file
<rharper> the http.client and m_.* have pylint ignore that file
<blackboxsw> meh we should improve/limit that ignore if we can
<rharper> I don't know what to do about that
#cloud-init 2018-04-04
<blackboxsw> why did our CI speed improve so much? lxd 3.0?
<blackboxsw> Run speed before today: LTS: 9min 11s LTS integration: 8min 15s  MAAS Compat: 19min 4s
<blackboxsw> run speed today: LTS: 2min 12s LTS integration: 2min 37s MAAS compat: 2min 41s
<blackboxsw> looks like 2 mins after cloud-init sbuild completed in https://jenkins.ubuntu.com/server/job/cloud-init-ci/970/consoleFull we were running ci-tests. yesterday, in https://jenkins.ubuntu.com/server/job/cloud-init-ci/964/consoleFull, there was generally an 11 minute gap between sbuild completion and citest runs.
<powersj> blackboxsw: rharper moved the storage file for lxd onto the nvme, sooo yeah :0
<powersj> in the past we have used zfs, which was fast as is; we then moved to btrfs, which we're on currently.
<blackboxsw> man that's fast
<powersj> the move to the nvme will help everything
<powersj> I wish all our systems had at least ssds in them
<rharper> powersj: hehe, will be nice when we can switch back to zfs on nvme
<rharper> powersj: blackboxsw: pylint will find it (need a pylint newer than what's on xenial, we do in our tox) ; but it needs some help; it doesn't know that the variable is a string: https://paste.ubuntu.com/p/YkP4QW9qwb/
<powersj> wow
<rharper> well, the dver comes from a get on a yaml load;  so I *think* I can understand that it doesn't know until runtime what the yaml parsing will return
<blackboxsw> weird
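The blind spot rharper describes can be reproduced outside cloud-init's tree. A minimal stdlib-only sketch (using json in place of yaml, since both hand back untyped data that static analysis can't see through):

```python
import json

# Parsed at runtime: pylint cannot know what type .get() will return.
info = json.loads('{"version": "3.0"}')
dver = info.get("version")

# Because dver's type is unknown to pylint, a typo like .startwith()
# lints clean and only fails at runtime:
try:
    dver.startwith("3")  # typo for startswith
except AttributeError as err:
    print("runtime only:", err)

# Wrapping in str() gives pylint a concrete type, so the same typo
# would be flagged by the no-member (E1101) check at lint time:
print(str(dver).startswith("3"))
```

This mirrors the str(dver) fix mentioned above: the code behaves identically, but the explicit str() is what lets pylint's inference engine see the attribute error.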
<blackboxsw> rharper: I'm updating the sru manual test scripts now. and I'll ping when I push them to https://github.com/cloud-init/ubuntu-sru/
<blackboxsw> once I have the scripts updated, I'll hit up gce manual validation
<rharper> blackboxsw: nice
<rharper> I'm back, I can start on ec2 if that's ready
<blackboxsw> so far everything is queued for release, but still in unapproved state. https://launchpad.net/ubuntu/artful/+queue?queue_state=1&queue_text=cloud-init  doesn't look like anyone is around to tick the done status for the upload to succeed into artful & xenial.
<blackboxsw> so that part will have to wait until tomorrow.
<blackboxsw> but yes we can start manual verification if you can build your own deb.  launch-ec2 (from qa-scripts) could help make that easier on the cmdline
<rharper> blackboxsw: did you ping the vanguard in #ubuntu-release ?
<rharper> blackboxsw: do you have your .debs  from the sbuild ?
<blackboxsw> rharper: I did, only vanguard today was robie or arges,
<rharper> oh, huh
<blackboxsw> and robie was EOD and arges isn't in channel
<rharper> no ? bummer
<rharper> blackboxsw: where do you get your awscli ? deb or snap ?
<blackboxsw> rharper: I used a deb locally
<rharper> on bionic ?
<rharper> or xenial ?
<blackboxsw> awscli	1.11.13-1ubuntu1~16.04.0
<blackboxsw> i
<rharper> huh
<blackboxsw> either should work
<rharper> I only had /usr/bin/aws
<blackboxsw> sorry paste fail
<rharper> oh, I see, we really only need the ~/.aws/credentials as we're using boto3
 * blackboxsw wonders why I still have the deps check on awscli though
<rharper> the package name ?
<blackboxsw> boto uses it maybe
<rharper> maybe
<blackboxsw> had I written a helpful comment in the launch-ec2 script I'd know ;)
<rharper> blackboxsw: do you have an example launch-ec2 command for testing one of these bugs ?
<blackboxsw> rharper: https://github.com/cloud-init/ubuntu-sru/blob/master/manual/ec2-sru-17.2.35.2.txt
<rharper> nifty
 * blackboxsw needs to stop trying to writeup  the SRU regression upgrade process in qa-scripts/doc
<blackboxsw> I'll push what I have over there
<rharper> thx
<rharper> and the .deb as well
<rharper> and I can start running through the list
<blackboxsw> ahh yes one min, will rebuild
<blackboxsw> rharper: scp sru@blackboxsw.com:*deb .
<blackboxsw> your lp id is imported there temporarily
<rharper> k
<rharper> % sha512sum cloud-init_18.2-4-g05926e48-0ubuntu1~16.04.1_all.deb
<rharper> ea85001e818aefe6967626d78f45e816c5ded1be71a8b58cdebbdab6906e8113a201668afaaebd983e3090e0570ae95da76308c2931ccee84f68365a76f37a6d  cloud-init_18.2-4-g05926e48-0ubuntu1~16.04.1_all.deb
<rharper> blackboxsw: match ?
<blackboxsw> match ea85001e818aefe6967626d78f45e816c5ded1be71a8b58cdebbdab6906e8113a201668afaaebd983e3090e0570ae95da76308c2931ccee84f68365a76f37a6d
<blackboxsw> almost done w/ artful deb there
<rharper> k
<blackboxsw> rharper: /home/sru/cloud-init_18.2-4-g05926e48-0ubuntu1~17.10.1_all.deb is avail there now too
<blackboxsw> 6d8c642c8cfb470a28d50e8a5e0e1c229b0ff173e579dddc6d53670179b374dba7cb9f634ed040b24d74d127b068056f78e571c71763502c7618048def655487
 * blackboxsw is going to test gce now 
<rharper> blackboxsw: snagged and matches
<rharper> k
 * rharper hits up ec2 
 * blackboxsw kills the sru user
<rharper> Unknown parameter in input: "AmazonProvidedIpv6CidrBlock", must be one of: DryRun, CidrBlock, InstanceTenancy
<blackboxsw> hrm
<blackboxsw>  python3-boto3                               1.2.2
<blackboxsw> and python3-botocore                            1.4.70-1~16.04.0
<rharper> same
<rharper> anything special in your ~/.aws/config ?  I've just output and region set
<blackboxsw> [default]
<blackboxsw> region = us-east-2
<rharper> huh
<rharper> shrug
<rharper> may be related to my account and it's vpc defaults
<rharper> s/it's/its
<blackboxsw> it's what we pass to boto3's ec2.create_vpc
<rharper> yeah, I saw that in the code, not sure why it doesn't like it
<blackboxsw> ec2 = boto3.resource('ec2', region_name=args.region); ec2.create_vpc( CidrBlock='10.41.0.0/16', AmazonProvidedIpv6CidrBlock=True)
<blackboxsw> yeah, hrm
<blackboxsw> checking boto3 docs
<blackboxsw> boto3 1.6.23 docs has it
<blackboxsw> http://boto3.readthedocs.io/en/latest/reference/services/ec2.html
<rharper> blackboxsw: did you update the ubuntu-sru github ?
<powersj> FYI when I run integration tests typically use tox, which pulls in boto3 1.5.9 or bionic's 1.4.2
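A hedged sketch of guarding against exactly this class of failure. The version numbers are the ones quoted in the conversation (boto3 1.2.2 installed vs. the 1.6.23 docs blackboxsw checked), not verified against release notes:

```python
# Botocore validates parameters against its bundled service model, so an old
# client rejects AmazonProvidedIpv6CidrBlock client-side, before any API call
# is made. A pre-flight version check gives a clearer error than the traceback.
def version_tuple(ver):
    """Turn a dotted version string like '1.2.2' into a comparable tuple."""
    return tuple(int(part) for part in ver.split("."))

installed = "1.2.2"    # python3-boto3 on xenial, per the log above
documented = "1.6.23"  # boto3 docs version that lists the parameter

if version_tuple(installed) < version_tuple(documented):
    print("boto3 %s predates the documented API; upgrade before using "
          "AmazonProvidedIpv6CidrBlock" % installed)
```

This matches the eventual resolution in the log: getting an up-to-date boto3 (e.g. via tox or a venv) rather than the distro package.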
<blackboxsw> rharper, just qa-scripts now
<blackboxsw> rharper: just pushed ubuntu-sru
<rharper> thx
<blackboxsw> need to tweak the manual scripts for 18.2.4 for each platform
<blackboxsw> I only copied 17.2.35.txt -> 18.2.4 as templates
<blackboxsw> I'm updating the readme to include markdown checklist items for this SRU now
 * rharper is going to see about getting an up-to-date boto3 
<powersj> rharper: tox -e citest -- {args}
#cloud-init 2018-04-05
<rharper> blackboxsw: ok, I've finished ec2 manual tests for 18.2 SRU
<blackboxsw> thanks rharper, artful on gce is good, I'll get to xenial just after lunch
<rharper> blackboxsw: where do you want me to put the log ?
<rharper> MP to the github repo ?
<blackboxsw> rharper: ok back from lunch.
<blackboxsw> rharper: can you put in in ubuntu-sru:  ... https://github.com/cloud-init/ubuntu-sru/blob/master/manual/ec2-sru-18.2-4.txt
<blackboxsw> in the manual subdir ec2-sru-<version>.txt
<rharper> y
<rharper> https://github.com/raharper/ubuntu-sru/pull/1
<blackboxsw> rharper: no need for PR if you want to git push origin master I'm good with that for this repo
<blackboxsw> the pr is against your repo, which I can't squash merge to
<rharper> blah
 * rharper should stick to the git cli 
<rharper> ok
<rharper> https://github.com/cloud-init/ubuntu-sru/pull/1
<rharper> ok, merged and then PR'd to the base
<rharper> I can't write there, so you'll need to merge
<blackboxsw> clicked merge button, I'll add your perms if I can
<blackboxsw> rharper: you're invited in github to write access on ubuntu-sru
<rharper> y
<powersj> rharper: added you to github group
<rharper> thx
<rharper> blackboxsw: thx for merge, now do we also update the desc in the SRU bug, or wait till the end and do that all at once ?
<rharper> should I test azure now ?
<blackboxsw> rharper: we can just work through ubuntu-sru repo and then all at once attach the files to the SRU bug once we're done
<rharper> +1
<blackboxsw> just updated qa-scripts sru-changelog-to-trello to accept num-sections param to allow us to create markdown across multiple sections.
<blackboxsw> rharper: I've kicked off an azure xenial test
<rharper> nice
<rharper> ok
<rharper> do we have lxd and nocloud going as well ?
<blackboxsw> just waiting on the instance to come up
<rharper> those are run on jenkins right ?
<blackboxsw> nocloud-kvm would be helpful to kick off
<blackboxsw> in jenkins
<rharper> k
<rharper> oh, we need to wait for 2.4 to hit -proposed though
<blackboxsw> lxc we generally can cut-n-paste from a previous run of our choosing which closely matches our release (tip of master)
<rharper> we can't feed a deb to the job
<blackboxsw> rharper: in all truth, yes we need to actually truly wait on official -proposed publish of the debs, which won't be until tonight
<rharper> ok
<blackboxsw> since both were officially approved within the hour
<blackboxsw> both == (artful/xenial)
<rharper> then I'll hold for now, please ping when I can help start more things
<blackboxsw> great thanks rharper, yeah I just wanted a spot check to make sure we don't have to start an SRU-regression on the major clouds
<rharper> gotcha
<blackboxsw> good work on ec2, just read through logs. yeah I think we should bake in systemd analyze (and cloud-init analyze) into our manual sru tests (and CI per powersj suggestion).  we can then at least capture SRU-related differences on each cloud performance.
<blackboxsw> I'm adding it to a new sru-templates subdir in ubuntu-sru for each 'manual' cloud.
<blackboxsw> upgrade looks fine on azure
<blackboxsw> ok capturing logs and simple test cases for reference
<blackboxsw> cloud-init analyze blame is nice.... so initial boot on stock cloudimage blames config-disk_setup @ 4 seconds and config-locale @ 3.3 seconds
<blackboxsw> that makes up nearly half of cloud-init's boot time cost
<rharper> blackboxsw: yeah, I think this manual-no-regression sequence can be scripted
<rharper> we could use write_files in the initial yaml we pass to generate the file, and then just remotely execute it through the upgrade, and then a post-upgrade-reboot sequence
<blackboxsw> +1, we can just contain the basics, only differing in the initial cmdline instance launch and IP obtaining
<rharper> y
<rharper> I wonder if we should write this as a cloud_test with the ec2 platform
<blackboxsw> rharper: that'd be an effective start as we'll have to do something comparable on each platform we add to ci
 * blackboxsw just hadn't spent more than a fleeting glimpse at trying to establish a cloud_test that'll perform the live upgrade test/ post-upgrade-reboot sequence
<rharper> yeah, I think we have a reasonable list of "basic" tests here; boot, confirm no tracebacks and status OK, upgrade (via dpkg -i or -proposed); re-run cloud-init init; check status/logs, clean --logs --reboot;  wait for boot;  repeat
<rharper> and add in the collect of systemd-analyze and cloud-init analyze show/blame if we like
 * rharper gives it a go 
<rharper> I've not tried the ec2 backend in cloud_tests yet
<blackboxsw> it's slick. powersj did well
<rharper> powersj: is it possible to feed one's on creds in there ?
<powersj> rharper: locally yes http://cloudinit.readthedocs.io/en/latest/topics/tests.html#ec2
<rharper> right
<rharper> if I launch it myself (vs on jenkins)
<powersj> correct, I don't want jenkins knowing any of our stuff
<rharper> perfect!
<blackboxsw> ok just pushed ubuntu-sru with doc output from create-sru.py.
<blackboxsw> ok, now onto SRU validation script writing for the 24 related bugs (most of them should be covered in part by our manual tests).
<blackboxsw> rharper: can you rebase https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/339438 for ci to work
<blackboxsw> everything is done there right?
<blackboxsw> minor fix for tools/make-tarball to fix nacc's discoveries during the test/upload process.
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/342761
<blackboxsw> I didn't want to lose that fix changeset
<rharper> blackboxsw: sure
#cloud-init 2018-04-06
<blackboxsw> sru-create.py -b 1759406 --bugs 1420018
<ubot5`> bug 1420018 in cloud-init "docs: user-groups uses - instead of _" [Low,Fix committed] https://launchpad.net/bugs/1420018
 * blackboxsw forgot the    -d 03/29/2018 -v 18.2.4 on that command
<blackboxsw> rharper: https://github.com/cloud-init/ubuntu-sru/pull/2
<rharper> k
<rharper> blackboxsw: https://github.com/cloud-init/ubuntu-sru/pull/3
<rharper> blackboxsw: do you want me to mark those done in the trello checklist ?
<blackboxsw> rharper: you can ~~ ~~ on the hackmd file to mark the line item done. We mark it done in the SRU trello card when we've run the specific test and captured output within === begin|end SRU verification output ===  bookends in https://github.com/cloud-init/ubuntu-sru/blob/master/bugs/lp-1750884.txt
<rharper> ok
<blackboxsw> rharper: ec2 -proposed on artful and xenial has our SRU'd cloud-init. we can start manual testing there, gce and azure looks like
<blackboxsw> I'm back from lunch and can start wherever
<rharper> k
<blackboxsw> if you are working on cloud_tests that's more valuable probably (from a reuse perspective)
<blackboxsw> initial check on ec2 upgrade path, clean boot looks fine
<rharper> sure, sounds like you've got a groove there
<blackboxsw> so I'm not worried about initial regression potential and putting up a new upload today... but I wanted to get through openstack test today if I could too
<blackboxsw> that and vsphere check for the init-local hostname set
<rharper> y
<blackboxsw> I think I'll add a manual vsphere test script to this SRU just because we've altered things to specifically fix ubuntu/juju k8s solution
<blackboxsw> minor manual changes https://github.com/cloud-init/ubuntu-sru/pull/4
<blackboxsw> next PR will be the SRU verification results
<blackboxsw> sru results ec2 and azure, + a bump in sru-templates for next time https://github.com/cloud-init/ubuntu-sru/pull/5
#cloud-init 2020-03-30
<Odd_Bloke> Gaffel: Your paste appears to have expired, it's 404ing.
<Gaffel> Odd_Bloke, I uploaded it again: https://paste.centos.org/view/1077fa4e
<Gaffel> I'm unable to log in through both the TTY and SSH. The passwords have no effect. The file is literally "user-data" but nothing I type in that works. It only likes "meta-data".
<Odd_Bloke> Gaffel: I don't have time to dig into this too much right now, but line 17 appears to be indented by an additional line.
<Odd_Bloke> That might be causing the password setting to fail?
<Gaffel> That's weird. I don't see that on my system. I pasted it manually. I'll check if there's no indentation issues in the file.
<Gaffel> There are no tabs. That must have happened when I pasted.
<Gaffel> I cut away the "users" stuff and only kept the chpasswd and ssh_authorized_keys. Now it lets me log in. :|
<Gaffel> It uses cloud-init 18.5
<Gaffel> How can I test user-data, since it wouldn't let me access anything until now, when the things I want to test have been removed?
<powersj> Gaffel, we have a couple of ways documented via LXD and Multipass in addition to validating your YAML: https://cloudinit.readthedocs.io/en/latest/topics/faq.html#how-can-i-debug-my-user-data
<powersj> I would at least try the devel schema command
<powersj> not everything has schema, but it is worth a try
<ananke> Gaffel: you also may want to start with something simpler: don't add additional users, just set password & keys for the default one
<ananke> Gaffel: is this in ec2?
<Gaffel> It's just local, I just want to test cloud-init, so you can use images without modifying them
<Gaffel> It looks like it doesn't like my "sudo" string.
<Gaffel> powersj, thanks for the tip. I thought that this was valid YAML but it seems to have issues. :/
<ananke> multipass has a few issues though with cloud-init
<ananke> although your configs are simple enough they shouldn't be impacted by that. namely jinja2 templates
<Gaffel> It works now. Thanks for the help! :] (now I just need to fix the things I actually wanted to try from the beginning)
<Odd_Bloke> I'm still waiting for review on one of my cleanup PRs from last week: https://github.com/canonical/cloud-init/pull/283
<ananke> uhmm, is cloud-init.service supposed to be enabled or is it handled via other means? Wonder if my kali linux issue is related to the following:
<ananke> root@kali:/etc/cloud# systemctl status cloud-config.service
<ananke> ● cloud-config.service - Apply the settings specified in cloud-config
<ananke>      Loaded: loaded (/lib/systemd/system/cloud-config.service; disabled; vendor preset: disabled)
<Odd_Bloke> ananke: All the units should be enabled together, either explicitly on installation or by cloud-init.generator (if it's been installed).
<Odd_Bloke> rharper: We forgot to assign PRs in our daily meeting, but there's only one pending assignment, so I've just put your face on https://github.com/canonical/cloud-init/pull/289
<ananke> makes me think that Kali people mangled cloud-init installation
<Odd_Bloke> It's certainly a possibility.
<rharper> Odd_Bloke: ok
<Odd_Bloke> blackboxsw: When you merged #288, the merge commit message ended up truncated at the first comma.  Don't know how that happened, just a heads-up to keep an eye out for it.
<blackboxsw> Odd_Bloke: thanks, yeah that's strange. and probably my end. Odd_Bloke rharper when we are squash merging are folks redacting the per-commit messages that github appends to the original PR description?
<blackboxsw> for instance, if I have 10 commits in my squashed PR, github adds one line preceded by an asterisk  for each additional commit
<blackboxsw> in cloud-init's launchpad world, our squash merge would only use the branch description or top-most commit message as the squashed commit
<Odd_Bloke> I have been removing any that don't add useful information.
<Odd_Bloke> Like "* typo fix" doesn't help anyone.
<blackboxsw> +1 Odd_Bloke, I generally have been doing the same.
<blackboxsw> I think I just botched 288
<Odd_Bloke> But "* Fix the behaviour of Spam.eggs() when "ham" is passed" is useful for someone ending up at the squash commit via blame.
<ananke> hmm, so it appears that Kali AMI has both cloud-config.service and cloud-final.service disabled by default. that doesn't make sense
<blackboxsw> so, rharper I guess we have https://github.com/canonical/cloud-init/pull/267 then for Focal.    and Odd_Bloke or rharper more jsonschema coverage to help folks w/ malformed user-data  https://github.com/canonical/cloud-init/pull/152
<rharper> ananke: saw your update, will look at the data later.    w.r.t the services, they should be 'vendor preset: enabled'
<ananke> rharper: makes me wonder if kali linux folks have done some rudimentary setup required by amazon, and disabled what they didn't think was necessary. I've opened a bug with them as well: https://bugs.kali.org/view.php?id=6239
<ananke> now I need to figure out if I can fix this on first boot for cloud-init status --wait to function, or would it require another reboot
<rharper> ananke: something that might work is a bootcmd which ran 'systemctl enable cloud-config.service; systemctl enable cloud-final.service'
<ananke> rharper: funny enough, that's exactly what I'm trying to test now. I was also going to put in 'systemctl start' for both of them, or is that not necessary at that stage?
<ananke> problem is that the first time cloud-config.service starts it fails, but then starting it the second time works
<ananke> this is the error, I think I'll have to fix it before I can try the other stuff: http://dpaste.com/37R31X6
<Odd_Bloke> ananke: That looks like a misconfigured /etc/cloud/cloud.cfg{,.d/*}.
<ananke> it appears to break on 'security' key, and the only place that word appears is in /etc/cloud/templates/sources.list.ubuntu.tmpl
<ananke> and templates/sources.list.debian.tmpl
<Odd_Bloke> ananke: My guess is that you're missing something like https://github.com/canonical/cloud-init/blob/master/config/cloud.cfg.tmpl#L172-L196
<Odd_Bloke> Or, alternatively, "apt-configure" shouldn't be configured to run, if Kali isn't a Debian derivative.
<ananke> it's a debian derivative, but I wonder if they don't have a 'security' repo. Here's their /etc/cloud/cloud.cfg: http://dpaste.com/2DM65Q3
<Odd_Bloke> Yeah, I was about to say that yet another alternative is that cc_apt_configure needs to learn that security mirrors are not universal.
<Odd_Bloke> But if you want something that doesn't require cloud-init code changes, then replicating that final line with s/primary/security/ might do it.
<ananke> for rolling release distros that may be the case
<ananke> Odd_Bloke: thank you, that's a great idea
<Odd_Bloke> (It should definitely get you past the KeyError, I'm not sure if apt will warn you about duplicate sources, though.)
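Spelled out, the s/primary/security/ idea above would look roughly like this in cloud.cfg. The mirror URLs here are illustrative placeholders, not values taken from Kali's actual config:

```yaml
system_info:
  package_mirrors:
    - arches: [default]
      failsafe:
        primary: http://deb.debian.org/debian
        security: http://deb.debian.org/debian-security
```

With a `security` key present alongside `primary`, cc_apt_configure has a value to fall back on and the KeyError discussed above should go away.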
<ananke> I'll test it on this instance, after I remove the proper semaphore. in theory this section of cloud.cfg could be overwritten by a file in /etc/cloud.cfg.d/myfile.cfg, right?
<Odd_Bloke> Yes, though you have to replace the entire value of top-level keys.  So you'd need to replicate the entire `system_info:` dict in there, and then make your changes to that.
<Odd_Bloke> s/dict/mapping/, let's not confuse YAML and Python terms.
<ananke> k. I was wondering about that. I'm already dropping in a file with partial system_info: set of keys, including default_user. Do I have to copy the _entire_ default system_info: section?
<rharper> ananke: depends on what you're changing;  dict merging rules will apply, so the last config will clobber previous values;
<rharper> ananke: I don't think you want to start them;  they just need to be enabled ... and I updated the bug, the other issue is that /etc/cloud/cloud-init.target.wants did not have all 4 services added
<ananke> rharper: I'm changing the default username and gecos field. looks like I could do: http://dpaste.com/094W28X
<rharper> so combine all services enabled, and an updated .wants and systemd should run all 4 services ... and need to sort out if you're making it to network-online.target
<rharper> you can modify the default users with 'users': top-level key instead of system config
<rharper> https://cloudinit.readthedocs.io/en/latest/topics/examples.html
<ananke> ohh, interesting. thank you. not sure how we arrived at system config for the default users, perhaps some old examples
<ananke> rharper: I was under the impression that 'users' provided ability to add additional users, but to change the username/gecos of the default we had to override it with system config
<rharper> sure
<rharper> unless you want to keep the distro default user and modify it; in your case you're changing the name anyhow, so users: [{'name': 'student', 'gecos': 'Cyber Range Student'}] will do what you want, making 'student' the default user.
<rharper> IIUC
<rharper> if you wanted both, you have users: ['default', {'name': 'student'}] ...
<rharper> but I'm guessing you want the former
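Written out as cloud-config, rharper's two variants would look roughly like this (sketch; `gecos` is the usual spelling of the key):

```yaml
#cloud-config
# Variant 1: replace the distro default user entirely.
users:
  - name: student
    gecos: Cyber Range Student

# Variant 2: keep the distro default user *and* add another:
# users:
#   - default
#   - name: student
#     gecos: Cyber Range Student
```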
<ananke> in my tests with ubuntu 18.04 it seemed to add a new user, rather than override the default one
<rharper> hrm,
<ananke> and regarding earlier comment about cloud-init.target.wants, looks like I had to also enable cloud-init-local.service.
<ananke> so lesson learned: simply enabling cloud-config.service and cloud-final.service via bootcmd didn't seem to be sufficient; I had to ssh in and start them. wonder if reloading systemd would help
<ananke> it may be too late in the boot process
<rharper> ananke: so, since they weren't enabled or included in the cloud-init.target.wants, when the cloud-init systemd generator runs it does not pull in all of the cloud-init units;  if you are making changes in early boot, you will also need to:  1) systemctl daemon-reload 2) likely need to run systemctl start --no-block  <service name> ;;; but I'm not sure if that will attempt to run the job right away or if it will wait for the right time to start ...
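As a hedged sketch, the early-boot band-aid rharper outlines might be attempted from user-data like this, with his caveat that by this point it may already be too late in the boot process for the started units to do anything useful:

```yaml
#cloud-config
bootcmd:
  - systemctl enable cloud-init-local.service cloud-init.service cloud-config.service cloud-final.service
  - systemctl daemon-reload
  # --no-block queues the start jobs instead of waiting for them
  - systemctl start --no-block cloud-config.service cloud-final.service
```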
<rharper> ananke: you really need modifications to the base image; I suspect your bug against kali itself would be where they'd fix the install of cloud-init correctly
<ananke> rharper: thanks for the explanation. Yes, the problem needs to be addressed in the base image. I'm suspecting whoever is making that AMI wasn't too familiar with cloud init. I'll pass everything I find to the bug I have open with Kali
<ananke> with that in mind, our goal is to take a base image, adjust it as needed for our purposes via automated means (packer + cloud init + scripts) and create a new image. So I'm creating a temporary band-aid solution here
<rharper> ananke: great.
<rharper> does packer let you run scripts on the image before booting it ?
<ananke> rharper: yes and no. since the source is an existing AMI, the only thing we can pass is user-data with cloud-init specific items
<ananke> so it's like a chicken & egg problem. I have to fix cloud-init via cloud-init :)
<Odd_Bloke> rharper: blackboxsw: https://github.com/canonical/cloud-init/pull/291 <-- the mirror URL substitution PR
<rharper> ananke: I wonder if you used the virtualbox image, convert it to raw or qcow2 format;   then you can loopmount it and make changes
<ananke> the basic steps are: 1) packer creates a new ec2 instance from a source AMI, and passes it user-data, 2) the very first 'provisioner' we have configured with packer is 'cloud-init status --wait'. packer connects via ssh as soon as sshd is available and runs that. 3) we run various other scripts, including cloud-init configs/scripts
<rharper> ananke: and then upload that as an AMI
<Odd_Bloke> (Another small one will follow, to address the TODO called out inline.)
<ananke> rharper: it was a thought, but we're trying hard to avoid that. it requires tons of additional work
<rharper> ananke: ok, one other thought;  once you're in; instead of just running cloud-init status --wait;  you can drop the wait; and if you know what stage of cloud-init you are running (or looking to see if it's actively running) wait for the stage to complete and then manually run the following stages;  cloud-init modules --mode=config;  cloud-init modules --mode=final
<rharper> you may be able to avoid "fixing" the original kali image to boot correctly and then you only need to apply proper fixes to the current image in your script (rather than bootcmds)
<rharper> Odd_Bloke: ok
<ananke> thanks! I'll try that
<Odd_Bloke> rharper: Thanks for the initial comments, I've replied to them all.
<ananke> success! https://gitlab.com/kalilinux/packages/base-files/-/compare/d5519cfc422157245bfec9d6d35271b1b810480b...09e4f2bbd27044a6cb274847400e646b1703ad5e
<ananke> I also suggested adding the workaround for the security repository, so hopefully that will be added soon
<rharper> ananke: nice!
<ananke> all credit goes to you for finding various things. I should comment on the bug I have open with cloud-init and link my bug open with Kali
<ananke> I haven't looked in the cloud-init code much, but for the security repository issue in kali, would creating a new /etc/cloud/templates/sources.list.kali.tmpl without that key work, along with setting the distro type to 'kali' under the system_info key?
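What ananke is asking about might look roughly like the following; the template header and variable names here are assumptions modeled on cloud-init's Debian template, not verified against the actual templates, and the components are Kali's usual ones:

```
## template:jinja
deb {{ mirror }} {{ codename }} main contrib non-free
deb-src {{ mirror }} {{ codename }} main contrib non-free
# no security entry, sidestepping the missing-key problem
```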
<Odd_Bloke> rharper: I've updated the commit message; where would you suggest we document the change in behaviour other than that?
<Odd_Bloke> (We don't really have release notes per se, right?)
<rharper> Odd_Bloke: I was thinking in docs around apt templating ... maybe we don't have a section on that
<rharper> maybe we don't
<rick_h_> pvt
<rick_h_> bah
<Odd_Bloke> blackboxsw: rharper: A small design section on type annotations: https://github.com/canonical/cloud-init/pull/293 and a couple of small cleanup PRs: https://github.com/canonical/cloud-init/pull/292 https://github.com/canonical/cloud-init/pull/294
#cloud-init 2020-03-31
<meena> Odd_Bloke: i see you've started using types! yay! … uhm, what do we get from that? do we have type checkers in the test system?
<Odd_Bloke> meena: For now, they serve as documentation.  Per https://cloudinit.readthedocs.io/en/latest/topics/hacking.html#type-annotations, we can't use the typing module yet, so we won't really be able to annotate much of the codebase.
<rick_h_> type hints vs type enforcement?
<Odd_Bloke> It's really all type hinting, because Python doesn't use the type annotations at runtime at all.
<Odd_Bloke> But yeah, we won't have a CI check that enforces them for now.
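Since annotations are hints that Python stores but never enforces at runtime, a tiny standalone illustration (nothing here is cloud-init code; the function is made up):

```python
# Annotations using built-in types need no `typing` import, which is
# why they stay usable even where that module is off-limits.
def render_hostname(name: str, fqdn: bool = False) -> str:
    """Return the hostname, optionally with a fake FQDN suffix."""
    return name + ".local" if fqdn else name


# The hints are stored in __annotations__ and serve as documentation;
# the interpreter never checks them when the function is called.
print(render_hostname.__annotations__["return"])   # <class 'str'>
print(render_hostname("web01", fqdn=True))         # web01.local
```

Without a CI type checker (e.g. mypy) in the loop, that `str` annotation is purely documentation, which is exactly the "hints vs enforcement" distinction above.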
<Odd_Bloke> meena: I would expect @ing a person and commenting on a PR they opened to produce roughly the same notifications in GH.
<shibumi> hi, is dhclient really necessary?
<shibumi> why is cloud-init not using systemd-networkd for dhcp?
<shibumi> I got a bug report for it on arch: https://bugs.archlinux.org/task/66035?dev=16600&type%5B0%5D=&sev%5B0%5D=&due%5B0%5D=&cat%5B0%5D=&status%5B0%5D=open&percent%5B0%5D=&reported%5B0%5D=
<rharper> shibumi: dhclient is used in early boot only to bring up nic to talk to IMDS;  after crawling metadata service, if the distro has a renderer cloud-init writes out network config in the requested format;
<rharper> dhclient is required, on AWS, for example to enable interaction with the IMDS;
<blackboxsw> aaaaaand... it's about that time again :)
<rharper> the system is not configured to use dhclient, and we do not keep that dhclient around afterwards
<blackboxsw> #startmeeting cloud-init status meeting
<meetingology> Meeting started Tue Mar 31 16:19:14 2020 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<blackboxsw> Hello and welcome to another cloud-init community status meeting folks. please feel free to continue any current conversations
<blackboxsw> I'll interleave status meeting notes with existing conversations.
<blackboxsw> #chair Odd_Bloke smoser rharper
<meetingology> Current chairs: Odd_Bloke blackboxsw rharper smoser
<blackboxsw> our IRC channel topic carries the next planned status meeting for those that wish to participate.  All are welcome to interject or drive conversation topics here
<blackboxsw> let's set that now. to +2 weeks from now
* blackboxsw changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting April 14 16:15 UTC | 19.4 (Dec 17) drops Py2.7 : origin/stable-19.4 | 20.1 (Feb 18) | https://bugs.launchpad.net/cloud-init/+filebug
<blackboxsw> April 14th, same time
<shibumi> rharper: thanks!
<blackboxsw>  Previous meeting notes are here
<blackboxsw> #link https://cloud-init.github.io/status-2020-03-10.html#status-2020-03-10
<blackboxsw> The topics we generally cover in this meeting are the following: Previous Actions, Recent Changes, In-progress Development, Community Charter, Upcoming Meetings, Office Hours (~30 mins).
<blackboxsw> #topic Previous Actions
<blackboxsw> and from last meeting, no previous actions were unaccounted for.
<blackboxsw> #topic Recent Changes
<blackboxsw>  recent changes landed in tip of master via git log --since 2020-03-10
<blackboxsw> #link https://paste.ubuntu.com/p/55hqVCfnpV/
<rharper> shibumi: also, the issue with the openstack metadata service not being found is likely related to classless static route support in the EphemeralDHCP class in cloud-init; we fixed an issue there last fall:  https://github.com/canonical/cloud-init/commit/07b17236be5665bb552c7460102bcd07bf8f2be8
<rharper> shibumi: long term, we would like to replace dhclient with a python-based one;  we've just not had a chance to implement a minimal dhclient in python only
<blackboxsw> in that paste of recent changes, some big items have landed since we last 'met': both NetBSD and OpenBSD distro support has been added to cloud-init, vmware support for guestinfo gc status, SAP Converged Cloud being identified as OpenStack, and Ubuntu Focal prioritizing netplan over ifupdown if both are present
<blackboxsw> Also Odd_Bloke has been landing improvements to cloud-init's automated processes with near-daily branches. github actions/workflows and docs about review and coding-style expectations are landing to make it a lot easier for upstream to speed up contributions and reviews
<blackboxsw> Also, you'll note a lot of dropping python six and other py2-related artifacts from our codebase. Since upstream support is py3.4 or later we can simplify and prune a lot of the vestigial py2 functionality.
<blackboxsw> Ahh I also forgot, Ec2 now by default (Ubuntu Focal or tip of cloud-init) renders  full networking, including secondary IPv4/IPv6 addresses, for all interfaces attached to a VM based on network config supplied by IMDS. Old releases of cloud-init used to only render basic networking on the primary (eth0) nic.
<blackboxsw> #topic In-progress Development
<blackboxsw> thanks Goneri and meena for all your ongoing BSD development and support work there BTW
 * Goneri waves
<blackboxsw> :)     upstream is focused a bit on continuing to clean up py2 remnants from tip, adding support for reading netplan configuration from initramfs and continuing to add automation to the github development and release process/tooling to speed reviews.
<blackboxsw> we are also hammering the review queue a bit better than in the past with daily PR assignments to ensure the system remains more efficient
<blackboxsw> thanks again for all the code submits folks!
<blackboxsw> waiting in the wings: we will eventually get around to handling the network hotplug solution for cloud-init (if configured)
<blackboxsw> #topic  Community Charter
<blackboxsw> This section is generally reserved to discuss any general community goals for cloud-init. Per last cloud-init summit we discussed prioritizing the following:
<blackboxsw> * json schema validation for each cloudinit/config/cc_*py module
<blackboxsw> * correcting, extending stale datasource documentation under doc/rtd/topics/datasources
<blackboxsw> those tasks are easy to split up and so we set a goal to try to chunk through it this year
<blackboxsw> they are also categorized as bugs for easy pickup/assignment for anyone interested.
<blackboxsw> #link https://bugs.launchpad.net/cloud-init/+bugs?field.tag=bitesize
<blackboxsw> #topic Office hours (next ~30 mins)
<blackboxsw> thanks for listening to the above......   In this section a couple of upstream devs will be available with eyes on the channel for the next 30 minutes for any bug, feature, or PR review questions or concerns.
<blackboxsw> again thanks for joining. We'll have this meeting again in 2 weeks
<blackboxsw> I'm going to spend this time working on our cherry-pick script for publishing the Netplan -> ENI work into Ubuntu Focal today.
<blackboxsw> Thanks for tuning in.
<blackboxsw> #endmeeting
<meetingology> Meeting ended Tue Mar 31 17:17:39 2020 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2020/cloud-init.2020-03-31-16.19.moin.txt
 * meena blames baby for *not* doing BSD work or support lately.
<blackboxsw> ohhhh sssssuuuuure meena. blame it on the baby ;) that's why I have kids too ;)
<blackboxsw> that, and I needed someone in the house that also likes board games
<meena> currently, she loves screaming, a lot. And honestly, as the youth say: mood.
<blackboxsw> :)
<rick_h_> that's the problem, you think "a kid, yea we're going to do all these fun things together!"
<rick_h_> and then you realize none of that starts until you put in a LOT of foundation time (like 3+ years)
<blackboxsw> true story
<sarnold> "have you considered a puppy instead?"
<meena> sarnold: we already have 2 cats (they're not fans)
<sarnold> haha :)
<Gaffel> Do I have to resort to raw commands to create a system user and group where a home dir is created and its ownership and security label is set properly? (UID and GID below 1000, but create home)
<Odd_Bloke> Gaffel: It does look like not creating a homedir for a system user is hardcoded, unfortunately: https://github.com/canonical/cloud-init/blob/master/cloudinit/distros/__init__.py#L471-L478
<Gaffel> I was worried about that. That's not how useradd works though, which makes me disappointed. :[
<Gaffel> I want to create a user+group with a UID & GID between 500 and 999 that also creates a home directory with the right ownership and SELinux labels. But I guess I'll have to do it differently.
<Odd_Bloke> Yeah, I'm not sure what the reasoning there is, but it looks like that behaviour has been in place since 2012.
<Odd_Bloke> It certainly seems like overriding that default behaviour would be desirable.
<Odd_Bloke> This looks like https://bugs.launchpad.net/cloud-init/+bug/1864728 to me.
<ubot5> Ubuntu bug 1864728 in cloud-init "Unable to create interactive "system" user" [Undecided,Incomplete]
<Odd_Bloke> Gaffel: Do you have any interest in addressing the issue in cloud-init? ^_^
<Gaffel> Changing the behaviour will be tricky, won't it?
<Gaffel> I guess no_create_home could be checked and the current behaviour would remain unless you set that to false.
<Gaffel> I assume that leaving it blank results in none though.
<Odd_Bloke> Yeah, I think you're along the right path with using no_create_home.
<Odd_Bloke> The defaulting would be a little tricky, but not super hard once you've reasoned out the matrix of possible combinations.
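The matrix Gaffel and Odd_Bloke are reasoning about could be sketched like this; `should_create_home` and its defaulting rules are hypothetical, not cloud-init's actual logic:

```python
def should_create_home(system: bool, no_create_home=None) -> bool:
    """Hypothetical defaulting matrix for home-directory creation.

    An explicit no_create_home setting always wins; otherwise system
    users default to no home (the long-standing 2012-era behaviour)
    and regular users default to getting one.
    """
    if no_create_home is not None:
        return not no_create_home
    return not system


# system user, but a home directory explicitly requested:
print(should_create_home(system=True, no_create_home=False))  # True
```

The key row in the matrix is that last one: a system user with `no_create_home: false` gets a home directory, which is exactly the behaviour the hardcoded logic currently prevents.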
<Gaffel> I have the same usecase and OS family as that person. I'm trying to introduce more security hardening at work. I'll look into this. :]
<Gaffel> I'm new to cloud-init so I'll do my best. :)
<Odd_Bloke> Gaffel: \o/ You'll want to read through https://cloudinit.readthedocs.io/en/latest/topics/hacking.html to get started.
<Odd_Bloke> Gaffel: Though actually the CLA info there is out-of-date, you can just fill in your GH username and that's sufficient.
<Odd_Bloke> (The wording here predates that field being in the CLA form. :p)
<Gaffel> You mean the field where it wants my Launchpad ID?
<Gaffel> Or just leave that blank?
<Gaffel> I can't even read the text in the verification image. :/
<rharper> Odd_Bloke: I guess we should update the docs and mention the field in the form
<Odd_Bloke> rharper: Yep, addressing that now.
<rharper> awesome!
<Gaffel> And do I really need to enter my physical address? :|
<Odd_Bloke> AFAIK, you do.
<Gaffel> :/
<Odd_Bloke> rharper: https://github.com/canonical/cloud-init/pull/297
<rharper> reviewing
#cloud-init 2020-04-01
<Odd_Bloke> Goneri: I've labelled #298 as wip, give me a shout when you want that undone.
<Goneri> Odd_Bloke, sure, thanks :-)
<AnhVoMSFT> Gaffel which cloud provider is this? You can also deploy a new VM, attach the OS disk that you used for deploying the Centos VM as a datadisk, and get the logs that way
<Gaffel> AnhVoMSFT, libvirt
<AnhVoMSFT> Can you attach the disk to another VM and extract the logs that way?
<Gaffel> AnhVoMSFT, I got around the problems and I'm able to get it working.
<Odd_Bloke> powersj: So I realised that we can't exercise the new URL handling code via user-data, only via cloud.cfg{,.d/*} modification.  AFAICT (via grep), we don't have any tests that currently modify cloud.cfg.  Am I missing something that already exists?
<powersj> Odd_Bloke, ah, you are correct we do not have tests that do that today
<rharper> Odd_Bloke: well, you can write_files a new cfg entry; and then use runcmd to exec cloud-init with the module you want in question
<rharper> Odd_Bloke: the alternative is changes to the image build section where we're building the snapshot image (inject deb, upgrade) and it seems like we'd want to optionally include cloud-cfg
<Odd_Bloke> I think the latter would be the path to go; this probably _shouldn't_ be the only test we have that exercises cloud.cfg. :p
<rharper> yeah, makes sense
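rharper's write_files-plus-runcmd trick might look something like this (hedged sketch; the drop-in path and the module name are just examples):

```yaml
#cloud-config
write_files:
  - path: /etc/cloud/cloud.cfg.d/99-under-test.cfg
    content: |
      # the cloud.cfg change being exercised goes here
runcmd:
  # re-run just the module under test against the new config
  - cloud-init single --name cc_apt_configure --frequency always
```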
<Odd_Bloke> Upon digging a bit further, I think we snapshot the image once for an entire test run, so I don't think we can do this per-test by changing the snapshot phase.
<Odd_Bloke> I've captured what I've found in the card and will leave it at that for now.
<blackboxsw> rharper: has all of Odd_Bloke's invalid URL handling landed as of https://github.com/canonical/cloud-init/commit/1bbc4908ff7a2be19483811b3b6fee6ebc916235 ?
<powersj> that's what it sounded like in standup
<powersj> 291 and 296 I think were the #s
#cloud-init 2020-04-02
<meena> absolutely love how the most innocent "do this on *BSD" function turns into a can of worms https://github.com/canonical/cloud-init/pull/298
<beezly> Hello all - I'm trying to configure an EFS mount using the mounts module in cloud-init, but I'm hitting a problem. It appears to completely ignore the entry in the mounts yaml.
<beezly> I can see it reading the YAML ok and it says... mounts configuration is [['fs-10e8d4db:/', '/opt/jenkins_workspace', 'efs', 'defaults,_netdev', 0, 0]]
<beezly> but then it says "Attempting to determine the real name of fs-10e8d4db:/" followed by "Ignoring nonexistent named mount fs-10e8d4db:/"
<Odd_Bloke> beezly: It looks to me like the code doesn't handle "devices" that don't look like an actual /dev/... device, unfortunately: https://github.com/canonical/cloud-init/blob/master/cloudinit/config/cc_mounts.py#L115
<beezly> yeah - i'm just looking through that. Might have a patch soon-ish.
<beezly> I'm making an assumption that anything that matches ^.+:/.* should be considered a network device.
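beezly's proposed pattern is easy to check in isolation; a quick sketch of how it would classify a few device specs (the regex name is invented for illustration):

```python
import re

# Proposed heuristic: anything shaped like "host:/path" is treated as
# a network share; plain block devices and LABEL=/UUID= specs never
# contain ":/" and so never match.
NETWORK_SHARE_RE = re.compile(r"^.+:/.*")

for device in ("fs-10e8d4db:/", "server.example.com:/export/home",
               "/dev/xvdb", "LABEL=data"):
    kind = "network" if NETWORK_SHARE_RE.match(device) else "local"
    print(device, "->", kind)
```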
<beezly> what version of python do people develop cloud-init against?
<Odd_Bloke> We support 3.4+.
<beezly> thanks.
<Odd_Bloke> beezly: https://cloudinit.readthedocs.io/en/latest/topics/hacking.html lists the steps to get setup for development (presumably you know what you're doing with git &c., but you will need to sign the CLA).  Thanks for looking at this!
<beezly> ok, thanks.
<beezly> @Odd_Bloke I'm finding the Launchpad/Github process really confusing. I'm trying to create my launchpad fork at lp:~beezly/cloud-init but each time I push I get "fatal: remote error: Path translation timed out."
<Odd_Bloke> beezly: Apologies, we appear to have a doc building error, but there are new docs on CLA signature here: https://github.com/canonical/cloud-init/blob/master/HACKING.rst
<Odd_Bloke> beezly: But I can see your GH username in our CLA records, so you're good to go.
<beezly> ah great!
<Odd_Bloke> The docs I pointed you at (which I fixed in master yesterday) predate the GitHub username field being in the CLA form.
<Odd_Bloke> rharper: blackboxsw: If I could get a quick review of https://github.com/canonical/cloud-init/pull/301 to add beezly as a CLA signer, I'd appreciate it.
<blackboxsw> Odd_Bloke: approved
<Odd_Bloke> Thanks!
* powersj changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting April 14 16:15 UTC | 20.1 (Feb 18) | 20.2 (TBD) | https://bugs.launchpad.net/cloud-init/+filebug
<beezly> @Odd_Bloke thanks for the super-quick turnaround on that stuff. I just tried out that PR on an EC2 instance of mine and it works as expected.
<Odd_Bloke> beezly: Nice, I'm bouncing between a lot of meetings today, but will hopefully get to reviewing it before too long. :)
<beezly> no problem - I'll try and respond quickly, but I have a pretty full day tomorrow so it might be the weekend.
<Odd_Bloke> rharper: https://github.com/canonical/cloud-init/pull/300/ <-- is r"^.+:/.*" a reasonable regex to match NFS shares (remote:/path/on/server) and no other devices that you're aware of?
<rharper> Odd_Bloke: hrm
<Odd_Bloke> (Disk devices, that is.)
<Goneri> r"^\S+:/\S*"
<Goneri> .+ will catch the spaces. This is not what you want.
<Odd_Bloke> You can't have spaces in NFS mount targets?
<Goneri> not in the hostname, maybe in the mount point.
<Odd_Bloke> Hmm, not as we're going to put it in fstab, I think.
<Odd_Bloke> They'd have to be escaped to \040.
<Goneri> and here come the non-breaking space :-)
<rharper> Odd_Bloke: AFAICT, ":" in the spec will be considered NFS,  left of the colon is a hostname or FQDN, and afterward is any "path" spec
<rharper> FQDN/hostnames cannot have whitespace IIUC, and paths cannot either without encoding/escaping
<Odd_Bloke> I think you _might_ be able to have unencoded/escaped whitespace if you're just running mount yourself, but it definitely won't work in fstab (which is whitespace-separated) regardless.
<Odd_Bloke> So I think what we would want in an ideal world is two checks.  First, we check if something looks like it's intended to be a network share (which the proposed regex is sufficient for, I think).  If it does then, second, we check if it's valid for use in an fstab, and emit a warning if it isn't.
<Odd_Bloke> Or, in fact, that second step could perform the encoding and only emit a warning if that also fails.
<Odd_Bloke> So I think I've convinced myself that this check is fit for purpose: it allows people to configure NFS shares through cloud-init, and shouldn't match anything other than NFS shares.
<Odd_Bloke> We could further improve the experience to not configure broken mounts, but users could also do that themselves by modifying their cloud-config.
<Odd_Bloke> Whereas today they can't even do that.
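The encoding step Odd_Bloke mentions (turning whitespace into fstab-safe octal escapes) could be sketched like this; `fstab_escape` is a hypothetical helper, not cloud-init code:

```python
def fstab_escape(spec: str) -> str:
    # fstab fields are whitespace-separated, so literal spaces and
    # tabs inside a device spec must be octal-escaped as \040 and
    # \011, per fstab(5). Backslashes are doubled first so existing
    # escapes are not mangled.
    return (spec.replace("\\", "\\\\")
                .replace(" ", "\\040")
                .replace("\t", "\\011"))


print(fstab_escape("nfs.example.com:/my share"))
# nfs.example.com:/my\040share
```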
<rharper> We don't issue the mount directly; right, we only encode it into an fstab entry and run mount -a
<rharper> so, it has to be correct enough to have fstab parsable
<Odd_Bloke> If `mount -a` will skip unparseable lines, then we don't necessarily have to have it parseable.
<rharper> I think we get a unit failed error
<rharper> it's not fatal to cloud-init
<Odd_Bloke> Arguably it's easier for a user to figure out what's going on if we write out the unparseable line than if we just emit a warning.
<rharper> thinking, we don't normally sanitize/fixup cloud-config provided values
<rharper> yes
<rharper> I was heading just there
<Odd_Bloke> Yep, I wasn't disagreeing with you. :)
<rharper> and I was just violently agreeing with you as well , haha
<Odd_Bloke> OK, so I think that regex is fine, except I'm not sure if there _must_ be a leading slash on the path.
<rharper> could we not just check for ":" in the line ?
<rharper> since we're not parsing it ?
<Odd_Bloke> Yeah, that's what I'm wondering about.
<rharper> and we could possibly help with the _netdev option , if that's not present
<rharper> https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html  is useful;
<Odd_Bloke> Yeah, I think we could improve the experience further, and I might ask beezly if they're interested in doing more work after this.
<rharper> yeah
<rharper> I wonder if the efs-utils is in the ubuntu image? or the aws snap ?
<Odd_Bloke> There isn't a likely-looking package in the archive, so I don't think it's deb-packaged at least.
<Goneri> Odd_Bloke, could you please remove the "wip" flag here https://github.com/canonical/cloud-init/pull/298
<blackboxsw> done Goneri
<Goneri> thanks!
<blackboxsw> I'm assigned to that PR as of today. will get you a review there
<Goneri> also, is there anything required here: https://github.com/canonical/cloud-init/pull/289
<Goneri> blackboxsw, it's mostly just some overdue cleanup/refactoring.
<blackboxsw> sounds good
<Goneri> blackboxsw, ultimately I would like to move these functions in distros/
<Odd_Bloke> rharper: OK, so I'm thinking I'll ask beezly to drop the "/" from the regex, and call that good.  OK with you?
<rharper> Odd_Bloke: any reason for the regex over just the ':'  ?
<blackboxsw> Odd_Bloke: what git commits should I be cherry picking into ubuntu/devel for the dropping invalid underscores from hostnames
<blackboxsw> I was thinking the following in this order: 4f825b3e6d8fde5c239d29639b04d2bea6d95d0e  c478d0bff412c67280dfe8f08568de733f9425a1.... but I think I might need another before c478
<vasartori> Hi all, I'm getting a strange behaviour in cloud-init with https://github.com/vmware/cloud-init-vmware-guestinfo. When the vm is customized by open-vm-tools, it works; after reboot, cloud-init (from user-data) fails because a directory exists (/var/lib/cloud/instance). If I remove that directory manually, it works. Any tip?
<rharper> Odd_Bloke: so, I just rebased Goneri 's set-passwd change, and now the squash-and-merge wants to put in a Co-Authored by me (which it is not) ... how do I merge the PR and ensure that Goneri is in the author field ?
<rharper> vasartori: there are known issues with vmware workflows and cloud-init, specifically around changing the instance id.  There's upstream work on trying to sort out how to prevent changing the instance-id but still running for additional "customizations";  it's somewhat a misunderstanding of how cloud-init is designed (it performs most customizations just once), where the vmware workflows involve multiple rounds of customization against the same
<rharper> instance;
<Goneri> rharper, I can rebase it myself if you prefer
<blackboxsw> Goneri: that'd work, but rharper, that issue was a github snafu; they broke something on the backend when I merged. I think make sure that your squashed commit message does not contain Co-Authored-By: ryan.harper as a footer
<rharper> Goneri: I think if you do and force push, that should reset it to you as last author
<vasartori> rharper So, should I give up on using it?
<Goneri> done
<rharper> vasartori: I don't know;
<Goneri> I've a script for that: https://gist.github.com/goneri/c0fab52388465f5da2b2979ee5166f08
<blackboxsw> vasartori: it'd be nice if we could ultimately work with vmware somehow to get that datasource upstream in cloud-init to ease support. Also, it'd be good to file a support ticket against VMWare reporting that issue. But there may be a bug in cloud-init upstream that still needs resolving, as rharper states
<rharper> vasartori: as a community, we don't have access to all the different versions of vmware, customizations, platforms; so it's difficult for us to develop a way to 1) provide unique instance-ids to vmware VMs  2) design a workflow that handles "recustomizing" an already-"customized" VM;   we continue to work with the VMware engineers when they submit PRs to cloud-init;
<rharper> =(
<blackboxsw> yeah it's a strange relationship that we hope can be improved with engagement from VMware and the cloud-init community in general through fix requests against vmware (and our discussions with VMWare reps/developers)
<rharper> blackboxsw: they left
<blackboxsw> well shucks
<vasartori> rharper and blackboxsw Thanks a lot for the answers... I'll try to talk with vmware...
<Odd_Bloke> rharper: I believe that they reverted things so that the submitter of the PR is the author of the squashed commit.
<Odd_Bloke> s/they/Github/
<blackboxsw> thanks vasartori I had worried you got frustrated and took off :)
<Odd_Bloke> So you shouldn't need to do anything special, just remove the Co-Authored line.
<blackboxsw> vasartori: the more voices, the more likely we can improve engagement thanks
<vasartori> blackboxsw my internet had gone away :p . You are 100% correct, I'll try to make some noise there :p
<Odd_Bloke> blackboxsw: I believe you'll want (in reverse order): 1bbc4908f c478d0bff 2566fdbee c5e949c02
<Odd_Bloke> (`git cherry-pick c5e949c02 2566fdbee c478d0bff 1bbc4908f` onto ubuntu/devel looks right to me, and pytest runs.)
<Odd_Bloke> s/runs/passes/ of course. :p
<rharper> Odd_Bloke: I see (just remove the co-author bit);
<rharper> I hope so, even after a rebase / force-push by Goneri the UI still wants to inject that Co-Authored line...  dropping that and merging
<rharper> Odd_Bloke:  yeah, that worked.  merged and Goneri has credit.
<Odd_Bloke> \o/
<Goneri> Yeah!
<blackboxsw> rharper: Odd_Bloke I've put up a cherry-pick branch for review into ubuntu/devel with new tooling
<blackboxsw> https://github.com/canonical/cloud-init/pull/303
<rharper> k
<meena> oh no github down
<powersj> doh
<powersj> https://www.githubstatus.com/
<blackboxsw> farrr
<blackboxsw> well at least github has a neat 500 error splashscreen
<meena> well, time to sleep then!
<meena> Good night.
<blackboxsw> Odd_Bloke: rharper, the big question with https://github.com/canonical/cloud-init/pull/303 is do we want to support --force push of dropped commits in ubuntu/devel per cherry-pick -t in https://github.com/CanonicalLtd/uss-tableflip/pull/45
<blackboxsw> or do we want to re-add cherry picks that we dropped to support fixing our daily recipes?
 * rharper has to shift metal gears .. 
<rharper> IIUC, we shouldn't have to force push, if we cherry pick, when we merge, the merge will see we've already landed the cherry picks ... no ?  /me 's git fu is weak-sauce
<rharper> we have a list of cherry picks;  we need to cherry pick each one, refresh patches, update changelog.   then post release, we'll do a new-upstream-snapshot; the merge should see the cherry picks, I thought it will replay the commits that are interleaved automatically ?  or does it not do it that way ?
<rharper> https://stackoverflow.com/questions/14486122/how-does-git-merge-after-cherry-pick-work
<blackboxsw> rharper: we'll have to --force if we drop the 'dropped cherry-picks to make daily recipe build'
<rharper> blackboxsw: I thought daily was master with merged release branch ... and IIRC, the steps post cherry-pick and release is to remove the cherry picks on the release branch, right ?
<rharper> we only need the cherry picks *in* for a release
<rharper> once we've uploaded, we can un do them
<rharper> right ?
<rharper> which is what the refresh --patches only was to be ?
<blackboxsw> rharper: correct. we only need cpicks 'in' for a release
<blackboxsw> in this circumstance, we've already released a cherry pick once and applied a couple of follow-up commits to remove that cpick so dailies work
<blackboxsw> rharper: right but refresh --patches is fairly shallow in that it actually imports all commits via new-snapshot, but only uses that to drop cpicks, so the code in ubuntu/devel will now officially match tip
<blackboxsw> the intent of --refresh-patches was that our next release would perform the full new-upstream-snapshot and end up listing all the commits it had already pulled into ubuntu/devel in the debian/changelog
<blackboxsw> during feature-freeze, we can't import all commits from tip to properly refresh/drop patches, because we also can't release those other non-cherry commits
<rharper> then I'm not sure what problem we're fixing ... we manually cherry pick, release/upload,  and commit changes to drop cpick patches right ?
<rharper> at that point, daily still works;
<blackboxsw> rharper: right I'll draw up a hackmd doc with the scenarios we are handling.
<blackboxsw> then we can comment on best plan for a second cherry-pick release
<rharper> the new case is, we'd like to do a second cpick-only release ?
<rharper> and what doesn't work there ?
<blackboxsw> rharper: here are the use-cases https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?both
<blackboxsw> and yes, this new case is a second cpick release instead of new-upstream-snapshot
<blackboxsw> so we already have dropped the earlier released cpick from ubuntu/devel so we either need to unwind that dropped cpick to add new cpicks, or re-add all cpicks and re-release
<blackboxsw> I'm not sure about the best approach. maybe the best approach is not to unwind/pop the dropped cpick commits. and just re-add the original cpicks that we dropped from ubuntu/devel which allowed us to fix daily builds
<blackboxsw> also, maybe I'm misunderstanding what we did with new-upstream-snapshot --update-patches-only. I thought it actually pulled in all commits. rerunning now to confirm.
<rharper> blackboxsw: so, I was thinking for cpick releases, we'd have a file with the cpick hashes we wanted ... then we could easily revert those to get daily working, and then if we did a second cpick release, the list is just longer this time (we update it with new cpicks) and then pull them out afterwards
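rharper's hash-file idea could look roughly like the self-contained sketch below. The repo, file name (`cherry-picks`), and commit messages are all hypothetical stand-ins, not cloud-init's actual tooling: the listed hashes are applied for the release upload, then reverted afterwards so dailies build from tip again, without rewriting the released commitish.

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo base > file && git add file && git commit -qm 'base'
echo fix >> file && git commit -qam 'fix: ship this early'
fix_hash=$(git rev-parse HEAD)

# Release branch starts before the fix; record the wanted picks in a file.
git checkout -qb ubuntu/devel HEAD~1
printf '%s\n' "$fix_hash" > cherry-picks

# Apply every listed hash for the release upload...
while read -r h; do git cherry-pick "$h" >/dev/null; done < cherry-picks

# ...then revert them (newest first) so dailies build from tip again.
# git revert creates *new* commits; the released ones stay in history.
tac cherry-picks | while read -r h; do git revert --no-edit "$h" >/dev/null; done

git log --oneline
```

A second cpick-only release would just append more hashes to the file and replay it.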
<Odd_Bloke> Found a gap in our pytest testing vs non-pytest testing, which I've addressed in https://github.com/canonical/cloud-init/pull/304
<Odd_Bloke> (As it happens, no pytest tests are using util.subp, but this will catch any future additions.)
<blackboxsw> rharper: the problem is if we are reverting the original cpick hashes from ubuntu/<series> then we've botched the commits that we ended up releasing already
<blackboxsw> so we don't have an exact point anymore in ubuntu/<series> that was used to build/dput
<rharper> do you want to do a hangout?  it might be faster
<blackboxsw> so maybe the best bet is to just re-cpick
<blackboxsw> yeah I'd like to
<rharper> k
<rharper> standup ?
<blackboxsw> yessir
<Odd_Bloke> blackboxsw: I'm not on the call so this may not be helpful, but reverts in git create a new commit (whereas in bzr they reset the history to before the commit in question).  Just in case there's a confusion in nomenclature, a git revert would not mean we'd lose the commit that was previously uploaded.
<Odd_Bloke> (Apologies to you and rharper if I've just stepped all over your conversation unhelpfully. :p)
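Odd_Bloke's nomenclature point can be shown in a throwaway repo (all names here are demo stand-ins): after a `git revert`, the previously uploaded commit's hash still resolves, because the revert is an additional commit rather than a history reset.

```shell
set -e
d=$(mktemp -d) && cd "$d"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo v1 > f && git add f && git commit -qm 'uploaded release commit'
released=$(git rev-parse HEAD)
echo v2 >> f && git commit -qam 'change we want to back out'
git revert --no-edit HEAD >/dev/null

# The revert is a *new* commit; the released commit is still reachable:
git cat-file -t "$released"    # -> commit
git rev-list --count HEAD      # -> 3
```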
<blackboxsw> Odd_Bloke: always a help. will read now
<blackboxsw> rharper: Odd_Bloke ok PR 303 is in good shape I think for cloud-init
<blackboxsw> updated description with the manual steps I performed
<blackboxsw> and shouldn't involve a --force push due to mangled commitishes
 * blackboxsw starts reworking the uss-tableflip script changes for handling --pop-all
<rharper> blackboxsw: +1, lemme look
<rharper> blackboxsw: "I manually deleted the 'drop ec2 secondary nics' cpick changelog" ; you just opened the file and deleted content ? or is there some dch command that will drop it ?
<blackboxsw> rharper: hrm my dch command dropped me into that (maybe because I actually wasn't running that as a bash script but as individual commands on the commandline)
<blackboxsw> if you try that separate cherry-pick 6600c64 on the commandline directly (instead of inside a bash script) it might interactively prompt you for the changelog changes that were added. I always have to :wq after each step
<blackboxsw> maybe it's because I have EDITOR=vi in my env?
<rharper> blackboxsw: I see, you manually removed it after running cherry pick, during the dch -i interactive session
<rharper> I've reproduced your branch;  have you run build-package and test-built it yet ?
<blackboxsw> rharper: will do that now. was testing tooling (which I think is almost there for --pop-all)
<rharper> blackboxsw: I'll hold off approving; but I've commented that I can reproduce your branch
<rharper> I'll check back later
<blackboxsw> +1 thanks rharper
<blackboxsw> rharper: ok tooling isn't going to change this manual process I forgot
<blackboxsw> so 303 is good
<blackboxsw> if +1'd then I can upload that.
<blackboxsw> tomorrow we can sort the uss-tableflip and my followup PR to --pop-all for daily recipe builds
 * blackboxsw needs to make dinner
<blackboxsw> build-package works on my branch
<blackboxsw> can build-and-push if agreed
#cloud-init 2020-04-03
<rharper> blackboxsw: Sounds good, I'll +1 the branch
<rharper> blackboxsw: it looks like there's a package build failure related to pytest not in the build-reqs ...
<rharper> do we need to sort that out ?
<blackboxsw> rharper: we do unfortunately
<blackboxsw> I'm trying to work that now
<Goneri> rharper, I just tested on the 3 BSD, if two interfaces use DHCP, the system uses the default route of the first NIC.
<blackboxsw> rharper: yeah I think we also need to cherry-pick the tox pytest deps changes
<blackboxsw> cherry-pick 986f37b01
<blackboxsw> redoing that branch now to test
<blackboxsw> rharper: finally repushed PR 303. and added new manual steps to the PR description. basically I cherry-pick the already released 6600c642, then the tox pytest  986f37b01, then the rest of the picks
<blackboxsw> hrm I think the build fails again; the issue is that sbuild "Install cloud-init build dependencies (apt-based resolver)" gets dependencies direct from test-requirements.txt before the cpick is applied to drop nose and add pytest. So, it's not present in the sbuild image.
<blackboxsw> even though the end result has a proper test-requirements.txt file once patches are applied
<blackboxsw> bummer, may have to fix this tomorrow.
<rharper> blackboxsw: I think the github CI build is just a package/bddeb; so it doesn't have any patches applied, so I think it's a funny state;
<rharper> blackboxsw:  https://github.com/canonical/cloud-init/pull/306  is up
<blackboxsw> ahh that makes sense rharper
<blackboxsw> maybe we can change that too in a separate pr against cloud-init
<blackboxsw> looking over 306
<PodioSpaz> When using cloud-init multi-part MIME with multipass, I receive "error loading cloud-init config: yaml-cpp: error at line 5, column 13: illegal map value".
<PodioSpaz> Content-Type: multipart/mixed; boundary="===============8832723955843212272=="
<PodioSpaz> MIME-Version: 1.0
<PodioSpaz>
<PodioSpaz> --===============8832723955843212272==
<PodioSpaz> Content-Type: text/cloud-config; charset="utf-8"
<PodioSpaz> MIME-Version: 1.0
<PodioSpaz> Content-Transfer-Encoding: base64
<Saviq> hi PodioSpaz, we only currently support plain user data in the --cloud-init argument
<Saviq> there's a handful of feature requests for more complex scenarios on our github
<Saviq> https://github.com/canonical/multipass/issues/557
<Saviq> https://github.com/canonical/multipass/issues/1295
<Saviq> https://github.com/canonical/multipass/issues/1040
<Saviq> https://github.com/canonical/multipass/issues?q=is%3Aissue+is%3Aopen+cloud-init
<Saviq> please file one for multi-part if none of those fits your use case
<PodioSpaz> Thanks
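For datasources that do accept multi-part user-data (multipass, per Saviq, does not), the payload shape PodioSpaz pasted can be assembled by hand: top-level multipart headers, a blank line, then each part introduced by `--boundary` with its own `Content-Type`, and a closing `--boundary--`. A minimal sketch (boundary reused from the paste above; the runcmd content is a made-up example):

```shell
set -e
b='===============8832723955843212272=='
cat > user-data <<EOF
Content-Type: multipart/mixed; boundary="$b"
MIME-Version: 1.0

--$b
Content-Type: text/cloud-config; charset="utf-8"
MIME-Version: 1.0

#cloud-config
runcmd:
 - echo hello
--$b--
EOF
cat user-data
```

Newer cloud-init releases also ship a `cloud-init devel make-mime` helper for this, if available.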
<blackboxsw> rharper: approved 306, needs a rebase against master
<blackboxsw> and can land
<blackboxsw> rharper: I'll rebase and push.
<rharper> blackboxsw: thanks!
<blackboxsw> ci running on it. will merge it when I see that green
<blackboxsw> almost have new manual steps for you on release (it'll take into account your PR which we will land momentarily)
<rharper> OK
<blackboxsw> rharper: just landed https://github.com/canonical/cloud-init/pull/306
<blackboxsw> I'll get you the steps for focal release
<blackboxsw> okay rharper new manual steps and push for focal release to include your 306 https://github.com/canonical/cloud-init/pull/303
<rharper> blackboxsw: ok, running them now
<blackboxsw> rharper: might want to refresh the pull request UI I updated the description
<rharper> yes
<rharper> thx
<blackboxsw> building package now to test
<blackboxsw> I had missed a couple things in adding my additional quilt patch
<rharper> what's the ./picks.sh ??  is that our wrapper  you're working on ?
<blackboxsw> rharper: that is just a script to automate the first half of the manual operations to avoid having to cut and paste each step
<blackboxsw> so you should be able to run that directly as is.... and manually continue with `cd cloud-init` and beyond
<rharper> hrm
<blackboxsw> cd /tmp; bash ./picks.sh
<rharper> why do I need it?  I've created all of the cpicks;  then I push them on
<rharper> fix up the unittest
<rharper> etc.
<rharper> I don't see why I need it? (or is it automating all of the cpick work I did following your steps?)
<rharper> sorry, I see it now
<blackboxsw> if you've already created the cpicks. then you only need additionally to:
<blackboxsw> cherry-pick 09fea85fd1f6fd944f4cdd8b97e283090178ae88  # your PR 306
<blackboxsw> and the quilt push -a .... .steps below
<rharper> gotcha
<blackboxsw> the problem you have with your branch is that you've already updated and committed for release in debian/changelog. so you might want to git reset HEAD~1 to back off of that
<rharper> You forgot an EOF
<rharper> that's why
<blackboxsw> ahh bad paste.
<rharper> I reset to upstream/ubuntu/devel first
<rharper> so I was replaying all of the picks from the beginning but I didn't notice it was a heredoc  ...
<rharper> no worries.  I'm on it now
<rharper> did you delete the trailing whitespace after removing the unittest  ?
<rharper> oh there's the EOF ... hehe
<blackboxsw> rharper: my EOF is middle of that description as I kicked it out during the quilt header as I figured there'd be some other wrangling
<rharper> picks calls itself =)
<blackboxsw> rharper: picks doesn't call itself the EOF was above that just before calling bash ./picks.sh
<blackboxsw> I'll update the description and include the entire script within the picks.sh
<rharper> your last git commit -am pulls in local cruft, lemme try again
<rharper> blackboxsw: I'm pretty close; the only delta I have is in the last quilt patch (how you describe editing the unittest file)
<rharper> https://paste.ubuntu.com/p/PDmfQ2HQwv/
<rharper> blackboxsw: ^
<blackboxsw> rharper: I just finally pushed again
<blackboxsw> and updated description
<rharper> ok, let me replay pick.sh
<rharper> it does speed things up =)
<blackboxsw> rharper: man, lintian is still coming up with an error on the branch :/
<blackboxsw> E: cloud-init changes: inconsistent-maintainer Chad Smith <chad.smith@canonical.com> (changes vs. source) Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
<blackboxsw> E: cloud-init: mismatch-translated-choices cloud-init/datasources choices-c
<rharper> eww
<blackboxsw> I'll try applying that Author instead of me
<blackboxsw> when editing the header
<rharper> oh, I bet you missed a file
<rharper> the datasources choices breaks when the distro values aren't matching
<rharper> we pulled out netbsd/openbsd  from the unittest
<rharper> but where else are they ?
<rharper> oh, datasource isn't distro, but when we add/remove a datasource;  isn't there some manual release branch updates ?
<rharper> blackboxsw: ok, diff looks good, once you sort out the lintian failure
<rharper> testing out my source package build right now
<blackboxsw> great rharper, I'm rebuilding now with Author: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com> in the dep3 header
<blackboxsw> to check
<rharper> that's what I've got in mine now after your update
<blackboxsw> rharper: looks like this https://bugs.launchpad.net/ubuntu/+source/lintian/+bug/1862787
<ubot5> Ubuntu bug 1862787 in lintian (Ubuntu) "inconsistent-maintainer error not applicable to Ubuntu" [Undecided,Fix released]
<rharper> ah, yes
<blackboxsw> hrm, what does this 'mean'.  does it mean our focal images in sbuild don't yet have that released version of lintian?
<blackboxsw> I'll ask in ubuntu-devel
<rharper> not sure
<rharper> Lintian: fail
<rharper> Status: successful
<rharper> Version: 20.1-10-g71af48df-0ubuntu13
<rharper>  
<rharper> the package builds just fine
<rharper> lintian errors are non-fatal to package builds
<rharper> I know I recently asked about this when I was testing out probert packaging changes
<rharper> blackboxsw: so, I can reproduce your branch, git diff is OK, and sbuild is OK from my perspective;
<rharper> is your build-and-push sbuild completing with Success ?
<blackboxsw> ahh right. ok cool. I was worried this'd break things
<rharper> me too
<blackboxsw> rharper: ok. then I think we are good
<blackboxsw> I will build-and-push
<rharper> one sec
<blackboxsw> I have commented those operations out
<rharper> are you going to leave the author as you or ubuntu ?
<blackboxsw> ok holding. (and reperforming changes to track Author: chad.smith@c.com
<rharper> I'd like to get a clean diff against your branch and I'll comment with that
<rharper> ok
<rharper> I'll replay and diff
<blackboxsw> yeah that was unrelated change that didn't fix anything
<blackboxsw> same
<blackboxsw> the issue was related to the generated *.dsc file I think, not the dep3 header on the patch
<blackboxsw> sooo many cherry-picks
<blackboxsw> rharper: force pushed and updated PR description
<blackboxsw> building one more time
<blackboxsw> rharper: +1 successful build. with latest changes.
<blackboxsw> I won't build-and-push until you compare diffs against what I've pushed
<rharper> blackboxsw: same here, I'm approving ;  but you push from cli, not from gui.    Do we mark approved or something else in the UI ?
<blackboxsw> rharper: yep we push releases from cli, not github
<rharper> is marking approve OK? it won't autoland
<blackboxsw> to perform dput, tag and push to origin/ubuntu/devel
<blackboxsw> correct, no auto-land in github
<blackboxsw> ... YET. muhahhaa
<rharper> heh, also CI will fail due to the no-patch-bddeb
<rharper> so that'll block merging via UI
<rharper> ok, I've Approved it
<rharper> thanks for working on this
<blackboxsw> schweet. rharper thanks  for the counsel here. also, I'm thinking I'll put up a tiny separate PR against ubuntu/devel now to add the debian/cloud-init-cherry-picks so we can use the uss-tableflip 'cherry-pick --pop-all' to fix daily builds
<rharper> nice
<blackboxsw> merged thx rharper https://github.com/canonical/cloud-init/pull/303
<blackboxsw> powersj: rharper [ubuntu/focal-proposed] cloud-init 20.1-10-g71af48df-0ubuntu3 (Waiting for approval)
<rharper> \o/
<powersj> sweet
#cloud-init 2020-04-04
<andras-kovacs> wow there is something strange with cloud-init on RHEL 7.8. It didn't start as it did previously
<andras-kovacs> I could trigger cloud-init-local, but cloud-init.service didn't run
 * andras-kovacs sent a long message:  < https://matrix.org/_matrix/media/r0/download/matrix.org/iAKNrOfaoBqRjPsUCkIAQSak >
<andras-kovacs> I'm disabling cloud-init from my kickstart file, then enabling it when I finish the VM build with packer, from a cleanup script. So the first run should happen only when I make a new VM from the template. Now I'll give a try to disabling cloud-init with a file or kernel parameter instead of messing with the systemd units from the kickstart, because the service units are enabled but they don't start on the "second" boot
<andras-kovacs> -> there are even no cloud-init logs; I ran cloud-init clean, etc.
<andras-kovacs> Ofc if I find the source of the problem, I'll report it to RHEL
<andras-kovacs> maybe cloud-init.target is not in the boot goals hmm.
<andras-kovacs> That's the case! I can't see any cloud-init related service in the critical chain.
<andras-kovacs> * systemd-analyze critical-chain I mean
<andras-kovacs> "already disabled: no change needed [no /run/systemd/generator.early/multi-user.target.wants/cloud-init.target]"
<andras-kovacs> cloud-config.service is now WantedBy=cloud-init.target
<andras-kovacs> previously it was WantedBy=multi-user.target
<andras-kovacs> same story with cloud-init.service
<andras-kovacs> So IMHO this is the same issue: https://bugs.launchpad.net/cloud-init/+bug/1771382
<andras-kovacs> or really similar
<ubot5> Ubuntu bug 1771382 in cloud-init (openSUSE) "ds-identify: fails to recognize NoCloud datasource on boot cause it does not have /sbin in $PATH and thus does not find blkid" [Medium,Fix released]
<andras-kovacs> I didn't need to modify / override the unit files, only: mv /etc/systemd/system/cloud-init.target.wants/* /etc/systemd/system/multi-user.target.wants/
<andras-kovacs> And it works!
<andras-kovacs> finally
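The workaround above boils down to re-homing the `.wants` symlinks so the units are wanted by a target that is actually part of the boot transaction (multi-user.target) instead of cloud-init.target, which was not in the boot goals. A throwaway-directory sketch of the same mechanics; on a real system the paths are under /etc/systemd/system and you would follow up with `systemctl daemon-reload`:

```shell
set -e
root=$(mktemp -d)
mkdir -p "$root/cloud-init.target.wants" "$root/multi-user.target.wants"
: > "$root/cloud-init.target.wants/cloud-init.service"
: > "$root/cloud-init.target.wants/cloud-config.service"

# Re-home the links so a target in the boot goals wants the units:
mv "$root"/cloud-init.target.wants/* "$root"/multi-user.target.wants/
ls "$root/multi-user.target.wants"
```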
<andras-kovacs> The bug was filed already here: https://bugzilla.redhat.com/show_bug.cgi?id=1820540
<andras-kovacs> I added my notes and findings.
<ubot5> bugzilla.redhat.com bug 1820540 in cloud-init "cloud-init package broken post 7.8 upgrade" [Medium,New]
<andras-kovacs> Maybe... it's not so obvious now that cloud-init has many modules that get executed every time a server boots up.
#cloud-init 2020-04-05
<PodioSpaz> Can `write_files` be merged?  I can't seem to get the `merge_how` syntax correct.  Only files in the last `write_files` are being created.
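One thing worth trying, sketched from cloud-init's user-data merging documentation (whether it fixes this exact case depends on how the parts are being combined): by default a later cloud-config part replaces list keys like `write_files`, but merge directives placed in the later part can request appending instead. The part file name and path below are made up for illustration.

```shell
# Hypothetical second cloud-config part that asks for list-append merging
# so its write_files entries are added to (not replacing) earlier parts'.
cat > part-002.cfg <<'EOF'
#cloud-config
merge_how:
 - name: list
   settings: [append]
 - name: dict
   settings: [no_replace, recurse_list]
write_files:
 - path: /tmp/from-second-part.txt
   content: "appended alongside the first part's files\n"
EOF
cat part-002.cfg
```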
