#cloud-init 2014-03-03
<harlowja> smoser u see all my changes?? lol
<harlowja> i think https://code.launchpad.net/~harlowja/cloud-init/changeable-templates was what u were thinking of
<smoser> harlowja, i did see that go by. i haven't really looked at it.
<harlowja> np
<smoser> i'd like to have that for 14.04, as a way to tell people "use this for backwards compat to 16.04"
<smoser> but i need a FFE to get it.
<harlowja> ah
<seanwbruno> bah
<seanwbruno> I need a brain reboot this morning.
<harlowja> reboot!
<harlowja> smoser let me know if i can help, i can say it needs a FFE 
<harlowja> lol
<harlowja> seanwbruno hey, the whole whatsapp stuff ran on freebsd, there u go :-P
<seanwbruno> indeed it did
<seanwbruno> a lot of ex-Y! people over there.
<harlowja> swimming in $$ now, lol
<smoser> i loved/love whatsapp.
<smoser> but fear where facebook will take it.
<smoser> i surely don't want to be inundated with pics of harlowja every time he posts on facebook.
<harlowja> lol
<harlowja> smoser u know u want it
<harlowja> lol
 * harlowja should now be quiet or else i get in trouble
<smoser> harlowja, one thing in that changeable template...
<harlowja> sure
<smoser> it'd be nice if it worked without cheetah
<harlowja> hmmm
<smoser> as in, it wouldn't work in backwards compat mode
<smoser> but it wouldn't stack trace
<harlowja> like it would just use mako?
<smoser> well, i guess what i'd really like would be:
<harlowja> or are u thinking some basic simple impl?
<smoser>  a.) update all existing templates to "future"
<smoser>  b.) try loading cheetah, silently swallow error
<smoser>  c.) if you need cheetah because you find the old format, then either use what you imported, or cry loudly at that point.
<harlowja> gotcha
<harlowja> sure, smoser  makes sense
<smoser> 'c' could be improved in the future to say "if i can render this in hack-mode, then i will still be ok"
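The a/b/c plan above could be sketched roughly like this (a hypothetical `render_template` helper and a made-up `#template: future` marker; the real changeable-templates branch may differ):

```python
# Sketch of the a/b/c plan: import cheetah up front, silently swallow the
# error, and only cry loudly if an old-format template actually needs it.
# render_template and the "#template: future" marker are hypothetical names.
try:
    import Cheetah.Template as cheetah  # optional legacy dependency
except ImportError:
    cheetah = None  # b) silently swallow the import error


def render_template(text, params):
    if text.startswith("#template: future\n"):
        # a) templates updated to the "future" format never need cheetah
        return text.split("\n", 1)[1] % params
    # c) old format found: use what you imported, or cry loudly now
    if cheetah is None:
        raise RuntimeError("old-style template requires cheetah")
    return str(cheetah.Template(text, searchList=[params]))
```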
<harmw> smoser: https://code.launchpad.net/~harmw/cirros/cirros-sysinfo
<harmw> thoughts? :)
<smoser> harmw, i'd rather read /proc/partitions. 
<harmw> ok
<harmw> its all about warpspeed, mr Worf
<harmw> :)
<smoser> http://paste.ubuntu.com/7029317/
<harmw> nice
<smoser> harmw, do we have lsblk in there ?
<harmw> yep
<harmw> pff lol
<harmw> thats easy output as well
<smoser> yeah.
<harmw> $ lsblk 
<harmw> NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
<harmw> vda    253:0    0      1G  0 disk 
<harmw> `-vda1 253:1    0 1011.9M  0 part /
<smoser> oh. the other thing to be aware of is running in lxc
<smoser> i dont want a bunch of silly error messages that are not relevant.
<smoser> lsblk has nice formatting options too
<harmw> I see now
<harmw> still favor /proc/partitions?
<smoser> well, the above is really nice. 
<smoser> and gets you LABEL easily too
<smoser> http://paste.ubuntu.com/7029329/
<smoser> see how i use it in _lsblock there
<smoser> i'm not saying we want all of that for sure, or even that we want it at all (since that ascii art is so nice to humans)
<smoser> one thing i'd think about (and yes, i overthink just about everything)
<smoser> is that i want to enable machines to read this stuff.
<smoser> in addition to humans.
<smoser> and if i had to choose between humans and machines, i'd probably pick machines.
<harmw> uhm, how does that relate to cirros-status?
<harmw> (I'm not sure I'm getting it)
<smoser> well, what i'm afraid of is that if we put something in like that. 
<smoser> in any format
<smoser> then people will make scripts/programs that expect it in that format
<smoser> and then i'm tied to an api of sorts.
<harmw> ah
<smoser> harmw, trying to do too many things.
<smoser> cirros-status runs to stdout by default, right?
<harmw> yea
<smoser> k.
<harmw> it's run at the end of booting cirros, pushing all the info down to the console.log
<smoser> right.
<smoser> so i'm not opposed to having status output mostly human readable at this point. and taking some patch like you have there.
<smoser> i think i'd like lsblk if it put label output. that'd be great.
<harmw> like this?
<harmw> $ lsblk -d -o NAME,SIZE
<harmw> NAME SIZE
<harmw> vda    1G
<harmw> harlowja: i've submitted the fbsd static network stuff btw
<harlowja> harmw yup yup, seanwbruno can u check it out?
<harlowja> if u don't mind boss
<harlowja> https://code.launchpad.net/~harmw/cloud-init/freebsd-static-networking/+merge/208973 
<smoser> harmw, i'd like to see LABEL in there too.
<harmw> smoser: pushed
<seanwbruno> eh?
<seanwbruno> oh
<seanwbruno> one sec
<harlowja> seanwbruno if u don't mind, harmw has been doing some stuff to write out the freebsd network config
<harlowja> so since u likely know that format, u might be a good reviewer ;)
<harlowja> seanwbruno v
<harlowja> *oops
<harlowja> seanwbruno https://code.launchpad.net/~harmw/cloud-init/freebsd-static-networking/+merge/208973 
<harlowja> stupid paste key, lol
<smoser> harmw, how does it look if you drop that '-d'
<smoser> ?
<harmw> $ lsblk -t -o NAME,SIZE,LABEL
<harmw> NAME      SIZE LABEL
<harmw> vda         1G 
<harmw> `-vda1 1011.9M 
<smoser> harmw, so i like this:
<smoser> sudo LANG=C lsblk  --ascii --list --bytes -o NAME,MAJ:MIN,SIZE,LABEL,MOUNTPOINT
<smoser> you can drop the LANG=C i guess and sudo.
<smoser> i like the output other than i wish it would print a '.' or something for "no label"
<smoser> as parsing that is crap
<harmw> ah, with sudo I get a label returned
<harmw> wicked
<smoser> harmw, well, i added LABEL to output there.
<smoser> but reading label may require root. if it wasn't using blkid and that data already cached.
<harmw> indeed
<harmw> I'm changing it to the above
<harmw> without LANG=C though
<harmw> and yes, something like no-label would be nice
<harmw> perhaps something similar for mountpoints
<harmw> for the sake of parsing
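One way around the blank-LABEL parsing problem is lsblk's key="value" output mode (assuming the lsblk build in the image supports `-P`/`--pairs`; a minimal busybox-style build may not). A rough Python parse, with a hard-coded sample standing in for real output:

```python
# Sketch of machine-friendly parsing: `lsblk --pairs` emits KEY="value"
# lines, one per device, so empty LABEL/MOUNTPOINT fields stay visible
# instead of vanishing into the whitespace of the aligned table. Empty
# values get the "." placeholder wished for above.
import re

SAMPLE = '''\
NAME="vda" SIZE="1G" LABEL="" MOUNTPOINT=""
NAME="vda1" SIZE="1011.9M" LABEL="" MOUNTPOINT="/"
'''


def parse_lsblk_pairs(output, placeholder="."):
    devices = []
    for line in output.splitlines():
        pairs = re.findall(r'(\w+)="([^"]*)"', line)
        # substitute the placeholder for empty fields
        devices.append({k: v or placeholder for k, v in pairs})
    return devices
```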
#cloud-init 2014-03-04
<natorious> is cloud-init able to run in a virtualenv?  My guess is no.
<smoser> natorious, well, it would write to files outside the virtualenv
<smoser> but harlowja did work to on tox-venvs at https://code.launchpad.net/~harlowja/cloud-init/tox-venvs/+merge/203225
<smoser> i do plan on getting that.
<natorious> thnx
<harlowja> yup, plans to make that work, just need to do a little more
<harlowja> arg, smoser mako doesn't have some of the niceness that cheetah has, durn
<harlowja> especially #slurp
<smoser> what niceness  is that?
<harmw> smoser: I've pushed to cirros-sys-info yet again
<harmw> can you think of anything else to add?
<smoser> harmw, looking.
<harmw> cpu (speed/cores), ram, hdd
<harmw> what else :)
<harmw> hm, I want to know the hypervisor env (at least a little)
<harmw> argh, missing dmidecode
<smoser> harmw, have you tested in lxc ?
<harmw> nope
<smoser> you can do that fairly easily  from ubuntu.
<smoser> launch an ubuntu trusty, then apt-get install lx
<smoser> lx
<smoser> lxc
<harmw> fedora...
<smoser> well, launch an ubuntu vm. thats the power of this whole cloud thing :)
<harmw> :>
<smoser> we make images really easily available for you.
<smoser> install lxc, then:
<smoser> lxc-create -t cirros -- --version=devel 
<harmw> ok, well, fedora has lxc as well
<harmw> obviously
<harmw> though cirros is an unknown template
<harmw> lets see :)
<harmw> smoser: https://github.com/lxc/lxc/blob/master/templates/lxc-cirros.in
<smoser> use trusty
<smoser> or the ppa
<smoser> oh. yeah, fedora
<smoser> silly fedora
<smoser> what good is an os if you can't run cirros in a container on it.
<harlowja> arg smoser  why do all the other template languages suck
<harlowja> cheetah seems so nice now, lol
<harmw> smoser: lsblk doesn't print diskinfo in lxc
<harmw> smoser: can you put dmidecode in cirros? I think it's included with busybox... It's cool for getting more information from the running system
<smoser> harmw, thats fine if it doesn't print info
<smoser> i'm not opposed to dmidecode in the future.
<smoser> but not for this next release.
<smoser> harmw, it's fine if you don't get information, but i'd rather it say something like "unavailable" or not print anything.
<harmw> the release after the next is fine with me
<smoser> and i don't want it to write useless spam error messages to the console
<harmw> it currently writes nothing 
<harmw> no errors/crap, just nothing
#cloud-init 2014-03-05
<harlowja> smoser so jinja2 seems to be more feature compatible with cheetah, have the templates converted over
<harlowja> smoser u were thinking about having this new template engine be the default right?
#cloud-init 2014-03-06
<smoser> harlowja, well, i'd replace any usage of the templates that we have in trunk to use the new format.
<harlowja> smoser right, i adjusted all the ones in /templates
<smoser> and those templates would in some way declaratively state that they were of this new type.
<harlowja> yup
<harlowja> check out the merge smoser when u get some time
<harlowja> *review/merge
<smoser> harlowja, thanks for all your help.
<harlowja> np boss 
<harlowja> :)
<harlowja> like 100 distros in cloud-init now
<harlowja> haha
<harlowja> pretty interesting
<smoser> horah!
<harlowja> def
<oobx_> howdy? I'm stuck with vcloud director and am trying to add cloud-init to my vms because vcloud's vm customization is picky about the OS versions that it supports.
<oobx_> I've created a user-data.txt file and genisoimage'd it, uploaded, attached, and booted.  But, /var/log/cloud-init.log doesn't show it noticing the ISO
<oobx_> i'm using the EPEL 6.3 version of cloudinit on centos 6.5
<oobx_> any tips?  It doesn't show "Alt cloud" as a datasource in the log, just nocloud, configdrive, ec2 and OVF.  I guess I need to find the 7.5 rpm or build one, right?
<oobx_> i just noticed that OVF requires userdata to be in XML. I just used the make-iso script to convert my user-data into base64 and shove it into the XML file.  I'll try it out tomorrow, since I'm unable to upload the ISO right now.
<harmw> seanwbruno: you've had a chance to test the freebsd stuff?
<harmw> smoser: anything needed on my part to have the pending merge request in cirros be fulfilled?
<harmw> *accepted
<smoser> harmw, i'll try to do that today.
<smoser> i'm really sorry. i want to run it in lxc once. (even though i made you boot an unholy ubuntu system to try that :)
<harmw> oh it worked on my fedora system :) 
<harmw> and sure, np
<harlowja> any RH folks around? i was wondering what version of cloud-init will be in rhel7
<harlowja> *and more importantly with what cloud.cfg
 * harlowja might save me some time rebuilding a rhel7 rpm if i can avoid building it in the first place :-P
<harmw> harlowja: does it even ship with ci? I dont see it at either ftp://ftp.redhat.com/redhat/linux/enterprise/6Server/en/os/SRPMS/ or the path for the 7beta
<harlowja> hmmm, was talking to the folks here, and i think they said it does, let me double check, i just got access to a rhel7beta
<harmw> I could be wrong ofc
<harlowja> k, think u are right, doesn't seem in the rhel-beta repo, wonder where the guy saw it
<harmw> perhaps epel, since it's included there
<harmw> http://mirror.nl.leaseweb.net/epel/6Server/SRPMS/cloud-init-0.7.4-2.el6.src.rpm
<harmw> thats for rhel6/centos6
<harmw> http://mirror.nl.leaseweb.net/epel/beta/7/SRPMS/cloud-init-0.7.2-8.el7.src.rpm
<harlowja> k, found it
<harlowja> http://dl.fedoraproject.org/pub/epel/beta/7/SRPMS/repoview/cloud-init.html
<harmw> and thats 7
<harmw> ah yes harlowja
<harlowja> ya, looking inside it, seeing what they did :-P
<harmw> hhe
<harlowja> ya, it seems like they just took over the fedora one :-P
<harlowja> distro: fedora...
<harlowja> in cloud.cfg
<harmw> ah yes, I believe I've patched that in my private repo to read rhel or something :p
<harlowja> ya, some systemd changes also
<harmw> are those patches fedora specific or should they be included 'upstream'?
<harlowja> not 100% sure :-P
<harlowja> they don't seem so specific
<harmw> the nodevconsole patch looks interesting
<harlowja> ya
<harlowja> hey, at least rhel7 is 'Python 2.7.5'
<harlowja> lol
<harlowja> :-/
<oobx> harlowja: I was tinkering with my desktop and may have missed help directed at me.  Thanks for the cloud-init epel links for CENTOS 7.  Now, I'm seeing 7.4 instead of 6.3 in http://dl.fedoraproject.org/pub/epel/6/x86_64/
<harlowja> :)
<harlowja> there u go
<harmw> smoser: got that lxc container up already? :p
<smoser> harmw, so in my test here.. i see the disk output of the host.
<harmw> you do? hm
<harmw> /proc/partitions has indeed all the host's stuff
<harmw> but lsblk didn't
<smoser> i'll look at this a bit more.
<smoser> i think i might use lscpu
<smoser> as it can tell me cores x sockets x threads
<harmw> and thats included in cirros already?
<smoser> yeah.
<harmw> damn
<smoser> and the lsblk might differ between yours and mine.
<smoser> (the lxc container might differ)
<harmw> yea could be the case
<smoser> but at this point whatever we see is probably not valid.
<smoser> harmw, lscpu --parse
<smoser> that looks like it would be nice. 
<smoser> but it seems to not work :)
<smoser> oh. i see.
<smoser> it does. but doesn't give as much info as you'd like.
<harmw> damn, nifty output
<harmw> though it's all zeroes when running --parse
<smoser> well, its not. 
<smoser> thats only for the first cpu
<smoser> and you only have 1
<smoser> :)
<harmw> indeed I have
<smoser> i didn't realize cirros /bin/sh supported ${FOO//from/to}
<smoser> that is not posix
<smoser> awk '$1 == "MemTotal:" {print $2}' /proc/meminfo
<smoser> you can do the divide in awk too if you wanted. and even then ditch the echo.
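The same read, with the divide folded in, looks like this in Python (parsing sample /proc/meminfo text rather than the live file):

```python
# Equivalent of smoser's awk one-liner above, with the kB-to-MB divide
# done in the same pass. The sample string stands in for /proc/meminfo.
SAMPLE_MEMINFO = """\
MemTotal:        2009101 kB
MemFree:         1430500 kB
Buffers:           12345 kB
"""


def mem_total_mb(meminfo_text):
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kb = int(line.split()[1])  # same field awk grabs with $2
            return kb // 1024
    return None
```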
<smoser> harmw, http://paste.ubuntu.com/7045706/
<smoser> that prints out:
<smoser> arch: x86_64
<smoser> total: 16 cpu @ 1600.000 MHz
<smoser> cores/sockets/threads: 4x2x2
<harmw> arch: x86_64
<harmw> total: 1 cpu @ 2009.101 MHz
<harmw> cores/sockets/threads: 1x1x1
<harmw> nice
<harmw> === system information ===
<harmw> Arch: x86_64
<harmw> CPU(s): 1 @ 2009.101 MHz
<harmw> Cores/Sockets/Threads: 1/1/1
<harmw> Virt-type: AMD-V
<harmw> Hypervisor:
<harmw> I think I like that output a little better
<harmw> (and to figure out the hypervisor I'd probably only have dmidecode as option)
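A sketch of building that summary from `lscpu`'s key: value output (the sample mimics lscpu's format; treat the exact field labels as assumptions, since they can vary between util-linux versions):

```python
# Builds harmw's "=== system information ===" block from lscpu-style
# "Key: value" text. Fields default to 'NA', per smoser's cleanup note.
SAMPLE_LSCPU = """\
Architecture:          x86_64
CPU(s):                1
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
CPU MHz:               2009.101
Virtualization:        AMD-V
"""


def sysinfo_summary(lscpu_text):
    info = {}
    for line in lscpu_text.splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    get = lambda k: info.get(k, "NA")  # initialize variables to 'NA'
    return "\n".join([
        "=== system information ===",
        "Arch: %s" % get("Architecture"),
        "CPU(s): %s @ %s MHz" % (get("CPU(s)"), get("CPU MHz")),
        "Cores/Sockets/Threads: %s/%s/%s" % (
            get("Core(s) per socket"), get("Socket(s)"),
            get("Thread(s) per core")),
        "Virt-type: %s" % get("Virtualization"),
    ])
```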
<smoser> harmw. that looks fine to me.
<smoser> one cleanup to what i had, you should initialize variables to 'NA' or something.
<smoser> harmw, http://paste.ubuntu.com/7046165/
<harmw> smoser: pushed
<harmw> and I really can't wait for dmidecode... :)
<smoser> harmw, dmidecode is "interesting"
<smoser> i recently fixed two bugs in cloud-init to *not* use dmidecode
<harmw> haha
<smoser> as on arm, it crashes vms
<smoser> and even, reportedly, crashes hardware.
<harmw> ouch
<smoser> ie, you run that command and "poof" your hardware goes MIA.
<harmw> well, perhaps /dev/mem is just too delicate :p
<harmw> anyway, time for a new cirros release?
<harmw> hm
<harmw> dmesg|grep DMI:
<harmw> looks like a simple way of telling which platform we're on
<harmw> [    0.000000] DMI: Red Hat Inc. OpenStack Nova, BIOS Bochs 01/01/2011
<harmw> kindof
<harmw> dmesg|grep DMI:
<harmw> DMI: Supermicro X7SPA-HF/X7SPA-HF, BIOS 1.0b    01/19/2010
<harmw> (or /var/log/messages for that matter)
<harmw> smoser: I think I like that line as part of the extended info stuff
<harmw> thoughts?
<harmw> cut until BIOS shows up that is
<smoser> $ dmesg | sed -n 's/.*[ ]\+DMI:[ ]\+//p'
<smoser> LENOVO 7417CTO/7417CTO, BIOS 7UET91WW (3.21 ) 12/06/2010
<smoser> harmw, lets do something simple now. and get you some of the data you want.
<smoser> and then work on getting something nicer.
<harmw> ok 
<harmw> grep DMI /var/log/messages | sed 's/.\+] DMI: \(.\+\), BIOS.\+/\1/'
<smoser> no need for grep. 
<smoser> sed is doing that above.
<smoser> and in yours too.
<smoser> $ sed -n 's/.*[ ]\+DMI:[ ]\+//p' /var/log/kern.log
<smoser> LENOVO 7417CTO/7417CTO, BIOS 7UET91WW (3.21 ) 12/06/2010
<smoser> use '-n'
<harmw> well, using grep at first seems a little faster 
<harmw> $ grep DMI: /var/log/messages | sed -n 's/.*[ ]\+DMI:[ ]\+\(.\+\), BIOS.*/\1/p'
<harmw> Red Hat Inc. OpenStack Nova
<smoser> really?
<smoser> are you sure?
<smoser> yeah. wow. it is. 
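The grep|sed pipeline above maps to a one-regex Python version (`dmi_platform` is a made-up helper name for illustration):

```python
# Python mirror of the sed extraction above: pull the platform string out
# of a dmesg "DMI:" line, stopping before ", BIOS" as harmw's sed does.
import re


def dmi_platform(dmesg_line):
    m = re.search(r'DMI:\s+(.+?), BIOS', dmesg_line)
    return m.group(1) if m else None
```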
<harmw> :)
<harmw> I'm pushing it 
<harmw> http://paste.ubuntu.com/7046723/ that's how it looks
<harmw> ideally, the sudo fix for reading ssh keys shouldn't have gone into this branch
<harmw> ok, and using awk to get the ram size
<smoser> right. you dont need sudo there.
<smoser> right?
<harmw> well, not at boot time no
<smoser> ah. i see.
<harmw> but you do if you run the cirros-status manually
<harmw> for whatever reason
<smoser> right. other option is just to output only what is available as non-root.
<smoser> and say "if you were root, you'd know more"
<harmw> lets merge this, release cirros and then change that :p
<smoser> :)
<smoser> i have to run.
<harmw> ok :)
#cloud-init 2014-03-07
<harmw> smoser: cirros release, ahem :p
<smoser> i need a big "DONATE" button on the fabulous cirros home page :)
<harmw> hehe yea
#cloud-init 2014-03-09
<harmw> smoser: https://code.launchpad.net/~harmw/cirros/cirros-buildroot-2014.02
<harmw> your thoughts please :)
#cloud-init 2015-03-02
<faizal> Hi
<faizal> Have some doubts
<faizal> About cloudinit
<faizal> Can I ask
<Odd_Bloke> faizal: It's better to just ask than ask to ask. :)
<faizal> Okay
<faizal> Can we run just one module
<faizal> When we run manually
<faizal> For example only users-group module
<faizal> And one more doubt
<faizal> cloudinit --file ...what type of file will it take as argument
<faizal> I gave cloudinit --file mycloudconfig.Cfg
<faizal> I am getting error
<faizal> Too few arguments
<faizal> What type of file is cloudinit expecting
<Odd_Bloke> faizal: You still need to instruct cloud-init to do something with that file.
<Odd_Bloke> faizal: (e.g. cloud-init --file foo.cfg init)
<Odd_Bloke> faizal: And to run a single module, you want the single subcommand.
<Odd_Bloke> (i.e. cloud-init single)
<Odd_Bloke> I haven't used it, so I don't know the parameters off the top of my head.
<Odd_Bloke> But poke around that and you should find what you want.
<faizal> Thanks
<faizal> Cloudinit single users-group?
<faizal> Is this correct
<Odd_Bloke> faizal: I don't know. :)
<Odd_Bloke> faizal: Does it do what you expect? :p
<faizal> Nope
<Odd_Bloke> Then it's not correct. ;)
<Odd_Bloke> faizal: Did you read the help for cloud-init single?
<faizal> Yeah
<faizal> Got it
<faizal> We should use -n
<faizal> It's working
<faizal> :)
<Odd_Bloke> :)
<faizal> To run manually we should delete the contents in /var/lib/cloud/ every time?
<faizal> Is there any other way
<Odd_Bloke> faizal: Specifying a frequency should override that behaviour.
<Odd_Bloke> (But I'm not 100% sure)
<harmw> hm, stupid wget
<harmw> Read error at byte 4302834/21211526 (error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac).
<harmw> (while trying to build cirros)
<johnca> zackf: :)
<zackf> Hey there johnca
<zackf> Hey all, had a question, using cloud-init on coreos and i'm trying to have it start a systemd template... When I do so it throws an error that i'm using an invalid name
<zackf> https://gist.github.com/jrcloud/92f46d5d845da58b7cf2
<zackf> That's my cloud config ^
<zackf> When i get rid of the @ sign in the zk-discovery service it works. 
<zackf> Just not as a template. 
<zackf> anyone have any tips on how to resolve this? 
<johnca> I think you might have to split it into 2 commands, 1 to write the file and one to start the service
<zackf> Yeah that's what i'm doing currently
<zackf> That works!
<zackf> Just curious if there was a way to do in one step
<zackf> if not , what you said is working as of now
<johnca> probably not
<zackf> Ok, no worries. 
<zackf> This way works. 
<zackf> Thanks johnca
#cloud-init 2015-03-03
<AndroUser> When passing the hashed passwd through cloud-config it's adding two ! symbols + the hashed password to the /etc/shadow file
<Odd_Bloke> AndroUser: How are you generating the hash password?
<AndroUser> http://paste.ubuntu.com/10514512/
<AndroUser> Openssl
<Odd_Bloke> AndroUser: Ah, that means that the user account is locked.
<AndroUser> Ohh
<Odd_Bloke> AndroUser: You'll need to set lock_passwd to false for that user.
<AndroUser> So what is the best way to pass the passwd parameter
<AndroUser> In cloud - config
<Odd_Bloke> AndroUser: That looks fine, I think; but you need to configure the account to not be locked. :)
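Put together, the cloud-config Odd_Bloke describes would look roughly like this (`demo` and the hash value are placeholders; `passwd` and `lock_passwd` are real cloud-init `users` keys):

```yaml
#cloud-config
users:
  - name: demo
    # hash generated with e.g. openssl passwd -1 (placeholder value below)
    passwd: $1$examplesalt$replacewitharealhash
    lock_passwd: false
```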
<AndroUser> Ok
<AndroUser> Let me try
<AndroUser> Woow... It works
<AndroUser> Thanks a lot again
<Odd_Bloke> AndroUser: You're welcome. :)
<Odd_Bloke> smoser: Tests are broken on Python 2.6 in lp:cloud-init; are you looking at those, or shall I?
<smoser> Odd_Bloke, oh?
<smoser> oh. 2.6.
<smoser> i didn't test there.
<smoser> feel free to fix... how are you doing that.
<Odd_Bloke> smoser: I've installed Python 2.6 from an 'old versions of Python' PPA, I think.
<Odd_Bloke> And then just tox.
<smoser> Odd_Bloke, right. you can fix if you want.. i'm not really sure what my feelings are about 2.6 support.
<smoser> the only relevant distribution with 2.6 is rhel 5
<smoser> to my knowledge.
<harlowja_> smoser rhel6 ;)
<harlowja_> rhel5 has 2.4, lol
<Odd_Bloke> *broken sobbing*
<smoser> harlowja_, is this true ?
<smoser> such madness.
<harlowja_> i should check how broken it is
<harlowja_> but ya, the rhel6 thing is true
<harlowja_> :-P
<harlowja_> i blame redhat
<harlowja_> lol
<JayF> I can confirm we run cloud-init under py26 for CentOS 6.x :(
<harlowja_> alright added some comments to https://github.com/cloud-init/cloud-init/pull/2 smoser if u haven't yet
<harlowja_> idk anything about the windows junk there, lol
<smoser> harlowja_, you're a good man.
<harlowja_> :-P
<smoser> except for that rhel thing
<harlowja_> hey, i use x-ubuntu at home, only rhel due to work :-P
<harlowja_> even then i try not to, lol
 * harlowja_ someday smoser will not hate me, lol
<harlowja_> windows and python is so weird, lol
<harlowja_> so much ctypes stuff
<harlowja_> it'd be really nice for those windows guys to get https://github.com/travis-ci/travis-ci/issues/2104 implemented, at least something so that stuff can really be automatically tested
<harlowja_> or something that can integrate with that repo (maybe not full blown travis + windows forall)
<harlowja_> *seems like they had something for macosx http://docs.travis-ci.com/user/multi-os/ 
<harlowja_> *that u could sign-up and get in limited quantities
<harlowja_> something like that for windows would be cool
<harlowja_> something that even requires 'This feature needs to be enabled manually. ' ...
<alexpilotti> harlowja_: we have a CI for Cloudbase-Init
<alexpilotti> harlowja_: integrated in gerrit
<harlowja_> alexpilotti cool, can we get that to run on the new repo ;)
<alexpilotti> harlowja_: thatâs the plan, as soon as we can move to gerrit 
<harlowja_> kk
#cloud-init 2015-03-04
<kl> what's the advantage of cloud-init over ansible?
<kl> or rather, where would you use one tool over the other
<larsks> kl: cloud-init runs when a system boots, *on that system*.  So, for example, if you need to install python first so that you could use ansible, cloud-init could take care of those package installations.  Or if you need to provision an ssh key before you are able to connect to the system, cloud-init can do that, too.
<smoser> cloud-init has the ability to run earlier in boot than ansible. but for the most part its job is to get you to some other management system.
<smoser> Odd_Bloke, if you're bored and want some cloud-init work (or ayone for that matter)
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1427939
<kl> larsks & smoser: thanks guys.
<smoser> but you probably have cloud-init in there somewhere.
<smoser> thats the idea. even if all it does is set up ssh host keys
<smoser> and ssh public keys so ansible can take over.
<kl> That makes perfect sense
<kl> Cheers again :)
<faizal> How to add a blank line... with write_files
<Odd_Bloke> smoser: Those Python 2.6 failures boil down (I think) to this lovely behaviour change: http://paste.ubuntu.com/10526494/
<smoser> Odd_Bloke, oh joy
<smoser> i guess its easy enough to check python version and address.
<smoser> if thats it.
<Odd_Bloke> smoser: Yeah, I've hit this before actually.
<Odd_Bloke> The problem is that 2.6 uses cStringIO which doesn't handle Unicode.
<Odd_Bloke> So it's an easy enough fix.
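The fix Odd_Bloke describes boils down to picking a unicode-safe StringIO by version, something like this sketch (cloud-init's actual patch may differ):

```python
# On 2.6, cStringIO can't handle non-ASCII unicode, so fall back to the
# pure-python StringIO there; io.StringIO covers 2.7+ and 3.x.
import sys

if sys.version_info >= (2, 7):
    from io import StringIO
else:  # python 2.6 only
    from StringIO import StringIO  # pure-python, unicode-safe


def capture():
    buf = StringIO()
    buf.write(u"non-ascii: \u00fc")
    return buf.getvalue()
```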
<smoser> Odd_Bloke, thanks
<Odd_Bloke> smoser: Review of https://code.launchpad.net/~daniel-thewatkins/cloud-init/fix-dmi/+merge/251715 would be much appreciated; it's causing SmartOS to fail (and is probably (partially) behind the CloudSigma failures that bjf was seeing).
<smoser> Odd_Bloke, thats very nice. thank you.
<Odd_Bloke> :)
<smoser> Odd_Bloke, the only comment i have on your code... i might be wrong too
<smoser> er... 2 comments i guess.
<smoser> a.) i might make it check that /sys path exists on input _read_dmi_syspath
<smoser> ie, possibly the user doesnt know or care about the old dmidecode path.
<smoser> maybe you have thoughts on that.
<smoser> b.) LOG.debug("querying dmi data %s", dmi_key)
<smoser>   is slightly faster (and consistent with other cloud-init logging)
<smoser>   than:
<smoser>   LOG.debug("querying dmi data {0}".format(dmi_key))
<smoser> in the case where LOG.debug is filtered out entirely, it will never do the string conversion
<smoser> but consistency is probably a more real argument there.
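The difference is easy to demonstrate: with %-style arguments, str() on the argument only runs if the record actually passes the level filter.

```python
# Shows why LOG.debug("... %s", key) beats .format(): the argument is only
# stringified when the record is actually emitted.
import io
import logging


class Expensive:
    def __init__(self):
        self.conversions = 0

    def __str__(self):
        self.conversions += 1
        return "some-dmi-key"


stream = io.StringIO()
LOG = logging.getLogger("lazy-demo")
LOG.addHandler(logging.StreamHandler(stream))
LOG.propagate = False
LOG.setLevel(logging.INFO)  # DEBUG records are filtered out

key = Expensive()
LOG.debug("querying dmi data %s", key)  # filtered: str(key) never runs
LOG.info("querying dmi data %s", key)   # emitted: str(key) runs once
```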
<Odd_Bloke> smoser: I did initially have the code trying /sys/.../mapping[key] and then /sys/.../key, in case people didn't want to hand a dmidecode thing over.
<smoser> neither of the above are strong feelings of mine.
<Odd_Bloke> smoser: OTP for a bit, will pick it up in a bit.
<Odd_Bloke> smoser: So the reason that I backed out the 'see if the given key exists in /sys/... before trying dmidecode' is that we will have some places that /sys/... doesn't exist.
<Odd_Bloke> smoser: But most people will be developing on places where it does.
<Odd_Bloke> smoser: So we will see things breaking unexpectedly.
<smoser> Odd_Bloke, i'm fine with that argument
<Odd_Bloke> I'll make the logging changes in a minute.
<Odd_Bloke> smoser: Logging changes pushed up to https://code.launchpad.net/~daniel-thewatkins/cloud-init/fix-dmi/+merge/251715
<smoser> Odd_Bloke, i'll pull. thanks
<Odd_Bloke> smoser: Thanks!
<Odd_Bloke> smoser: https://code.launchpad.net/~daniel-thewatkins/cloud-init/fix-py26/+merge/251784 fixes tests on Python 2.6.
<Odd_Bloke> smoser: Tests still pass on 2.7, but I can't check 3.4 because of a test hang.
<Odd_Bloke> smoser: Which is fixed in https://code.launchpad.net/~daniel-thewatkins/cloud-init/fix-py34-test-hang/+merge/251725
<smoser> Odd_Bloke, its probably udev related. 
<smoser> the test hang... udevadm gets run in the tests. it really shouldnt
<Odd_Bloke> smoser: It is, in fact, HTTPretty related.
<smoser> oh.
<Odd_Bloke> Though I'm not ruling out the existence of more than one hang. ;)
<smoser> http_proxy= ?
<smoser> the debian package unsets that explicitly
<Odd_Bloke> smoser: Nope, something to do with it failing to monkey patch Python 3.4.2 correctly.
<Odd_Bloke> Or, rather, failing to un-monkey-patch Python 3.4.2 correctly.
<Odd_Bloke> (Because of a bug in Python 3.4.2)
<stumped> Q: Using "write_files" to *append* to an existing file. I'm having troubles figuring out how to *append* to a file, no problems creating one. Can't seem to locate the method. Ideas?
<smoser> i don't think you can append.
<smoser> yeah, you cannot. it calls 
<smoser>  util.write_file(path, contents, mode=perms)
<smoser> it'd have to pass 'omode=wb+'
<stumped> ah, i see. Thanks. End of that research then. :P I'll move it to a runcmd
<stumped> Is there a special method of escaping the ">" characters? In a runcmd, I've tried adding the following text, but YAML appears to have problems with the ">"....
<stumped> - "echo /tmp/afile.txt >> /tmp/anotherfile.txt"
<smoser> hm..
<smoser> inside the "" it doesnt like it?
<stumped> doesn't seem to. I've tried a lot of different variants. My understanding is that a double-quote treats the entire string as literal.
<stumped> sorry, the word "echo" should be "cat".
<smoser> http://paste.ubuntu.com/10531218/
<smoser> seems to not need anything special to me
<smoser> stumped, ^
<stumped> yes. I see. I'll play around with it more. Perhaps I made a mistake or it's something unique to the runcmd method of performing the systemcall to execute the string.
<stumped> smoser: looks like my mistake. I believe I had a large "yum update" performing at the beginning of the runcmd list and it was still running when the system went into multiuser mode and I didn't realize it. I probably got myself confused by trying to escape >> with \>\> .. TL;dr; cascading set of errors made it look like it wasn't working.
#cloud-init 2015-03-05
<Odd_Bloke> smoser`: utlemming: So I think the version of cloud-init uploaded to vivid yesterday is fairly fundamentally broken by the snappy merge.
<Odd_Bloke> In that SSH is disabled by default.
<Odd_Bloke> smoser`: utlemming: I _think_ the fix is to remove snappy from the list of cloud_config_modules, and ensure that the snappy image build process includes it.
<Odd_Bloke> I'm booting a snappy instance now to have a look at what it currently does.
<Odd_Bloke> smoser`: https://code.launchpad.net/~daniel-thewatkins/cloud-init/no-snappy-by-default includes that change.
<smoser> Odd_Bloke, agreed i screwed that up.
<smoser> lets just configure it off by default.. make the module do nothing, and we can enable it in snappy
<smoser> Odd_Bloke, alternatively we can enable that module based on presence of some "this-is-snappy" flag on filesystem or other runtime check
<smoser> thats what i'd like.
#cloud-init 2015-03-06
<stumped> Q: Having troubles getting the "power_state" to reboot. It seems to be ignored. Any tricks?
<smoser> stumped, distro / version ?
<smoser> and anything in log ?
<stumped> centos7 nothing logged.
<stumped> trying centos 6.5 now
#cloud-init 2016-03-07
<suro-patz> folks, can you please point me to an example or code in cloud-init that can configure aliases on a network-device [ multiple IP addresses from config-drive ]
<suro-patz> smoser: harlowja: ^^
<harlowja> no idea, suro-patz spandhe implemented that
<harlowja> maybe she knows
<harlowja> or look at the config drive -> networking code
<suro-patz> harlowja: ok
<harlowja> https://github.com/openstack/cloud-init/blob/0.7.x/cloudinit/distros/net_util.py#L82 suro-patz is the translation code
<harlowja> but no idea of any examples
<suro-patz> harlowja: checked with spandhe, it seems she had introduced the support of IPV6 - I will check this link and look for the alias support
<harlowja> k
#cloud-init 2016-03-08
<smoser> suro-patz, there is code that should drop "real soon now" to take the maas network config format and allow it to be applied.
<smoser> this is to land in 16.04, so real soon.
<Odd_Bloke> smoser: o/ I've added you as a reviewer for https://code.launchpad.net/~alexandru-sirbu/cloud-init/bigstep-datasource-improvements/+merge/288251 to confirm that it addresses your concerns about exceptions. :)
<suro-patz> smoser: is the code under review? [ "that should take the maas network config format and allow it to be applied." ]
<smoser> no. its still tbd.
<smoser> we will take the rendering code from curtin (lp:curtin curtin/net)
<suro-patz> any tracking bug or so - for me to bookmark
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1549403
<suro-patz> smoser: thanks!
<smoser> so far what has landed is just disabling of cloud-init
<smoser> which doesn't seem like much of a feature :)
<smoser> but is nice.
<suro-patz> smoser: it looks like what I am exactly looking for ... But I am looking for RHEL ... I can take up that work for RHEL
<suro-patz> smoser: if you are working on this, please feel free to keep me in the loop, if that helps
<smoser> ok, suro-patz i will
<suro-patz> thanks smoser, I have subscribed to the bug updates too
#cloud-init 2016-03-09
<chucky_z> hello, I need to add a power_state: entry to a cloud.cfg, but I'm unsure of where to put it.
<chucky_z> should it go straight into the cloud.cfg at the bottom, or inside a cloud.cfg.d file?  This is on CentOS 7 specifically
#cloud-init 2016-03-10
<smoser> Odd_Bloke, ping
<smoser> i see Mar 10 14:53:55 ubuntu [CLOUDINIT] util.py[DEBUG]: failed read of /sys/class/dmi/id/product_name
<smoser>   Traceback (most recent call last):
<smoser>     File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2143, in _read_dmi_syspath
<smoser>       key_data = load_file(dmi_key_path)
<smoser>     File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1187, in load_file
<smoser>       return decode_binary(contents)
<smoser>     File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 88, in decode_binary
<smoser>       return blob.decode(encoding)
<smoser>   UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
<smoser> on my system
<smoser> because /sys/class/dmi/id/product_name has non-utf-8 stuff in it.
<smoser> what should we do there?
<smoser>  xxd /sys/class/dmi/id/product_name
<smoser> 00000000: ffff ffff ffff ffff ffff ffff ffff ffff  ................
<smoser> 00000010: ffff ffff ffff ffff ffff ffff ffff ffff  ................
<smoser> 00000020: ff0a                                     ..
<Odd_Bloke> smoser: AIUI, that means that it's possible to set that DMI value, but it isn't set to anything.
<Odd_Bloke> smoser: (Whereas all 0s means that it can't be set)
<smoser> oh, interesting.
<smoser> but what if i set it to all zeros :)
<smoser> then it cant be set
<Odd_Bloke> Well, I don't know if _you_ can set it at all, it might all be set in stone after early boot stuff.
<smoser> but generally, what should we do about the possibility / reality of non-utf8 in dmi data.
<Odd_Bloke> Well, this is a known special-case (which we should handle).
<Odd_Bloke> Would we expect to get non-UTF8 DMI data on a system which wasn't physically broken?
<smoser> i dont know.
<smoser> do you?
<smoser> i suspect the answer is at least "sometimes stuff is broken"
<smoser> and cloud-init shouldnt fall over on that.
<Odd_Bloke> I don't know, no.
<Odd_Bloke> Yeah.
<Odd_Bloke> I think we only call that in two places ATM.
<Odd_Bloke> If we get non-UTF8 data, then we aren't on Azure (or Azure is horribly broken).
<Odd_Bloke> So returning None there would be fine.
<Odd_Bloke> I can't call the other place to mind off the top of my head.
<Odd_Bloke> But I suspect garbage can safely be reported as "no value", until we find an example when that isn't true.
<smoser> Odd_Bloke, https://code.launchpad.net/~smoser/cloud-init/trunk.dmidecode-null/+merge/288676
<smoser> if you can find a reference to '\x00' then i'd happily return some value for that too (ideally whatever dmidecode would do)
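A sketch of the "treat garbage DMI data as no value" behavior the discussion converges on — illustrative only, not the actual cloud-init code; the function name and the all-0xff example follow the `xxd` output smoser pasted:

```python
def read_dmi_value(raw: bytes):
    """Decode a /sys/class/dmi/id/* value, treating garbage as unset.

    Returns None when the field contains non-UTF-8 bytes (e.g. the
    all-0xff pattern seen on smoser's system) instead of raising
    UnicodeDecodeError all the way up the stack.
    """
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        # Broken or unset DMI data: report "no value" rather than crash.
        return None
    return text.strip()
```

With this, a datasource check like Azure's simply sees `None` and moves on, which matches Odd_Bloke's "garbage can safely be reported as no value" reasoning.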
<rharper> smoser: your MP has merge markers
<smoser> fiddle
<smoser> i push --overwrite'd
<smoser> as i actually had also pushed unintended changes
<smoser> :)
<smoser> but merge markers in Changelog i dont care too much about.
<rharper> sure
<rharper> smoser: but the MP shows changes to unrelated stuff IIUC, the cloud-init generator for example has nothing to do with the dmidecode, no?
<smoser> no longer
 * rharper refresh
<smoser> i pushed --overwrite'd with those removed
<smoser> rharper, so for network rendering
<smoser> looking at curtin/net/network_state.py seems pretty much a howto
<rharper> yes
<smoser> and i answered my own question now
<smoser> render_network_state
<smoser> is what i want
<rharper> and then consuming/using networkstate is documented in curtin/commands/apply_net.py (which uses curtin/net/__init__.py:render_eni
<rharper> yes
<rharper> render_network_state
<rharper> and optionally the udev rule writer
<smoser> render_network_state renders persistent net
<rharper> right
<rharper> you're looking at the code
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/trunk.net1 is with curtin/net copied over.
<smoser> i'd hope to have that used via nocloud datasource by morning.
<rharper> smoser: nice! I'll pickup that branch and migrate cloud-init-tests to use that instead of my curtin apply_net hack
<smoser> lets talk tomorrow morning
<smoser> i have to go afk now. will be back later.
<rharper> ok
#cloud-init 2016-03-11
<smoser> Odd_Bloke, i'd appreciate your reading https://code.launchpad.net/~sankaraditya/cloud-init/topic-stanguturi-vmware-support/+merge/288452 also
<smoser> for stanguturi
<skoude> hi! is it possible to define the address where cloud-init connects manually?
<skoude> I have a problem in openstack, that it is trying to connect with http instead of https and because of that the connection fails..
<waldi> it tries to connect where?
<waldi> (hint: it provides this information in the log)
<skoude> well it tried to connect to http://169.254.169.254/2009-04-04/meta-data/instance-id   instead it should connect to: https://169.254.169.254/2009-04-04/meta-data/instance-id
<skoude> I'm just trying to understand where the address is told in the configs?
<waldi> why do you think so? the EC2 metadata service does not use https
<skoude> I have an openstack and it is configured to use https for metadata service
<skoude> We have our private cloud running, and there was some updates to it, and the metadata service stopped working..
<waldi> how did you configure that?
<skoude> or not the metadata service stopped working.. the metadata service is answering on https but for some reason the cloud-init tries to connect http.
<skoude> This is suse cloud 5, and suse has done the configs..
<skoude> I'm just trying to figure out how cloud-init works
<skoude> basically if update comes to suse cloud, chef will automatically provision the updates to cloud nodes and controllers
<waldi> look into the neutron metadata proxy config
<waldi> and i see no knob to make it serve https
<skoude> It worked before, so somehow they managed to do it :) It broke yesterday evening after the updates
<skoude> I have a support request open, but I would like to know how it works, for just in case.
<skoude> yes there is an https option: nova_metadata_protocol = http (StrOpt) Protocol to access nova metadata, http or https
<waldi> as neutron adds an explicit and unconfigurable redirect from 169.254.169.254:80 to the metadata proxy, i doubt that this was ever https
<waldi> wrong side
<waldi> this is the neutron metadata proxy (used for the EC2 endpoint) speaking to nova
<skoude> yes it was, because I checked from the logs before.. Before cloud-init connected to https
<waldi> i'm pretty sure it did not, as http://169.254.169.254 is hardcoded in cloud-init
<skoude> maybe it is doing somekind of redirect to https, but  in the instance logs I checked that it was https
<waldi> well. so what is your problem now? that an alleged redirect got missing?
<skoude> well the problem is that it does not work :)
<skoude> cloud-init does not connect to metadata service correctly
<waldi> then fix your metadata service
<skoude> okay thanks..  But it's good to know that the address is hardcoded in cloud-init
<smoser> skoude, the address is "well known"
<smoser> but you can probably configure it, let me see.  The thing you can't do to my knowledge is tell cloud-init where it is without modifying an image.
<smoser> skoude, http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config-datasources.txt
<smoser> the openstack metadata service (datasource['OpenStack']) should have the same configuration options as the Ec2 datasource
<smoser> and you could give it a list of urls in 'metadata_urls'
<smoser> the default is
<smoser> (DEF_MD_URL) is ["http://169.254.169.254"]
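smoser's pointers amount to: a cloud.cfg.d drop-in can set `metadata_urls` under the OpenStack datasource, and cloud-init otherwise uses the built-in default. A sketch of that lookup — the dict shape mirrors the cloud-config-datasources.txt example he links, but the helper itself is illustrative, not cloud-init's code:

```python
DEF_MD_URL = ["http://169.254.169.254"]  # cloud-init's default, per smoser

# What a /etc/cloud/cloud.cfg.d/ drop-in would express (in YAML),
# shown here as the equivalent dict:
cfg = {
    "datasource": {
        "OpenStack": {
            "metadata_urls": ["https://169.254.169.254"],
        }
    }
}

def candidate_urls(cfg):
    """Return the metadata URLs to try: configured list, else the default."""
    ds = cfg.get("datasource", {}).get("OpenStack", {})
    return ds.get("metadata_urls") or DEF_MD_URL
```

Note this still requires modifying the image (or its config), which matches smoser's caveat that you can't tell cloud-init where the service is from outside.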
<smoser> rharper, magicalChicken around ?
<rharper> here
<smoser> wanted to share where i was with the cloud-init networking
<smoser> http://paste.ubuntu.com/15351009/
<smoser> so in that branch, i can basically seed that network config (line 10-23)
 * jgrimm rides along
<smoser> so in container the networking comes up fine
<smoser> but https://github.com/lxc/lxd/issues/1747
<smoser> means that lxc's user-data / metadata get read instead of mine
<smoser> but that would be workaroundable and i think would work if lxc wasn't fighting me
<smoser> http://paste.ubuntu.com/15351044/
<smoser> that shows the results in it too.
<smoser> stanguturi, ^
<rharper> nice
<smoser> that is what i'm working on to get networking configuration from a "local" data source into the instance.
<smoser> harlowja_at_home, wonder if you made any progress on openstack networking ?
<smoser> rharper, if you could try to get that integrated with what you have for testing, and send me a mail on how to go from there, i'd appreciate it.
<smoser> i have to run for the night.
<smoser> stanguturi, i pointed you at that because we'd like for vmware's data to get into the instance that general way also.
<rharper> smoser: yeah
<borei> hi all
<borei> got confused completely, need some heads up
<borei> i have cloud-init with NoCloud datasource
<borei> iso image created, and VM is picking it up
<borei> from docs i found that if instance-id is changed then cloud-init will reload user-data
<borei> but nothing happens
<borei> nobody ?
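For borei's question: what the docs describe is per-instance semantics keyed on a cached instance-id — per-instance work (including user-data handling) re-runs only when the id on the seed differs from the cached one. A minimal sketch of that check (illustrative, not the NoCloud code):

```python
def should_run_per_instance(cached_id, current_id):
    """Decide whether per-instance config should re-run this boot.

    cached_id is what cloud-init remembered from the previous boot
    (None on very first boot); current_id comes from the datasource,
    e.g. the instance-id in the NoCloud seed ISO's meta-data.
    """
    return cached_id is None or cached_id != current_id
```

So if the regenerated ISO carries the same instance-id as before, nothing happens by design; bumping the instance-id in meta-data is what triggers re-processing.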
#cloud-init 2017-03-06
<rharper> smoser: where does the cloud-init package handle the debconf_setselection inputs from maas?  what parses that and renders those .cfg files ?
<smoser> rharper, .postinst
<rharper> thanks
<logan-> I'm trying to hunt down docs on how to configure the renderer for /etc/network/interfaces.d/50-cloud-init.cfg that is generated on Xenial images. I'd like it to render with v6 configuration in addition to v4.
<smoser> logan-, what do you have now ?
<smoser> where are you running ?
<smoser> basically cloud-init reads from the datasource to get the networking information
<smoser> if it finds nothing it renders "dhcp on eth0"
<smoser> there is not currently a way to configure that to use dhcpv6 on eth0
<logan-> http://cdn.pasteraw.com/k10obywd29qws85z8q976qwnkk0nxdt (+ prepended to the line I need)
<logan-> on an Openstack based cloud
<logan-> gotcha. thanks
<jgrimm> smoser, rharper:  we still need this, yes? https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/317917
<rharper> y
<smoser> logan-, are you able to use config drive ?
<smoser> if so, it might "just work"
<smoser> if not, its more complicated...
<jgrimm> we should try to get that landed next, i'd think
<smoser> there is no way for cloud-init to know if it should dhcp v6 or not if it doesn't have a datasource telling it what to do
<smoser> jgrimm, ack
<logan-> I was just looking thru the metadata being exposed at http://169.254.169.254/openstack/latest/network_data.json, it seems to be missing ipv6. maybe it is an openstack metadata service issue :)
<jgrimm> smoser, rharper: cool, just trying to line things up for landing (once ds-identify is up)
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/318975
<smoser> you're just complaining about the 'd', right?
<rharper> smoser: yes, enable|disable vs enable[d]|disabled[d]
<rharper> pick one, for both places they're used, IMO
<rharper> it's documented properly; but easy to misremember
<smoser> fixed.
<rharper> thanks
<logan-> in /etc/cloud.cfg.d/, are yaml overrides to dicts treated as complete overrides of the dict or is the earlier dict merged with later overrides using .update()? ie I want to override system_info: { package_mirrors: {} }, can I just enumerate package_mirrors in the override dict, or do I need to redefine the whole system_info
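The difference logan- is asking about — whole-dict replacement versus a recursive merge — can be illustrated like this. This is not cloud-init's actual merger code (its merge rules are configurable); the sketch just shows what each behavior would do to his `system_info` override:

```python
def deep_merge(base, override):
    """Recursive merge: override wins per key, nested dicts are merged."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

base = {"system_info": {"package_mirrors": ["mirror-a"], "distro": "ubuntu"}}
override = {"system_info": {"package_mirrors": ["mirror-b"]}}

shallow = {**base, **override}     # whole system_info dict replaced
deep = deep_merge(base, override)  # only package_mirrors replaced
```

Under the shallow behavior, enumerating only `package_mirrors` would silently drop the rest of `system_info`; under the deep behavior it would not.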
<smoser> rharper, you ACK that review ?
<rharper> smoser: I'll do that now
<powersj> FYI getting CI automation going on cloud-init today, so some reviews might start showing up
<rharper> powersj: \o/ cool
<powersj> *sigh*
<powersj> this is what is going to happen for nearly every cloud-init CI run:
<powersj> http://paste.ubuntu.com/24126637/
<magicalChicken> powersj: it may be worth looking into the bddeb cloud tests branch again
<magicalChicken> it currently still requires tags to be in place
<magicalChicken> but we were talking before about updating packages/bddeb and the bddeb in a container feature to be able to generate a debian/ on its own
<magicalChicken> and it may be possible to have some sort of workaround to have a temporary tag with that
<powersj> so remind me, that branch was to automatically build a deb, however in this case I'm just trying to run tox, which will package up the code. I don't exactly want a deb in that case.
<magicalChicken> yeah that was to build a deb from current tree and run cloud-tests with the deb
<magicalChicken> so not quite the same thing, nvm
<powersj> I still think this should be "fixed" somehow, as this means the person proposing the merge definitely did not run tox, but also could not run tox until they know that they need to pull the tags
<magicalChicken> i think they could still have the tags locally, they just forgot to push the tags out to their repo
<magicalChicken> that's what happened with mine before
<powersj> ah right
<powersj> but what other projects require you to push tags and keep them up-to-date...
<magicalChicken> it may also work to just have git pull tags from the main repo
<powersj> magicalChicken: is git clone for you slow at times?
<powersj> I've noticed it can be, but this is a first: http://paste.ubuntu.com/24126670/
<magicalChicken> launchpad sometimes times out on me for random stuff, it may just be a glitch
<nacc> powersj: working fine for me at home
<magicalChicken> its working for me too right now
<powersj> nacc: thx
<powersj> k
<Odd_Bloke> smoser: Can one override the frequency of a config module using /etc/cloud.cfg{,.d/*}?
<smoser> yes.
<smoser> https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config.txt#n147
<smoser> but not without re-declaring the list.
<smoser> and generally you should not have to do that.
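The example smoser links (cloud-config.txt#n147) expresses frequency as part of the module list entry. A sketch of how a bare name versus a `[name, frequency]` pair could be interpreted — the constants and parsing here are illustrative assumptions, not cloud-init's actual code:

```python
PER_INSTANCE = "once-per-instance"  # assumed default frequency
PER_ALWAYS = "always"

def module_frequency(entry, default=PER_INSTANCE):
    """Interpret one cloud_config_modules list entry.

    Entries are either a bare name ("ssh") or a pair like
    ["set-passwords", "always"] that overrides the frequency.
    """
    if isinstance(entry, str):
        return entry, default
    name = entry[0]
    freq = entry[1] if len(entry) > 1 else default
    return name, freq
```

As smoser notes, overriding one module this way means re-declaring the whole list in /etc/cloud/cloud.cfg{,.d/*}, since the list itself is replaced rather than merged.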
<Odd_Bloke> smoser: We have a cloud who want to do password configuration (i.e. cc_set_password) every boot; thoughts on how else we could enable them to do that?
<smoser> well cloud-init won't re-read the metadata every boot, so it wouldn't know that it got changed.
<smoser> so without more thinking i dont have an easy way to do what you're after.
<Odd_Bloke> Ah, right, of course.
<Odd_Bloke> Ah, OK, perhaps we've misunderstood what they were trying to achieve.
<Odd_Bloke> (They ship a modified cc_set_password.py.)
<rharper> =(
 * powersj pushes the 'go go ci button'
<powersj> well... first round of CI looks good :D
<jgrimm> \o/
#cloud-init 2017-03-07
<prometheanfire> smoser: btw, I might have a gsoc person do the gentoo cloud-init revamp
<smoser> prometheanfire, that'd be cool!
<smoser> Odd_Bloke, re-thinking what i said yesterday.
<smoser> this is not terribly different than what azure accomplishes in their datasource
<smoser> they handle re-running the mkfs and mounts
<rangerpb> smoser, nice email response .... i was just thinking through the use of the ovf file for the hostname, remember that azure ejects the "cdrom" with that file after it completes provisioning ... do you think that is going to be problematic for reboots, etc?
<smoser> no. it should be ok. it is guaranteed to be there on first boot, and we check to see if the cached instance-id is the same as the current instance id (provided in smbios) to determine if the cached value is valid
<smoser> and fwiw... they say they eject it, but i've never seen that happen.
<rangerpb> :)
<smoser> (that said, my instances don't last more than 30 minutes normally)
<larsks> smoser: re: the new ci stuff, is it possible to have launchpad take the merge proposal message from the commit being merged in order to avoid "No commit message was specified in the merge proposal."? I am going to do a couple of copy-and-paste updates, but that seems like busy work.
<powersj> larsks: it is not unfortunately. Basically setup to let you propose a new message in the event that you have many commits and assumes you want to squash them all into one with the new message, rather than the last.
<larsks> Yeah, I understand the reason. I am just spoiled by github, which if there is a single commit will just adopt that commit message for the pr, and only give you an empty field if there are multiple commits.
<powersj> ah that would be nice :\
<smoser> larsks, i know. its a pita.
<smoser> that is what it should do, like github.
<larsks> smoser: I've updated those merge requests, so hopefully ci will be happy.  Any chance you will have time to look at them?  Fixes for #1670052 and #1669504
<smoser> just commented on https://code.launchpad.net/~larsks/cloud-init/+git/cloud-init/+merge/318800
<smoser> that looks fine other than the flake8 errors...
<larsks> smoser: fixed and updated...
<smoser> larsks, did harlowja mention at all where the hard 3 nameserver restriction comes from ?
<larsks> smoser: I commented in the issue I think: it's referenced in the resolv.conf(5) man page.
<smoser> and maybe, if it is really 3 per protocol (ipv4 or ipv6) then ideally we'd still check that ?
<larsks> I don't think it's *per protocol*.
<larsks> I am pretty sure it is "anything more than three will be ignored".
<smoser> k
<smoser> larsks, for the resolvconf, lets at least WARN which ones we're adding. or more clearly state "only the first 3 will be used"
<larsks> Aren't we doing that?  It logs a warning message with the nameserver that is being ignored.
<larsks> Line 25 in the review.
<smoser> if you just read that message, is it clear to you what it's going to do ? or what it did ?
<smoser> it somewhat makes sense if it raises an exception
<larsks> I don't think it makes sense at all if it raises an exception. That means that data in your cloud provider's environment can prevent you from logging into your new instance.
<larsks> That seems like a real problem.
<smoser> i agree with you there.
<smoser> i was saying that the *message* makes sense as an exception
<smoser> but not as a warning
<smoser> raise Exception("Too many nameservers")
<smoser> but as a warning:
<larsks> Okay.
<larsks> So..."Ignoring nameserver %s because that would exceed the maximum of 3 nameservers"?
<smoser> well, "using the first 3" or something like that.
<larsks> smoser: that message will get logged once per nameserver (for each ns beyond 3).  Does "using the first 3" make sense (i.e., will someone reading the logs know what "the first 3" are?)
<larsks> How about that (update I just pushed)?
<smoser> looking
<smoser> larsks, that looks good, thanks. looking at only the mp and not enough context, i was thinking that 'ns' was the list, not an individual name server.
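The behavior larsks and smoser settle on — keep the first three nameservers, log one clear warning per dropped server — can be sketched like this (illustrative; the real change lives in larsks's resolv.conf MP):

```python
import logging

MAX_NAMESERVERS = 3  # resolv.conf(5): only the first three entries are used

def limit_nameservers(nameservers, log=logging.getLogger(__name__)):
    """Return the first three nameservers, warning about each one dropped."""
    for ns in nameservers[MAX_NAMESERVERS:]:
        # Warn rather than raise: bad provider data must not block boot.
        log.warning(
            "ignoring nameserver %s: resolv.conf only uses the first %d",
            ns, MAX_NAMESERVERS)
    return nameservers[:MAX_NAMESERVERS]
```

Per larsks, the limit is a flat cap from the resolv.conf(5) man page, not a per-protocol (IPv4/IPv6) count.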
<larsks> Okay.  New question, unrelated: I am trying to resolve a dependency cycle between cloud-init and os-collect-config (on RHEL).  What is the goal of having cloud-final.service run After=multi-user.target?
<larsks> (I'm not entirely sure what happens during the -final step)'
<smoser> larsks, that one is merged. the other https://code.launchpad.net/~larsks/cloud-init/+git/cloud-init/+merge/318800 did you think you pushed flake8 fixes ?
<larsks> smoser: I thought I did, but let me double check.
<smoser> i just fetched here and 'tox -e flake8' showed errors (0e5610fd08)
<larsks> There.
<smoser> having final run after multi-user.target ... we ended up doing that to avoid some issues with package installation
<smoser>  commit 34a26f7f59 talks about it and references 2 bugs that we were hitting.
<larsks> Thanks, I'll take a look.
<smoser> if package installation worked fine prior to multi-user.target on redhat (it may well .. due to services not starting by default), then i'm not opposed to it being different there.
<smoser> the goal really is for '-final' to run "as late as possible"
<smoser> see the somewhat limited text description i wrote at doc/rtd/topics/boot.rst
<larsks> I was guessing I would find --no-block somewhere in there, and I did :)
<rangerpb> smoser, ive been poking around the cloud-init code ... the suggestion to set the hostname from the cdrom metadata is prob worth pursuing.  Did you have a place in mind for that? At what point in the cloud-init cycle is the network set up ?
<smoser> rangerpb, give me a minute, i'll take a quick poke and give you a diff that might help you get started.
<rangerpb> awesome
<smoser> rangerpb, http://paste.ubuntu.com/24132495/
<smoser> thats what i have... there is non-trivial work there.
<smoser> the setting of hostname in this manner is generally i think a good thing (tm)
<smoser> The other option would be for azure to act more like DigitalOcean and in its local data search, it could dhcp, bring up the network, do all the configuration and then take it down, and let the OS bring it up. In its 'get_data()', which would run in local mode, it would have to read the ovf file, and invoke dhcp and negotiate with the metadata service for the ssh keys. I don't think this is supportable with the legacy "use walinuxagent path".
<rangerpb> right
<rangerpb> getting to the ideal is not going to be trivial
<smoser> generally speaking though, setting the hostname "centrally" is a win i think
<rangerpb> yeah big time
<msaikia> Hi, Just yesterday I got an email from Server Team CI bot about continuous integration failure. I looked into the failed log. Does not seem like anything to do with my change. This is my branch https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/305427
<msaikia> Any steps that I need to take to fix it?
<powersj> msaikia: you are right, the issue (it was trying to access the local maas cfg files) was fixed in a later version of cloud-init
#cloud-init 2017-03-08
<smoser> msaikia, so rebase to master
<smoser> that make sense ?
<rangerpb> smoser, paulmey i cobbled up a patch where we call perform_hostname_bounce when using only cloud-init to provision; however, i observe later that both the cc_set_hostname and update_hostname override that with incorrect results.
<rangerpb> should those be called in an azure deployment ?
<rangerpb> i think part of the problem is they aren't able to detect a proper hostname or fqdn for some reason ... been poking around at the code... seems the best way to set the hostname is to pluck it from the cdrom's metadata
<smoser> something is wrong if they're setting it wrong.
<smoser> they should get the right hostname from the metadata and set it.
<rangerpb> so both of those should run in your opinion smoser?
<smoser> i think so, yeah.
<rangerpb> of course, part of the issue is that the hostname stuff boils down to a method in util.py which is unaware of the metadata afaict
<rangerpb> is there any difference in setting hostname with hostname vs hostnamectl ?
<smoser> well, i think there is a bug with setting it with hostnamectl
<smoser> possibly that is what you're seeing.
<smoser> cloud-init possibly tries to set it too early ? larsks mentioned something about this.
<larsks> I'm not sure about too early.  There was an issue concerning cloud-init, hostnamectl, and dbus...but that results in "it all asplode" rather than failure to set the hostname.
<larsks> But rangerpb I agree with smoser: the distro update_hostname method should do the right thing.
<larsks> That means that the data source needs to provide the correct host name.  If it's not, that's where the problem is happening.
<rangerpb> distro update_hostname ?
<rangerpb> what do you mean provide it ?
<larsks> rangerpb: unless I am crazy, the update_hostname method is provided by the distro class(es).
<rangerpb> maybe i am missing something
<larsks> And the data source driver should be providing those methods with the correct hostname.
<larsks> I have to run to a parent/teacher conference.  Back in a bit!
<rangerpb> well there is update_hostname defined in distros/__init__.py and there is a cc_update_hostname.py
#cloud-init 2017-03-09
<msaikia> @smoser Makes sense. Rebased to master
<rangerpb> smoser, larsks while tracing down this hostname issue in Azure, im observing that eventually in cloud.py (https://git.launchpad.net/cloud-init/tree/cloudinit/util.py#n1048) cloud.get_hostname() is called ... where 'cloud' is DataSourceAzureNet ... but no get_hostname method is in that class ... so what gives
<rangerpb> what is it calling ?
<larsks> rangerpb: get_hostname is defined in the parent class.
<larsks> (cloudinit/sources/__init__.py)
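What larsks points out is ordinary Python method resolution: `DataSourceAzureNet` has no `get_hostname` of its own, so the call resolves to the base `DataSource` class in cloudinit/sources/__init__.py. A minimal illustration — the class bodies here are simplified assumptions, not the real implementations:

```python
class DataSource:
    """Base class: shared behavior such as get_hostname lives here."""

    def __init__(self):
        self.metadata = {}

    def get_hostname(self):
        # Subclasses only need to populate self.metadata; this method
        # is inherited unless a datasource overrides it.
        return self.metadata.get("local-hostname", "localhost")


class DataSourceAzureNet(DataSource):
    # No get_hostname defined here: calls fall through to the parent.
    def __init__(self):
        super().__init__()
        self.metadata = {"local-hostname": "my-azure-vm"}
```

So `cloud.get_hostname()` "works" on the Azure datasource even though grep finds no such method in DataSourceAzure.py.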
#cloud-init 2017-03-10
<uppock> Does cloud-init support setting up ipv6 via dhcp6? I'm having problems getting the correct network config in an ubuntu instance on openstack
<jgrimm> thanks rbasak
<smoser> uppock, if you use config drive on openstack and it provides network_data.json then it should work.
<smoser> however, if you  just want to do "dhcp v6 on the first nic", then there is not currently a way to do that without modifying an image.
<smoser> i recently just did this in bug
<smoser>  
<smoser>  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1657940/comments/4
<smoser> see the 'sudo tee' line there. thats how you can configure the system to do it for a specific NIC by mac. i am pretty sure that if you know the name the nic will get, you can drop the mac
<uppock> Oh
<uppock> I do have network_data.json via the metadata server, but that does not help?
<smoser> uppock, currently no it does not.
<smoser> that is a change that can be done, but its not working now.
<uppock> smoser: ok
<uppock> thanks for the help
<smoser> rharper, https://code.launchpad.net/~wesley-wiedenmeier/cloud-init/+git/cloud-init/+merge/317589
<smoser> so...
<smoser> NETWORK_CONFIG_V2  is a bad name i think
<smoser> NETPLAN_PASSTHROUGH ?
<smoser> no. its not passthrough.
<smoser> what should we call that.
<smoser> rharper, also i had https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/319506 for you
<smoser> which is terribly rendered there because i rebased to master
<rharper> smoser: I don't have a better suggestion;  I had talked with magicalChicken about having maybe a list of network renderers; as well as paththrough  features
<rharper> smoser: oh, the ntp test
<smoser> hm.. i guess the name isn't terrible in itself.
<smoser> as i think cloud-init is basically stating that it can render
<smoser>  http://paste.ubuntu.com/24152513/
<smoser> which is "network config v2"
<smoser> an interesting thing though is that we cannot render the sysconfig for that yet, right?
<rharper> not yet
<smoser> we have a generic v2-to-v1 ?
<rharper> yes
<smoser> so we could v2-to-v1-to-sysconfig
<smoser> outside of features not supported.
<rharper> yes, v2 to network-state, and network-state to *whatever*
<smoser> right
<rharper> this branch adds a v2 parser -> netstate;  and then a state -> netplan
<rharper> we have a v2 -> output skipping state
<rharper> aka, 'passthrough'
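The pipeline rharper describes — a v2 parser producing an internal network state, and per-format renderers consuming that state — in miniature. Function names and the tiny handled subset (ethernets + dhcp4 only) are illustrative, not curtin's or cloud-init's actual API:

```python
def parse_v2(cfg):
    """Stage 1: normalize v2-style config into a neutral 'network state'."""
    state = []
    for name, opts in cfg.get("ethernets", {}).items():
        state.append({"name": name, "dhcp4": opts.get("dhcp4", False)})
    return state

def render_eni(state):
    """Stage 2: render network state as /etc/network/interfaces stanzas.

    Other renderers (sysconfig, netplan) would consume the same state,
    which is why v2 -> state -> sysconfig works without a direct path.
    """
    lines = []
    for iface in state:
        method = "dhcp" if iface["dhcp4"] else "manual"
        lines.append("auto %s" % iface["name"])
        lines.append("iface %s inet %s" % (iface["name"], method))
    return "\n".join(lines)
```

The "passthrough" mode mentioned above is the degenerate case that skips the intermediate state and emits the v2 input directly.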
<powersj> smoser: test regression wrt chpasswd last night LP: #1671883
<powersj> https://bugs.launchpad.net/cloud-init/+bug/1671883
<smoser> powersj, should be fixed in trunk
<powersj> smoser: oh look at that :)
<powersj> I'll launch the tests again then. thanks!
<smoser> yeah, horay for that integration test.
<smoser> rharper, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/319605
<smoser> magicalChicken, ^
<smoser> that is simplified and more-inert version of https://code.launchpad.net/~wesley-wiedenmeier/cloud-init/+git/cloud-init/+merge/317589
<magicalChicken> smoser: calling it NETWORK_CONFIG_V1 might be confusing though, as there are versions of cloud-init with network config support but without the feature flags
<powersj> smoser: https://paste.ubuntu.com/24152754/
<smoser> "it" ?
<rharper> smoser: would we fill that out with other input formats we support?  I sorta liked the idea of specifying the list of inputs and outputs supported
<magicalChicken> smoser: sorry, the feature for network support
<smoser> magicalChicken, there, i just took your suggestion, but currently if i merged this into trunk, we do not support NETWORK_CONFIG_V2
<smoser> but we *do* support NETWORK_CONFIG_V1
<smoser> so i listed that support as a feature
<magicalChicken> smoser: oh, right that makes sense
<magicalChicken> yeah, it may be good to fill out that list
<magicalChicken> the mp i had filed had been into ~raharper/cloud-init:netconfig-v2-passthrough, once that branch lands in trunk it will support v2 though
<smoser> magicalChicken, right. but i was just grabbing the "add feature list"
<magicalChicken> yeah, that makes sense
<smoser> rharper, i dont know what to do there..
<smoser> wrt listing "sysconfig" or "ifupdown"
<rharper> right; those might be per distro
<smoser> i think if we follow curtin's lead, we end up with:
<magicalChicken> i like the features output command
<smoser>  NETWORK_CONFIG_V1_IFUPDOWN
<rharper> it could be a KEY into a list of distros that support those renderers
<smoser> and
<smoser>  NETWORK_CONFIG_V1_SYSCONFIG
<smoser> unfortunately, that then possibly leads to
<smoser>  NETWORK_CONFIG_V1_SYSCONFIG_WITH_BONDS
<smoser> or something like that
<magicalChicken> rharper: i think we had discussed this before, the per-distro features vs global features
<rharper> RENDERERS = 'eni, sysconfig, netplan, netcfg';   distro_renderers = { 'Ubuntu': ['eni', 'netplan'], 'Fedora': ['sysconfig'] }
<rharper> yeah
<smoser> powersj, [embarrassed] how do i run the integration test for a specific test ?
<smoser> rharper, so i dont want to over complicate things.
<powersj> As an example: python3 -m tests.cloud_tests run -v -n xenial -t tests/cloud_tests/configs/modules/set_password.yaml --deb cloud-init_daily_all.deb
<smoser> i don't think there is any reason to query cloud-init and say "can you render Fedora/sysconfig"
<rharper> smoser: agreed; certainly for now, a flag or two makes sense
<smoser> i can see a reason to say "if i gave you v2, can you render that for *THIS SYSTEM*"
<rharper> we may want something more complex later
<rharper> smoser: from curtin perspective, it will know what it has (v1 or v2)  and it will want to know if cloud-init can render it , and it *will* know what the target os is
<rharper> so, I do think it's cloud-init, can you render v2 -> fedora/sysconfig
<rharper> because, they may want to see if we can v2 -> fedora/networkd
<rharper> I do think it's  os + networkcfg format
<smoser> powersj, what am i doing wrong
<smoser> http://paste.ubuntu.com/24152788/
<powersj> smoser: what version of pylxd do you have? If I recall 2.1.3 is still the one that works best until we get magicalChicken's branch added
<smoser> 2.2.3-0ubuntu1
<magicalChicken> smoser: you need to run from the current branch then
<magicalChicken> smoser: the old cloud_tests can't handle 2.2
<magicalChicken> wait, if 2.2.3 is out that means that the fix for the leaked file handle bug is released, so I can go ahead and update the tox env in the new version
<magicalChicken> that means we should be able to run 2.2.3 on jenkins
<powersj> yep looks like that just got released!
<magicalChicken> nice
<powersj> getting that + the tox commands would make smoser's life easier too
<smoser> yeah, do you have a example tox env ?
<magicalChicken> smoser: 1 sec
<magicalChicken> http://paste.ubuntu.com/24152810/
<magicalChicken> smoser: ^
<magicalChicken> powersj: i am going to be able to work almost full time next week, so i may go ahead and add support for distro and platform feature flags and write some of the kvm platform
<magicalChicken> powersj: i have been talking to jgrimm about getting centos tests running smoothly so we can start integration testing there too
<powersj> magicalChicken: no beach vacation for spring break? ;)
<magicalChicken> powersj: haha no, probably not :)
<powersj> magicalChicken: sounds great -- let me know what you start on. I can jump between bugs and the kvm stuff
<magicalChicken> powersj: sure, sounds good
<magicalChicken> i'll get all these done in a new branch so that mp doesn't get any bigger
<jgrimm> thanks magicalChicken
<magicalChicken> jgrimm: no problem :)
<smoser> powersj, was that the test that failed ? tests/cloud_tests/configs/modules/set_password.yaml
<powersj> smoser: yes
<smoser> ok.
<smoser> magicalChicken, can you just put a MP for the citest toxenv right now ?
<smoser> that is very useful on its own.
<magicalChicken> smoser: sure i'll pull that out
<magicalChicken> smoser: if it would make it easier to review the main one, i can split it up into a few smaller ones. they'll all still be needed to get everything working but it may make the diff easier to read
<powersj> magicalChicken: if there are some things that we could divide out it probably would be good to do anyway
<magicalChicken> powersj: part of the problem is that if i divide things out, the test suite as a whole won't work until almost all of it is merged in
<smoser> magicalChicken, patch series are easier, yeah.
<smoser> you can just work it as a patch series...
<smoser> in a single mp, and rebase ... commit --fixup.... like that.
<magicalChicken> smoser: the commit history is already pretty heavily modified, there aren't really any junk commits left
<magicalChicken> i can do it a bit further though
<smoser> ok.
<smoser> if you want, i can just do the tox.ini change for trunk right now.
<smoser> powersj, that won't cause harm to anything in c-i right ?
<smoser> if i added 'citest' tox
<magicalChicken> smoser: that should be fine
<powersj> smoser: correct that is fine
 * powersj greatly wants that change too
<magicalChicken> smoser: it would have to just be the pylxd 2.1.3 env for now though, the 2.2.3 env would break
<smoser> ok
<smoser> magicalChicken, powersj
<smoser> http://paste.ubuntu.com/24152915/
<smoser> that look sane ? i'll push that now.
<magicalChicken> smoser: yeah, that looks good, thanks
<powersj> +1
<smoser> pushed.
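(Editorial aside: the pastebin contents above are not preserved. As a hedged illustration only, a minimal 'citest' tox environment of the shape being discussed, with pylxd pinned to 2.1.3 as noted, might look roughly like this; section and option names are assumptions, not the actual paste.)

```ini
# Hypothetical sketch -- the real change lived in the paste above.
[testenv:citest]
basepython = python3
commands = {envpython} -m tests.cloud_tests {posargs}
deps = pylxd==2.1.3
```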
<magicalChicken> at some point we should probably update the docs again, but there's enough stuff changing for now that it can probably wait
<smoser> and powersj trunk should hopefully pass again
<powersj> smoser: thank you
<smoser> rharper, you want to ack https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/319605 ? and i'll pull it
<jgrimm> magicalChicken, fwiw i generally like to see doc changes accompany said code/test changes.
<jgrimm> but please open a bug as reminder if known gaps
<magicalChicken> jgrimm: sure, i'll open a bug to track integration-testing doc updates
<jgrimm> thanks
<magicalChicken> i'll see if i can get to some of those changes next week
<jgrimm> +1 thanks
<rharper> smoser: done
<smoser> powersj, i kicked https://jenkins.ubuntu.com/server/job/cloud-init-integration-z/ and it ran successful
<powersj> smoser: good, I had gone into torkoal after I saw the failure and blew away the offending lxc image
<powersj> that means tot is fixed as well :) \o/
<smoser> ah. i didn't realize there had been lxc issues
<powersj> "Failed to create ZFS filesystem: cannot mount '/var/lib/lxd/images/(uuid of image)"
<powersj> this is the issue cpaelzer and I keep hitting on the test systems
<powersj> I am trying to reproduce locally, but waiting for new daily images as I think that is when it happens
<cpaelzer> ?
<cpaelzer> powersj: so we hit it again?
<powersj> cpaelzer: on torkoal yeah :(
<cpaelzer> without touching LXD versions
<powersj> correct
<cpaelzer> just as I promised
<cpaelzer> are you giving stgraber access again?
<cpaelzer> probably the best debug chance
<powersj> well, I already blew the failed image away
<cpaelzer> did that make it working?
<powersj> yes
<cpaelzer> e.g. after sync of the same image it is ok
<cpaelzer> hmm
<cpaelzer> interesting
<cpaelzer> but that is a great indicator where stgraber can check next time it happens
<powersj> yeah
<cpaelzer> powersj: did you update the github issue I filed?
<powersj> no, but I can
<cpaelzer> thanks
<cpaelzer> uh, I was just reminded that I'm in EOD (EOW) for a while ...
 * cpaelzer hushing back to family
<powersj> :D
<powersj> magicalChicken: for integration tests, the only system requirements right now are python3, lxd, and pylxd==2.1.3?
<magicalChicken> powersj: as far as i know yeah
<powersj> Updating the docs with how to run via tox + putting small section on dependencies
<magicalChicken> powersj: that's all that is set up in the tox env and that works
<magicalChicken> powersj: oh, nice
<powersj> magicalChicken: https://paste.ubuntu.com/24153367/
<magicalChicken> i'll pull those changes in with the pylxd 2.2 stuff and update them for that as well
<magicalChicken> powersj: yeah, that looks good
<powersj> ok
<powersj> thx!
<magicalChicken> powersj: it may be worth changing the full path to the test config to just modules/user_groups
<powersj> I don't need the "tests/cloud_tests/configs" part?
<magicalChicken> it doesn't really make a difference, but i find it easier to read that way
<magicalChicken> powersj: no, the argument normalizer will handle that for you
<powersj> O.o didn't know this
<magicalChicken> i built in relative path -> absolute path stuff pretty much everywhere because i'm lazy typing out commands :)
<powersj> let me go try this then
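(Editorial aside: the argument normalization magicalChicken describes can be sketched roughly as below. The function name, the assumed base directory constant, and the `.yaml` suffix handling are illustrative guesses, not the actual cloud_tests code.)

```python
import os.path

# Assumed base directory for test configs, per the chat above.
CONFIG_DIR = "tests/cloud_tests/configs"

def normalize_test_arg(name, config_dir=CONFIG_DIR):
    """Expand a short name like 'modules/user_groups' to a full config path."""
    if not name.startswith(config_dir):
        name = os.path.join(config_dir, name)
    if not name.endswith(".yaml"):
        name += ".yaml"
    return name

print(normalize_test_arg("modules/user_groups"))
# tests/cloud_tests/configs/modules/user_groups.yaml
```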
<smoser> rharper, http://paste.ubuntu.com/24153396/
<rharper> ok
<rharper> smoser: re the virtual stuff; that's just super annoying
<powersj> magicalChicken: very nice
<rharper> I don't think rename in lxc is an issue, no? it's created with the name it needs
<rharper> smoser: so not sure why that matters
<magicalChicken> powersj: :) the code that handles args in the new version is kinda cool
<smoser> well, definitely one way to interact with lxc is to request it to name your network devices.
<smoser> but another is to tell cloud-init via network config.
<rharper> smoser: I updated my comment in the bug; we need to somehow detect duplicate macs and sort out the *real* interface that should be renamed
<smoser> yeah.
<rharper> smoser: that's for the review, replying to it is somewhat awkward
<smoser> agree
<smoser> why did i do it like that..
<smoser> because it wasn't submitted for merge
<smoser> right ?
<smoser> https://code.launchpad.net/%7Ecurtin-dev/curtin/trunk/+activereviews
<smoser> bah. well it would be there
<smoser> for some reason i thought you hadn't clicked submit
<smoser> i can replay my comments onto the mp
<smoser> https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/319259
<smoser> rharper, updated. https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/319259
<rharper> smoser: I sent a mail reply
<dtp> hello.  cloud-init-0.7.8-5.fc25 is failing to write net config for me on Fedora 25.  error is [ValueError: Unknown subnet type 'loopback' found for interface 'lo'] at net/sysconfig.py:238
<harlowja> larsks have u seen anything like that ( dtp was poking me - also works at godaddy, but not sure if u've already fixed that, lol )
<larsks> harlowja: I haven't seen that previously, but 'loopback' is clearly not one of the supported types in NetInterface.iface_types...
<harlowja> ya, lol
<larsks> ...but why are they passing in configuration for lo?
<larsks> That sounds goofy.
<harlowja> agreed
<dtp> my content/0000 looks like ubuntu w/ an [iface lo inet loopback] line
<dtp> http://imgur.com/a/qEElh
<rharper> isn't the content/XXXX path where a complete eni  is being injected rather than using network_data.json ?
<dtp> added network_data.json image to that link, too
<dtp> i don't see a loopback in network_data.json; so must be coming from content/0000?
<rharper> I need to review the code path, but I thought if we got a content eni file, we'd just pass that through; but maybe we're attempting to inject the eni into network_state and then re-render something else (like sysconfig)
<rharper> which sounds like what's happening
 * harlowja didn't think we did that (but maybe we do, ha)
<rharper> well
<dtp> added most of the stack trace to that link
<rharper> but if openstack sends a network_data.json, I'd think we prefer that to ingesting eni into network_state
<larsks> rharper: and yet, the network_data.json doesn't contain any reference to lo...
<rharper> it shouldn't
<larsks> well, of course not.
<larsks> So the exception dtp  is seeing suggests that we *don't* prefer network_data.json.
<rharper> yeah
<rharper> I wonder why they send both
<larsks> rharper: huh, looking at the code, it seems as if network_json should take precedence: https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L142
<larsks> dtp: do you see any of those messages in your logs (on lines 145 or 150)?
<dtp> not that i recall larsks, but i will dbl check now
<dtp> not that i can see
<larsks> Hmm. I need to run off, but if you haven't already, open a bug so that we don't forget (including details of cloudinit version, distro and version, cloud provider, etc)
<dtp> ok
<dtp> https://bugs.launchpad.net/cloud-init/+bug/1671927
<rharper> larsks: not quite;  the code I see checks if we have a self._network_config; if it's None, then we check if we have a network_json and convert;  so the eni file mapped in the metadata content will get used first
<larsks> rharper: ah, good catch.
<rharper> larsks: but I think we're meant to
<rharper> given the network_eni
<rharper> in distro/rhel.py we do this;  so I suspect we're getting the eni passed in here;  entries = net_util.translate_network(settings)
<rharper> something funny is going on w.r.t the type of network config that was found and passed in;  more investigation needed;
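(Editorial aside: the precedence rharper is describing can be sketched as below. The function and argument names are illustrative, not the actual DataSourceConfigDrive attributes.)

```python
# If an ENI file was injected via the config drive's content/ directory it is
# parsed into network state first; network_data.json is only a fallback.
# That ordering would explain the 'loopback' ValueError above: the injected
# ENI contains 'iface lo inet loopback', which the sysconfig renderer rejects.
def pick_network_config(network_eni=None, network_json=None):
    if network_eni is not None:
        return ("eni", network_eni)
    if network_json is not None:
        return ("network_data.json", network_json)
    return (None, None)

source, _ = pick_network_config(network_eni="iface lo inet loopback",
                                network_json={"links": [], "networks": []})
print(source)  # eni
```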
<rharper> dtp-afk: if you can, attaching the configdrive to the bug would be super helpful
<smoser> rharper, sorry for the noise above with the patch comments... can you re-play on the MP ?
<rharper> yes
<smoser> i'm sorry i just didn't think you'd proposed for merge
<smoser> :-(
<rharper> no worries
 * rharper forklifts into web form
 * rharper mumbles something about gerrit 
<dtp> rharper you want the directory listing or ?
<dtp> uploaded 0000 and meta_data.json
#cloud-init 2018-03-05
<smoser> powersj: https://jenkins.ubuntu.com/server/job/cloud-init-copr-build/274/console
<smoser> the build that that caused
<smoser>  https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/build/724037/
<smoser> seems happy
<smoser> was that just a transient connection failure or something that caused fail ?
<powersj> smoser: I've seen a number of failures on the copr build job. I wanted to look at it this week, but I think I am querying the status field before it even exists
<powersj> state*
<ajorg> hi!
<stanguturi> Hi, Is there a bi-weekly meeting today?
<Odd_Bloke> stanguturi: The Canonical people are all at an internal conference this week, so I wouldn't be surprised if it was being skipped.
<stanguturi> @odd_bloke: Ok. Thanks for the update.
<smoser> yes, we'll do it in 28 minutes
<smoser> stanguturi:
<smoser> but we're here if you want to chat
<smoser> https://gist.github.com/smoser/984af8ead62c7b908f02df82da49c832
<smoser> blackboxsw: exec python -m SimpleHTTPServer ${1:-8000}
<smoser> minus the 'exec'
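(Editorial aside: the one-liner smoser pastes is Python 2; under Python 3 the same module lives at http.server. A self-contained sketch that binds, then immediately shuts down so it can be run safely; port 0 asks the OS for a free port.)

```python
import http.server
import threading

def serve_briefly(port=0):
    """Start the stdlib directory server, return the bound port, then stop."""
    httpd = http.server.HTTPServer(
        ("127.0.0.1", port), http.server.SimpleHTTPRequestHandler)
    bound_port = httpd.server_address[1]
    thread = threading.Thread(target=httpd.serve_forever, daemon=True)
    thread.start()
    httpd.shutdown()        # the equivalent of Ctrl-C on the one-liner
    httpd.server_close()
    thread.join()
    return bound_port

print(serve_briefly())
```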
<blackboxsw> #startmeeting Cloud-init bi-weekly status meeting
<meetingology> Meeting started Mon Mar  5 17:00:58 2018 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
<meetingology> Available commands: action commands idea info link nick
<powersj> o/
<blackboxsw> hey folks welcome to another episode of Cloud-init status updates. Thanks for coming.
<blackboxsw> Since we had a status meeting just prior to the cloud-init upstream release last week there won't be a ton of updates this week.
<rharper> o--
<blackboxsw> without further ado
<blackboxsw> #topic  Recent Changes
<blackboxsw> highlight of the last week was the upstream 18.1 release getting cut! Great work folks on getting branches landed in tip prior to release
 * blackboxsw grabs powersj highlights 
<blackboxsw> cloud-init 18.1 released!
<blackboxsw> ds-identify: Fix searching for iso9660 OVF cdroms for vmware (LP: #1749980)
<blackboxsw> Documented chef example incorrectly represented apt source configuration for chef install
<blackboxsw> SUSE: Fix groups used for ownership of cloud-init.log (Robert Schweikert)
<blackboxsw> OVF: Fix VMware support for 64-bit platforms (Sankar Tanguturi)
<blackboxsw> Salt: configure grains in grains file rather than in minion config (Daniel Wallace)
<blackboxsw> Implement puppet 4 support (Romanos Skiadas)
<ubot5`> Launchpad bug 1749980 in cloud-init "ds-identify doesn't properly detect ISO" [High,Fix released] https://launchpad.net/bugs/1749980
<blackboxsw> For those that didn't see the email:
<blackboxsw> #link https://lists.launchpad.net/cloud-init/msg00144.html
<smoser> o/
<blackboxsw> thanks again stanguturi  Akihiko and Max Illfelder
<blackboxsw> in the ubuntu side of the house we published 18.1 to the Bionic series so clouds now have this by default in bionic images
<blackboxsw> Also on the ubuntu-side of the house we finalized an SRU (stable release update) of 17.2.35.2 into both Xenial and Artful, so xenial-updates and artful-updates should have 17.2.35 available (which is only a few commits earlier than the 18.1 release)
<blackboxsw> Also in tip post 18.1 we had significant contribution from partners
<blackboxsw> Simplify some comparisons. [Rémy Léone]
<blackboxsw> Change some list creation and population to literal. [Rémy Léone]
<blackboxsw> GCE: fix reading of user-data that is not base64 encoded.
<blackboxsw> doc: fix chef install from apt packages example in RTD.
<blackboxsw> Implement puppet 4 support [Romanos Skiadas]
<blackboxsw> subp: Fix subp usage with non-ascii characters when no system locale.
<blackboxsw> salt: configure grains in grains file rather than in minion config. [Daniel Wallace]
<blackboxsw> (sorry took me a while to dig up the git formatting options)
<blackboxsw> #topic In-progress Development
<blackboxsw> We have some existing branches we are trying to get review feedback to folks on:
<blackboxsw> #link http://bit.ly/ci-reviews
<blackboxsw> Are there reviews that folks feel need some attention this week?
<blackboxsw> Internally, we work on items in the TODO/Doing lane of our trello board here:
<blackboxsw> #link https://trello.com/b/hFtWKUn3/daily-cloud-init-curtin
<stanguturi> @blackboxsw: I have one request about the bug https://bugs.launchpad.net/cloud-init/+bug/1724128 Any inputs from cloud-init team will be great.
<ubot5`> Ubuntu bug 1724128 in open-vm-tools (Ubuntu) "Need a Success / Failure notification mechanism when cloud-init finishes." [Undecided,New]
<blackboxsw> so we have some Snap module development incoming on chrony support, snappy support, vsphere early hostname support
<blackboxsw> stanguturi: checking
<stanguturi> blackboxsw: Thanks. I discussed this last year in August in cloud-init meeting. Logged this bug long time back. But for some reason, it was not tagged with the proper project. My bad.
<blackboxsw> stanguturi: we have an external notification mechanism for scripts to get information about when cloud-init has finally completed via "cloud-init status --wait"  .... hrm, trying to think here about alternative mechanisms internal to cloud-init
<blackboxsw> stanguturi: ok we can probably add a publish/subscription mechanism in cloud-init proper for internal eventing that'd get this done.
<stanguturi> @blackboxsw: Thanks a lot.
<blackboxsw> I've added a topic to our meetings this week to discuss the approach that could make this happen.
<blackboxsw> We'll comment on the bug/mailing list with an approach proposal
<blackboxsw> #link https://bugs.launchpad.net/cloud-init/+bug/1724128
<ubot5`> Ubuntu bug 1724128 in open-vm-tools (Ubuntu) "Need a Success / Failure notification mechanism when cloud-init finishes." [Undecided,New]
<blackboxsw> just so I capture the link
<stanguturi> @blackboxsw: Thanks I have got another request. I am working on https://code.launchpad.net/~sankaraditya/cloud-init/+git/cloud-init/+ref/vmware-customize-utc-time to add some 'UTC customizations'. Got some review comments from Scott Moser. Can you please provide some pointers to any existing / tests / procedure to add new functionality to distro class
<blackboxsw> stanguturi: re-reading ... and checking out the existing cloudinit.distro base module for utc /tz specifics
<stanguturi> @blackboxsw: We just want to customize /etc/default/rcS file with some settings on debian platforms.
<blackboxsw> hrm, so I'm conflicted with your branch by seeing that Distro.tz_zone_dir  sets that path for that distro for where TZ information is being configured. I think I'm missing why UTC=yes|no is needed versus Debian.set_timezone
<stanguturi> @blackboxsw: This is actually related to 'hwclock'. The key value that needs to be set in /etc/default/rcS is 'UTC' but it's related to 'hwclock' setting.
<blackboxsw> for my suggestion on that initial review, I thought that we might need to allow for a Debian-specific method which would handle this additional/separate configuration file processing you were doing w/ /etc/default/rcS
<smoser> looking quicklyi it looks like at least recently the correct place to store that is /etc/default/hwclock
<smoser>  /etc/default/rcS is read, but per /etc/init.d/hwclock.sh, it looks like /etc/default/hwclock is preferred
<smoser> but either way, wahat i think we really want is for the distro class to have a 'store_hwclock_timezone' or something
<smoser> and then you'd call into that.
<stanguturi> ok. Thanks Scott. Are there any extra test cases / test suites that I need to run if I am modifying the distro class?
<blackboxsw> as far as developing additional unit tests for a new distro method: I'd expect new feature methods to be covered in tests/unittests/test_distros/test_debian.py
<blackboxsw> also cloud-init summit action we haven't done yet is to move all unittests  out of  tests/unittests and under cloudinit proper
<smoser> +1 blackboxsw
<stanguturi> ok Will work on that.
<blackboxsw> I'll take that test migration action  for any existing modules that are already tested under tests/unittests. The policy we were hoping is that for new modules added under "cloudinit" we'd add a cloudinit/somepath/tests/test_newmodule.py for each cloudinit/somepath/newmodule.py
<blackboxsw> so if you want to build on tests/unittests/test_distros/test_debian.py we'll pull that under cloudinit proper when we finally remove tests/unittests altogether
<blackboxsw> stanguturi: any other items?
<blackboxsw> otherwise I think we'll probably wrap up this meeting  in a few minutes
<stanguturi> Yeah. I have one more item. Sorry. https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1750780
<ubot5`> Ubuntu bug 1750780 in open-vm-tools (Ubuntu Xenial) "Race with local file systems can make open-vm-tools fail to start" [Undecided,Triaged]
<blackboxsw> no worries at all stanguturi we  like the interest
<stanguturi> We just noticed that on Ubuntu 18.04 VMs, open-vm-tools service doesn't work with cloud-init.
<stanguturi> We didn't have any issues on 17.10. But only found in 18.04
<blackboxsw> #link https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1750780
<ubot5`> Ubuntu bug 1750780 in open-vm-tools (Ubuntu Xenial) "Race with local file systems can make open-vm-tools fail to start" [Undecided,Triaged]
<stanguturi> I first logged the bug https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1667831 and it was fixed and it was mentioned that now the bug 1750780 came up.
<ubot5`> Ubuntu bug 1667831 in open-vm-tools (Ubuntu) "cloud-init dependency for open-vm-tools service" [Undecided,Fix released]
<smoser> stanguturi: i'll talk with christian tomorrow about the bug there.
<stanguturi> ok Thanks Scott.
<blackboxsw> #link https://bugs.launchpad.net/ubuntu/+source/open-vm-tools/+bug/1667831
<ubot5`> Ubuntu bug 1667831 in open-vm-tools (Ubuntu) "cloud-init dependency for open-vm-tools service" [Undecided,Fix released]
<blackboxsw> ok I think we'll have to call this meeting a close for this week. Thanks again stanguturi for the help/chat here.
<blackboxsw> as always I'll post this log to the site:
<blackboxsw> #link https://cloud-init.github.io
<blackboxsw> #endmeeting
<meetingology> Meeting ended Mon Mar  5 17:53:36 2018 UTC.
<meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2018/cloud-init.2018-03-05-17.00.moin.txt
<blackboxsw> dinner time :) have a good one folks
<bemis_> Hello, are there any RHEL/derivative users in your dev crew?
<bemis_> i ask because line 9 of cloud-init.service.tmpl seems to call in systemd-networkd-wait-online.service which isn't available (yet?) as of el7
<bemis_> but as someone just learning the software i'm not sure of the implications of moving that into the 'ubuntu' if-then
<dojordan> @smoser, it appears my C-I run has gotten stuck. Any way to retrigger it? https://jenkins.ubuntu.com/server/job/cloud-init-ci/815/console
<dojordan> Hello- I had an infinite loop that is causing my ci build to run forever. Could someone with permission please cancel it? https://jenkins.ubuntu.com/server/job/cloud-init-ci/815/console
<blackboxsw> I killed it dojordan
<blackboxsw> I see the review responses on your branch, we've talked about instrumenting a trace level debug message, but we haven't had time to prioritize that work yet https://trello.com/c/8cp7exCC/128-cloud-init-logging-trace-level
<dojordan> @blackboxsw, makes sense no worries. I've removed the tracing for now. The other thing is the logs will get filled up with the log messages from wait_for_url. If we pass in None as the status callback, then it defaults to LOG.info. Should I pass in a lambda that does nothing?
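(Editorial aside: dojordan's question can be illustrated with a small sketch; the names below are illustrative and not the real cloudinit.url_helper signature. When the callback defaults to LOG.info, passing a do-nothing lambda is the usual way to silence it.)

```python
import logging

LOG = logging.getLogger("sketch")

def wait_for_url(urls, status_cb=None):
    """Try each url, reporting progress through status_cb."""
    if status_cb is None:
        status_cb = LOG.info          # default: every attempt is logged
    for url in urls:
        status_cb("polling %s" % url)
    return urls[0] if urls else None

# Silence the per-attempt messages with a no-op callback:
first = wait_for_url(["http://169.254.169.254/"], status_cb=lambda msg: None)
print(first)
```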
#cloud-init 2018-03-06
<dojordan> I am getting ImportError: No module named ec2 . Any suggestions?
<dojordan> in: File "/home/dojordan/code/cloud-init/tests/cloud_tests/platforms/__init__.py", line 5, in <module>
<powersj> dojordan: are you trying to run the integration tests?
<smoser> interesting bug https://bugs.launchpad.net/cloud-init/+bug/1753558
<ubot5`> Ubuntu bug 1753558 in cloud-init "NoCloud data source is mis-using the "system-serial-number" SMBIOS field, should use "OEM Strings" instead" [Medium,Confirmed]
<do3meli> do i have to do something special to finally have a merge request with status = approved to be merged into master?
<smoser> do3meli: smoser has to not be lazy
<do3meli> :-) thanks ;-)
<do3meli> coming from github and dealing the first time with launchpad is not the easiest... so good to know it's not my fault ;-)
<smoser> do3meli we don't have automated lander.
<smoser> but even in github, the developer still has to push "yes pull"
<dpb1> there are some good questions here if anyone wants some karma:  https://askubuntu.com/questions/tagged/cloud-init
<dpb1> :)
<dojordan> @smoser, got everything passing locally but now C-I is failing integration. Are there currently any known blockers?
#cloud-init 2018-03-07
<smoser> dojordan: https://jenkins.ubuntu.com/server/view/cloud-init/job/cloud-init-ci-nightly/
<smoser> that is nightly runs. the #260 was a platform failure unrelated to cloud-init it looks like.
<smoser> powersj: 2018-03-07 06:41:50,707 - tests.cloud_tests - ERROR - stage: collect for platform: lxd encountered error: 'Operation' object has no attribute 'description'
<smoser> that make any sense to you ?
<powersj> uhhh no
<smoser> http://paste.ubuntu.com/p/64jsHzV4bh/
<powersj> lxd           3.0.0.beta3
 * smoser recreates tox
<smoser> yeah.
<powersj> how long have you been running 3.0?
<powersj> I'm wondering if pylxd and lxd are out of sync again
<smoser> i have no idea, but that was my first question too
<smoser> do3meli: you there ?
<do3meli> smoser: sure i am ;-)
<smoser> we have a integration test harness
<smoser> and it has a salt minion test.
<smoser> it does not run by default, but is currently failing
<smoser> before i merged your changes i wanted to make sure that it was running (which is how i found it fails)
<smoser> we dont run it by default because (I think) we did not want to depend on 3rd party
<smoser> so...
<smoser> what i'm asking is that we need to kind of get that sorted, and verify that your change will work (or should work) with the integration tests
<smoser> would you be interested in taking a look at that ? i can point you at how to run and such.
<smoser> a vm with ubuntu and lxd is the easiest way to set up that environment
 * smoser wonders if he scared do3meli away
<do3meli> haha. no still here. no worries.
<do3meli> i am happy to help with that integration test but would require you to point me into the right direction: a) in which file are those tests defined? b) how to run the integration test? and c) are there any docs describing how to set up an integration test env?
<smoser> do3meli: http://cloudinit.readthedocs.io/en/latest/topics/tests.html
<smoser> that is the doc we have on integration tests.
<smoser> i just ran
<smoser>  tox-venv citest python3 -m tests.cloud_tests run --platform=lxd --os-name=xenial --preserve-data --data-dir=results.lxd.d --verbose --test=tests/cloud_tests/testcases/modules/salt_minion.py
<smoser> and saw the failure
<smoser> 'tox-venv' is in ./tools/ but it is basically just a "run this command in the tox venv". you could also do...
<smoser> tox -e citest run --platform=lxd --os-name=xenial --preserve-data --data-dir=results.lxd.d --verbose --test=tests/cloud_tests/testcases/modules/salt_minion.py
<smoser> oh. in order to *run* that test you have to http://paste.ubuntu.com/p/FVF8VvPkcY/
<do3meli> smoser: you run this command in a freshly installed ubuntu vm after cloning the git. is that correct?
<do3meli> which ubuntu vm image did you use? cloud? server? desktop?
<smoser> do3meli: well you could
<smoser> i would use an ubuntu cloud image. but you can do the install for sure.
<smoser> h...
<do3meli> ok. let me try to get a test env up and running and see if i can reproduce the failing integration test. will get back to you once there...
<smoser> do3meli: where will you start ...
<smoser> as in is your working system freebsd ?
<smoser> just let me know if you have any questions.
<smoser> if i have to.. i can launch you a vm somewhere too
<do3meli> i thought about spinning up an ubuntu cloud vm - i have an env ready to do this. my working system is ubuntu btw.
<smoser> oh. well then i suggest just installing and using lxd on the working system.
<smoser> that make sense ?
<smoser> lxd is really nice
<smoser> just apt-get install lxd and then 'lxd init'
<smoser> i think that is probably all you should need.
<do3meli> smoser: good point - let me try lxd then
<do3meli> smoser: do i have to setup the citest tox-env somehow? i get "no module named "six".
<smoser> hm.
<smoser> tox --notest -e citest
<smoser> but tox-venv i thought would do that.
<smoser> do3meli: ^ sorry for slow reply. that make sense ?
<smoser> yeah, my ~/bin/tox-venv *does* do the create. but the one in cloud-init does not. sorry.
<do3meli> after installing bzr it was able to setup citest requirements. now tox-venv cmd fails with:
<do3meli>   File "/home/doemeli/git/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/client.py", line 267, in __init__
<do3meli>     raise exceptions.ClientConnectionFailed()
<do3meli> do i have to start an lxd container first?
<do3meli> or will the tox-venv do that for me ?
<smoser> the harness should do that for you.
<do3meli> ahh think i got it. as i first installed lxd my user was not yet in that group
<smoser> i suspect you just aren't able to run a lxd container first
<smoser> try
<do3meli> after getting a new shell now seems to work ;-)
<smoser> lxc launch ubuntu-daily:xenial testme
<smoser> then lxc delete --force testme
<do3meli> i think it works now. need to enable the ci test now and rerun it
<smoser> ok. yeah, you probably didn't have access to the lxd socket (group / sggroup stuff)
<do3meli> so just to confirm smoser: the test you mentioned in the beginning of this discussion was producing this failure: AssertionError: 'role: web' not found in '' ?
<smoser> yep
<smoser> and i probably regressed it (if you git blame)
<do3meli> allright, then let me dig into that now ;-)
<do3meli> just a general question: do you still want to have this test to be disabled per default? or is it may better to have it enabled?
<smoser> have to think
<smoser> what I do not want... is for c-i failures based on external things changing
<smoser> (ubuntu archive is ok)
<smoser> but if this generally works, and doesn't have a dependency on, say, salt-minion.com/something
<smoser> then i'm good with enabling it
<do3meli> the only dependency i see is the pkg install of the salt-minion.
<do3meli> depending on the apt sources settings this is external or the default ubuntu repo (i think it is in there too in a very old version - i personally use the official salt repos from repo.saltstack.com) - but as we don't manage the repository this is kind of a 50/50 chance if it works or not. for the cloud-init ci system it will always work as long as the ubuntu repo contains the minion pkg.
<do3meli> but for setups in the wild it basically comes down to the repo setting.
<smoser> http://paste.ubuntu.com/p/P8dMDPhfNK/
<smoser> do3meli: i'm looking at that . we'll get it fixed
<do3meli> smoser: thanks - please ping me once resolved
<smoser> do3meli: so i think if you just drop the --results-dir it will work
<smoser> i think
<do3meli> let me see
<smoser> https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/341045
<smoser> do3meli, powersj ^ thanks for your patience do3meli
<blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/338366 powersj <---
<dojordan> @blockboxsw you there?
<dojordan> @blackboxsw* you there?
#cloud-init 2018-03-08
<smoser> cpaelzer: are you able to recreate https://bugs.launchpad.net/cloud-init/+bug/1750770
<ubot5`> Ubuntu bug 1750770 in cloud-init "installing cloud init in vmware breaks ubuntu user" [Undecided,New]
<smoser> it must have been 16.04, right ?
<do3meli> smoser: good morning, can you trigger another ci run for https://code.launchpad.net/~d-info-e/cloud-init/+git/cloud-init/+merge/340112 ?
<blackboxsw> morning do3meli https://jenkins.ubuntu.com/server/job/cloud-init-ci/827/ running
<do3meli> thx blackboxsw
<blackboxsw> btw thanks for working on this. much appreciated
<do3meli> thanks blackboxsw, btw ci run completed successfully. so this is finally really ready for merging now
<blackboxsw> do3meli: looks great. I'm working on an automated lander this morning and I'll run it against your branch. You'll probably see it close in the next few minutes
<do3meli> awesome ;-)
<do3meli> may you want to set it to approved before your automation picks it up ;-) cause currently the MP status is WIP
<do3meli> i mean "needs review"
<blackboxsw> smoser: rharper https://askubuntu.com/questions/259453/how-can-i-monitor-cloud-init-for-errors-and-trigger-a-script-when-it-fails/1012766#1012766
<rharper> maybe combine those together?
<rharper> cloud-init status --wait and then examine the result.json file
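(Editorial aside: a hedged sketch of combining the two, as rharper suggests. The result.json path and the v1/errors layout match what cloud-init commonly writes, but treat them as assumptions for your version.)

```python
import json
import subprocess

def errors_from_result(result_json_text):
    """Pull the error list out of cloud-init's result.json content."""
    result = json.loads(result_json_text)
    return result.get("v1", {}).get("errors", [])

def wait_and_check(result_path="/run/cloud-init/result.json"):
    # Block until cloud-init finishes, then inspect the recorded result.
    subprocess.run(["cloud-init", "status", "--wait"], check=False)
    with open(result_path) as fh:
        return errors_from_result(fh.read())
```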
<smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/341116
<do3meli> can someone look into https://code.launchpad.net/~d-info-e/cloud-init/+git/cloud-init/+merge/340757 ?
<smoser> rharper: bug 1749696
<ubot5`> bug 1749696 in cloud-init "Excessively vague error while parsing yaml: RuntimeError: Unable to shellify type dict which is not a list or string" [Undecided,Incomplete] https://launchpad.net/bugs/1749696
<smoser> we could mark my shellify change as fixing that.
<smoser> but..
<PsyRabbit> smoser: I'm using open-vm-tools and I would like to know what data comes via OVF into my machine from VCloud director. So I read some source but didn't find files like cust.cfg or nics.txt referenced here: https://github.com/cloud-init/cloud-init/blob/master/cloudinit/sources/DataSourceOVF.py#L112 Is it only for vmware tools? Or how can I see all data which is provided by VCloud Director OVF inside my
<PsyRabbit> machine?
<smoser> PsyRabbit: what do you mean "via OVF" ?
<smoser> is that via CD-rom / iso ?
<smoser> or via the guest/host vmware transport
<smoser> most of the vmware code is contributed by vmware developers and embarrassingly I've never actually used it myself.
<smoser> I have used the OVF iso transport, and originally wrote and tested that.
<PsyRabbit> It's via the guest/host vmware transport... The documentation is really thin in this direction
<smoser> PsyRabbit: yeah. i really have very little vmware knowledge.
<PsyRabbit> Okay... I mean I see that the hostname etc. can change, so the transport itself is working. But is there a way that I can dump all incoming key->value pairs independently from my datasource? Just that I know what I can use for further setup tasks?
<PsyRabbit> It's not really transparent for me...
<smoser> PsyRabbit: i'm really sorry, you're at the limits of my knowledge in this area.
<blackboxsw> PsyRabbit: yeah around this space, I know that I recently reviewed a vmware branch that dealt with placing some customization script seeded (I thought from https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/330105) It has a bit more context on a single customization script file containing both Pre and Post customization operations which gets placed on the system.
<blackboxsw> from the looks of the datasourceOVF, that file is located at /var/run/vmware-imc
<PsyRabbit> smoser: Thank you anyway :/ No problem, maybe someone read our conversation :) But I really have to say the documentation needs improvement...
<blackboxsw> on a deployed system. best approach (because most vmware folks only infrequently join this channel) is to go for a deployment and check that directory for content.
<blackboxsw> also you can find a bunch of information on which cloud-init reacts (user-data/metadata) in /run/cloud-init/instance-data.json on a deployment that contains cloud-init 17.1 or later
<PsyRabbit> blackboxsw: The file is not present - I have the path from the source... Maybe it's deleted after usage (didn't find an rm statement in the python) or open-vm-tools works differently than vmware tools.
<PsyRabbit> Hmm, I'm using cloud-init 0.7.9 which is from the rhel 7 repo... instance-data.json is not present, there is only result.json and status.json
<blackboxsw> from wait_for_imc_cfg_file() in DataSourceOVF it looks like that file is optional; if memory serves, it's only written if custom user scripts were provided during vm creation
<blackboxsw> PsyRabbit: let me get you a link to our tip COPR repo... one sec
<blackboxsw> it'd give you the ability to upgrade to latest cloud-init to check
<blackboxsw> PsyRabbit: https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/
<blackboxsw> we have jobs that update this repo with tip of cloud-init (currently revision 18.1...)
<PsyRabbit> blackboxsw: Thank you, I will try it out :)
<blackboxsw> running 'sudo cloud-init clean --logs --reboot' after you install that should clean your system of cloud-init artifacts and reboot it, allowing cloud-init to attempt running again
<blackboxsw> after reboot, the OVF datasource will write /run/cloud-init/instance-data.json which sheds light on the metadata/user-data that cloud-init reacts to
<blackboxsw> also PsyRabbit vmware folks occasionally jump into our cloud-init bi-weekly status meeting
<blackboxsw> oops let me update that in the channel topic
<PsyRabbit> I think you helped me a lot blackboxsw! Thank you
* blackboxsw changed the topic of #cloud-init to:   Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 3/19 16:00 UTC | cloud-init 18.1 released (Feb 22, 2018)
<blackboxsw> no worries
<dojordan> hey @blackboxsw, you there?
<dojordan> hey all, on bionic, how / when does /etc/resolv.conf get generated?
<dojordan> @smoser, when you get a chance: https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+ref/fix_azure_hostname
#cloud-init 2018-03-09
<dojordan> Hey folks, can someone take a look at this (simple) PR when they get a chance? https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+ref/fix_azure_hostname
<blackboxsw> dojordan: approved if we can have a simple unit test (cause I ❤ test coverage)
<blackboxsw> sorry for being out of pocket this week. lotsa meetings
<blackboxsw> gotta run again
<dojordan> no worries, appreciate it!
<dojordan> will do re: UT
<blackboxsw> cheers
<blackboxsw> will land it after our ci passes that
<blackboxsw> and will get to your review next week on branch #2
<dojordan> thnx
#cloud-init 2018-03-10
<ybaumy> smoser: you here
<ybaumy> smoser: are you online?
#cloud-init 2020-03-02
<fredlef1> rharper: Can you take a look at PR:229 when you have a minute.  It's a follow up from our chat last week about the cached datasource object?
<fredlef1> https://github.com/canonical/cloud-init/pull/229
<blackboxsw> fredlef1: I also owe you a followup on https://github.com/canonical/cloud-init/pull/216 apologies I got pulled into a minor escalation
<blackboxsw> plan is still to get you an update today
<blackboxsw> rharper: I think the bug we were hitting is in part related to https://bugs.launchpad.net/ubuntu/+source/mutter/+bug/1727356
<ubot5> Ubuntu bug 1727356 in mutter (Ubuntu Cosmic) "Login screen never appears on early generation Intel GPUs (Core2 and Atom etc)" [High,Fix released]
<rharper> fredlef1: definitely
<blackboxsw> woah wrong bug
<rharper> blackboxsw: maybe; but it's out-of-scope for our bug
<rharper> oh
<rharper> hehe
<blackboxsw> https://bugs.launchpad.net/cloud-init/+bug/1848617
<ubot5> Ubuntu bug 1848617 in ifupdown (Ubuntu) "Installing ifupdown on bionic does not install a /etc/network/interfaces that sources /etc/network/interfaces.d/*" [Undecided,New]
<blackboxsw> but still out of scope for our bug specifically
<rharper> blackboxsw: yes, that is true
<rharper> I agree
<blackboxsw> though ifupdown on bionic should have a proper include
<rharper> the source file is from the image build
<rharper> and on bionic it's not installed
<rharper> I don't think ifupdown packaging ever contained that
<rharper> it was always an image build add
<blackboxsw> ahh i see
<rharper> but, that sounds exactly correct
<blackboxsw> confirmed on xenial: dpkg -S /etc/network/interfaces finds no owning pkgs
<rharper> blackboxsw: worth asking cpc how best to handle this w.r.t legacy cloud-init
<blackboxsw> will do
<rharper> that is, cloud-init *could* emit that source stanza in /etc/network/interfaces
<rharper> blackboxsw: actually; the xenial image should have that
<blackboxsw> rharper: xenial image does contain the file, just not delivered by a package
<blackboxsw> https://pastebin.ubuntu.com/p/dNGHqNzKvt/
<blackboxsw> so image build process did it
<rharper> correct
<rharper> however , what clobbers the file ?
<rharper> if you launch a xenial image , it should be present, no?
<rharper> so what happens during package install, snapshot, upgrade ?
<blackboxsw> rharper: ubuntu@35.247.83.53
<rharper> in
<rharper> I see the file looks like it was clobbered by the install of the package
<rharper> look at ifupdown postinst
<rharper> it will create the file
<blackboxsw> +1 see it
<blackboxsw> rharper: though I see that the postinst creates it with different, incorrect content too
<rharper> not incorrect
<rharper> just does not filter *.cfg from the .d dir
<blackboxsw> source /etc/network/interfaces instead of 'source /etc/network/interfaces.d/*.cfg'
<blackboxsw> ahh I overlooked source-directory
<blackboxsw> right
<rharper> blackboxsw: so; if I launch a new xenial image, the /etc/network/interfaces is what we expect
<blackboxsw> rharper: and I don't think the original issue was on xenial at all
<rharper> oh ?
<blackboxsw> just bionic, as I re-read it
<rharper> how did I see xenial mentioned as well?
<rharper> that makes a lot more sense
<rharper> as the in-image eni file is sane
<blackboxsw> right, xenial was mentioned as not replacing it
<rharper> ok
<blackboxsw> *reproducing*
<rharper> there we go
<blackboxsw> ok so filing a bug and adding workaround notes.
<rharper> so, then yes, this comes down to bionic images not having the cpc eni file because it's not ifupdown enabled; and further, ifupdown packaging does not provide a file with the cloud-init expected source directive
<rharper> it can be resolved in a few places;  for now; I suspect we want to purge ifupdown in that image unless pppoe is needed (it is not in the cloud AFAICT)
<blackboxsw> rharper: you think it's worth demoting pppoeconf installed by ubuntu-desktop from required to recommends too? maybe I'm misreading the bionic pkg debian control
<rharper> this should result in rendering networkd just fine and all is well
<rharper> blackboxsw: unclear
<rharper> I think workaround first (uninstall ifupdown)
<blackboxsw> yeah +1.
<rharper> blackboxsw: follow-up is: if you install bionic from the desktop iso, do you get ifupdown ?
<rharper> if not, why
<rharper> and cloud-init can be defensive in the bionic case w.r.t adding a source directive if the /etc/network/interfaces files does not contain it
<rharper> floating the idea of whether the ifupdown package itself should have that in its auto-generated eni file
<ahosmanMSFT> rharper: I tested the PR looks to be working, can you double check?
<rharper> ahosmanMSFT: excellent, I'll give it a spin
<blackboxsw> rharper: http://releases.ubuntu.com/18.04/ubuntu-18.04.4-desktop-amd64.manifest  doesn't that mean the Ubuntu Desktop ISO installs both the ifupdown and pppoeconf pkgs?
<blackboxsw> I'm mid download and install run now to be certain
<rharper> blackboxsw: well, it means the deb packages are installed
<rharper> it does not say, what, if any, iso specific changes are made in livecd-rootfs
<rharper> blackboxsw: so, unfortunately you'll need to complete an install
<rharper> paride is gone, but I wonder if utah has final image qcow2 images we could post-mortem the filesystem
 * rharper inspects a local bionic desktop install 
<rharper> blackboxsw: my local bionic desktop does not have any source * stuff in its eni file
<blackboxsw> rharper: my orig cpc image had the netplan disabled comments in /etc/network/interfaces
<rharper> yes, cloud images
<rharper> but standard desktop wont (just ifupdown installed due to the pppoe dep IIUC)
#cloud-init 2020-03-03
<sirhopcount> Quick question related to datasources. The 18.3 documentation has reference on how to configure them but not on how to use them. I'm trying to set the availability zone in the content of a file. I tried using the syntax from the docs: Availability_zone: {{ v1.availability_zone }} but for some reason v1.availability_zone isn't interpreted. Does 18.3 support the use of datasources and if so what am I doing wrong?
<meena> sirhopcount: "I'm trying to set the availability zone in the content of a file. I tried using the syntax from the docs:" where are you setting that?
<sirhopcount> I'm creating a file using 'write_files:` and I use that in the 'content: |' section.
<meena> i am suddenly more confused than ever
<meena> for what is this configuration? where are you writing that file?
<sirhopcount> It's a cloud-config file for AWS.
<sirhopcount> meena: This is an example of what I'm trying to do: https://gist.github.com/sirhopcount/6873745c1de6a830cb40ab0b26e402d9
<rharper> sirhopcount: you need tag the file with the jinja template key,  ## template: jinja
<rharper> that needs to be the very first line in your cloud-config file
<rharper> this tells cloud-init to run your config through the template renderer which expands the {{ }} keys
<rharper> https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html   *search for jinja*
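Putting rharper's advice together with sirhopcount's case: the `## template: jinja` header on the very first line is what triggers rendering. The substitution below is a hand-rolled stand-in for cloud-init's real jinja rendering against /run/cloud-init/instance-data.json, just to show the effect:

```python
import re

user_data = """\
## template: jinja
#cloud-config
write_files:
  - path: /etc/my-app/region.conf
    content: |
      availability_zone: {{ v1.availability_zone }}
"""

# cloud-init (18.4+) sees "## template: jinja" as the first line and
# renders the rest against the instance-data namespace before parsing
# it as cloud-config.  A crude stand-in for that rendering step:
instance_data = {"v1.availability_zone": "us-east-1a"}  # invented value
rendered = re.sub(
    r"\{\{\s*([\w.]+)\s*\}\}",
    lambda m: instance_data[m.group(1)],
    user_data,
)
print(rendered)
```

Without the header line, the `{{ }}` markers are passed through literally, which is exactly the symptom sirhopcount saw on 18.3.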
<sirhopcount> rharper: thanks!
<rharper> sirhopcount: yw
<sirhopcount> rharper: I might be mistaken but that feature doesn't seem available in 18.3
<rharper> sirhopcount: yes, 18.3 was tagged in June 2018, feature landed in Sept 2018, 18.4 tagged Oct 2, 2018;
<rharper> sirhopcount: so you'll need 18.4 or newer
<sirhopcount> that's too bad, Debian Buster ships with 18.3 and I don't think there is an easy way for me to upgrade.
<sirhopcount> thanks for the help anyway :)
<rharper> sirhopcount: sure;  is there a newer release in downstream in testing ? or unstable ?
<sirhopcount> Bullseye seems to have 19.4-2
<sirhopcount> but the buster backports repo doesn't seem to contain it.
<sirhopcount> Debian might be considered stable, but the fact that they deploy with "old" software versions has more than once bitten me. I prefer Ubuntu over Debian but unfortunately it's not up to me in this case.
<rharper> sirhopcount: I see, bummer
<sirhopcount> mm seems the debian apt repo does contain a cloud-init_19.4-2_all.deb which seems to install just fine. Going to test with that.
<blackboxsw> rharper: yaml on kernel commandline is not as easy as we thought, as we are parsing that cmdline splitting on spaces. valid yaml for {'config': 'disabled'} is actually "config: disabled" with a space; otherwise yaml.load interprets "config:disabled" as a single concatenated key which gets loaded as {"config:disabled": None}
<rharper> blackboxsw: yes, the space is required, is the parsing code eating the space ?
<rharper> I see the tok is done with split()
<blackboxsw> rharper:  no, yaml.load needs that space. and if we present a kernel cmdline with that space, then our handling in tok split breaks
<rharper> blackboxsw: ok, then maybe we want to regex for the disabled only
<blackboxsw> but I'm thinking generally about kernel cmdline parsing, whether allowing spaces in cmdline values could break other things
<blackboxsw> rharper: I think possibly, but seeing if we can be more flexible than that
<rharper> we also have the cc: ... end_cc wrapper, where everything in between is parsed as cloud config;
<rharper> blackboxsw: see util.read_cc_from_cmdline
<rharper> so, the proper cmdline would be cc: network-config: {config: disabled} end_cc
<blackboxsw> rharper: yeah that makes more sense
<blackboxsw> good suggestion
<rharper> >>> from cloudinit import safeyaml
<rharper> >>> safeyaml.load(util.read_cc_from_cmdline("foobar cc: network-config: {config: disabled} end_cc wark"))
<rharper> {'network-config': {'config': 'disabled'}}
<blackboxsw> and good  fix suggestion :)
<rharper> and then lastly, we need to decide, if we support the explicit:  network-config={config: disabled}
<rharper> I think doc-wise, we should check that disable_string is in the cmdline
<rharper> if 'network-config={config: disabled}' in cmdline:  then return {'network-config': {'config': 'disabled'}}
<blackboxsw> rharper: if we are supporting a one-off, non-yaml string. why not just network-config=disabled
<rharper> I think that closes the bug
<rharper> because we documented it the other way ?
<rharper> I'm open to suggestion  here
<rharper> clearly no one is using it
<rharper> as it doesn't work
<rharper> so, we could update the docs with all methods that work now (without any changes) for example the cc: .... end_cc
<blackboxsw> hrm I'm on the fence. I like simple ways to globally disable something, but writing an example cc: end_cc may advertise that more flexible functionality
<blackboxsw> which folks could better leverage for bigger c-config changes
<rharper> since the current doc is broken as it is;  do we support that documented string ?
<rharper> I'm fine with saying, no, that's broken;  instead use:  network-config=disabled;  and support that;
<rharper> separately we can file a bug on documenting use of cc: / end_cc;  and figure out if cloudinit/net/cmdline.py should be employing read_cc_from_cmdline() (try this first, followed up by search for network-config) etc
<rharper> I think the non-encoded yaml only works if it's wrapped in cc: end_cc
<rharper> as you mention, the spaces are needed
<blackboxsw> I'm not even sure we can/should support the documented string "network-config={config: disabled}" with the space, because that kind of kernel commandline doesn't conform to the typical k=v seen on kernel commandlines. So other cmdline parsing tools may break if cloud-init *did* support that.
<blackboxsw> but I'm good with documenting a simple network-config=disabled  or more complex cc... end_cc options too
<rharper> blackboxsw: yeah; I think you're right there as well
<rharper> yes
<blackboxsw> rharper: I just thought: the cc ... end_cc approach we use might also break those same cmdline parsing tools that are out there anyway
<rharper> so to fix the bug, 1) update docs to mention network-config=disabled 2) add this to cmdline.py
<blackboxsw> but, that's a separate issue
<rharper> blackboxsw: well, they'll ignore it
<blackboxsw> +1 on 1 and 2
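The fix agreed in (1) and (2), treating a literal network-config=disabled token specially since space-containing yaml cannot survive the split()-based tokenizer, could be sketched roughly like this; the helper name is hypothetical and this is not the code that landed in cloudinit/net/cmdline.py:

```python
def network_config_from_cmdline(cmdline):
    """Sketch of the proposed kernel-cmdline handling (hypothetical).

    Only the simple disable token is parsed here; richer yaml payloads
    need the cc: ... end_cc wrapper, because token-splitting on spaces
    would otherwise break "config: disabled".
    """
    for tok in cmdline.split():
        if tok == "network-config=disabled":
            # Equivalent to the yaml {config: disabled} the docs showed.
            return {"network-config": {"config": "disabled"}}
    return None

print(network_config_from_cmdline("ro quiet network-config=disabled"))
```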
<rharper> cool
<rharper> and we can follow up with documenting cc/end_cc separately
<rharper> and the refactor around b64/gzip stuff;  low-hanging fruit cleanup
<blackboxsw> 3) would you like the cc/end_cc documented option for disable in the rtd docs too?
<blackboxsw> +1 on your statements up until now
<blackboxsw> I'm good with documenting 3 too. and we can follow up later if this is breaking behavior for others in cloud-init that we need to address (but as you said, it probably isn't something that breaks cmdline parsing folks as they'll ignore things they don't care about)
<rharper> I dunno about (3);  I suppose we should test whether that works as well (it should, I know I've tested a cc: some cloud config here end_cc for ssh_import_id)
<rharper> we can hold off on (3) I think; as you mentioned, folks are then wondering in general about cc: end_cc, and I'd rather document that fully elsewhere and then we could add a disable-network cc:end_cc example and refer to the cc:end_cc docs for more details
<blackboxsw> yeah and for 3 we do that for https://github.com/canonical/cloud-init/blob/master/doc/examples/kernel-cmdline.txt
<blackboxsw> yeah something separate
<blackboxsw> ok
<blackboxsw> got it will post this PR within the hour
<rharper> cool
<blackboxsw> rharper: the question at this point is whether we will ultimately support cc ... end_cc for network-config on the commandline. As it is currently, the merged config dict looks like it might ignore the network-config=base64-stuff and only rely on util.read_cc_from_cmdline, so we may have a gap there too in terms of merged network config.
<blackboxsw> I think there is a separate bug here that would be helpful to address when we address 3) adding cc ... end_cc documented option for network-config cmdline setup
<rharper> blackboxsw: I suspect that because cc/end_cc is under-advertised we've got merge gaps; so let's file a bug for that and we can fix that separately
<blackboxsw> even more reason to address separately
<rharper> ack
<blackboxsw> yeah
<blackboxsw> rharper: so, do we drop the documented yaml support https://cloudinit.readthedocs.io/en/latest/topics/network-config.html#default-behavior ?
<rharper> it's supported but only if it's b64 encoded
<rharper> it was the line about "you can disable this with network-config={config: disabled} on the kernel cmdline" that gets moved to network-config=disabled, and we need to *bold*/*note* the yaml statement that it requires b64 encoding
<rharper> blackboxsw: ^
<blackboxsw> rharper: I was wondering about the text
<blackboxsw> ip= or network-config=<YAML config string>
<blackboxsw> we can't really support YAML config string currently
<rharper> s/YAML config string/Base64 encoded YAML config string
<blackboxsw> roger
<blackboxsw> thanks
<blackboxsw> rharper: https://github.com/canonical/cloud-init/pull/232
<rharper> thanks
<Goneri> @blackboxsw, the NetBSD PR is ready for review https://github.com/canonical/cloud-init/pull/62 :-)
<blackboxsw> rharper: thoughts on this for focal? https://trello.com/c/wepeWcKf/1109-ec2-support-secondary-ip-addressses-on-eth0
<blackboxsw> Goneri: +1. will get you an update today.
<Goneri> :-) awesome, I will rebase the OpenBSD branch this weekend if this one gets merged this week
<rharper> blackboxsw: +1, we need FFE and I left review on it; haven't looked again, but generally wondered about whether we also need the route-metric and dhcp*-override like we have on azure
<blackboxsw> ok rharper so I have 2 FFEs: instance-data to include merged_cfg and ec2 secondary ip address support
<rharper> yeah, I think so
<blackboxsw> ok I fixed ci for https://github.com/canonical/cloud-init/pull/230 I found an httpretty.last_request() that *is* available on xenial httpretty v 0.9.6
<blackboxsw> rharper: so that first bug fix for ec2 redact should be good to go
<rharper> blackboxsw: let me re-review
<rharper> I had one nit on docstring
<blackboxsw> and I'm teasing out the bug-fix from feature-request for instance-data top-level keys @ https://github.com/canonical/cloud-init/pull/214
<ahosmanMSFT> blackboxsw:
<blackboxsw> will push a separate BR for the bugfix
<blackboxsw> hi ahosmanMSFT
<ahosmanMSFT> Hey blackboxsw, what would be the best way to keep our repo of cloud-init up to date with upstream and have the debian salsa packages included, without having to do it manually?
<blackboxsw> ahosmanMSFT: not sure I understand the question, you mean how can you get latest cloud-init in a debian salsa based image?
<blackboxsw> or are you looking for how best to install salsa debs on an ubuntu system?
<ahosmanMSFT> something like this https://salsa.debian.org/cloud-team/cloud-init
<ahosmanMSFT> blackboxsw: We have a branch of cloud-init with some changes, I was wondering what the best way to get debian salsa packages on there and also keep everything up to date
<ahosmanMSFT> currently we have a manual approach
<blackboxsw> ahosmanMSFT: I think that project is owned by debian devs to get cloud-init updated/maintained in debian proper. I see the committers are names that don't participate in cloud-init upstream as far as I can tell
<blackboxsw> you might be able to touch base with Noah or Thomas listed as debian salsa maintainers there https://salsa.debian.org/cloud-team/cloud-init/-/commits/master/
<blackboxsw> ahosmanMSFT: since we don't control when specific distributions pick up cloud-init upstream changes, each distro differs (besides Ubuntu which we *do* control)
<ahosmanMSFT> blackboxsw: thanks that helps, I'll try getting in contact with them
<blackboxsw> ahosmanMSFT: you might be able to trigger your own deb package builds via using cloud-init's packages/bddeb in our source tree to roll your own cloud-init upstream deb if that helps too
<ahosmanMSFT> That might be better, or we would have to rely on debian releases to update our cloud-init
<blackboxsw> ahosmanMSFT: you could also try using cloud-init's ./tools/run-container --package  debian/10 #  to build a deb from source on a debian container
<rharper> blackboxsw: one challenge is not using Debian's debian/ packaging dir
<blackboxsw> that tool should create a binary deb for you from whatever your latest commit is in your local directory
<rharper> what you really want is git checkout cloud-init.git; git merge /path/to/debian/cloud-init/git/branch/with/debian-dir-changes
<rharper> which is like our daily-ppa for our release branches
<rharper> that'd be cloud-init-daily-debian
<rharper> which is what I think would be desired
<ahosmanMSFT> Would that also have the debian salsa packages or is it just a cloud-init meant to work on debian?
<ahosmanMSFT> I think I see what you mean rharper. If we merge latest cloud-init and debian upstream changes we can get the best of both
<ahosmanMSFT> That might just work
<blackboxsw> thanks rharper fixed docstr https://github.com/canonical/cloud-init/pull/230
<rharper> blackboxsw: ok
<blackboxsw> also closed 221 as that was my first cut of 230 that we didn't merge last week
<rharper> blackboxsw: approved, I'll leave it to you to wait on CI and squash/merge
<blackboxsw> thanks
<blackboxsw> rharper: I forgot what did you suggest for top-level instance-data key for merged config instead of 'ci_cfg'
<rharper> we discussed merged_cfg or merged_config
<rharper> something indicating it was combined
<blackboxsw> ok will go with merged_cfg
 * blackboxsw saving those keystrokes one step at a time
#cloud-init 2020-03-04
<blackboxsw> rharper: I think https://github.com/canonical/cloud-init/pull/232 is good to go
<blackboxsw> I believe I addressed all comments
<rharper> blackboxsw: excellent, I'll review
<blackboxsw> ok addressed thx rharper
<fredlef1> rharper: What purpose does DataSourceNone serve?
<rharper> it's a fallback datasource that allows cloud-init to run without a datasource
<rharper> basically if we could not find any datasource at all, try to do something reasonable (we still generate host keys, and other bits);
<rharper> we've recently had a discussion that DataSourceNone really ought to look more like NoCloud ;  https://github.com/canonical/cloud-init/pull/203
<fredlef1> Is there any scenario where it would leave an instance in a state where it is accessible to the user? Other than cloud-init running in a non-cloud scenario
<rharper> fredlef1: I think it depends on the image;  if there is included cloud-config , None will return user-data/metadata from the datasource;
<rharper> a default user is normally created, host keys are generated;   however, without a password or an imported ssh key, I don't see how a user would have access
<rharper> this only happens if cloud-init detected a potential datasource, otherwise, cloud-init won't run at all
<rharper> so, in your IMDS is down scenario; we'd detect ec2 via host UUID value, and then when we called _get_data() if that failed, we'd fall through to None, which would continue to run modules/final but a stock image won't include any user-data/metadata nor fetch it from remote URLs, etc
<fredlef1> Yes. I have seen that.  My latest builds I have been testing for our AMIs have the None datasource removed from the list of candidate datasources for that reason.
<fredlef1> The impact of falling back to DataSourceNone is that cloud-init concludes it's dealing with a new instance
<rharper> Yes, it does do that;
<fredlef1> DataSourceNone doesn't seem like something we should ever use after first boot
<rharper> the general idea of falling back to the previous DS is new;
<rharper> fredlef1: except if you're capturing and deploying again
<rharper> if we end up falling back to the previous ds if the current ds is not viable;  None is not very useful.
<fredlef1> rharper: Yes, I agree. But, on the other side, the list of things that can go awfully wrong if you capture/snapshot and re-deploy without properly cleaning up various states on your instance is very long already.
<rharper> it's harder now than it used to be;  but none-the-less, it is very common workflow
<fredlef1> I agree. We see it everyday (both the right way and the wrong way).  Clearing up cloud-init states is what people do wrong the least often (does that sentence even make any sense?).
<fredlef1> I'll think about it some more and update my PR later this week.
<fredlef1> There is a way to get this right
<rharper> =)
<rharper> fredlef1: I'm happy with the direction you're going here
<rharper> I've asked the vmware folks to look at your changes as well, as they have a similar scenario where they need to restore the previous instance-id on subsequent boots even if they don't have any new data (which is different than IMDS being down)
#cloud-init 2020-03-05
<fredlef1> blackboxsw: I think I have all you comments addressed : https://github.com/canonical/cloud-init/pull/216
<blackboxsw> fredlef1: +1 I like it, I'm currently working on a patch for you at the moment but need to sort a unit test issue
<blackboxsw> fredlef1: here's the wip I was thinking. https://paste.ubuntu.com/p/PRnnJtSKww/
<blackboxsw> wip == work in progress
<blackboxsw> fredlef1: sorry for not getting to this sooner, I expect to have this buttoned up before my lunch (~45 mins from now)
<blackboxsw> fredlef1: the trouble I originally saw with the branch was how Ec2 proper would react to a requests.ConnectionError/TimeoutError if IMDS were for some reason down. But retrying in the face of that is ok, and I wanted to confirm that _imds_exception_cb performs properly in that case. and I think it does
<blackboxsw> fredlef1: so anyway, should have those couple unit tests in place within an hour and then I think we are clear there for landing. and rharper and I think https://github.com/canonical/cloud-init/pull/229 may be in good shape for landing as the next PR we merge
<fredlef1> your diff looks good
<blackboxsw> fredlef1: your two branches are the only thing that is gating our next Ubuntu Focal (20.04) package upload
<blackboxsw> so I hope we get through those 2 branches today/tomorrow
<blackboxsw> hopefully today
<rharper> blackboxsw: do you need any more reviews  for FFE branches ?
<blackboxsw> rharper: nope, still on fred's PR #216.
<blackboxsw> I think I have unit tests in order, will push that and then I'll address your reviews on my branches
<rharper> sure
<blackboxsw> ok done with fred's branch https://github.com/canonical/cloud-init/pull/216
<blackboxsw> posted a patch that I think will be accepted there
<blackboxsw> grabbing some reviews now
<Nick_A> Can someone please confirm if this is valid network version 1 syntax for dhcpv6? http://paste.openstack.org/show/EqXcd7Pi7DEaiGP6YjCW/
<powersj> Nick_A, I believe it is per https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html#network-config-v1
<powersj> but I'll let rharper confirm
<rharper> Nick_A: looks sane;  if you've a newer cloud-init, you can use:  cloud-init devel net-convert
<rharper> https://paste.ubuntu.com/p/7spXqkFtGj/
<rharper> Nick_A: the lack of MAC and set-name may make the config unreliable if nics are added/removed from instance;  and from boot to boot depending on your platform
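Nick_A's paste isn't preserved in the log, but a version-1 config with a dhcp6 subnet along these lines, including the MAC pinning rharper recommends, would look roughly like this (expressed as the Python dict the yaml parses into; the interface name and MAC address are invented):

```python
# Hypothetical network config, version 1, with a dhcp6 subnet.
# The name and mac_address values are made up for illustration;
# pinning mac_address keeps the config stable if nics are
# added/removed or renamed between boots.
network_config = {
    "version": 1,
    "config": [
        {
            "type": "physical",
            "name": "eth0",
            "mac_address": "52:54:00:12:34:56",
            "subnets": [
                {"type": "dhcp4"},
                {"type": "dhcp6"},
            ],
        }
    ],
}
print(network_config["config"][0]["subnets"])
```

`cloud-init devel net-convert` can then be pointed at the yaml form of this to check what each renderer (netplan, sysconfig, eni) would emit.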
<Nick_A> thank you
#cloud-init 2020-03-06
<Smithx10> Does cloud-init 19 not work with the latest netplan stuff?
<blackboxsw> Smithx10: it should. if a particular use-case isn't working, we'd love a bug. cloud-init should emit /etc/netplan/50-cloud-init.yaml on Ubuntu Bionic (18.04) or later
<Smithx10> I got it to work
<Smithx10> It took me a good 30 minutes to figure out what this thing was doing
<Smithx10> i still kinda have no idea what its doing
<Smithx10> but address !
<blackboxsw> yeah network-config for cloud-init could come from a variety of sources, kernel cmdline, datasource metadata or /etc/cloud/cloud.cfg.d files and cloud-init network renderers will spit out network configuration  files (sysconfig, /etc/network/interfaces/ or /etc/netplan) based on what distribution it detects it is running on.
<Smithx10> yea, the logging is pretty errrr
<Smithx10> Also figured out that I had to reboot the machine in order for it to configure an interface
<blackboxsw> pretty noisy for sure
<Smithx10> Or I was running the wrong commands
<Smithx10> i was doing cloud-init clean and then unsetting the interface and downing it
<Smithx10> then running cloud-init init and it wouldn't do anything with it
<Smithx10> bouncing would configure it
<blackboxsw> Smithx10: what cloud-init does depends on the datasource (cloud platform) you are running on. generally during boot there are a number of stages where cloud-init wakes up and does some detection or configuration. All datasource and network config happens in either the init-local stage (which is the command cloud-init init --local) or the init-networking stage (which is the cloud-init init command you mentioned)
<Smithx10> Best place to see that is in the Modules code?
<Smithx10> What is done in local vs init?
<blackboxsw> Smithx10: trying to run those individual commands is not really a supported flow for cloud-init, because cloud-init does initial system configuration on boot (and actually runs the other configuration stages like modules and final, which configure things like users and ssh keys etc.)
<blackboxsw> so missing any of the stages will likely lead to a misconfigured or partially configured system
<Smithx10> yea, I'm kind of blaming linux for this.
<blackboxsw> heh :) https://cloudinit.readthedocs.io/en/latest/topics/boot.html
<Smithx10> The user is happy,  their Distro is gettin an addr now
<blackboxsw> Smithx10: to see what config modules are done in each stage: read /etc/cloud/cloud.cfg on your system
<blackboxsw> the cloud_init_modules: list is what's run in init (network already up) stage
<blackboxsw> cloud_config_modules: list == https://cloudinit.readthedocs.io/en/latest/topics/boot.html#config  and cloud_config_final = https://cloudinit.readthedocs.io/en/latest/topics/boot.html#final
<blackboxsw> in local: some datasources that can be detected before the network is up get checked for discovery so they can emit the right network config before the system brings up its full network stack. the rest of the details should be in https://cloudinit.readthedocs.io/en/latest/topics/boot.html#local
<Smithx10> hahaha
<Smithx10> and this is all better than running a bash script
<Smithx10> lol
<Smithx10> Not the point of cloud-init, i get it :P
<Smithx10> different problem
<Smithx10> Thanks for the advice
<Smithx10> I'll dive in tomorrow and with the docs in front of me find out more
<blackboxsw> hehe :)
<gfidente> hi guys, can anyone point me to the code distinguishing when a node is at its first boot vs not?
<gfidente> or maybe how the instance id is generated, because I can clearly tell from the logs that it changed
<MAbeeTT> hi,
<MAbeeTT> I am trying to understand the cause of some error logs in /var/log/cloud-init.log. I see this information https://pastebin.com/D4Fh4FmT
<MAbeeTT> How could I understand how to get more info? Thanks in advance.
<blackboxsw> MAbeeTT: looks like the hostname file that was written was not written as proper json.. strange.  peeking one sec
<blackboxsw> MAbeeTT: check /var/lib/cloud/data/set-hostname on your system to see if it looks like json
<blackboxsw> you can cat it and paste here if you like
<blackboxsw> or just ensure the following succeeds  python3 -c 'import json; print(json.load(open("/var/lib/cloud/data/set-hostname")))'
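The one-liner above can be expanded into a small check with clearer failure reporting. A sketch; the helper name `check_set_hostname` is ours, not cloud-init's:

```python
import json

def check_set_hostname(path="/var/lib/cloud/data/set-hostname"):
    """Load cloud-init's cached set-hostname artifact and classify it.

    An empty file raises json.JSONDecodeError just like malformed
    content does, which matches the failure discussed here.
    """
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        return "missing"
    except json.JSONDecodeError as e:
        return f"invalid json: {e}"
    return data  # expected shape: {"hostname": ..., "fqdn": ...}
```

Running it against a healthy system should return the hostname/fqdn dict; an empty file returns the "invalid json" string.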
<MAbeeTT> ok, the idea is to understand how to proceed in next possible issues.
 * MAbeeTT n00b.
<blackboxsw> one thing to also check is egrep 'Trace|Failed' /var/log/cloud-init.log as any issues setting the hostname in the first place (and writing that original set-hostname file) would have been emitted there
<blackboxsw> so above that current failure I would have expected to see cloud-init writing to set-hostname too. so a grep set-hostname /var/log/cloud-init.log would show you the order of operations on hostname-related activity
<MAbeeTT> ok, file says `/var/lib/cloud/data/set-hostname: empty`
<blackboxsw> strange as current cloud-init writes a dict always if it writes anything
<blackboxsw>     write_json(prev_fn, {'hostname': hostname, 'fqdn': fqdn})
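The write_json call quoted above is from cloud-init's source. A minimal sketch of the atomic-write idea behind it (illustrative, not the exact upstream implementation):

```python
import json
import os
import tempfile

def write_json(path, data, mode=0o644):
    """Write JSON to a temp file in the target directory, then rename
    into place. The rename is atomic on POSIX, so a crash mid-write can
    never leave a truncated or empty file behind (which makes the empty
    set-hostname file above all the stranger)."""
    payload = json.dumps(data, indent=1, sort_keys=True) + "\n"
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(payload)
        os.chmod(tmp, mode)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```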
<blackboxsw> cloud-init -v and cloud-id commands will tell us a bit more
<blackboxsw> also sudo cloud-init query --all will show you all metadata that cloud-init is looking at on your system (careful there may be sensitive information in there if you setup passwords and keys in your userdata)
<blackboxsw> and cloud-init query local_hostname  should show you the standardized metadata value that cloud-init is sourcing when trying to set hostname
<blackboxsw> for more info: https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html
<blackboxsw> ultimately this could be a bug once you track down where /var/lib/cloud/data/set-hostname is being written as empty. if so, run "ubuntu-bug cloud-init" and it'll allow you to create a bug and attach the cloud-init logs.
<blackboxsw> if you are on Ubuntu, that is. on other distros "cloud-init collect-logs" creates a tar of all artifacts/logs to aid in debugging
 * MAbeeTT reading 
<MAbeeTT> this is a particular instance in a lab, so there is no sensitive information, no problem. An error on my part is at the moment the best candidate for root cause. Anyway this is a good excuse for learning.
<MAbeeTT> this is an ubuntu focal (20.04) vm on a proxmox PVE server; the cloud-init config is generated by the proxmox pve tool (as a cdrom drive).
<blackboxsw> MAbeeTT: if you are providing user-data to the instance at boot, it generally is the most likely source of deployment issues.  try "sudo cloud-init query userdata > my-userdata.yaml;   cloud-init devel schema --config-file my-userdata.yaml --annotate"    it will at least validate your userdata is proper yaml and may give hints about failures
<blackboxsw> run that on the system exhibiting failures just to double check a couple things about proper whitespace formatting. ... it's still a work in progress (as we need better schema coverage) but it may find some glaring issues
<blackboxsw> https://cloudinit.readthedocs.io/en/latest/topics/faq.html has some typical debug suggestions
<MAbeeTT> great.
<MAbeeTT> on the formatting side it seems there are no problems: `Valid cloud-config file /tmp/my-userdata.yml`
<blackboxsw> ok, one ✓ confirmed
<MAbeeTT> `sudo egrep --color 'Trace|Failed' /var/log/cloud-init.log` gives 3 tracebacks and 1 `Failed to get raw userdata in module rightscale_userdata`. This file is just the last boot; previous boots are in cloud-init.log_ (just for clarity)
<MAbeeTT> sudo cat /run/cloud-init/instance-data.json  | jq '.v1.local_hostname'
<MAbeeTT> shows the previous hostname (previous to the cdrom rebuild and reboot)
<MAbeeTT> well, i will be investigating during the weekend, thank you so much!
<blackboxsw> rharper: is there any way I can amend the author of the latest commit without forcing people to rebase? Somehow my last squash and merge for AWS Fred's PR ended up giving 'me' authorship
<blackboxsw> maybe it's that the last commit on his branch he attributed to me via Test provided by: Chad Smith <chad.smith@canonical.com>
<rharper> blackboxsw: hrm;   ISTR we had something like this before
<rharper> I thought it was supposed to have multiple authors or something like that
<blackboxsw> I think we might have to add the directive Authored-by: <pr submitter> if it has multiple attributions
<rharper> hrm
<rharper> I'm out of my gitfu level here...
<blackboxsw> the strange thing is that that merge commit doesn't even have a merge marker from me vs the author, like other merges I've squashed using the github ui
<rharper> it wouldn't be terrible to amend with author fix;
<blackboxsw> rharper: but that changes the commitish, so it might break consumers of tip
<rharper> doing that sooner rather than later will reduce the number of rebases needed, no ?
<rharper> only if they've pulled since that commit ?
<blackboxsw> yes
<rharper> I'm not saying do it Right Now, but I don't know another way;
<rharper> maybe rbasak or other git folks know how best to handle this
<blackboxsw> and if we're changing the author, I'd also like to correct formatting on the commit message and add the LP: #1866290
<blackboxsw> worth a discussion in ubuntu-devel
<rharper> yeah
<blackboxsw> community-notice: Ubuntu focal upload of tip of cloud-init master accepted [ubuntu/focal-proposed] cloud-init 20.1-9-g1f860e5a-0ubuntu1 (Accepted)
<blackboxsw> community-notice: Certified cloud images should update with 20.1-9 over the next couple of days
<rharper> blackboxsw: nice!
