[05:43] <openstackgerrit> Nate House proposed stackforge/cloud-init: [WIP] Updating openstack.base to support configdrive inheritance  https://review.openstack.org/225465
[05:43] <openstackgerrit> Nate House proposed stackforge/cloud-init: Removed buffer return from http source vendor data  https://review.openstack.org/229246
[13:51] <natorious> morning!
[13:54] <smoser> good morning natorious
[13:56] <natorious> hopefully I did that patch right ^^.  Didn't see anything else mentioned to directly address.  I'd fixed that spelling thing via amend.
[13:56] <claudiupopa> hey folks.
[13:56] <claudiupopa> Are we doing a meeting today?
[13:56] <smoser> yes.
[13:59] <natorious> got a ways on configdrive and was looking at doing that as a separate request; it'd make it read a whole lot better w/ that in etc.
[13:58] <openstackgerrit> Merged stackforge/cloud-init: LICENSE: correct wording with respect to Apache 2  https://review.openstack.org/228471
[13:59] <natorious> was planning a separate request for network json config too.  Feel free to correct me if I'm going about things wrong or backwards
[13:59] <natorious> hi claudiupopa o/
[14:00] <claudiupopa> Hi natorious \o!
[14:00] <smoser> hey.
[14:00] <claudiupopa> so irc or hangouts?
[14:00] <smoser> irc i think for now.
[14:01] <smoser> ok. so we'll walk over the list of reviews quickly, and then open discussion and we can talk to natorious some there on his questions.
[14:01] <smoser> https://review.openstack.org/#/q/project:stackforge/cloud-init+status:open,n,z
[14:01] <smoser> i just accepted a license file change.
[14:01] <smoser>  https://review.openstack.org/#/c/228471/
[14:02] <smoser> the change was to make the LICENSE file not self contradictory.
[14:02] <smoser>  https://review.openstack.org/#/c/228471/1/LICENSE
[14:02] <claudiupopa> I'm not a lawyer, but looks good to me. ;-)
[14:02] <smoser> as previously it basically said "if you want to offer as apache 2.0 only, then put the GPLv3 header above"
[14:02] <smoser> which doesn't make sense :). now it says
[14:03] <jgrimm> +1
[14:03] <openstackgerrit> Nate House proposed stackforge/cloud-init: [WIP] Updating openstack.base to support configdrive inheritance  https://review.openstack.org/225465
[14:03] <smoser>  "if you want to offer as apache 2.0 only, then put apache header above"
[14:03] <smoser> thanks to jgrimm for that.
[14:04] <smoser> i think reasonably thats all we have for the reviews. i think we should just help natorious a bit, with his questions and then we should try to get moving.
[14:04] <smoser> i have a couple topics for open discussion
[14:05] <smoser> is that ok? (i'm not skipping the reviews, i do want to talk about them, but i don't think they're just "quick")
[14:05] <smoser> Odd_Bloke, you here ?
[14:05] <claudiupopa> Sure, let's go with open discussion then and see where natorious can use our help.
[14:05] <Odd_Bloke> o/
[14:05] <smoser> k
[14:05] <smoser> here's what i had for open discussion topics
[14:05] <smoser>  * python2.6 support: shall we bother with this?
[14:05] <smoser>     + need someone to look at viability of 2.7 on rhel 6.
[14:06] <smoser> the primary immediate driver of this is the desire to use taskflow.
[14:06] <smoser> but generally.. if we can do without 2.6, that might be nice.
[14:07] <smoser> the goal would be to have *some* solution for running this on rhel 6
[14:07] <claudiupopa> It would be nice to not need it, yes, but it depends after all on your supported platforms.
[14:07] <smoser> and probably some sles, but i'm not so familiar with sles family of products.
[14:07] <claudiupopa> so the problem is rhel 6?
[14:07] <smoser> well, and probably some sles.
[14:07] <claudiupopa> when is it scheduled for EOL?
[14:08] <Odd_Bloke> The heat death of the universe (which will largely be caused by people having to maintain software for RHEL 6). :p
[14:08] <Odd_Bloke> Wikipedia says 2020.
[14:08] <natorious> they have a 10 year cycle :/
[14:08] <claudiupopa> in the same year as python 2.7
[14:09] <claudiupopa> not a coincidence.
[14:09] <Odd_Bloke> More, if we want to cover their extended lifecycle support. :p
[14:09] <smoser> i'm very  much ok to say "supported with python2.7 on rhel 6" if there is some sane path to getting python 2.7 on rhel 6.
[14:09] <smoser> but not if that path includes "go to some dude's web site" or "download python2.7 and compile"
[14:09] <smoser> ie, i'd accept 'enable EPEL'
[14:11] <Odd_Bloke> Is there going to be a path to installing cloud-init 2.0 on RHEL6 that fits your criteria?
[14:11] <smoser> thats what i want to know. i'd like someone to dig on that a bit.
[14:12] <smoser> smatzek, might know. also might have info on sles
[14:13] <natorious> what about pinning taskflow to before the 2.7 dep change?
[14:13] <claudiupopa> that doesn't sound that good.
[14:13] <claudiupopa> And if I'm not mistaken, the problem is with a dependency of taskflow.
[14:13] <smoser> well, apparently taskflow isn't itself 2.7 only.
[14:13] <smoser> right.
[14:14] <smoser> but generally speaking if we can say "no 2.6" then that would make life easier.
[14:14] <smoser> so i'm looking for a way, and a sane path for that.
[14:14] <smoser> i kind of hoped someone would take this and run with it.
[14:14] <smoser> i guess i can do that a bit after meeting
[14:15] <smoser> ok. so next topic, then.
[14:15] <claudiupopa> Btw, let's use trello for this as well.
[14:15] <smatzek> I would have the same criteria as smoser, if there is a sane path to getting 2.7 on rhel 6.  I don't have info on if there is a sane path for that or not.  I don't have the "python version included with" info for SLES 11 or 12 but could dig it up without much trouble.
[14:15] <smoser> claudiupopa, good idea.
[14:15] <smoser> claudiupopa, you want to add a task and give it to me ?
[14:15] <claudiupopa> yep.
[14:15] <smoser> smatzek, ok. you can help me :). thanks for offering.
[14:16] <smoser> google doesn't seem to know about a sane path to 2.7 that does not include "people.redhat.com/some-dude" (yes, that's better than some.dude.com, but...)
[14:16] <smoser> anyway..
[14:16] <Odd_Bloke> That's because you have to be insane to still be running RHEL6. :p
[14:16] <natorious> we could make a rhel/cent rpm for alt installing python2.7 that would work though
[14:17] <smoser> natorious, as in smoser maintain python2.7 on rhel ?
[14:17] <smatzek> I still like Odd_Bloke's suggestion from several weeks past that those who bundle / ship python 2.6 after its EOL be made to sit in a corner and think about what they've done.
[14:17] <smoser> smoser == some-dude-i-wouldn't-trust
[14:17] <natorious> iirc it can't replace the python2.6 due to yum or some such
[14:18] <smoser> natorious, yeah, python2.7 would be the name installed. i'd never expect that /usr/bin/python would be 2.7.
[14:18] <smatzek> maybe we just hande RHEL 6 and those other OS levels on 2.6 out to dry with cloud-init 2.0 and if those distros really want it, they get python 2.7, on their distro.
[14:18] <smoser> but i really don't want to maintain a python package.
[14:18] <smoser> smatzek, that is how i am leaning
[14:18] <smatzek> hande = =hang
[14:19] <smoser> ok.
[14:19] <smoser> lets move on.
[14:20] <smoser> actually one more thing here.
[14:20] <Odd_Bloke> Well, they could also maintain distro patches for taskflow/networkx/cloud-init 2.0 which would make it work on Python 2.6; the point is it's their problem. :p
[14:20] <smoser> please everyone here just join the cloud-init mailing list: https://launchpad.net/~cloud-init
[14:21] <smoser> and i'll use that list to write to.
[14:21] <smoser> k?
[14:21] <Odd_Bloke> Done.
[14:21] <smoser> smatzek, natorious claudiupopa you are all actioned to join that :)
[14:21] <smoser> ok. natorious now lets move on.
[14:22] <smoser> you want to ask some questions to the channel ?
[14:22] <natorious> yeah, sure
[14:22] <natorious> so configdrive
[14:22] <natorious> do we want to support v1 and v2?
[14:22] <claudiupopa> done.
[14:23] <natorious> are the only differences vfat vs iso
[14:24] <smoser> i really dont care about v1.
[14:24] <claudiupopa> What's the difference between v1 and v2? We're also interested in vfat for cloudbaseinit for instance.
[14:24] <smoser> it was pretty bad i think
[14:24] <smoser> and i think that v2 has been in place for probably 2 years ?
[14:24] <smoser> https://review.openstack.org/#/c/11184/
[14:25] <smoser> s/2/3/
[14:25] <smoser> no one other than possibly natorious's employer is running a 3 year old openstack cloud :)
[14:25] <natorious> burn
[14:25] <natorious> -.-
[14:26] <smoser> natorious, do you have a need to support config drive v1 ?
[14:26] <smoser> i'm really not terribly opposed to it.
[14:26] <smoser> but it would not land high on my priority list
[14:26] <natorious> I somewhat feel like most of the detection can be refactored down to detecting unmounted blockdevs, searching for iso9660 and vfat fs w/ the same dir structure
[14:26] <smatzek> anyone have links to a spec or other doc that describes v1 and v2?
[14:26] <smoser> well, we dont want to assume 'unmounted'
[14:26] <natorious> k
[14:26] <smoser> really we want to look for the filesystem label
[14:27] <natorious> k, that might make it even easier
[14:27] <smoser> that seems to me to be the biggest thing.
[14:27] <smoser> it would seem to me there'd be a low possibility of an undesired false positive if you found a drive with the label 'config-2'
[14:27] <smoser> s/drive/drive or partition/
[14:28] <smoser> ie, if someone put something like that... they probably did it so you would find it.
[14:28] <smoser> now... obviously such a person could be malicious
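The label-based detection smoser describes could be sketched roughly as below. This is a minimal illustration, not cloud-init's actual code: the function name and the `by_label_dir` parameter are made up here, and it relies on udev's `/dev/disk/by-label` symlinks, which exist whether or not the device is mounted. (OpenStack's config drive v2 conventionally carries the filesystem label `config-2`.)

```python
import os

def find_config_drive(by_label_dir="/dev/disk/by-label", labels=("config-2",)):
    """Return the block device a config-drive label points at, or None.

    On Linux, udev maintains symlinks in /dev/disk/by-label named after
    filesystem labels; resolving one yields the underlying device
    (whole drive or partition), mounted or not.
    """
    for label in labels:
        candidate = os.path.join(by_label_dir, label)
        if os.path.exists(candidate):
            return os.path.realpath(candidate)
    return None
```

Looking up the label directly also sidesteps the "don't assume unmounted" point above: a labeled device is found the same way whether it is mounted or not.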
[14:28] <natorious> are we still doing an on first boot hook?
[14:28] <smoser> thats a good question
[14:28] <natorious> didn't see any examples of that in the http ds
[14:29] <smoser> cloud-init 0.7 basically always searches for a datasource.
[14:29] <smoser> (on every boot)
[14:29] <smoser> unless the user sets 'manual_cache_clean' or some such
[14:29] <smoser> the reason for that is so that it can detect "new instance".
[14:29] <smoser> so that the user who does:
[14:29] <smoser>   * boot instance
[14:30] <smoser>  * install a package
[14:30] <smoser>  * shutdown and snapshot
[14:30] <smoser> doesn't have to also do: 'run cloud-init clean'
[14:30] <natorious> right
[14:30] <smoser> now that search is painful
[14:30] <smoser> in many ways.
[14:31] <natorious> its also ordered and limited by the datasources defined config right?
[14:31] <smoser> as for config drive, it means that if you messed up the label, then the instance would get all foobarred.
[14:31] <smoser> right.
[14:32] <smoser> we could possibly take a position that we do a quick check to see 'is this a new instance'
[14:32] <smoser> and only go looking further if that quick check failed
[14:32] <smoser> and if that quick check was buggy in that it did not recognize a new instance when it should have, we tell the user 'well, run "clean"'
[14:33] <smoser> i'm open to ideas here, and i don't know what a good "quick check" is.
[14:33] <smoser> i suspect that a lot of people do something like the above
[14:33] <Odd_Bloke> BTW, on Python 2.7 in RHEL6: https://twitter.com/ncoghlan_dev/status/649228375135903744
[14:33] <smoser> so that probably needs to generally work
[14:33] <smoser> Odd_Bloke, scl that references http://people.redhat.com
[14:34] <Odd_Bloke> I found this: https://www.softwarecollections.org/en/scls/rhscl/python27/
[14:34] <smoser> natorious, you have thoughts on that?
[14:34] <smoser> i'd like to not have to search every boot
[14:34] <natorious> stuck on quick check.  So if the instance is new but from a snap
[14:34] <natorious> it could have previous instance data in it
[14:35] <smoser> right.
[14:35] <smatzek> I don't think there is a "quick check".  I think the only real check you can make is to query datasources and get the instance ID from them to compare with ID on disk.
[14:35] <natorious> instance uuid is sourced from ds so we couldn't use that to determine caching of ds method
[14:35] <smoser> well, there might be a heuristic that is sane.
[14:36] <smoser> similar to what MS does / did when you bought a new hard drive and you had to call their customer support to allow you to run windows again.
[14:36] <natorious> there's also hw identifiers that can be used no?
[14:37] <smoser> ie some look at the system (hard drive ids, cpu number ... might be a sufficient quick check)
[14:37] <natorious> so like tie an instance id data to hw identifiers somehow
[14:37] <natorious> yeh
[14:37] <smatzek> checking hw identifiers doesn't work because you could have done a cold VM migration.
[14:37] <smatzek> or live followed by a reboot.
[14:38] <smoser> in which case the 'quick check' said "looks like it changed".
[14:38] <smoser> so we'd just do a longer check.
[14:38] <smoser> and if thats annoying to the user, then can still manually set "manual clean" or something.
[14:38] <smoser> you see what i mean ?
[14:39] <natorious> live migrate wouldn't hit cloud-init until a new reboot too
[14:39] <smatzek> I've struggled with these checks to see if you're a new instance for several years with both cloud-init and non-cloud-init based VM activation.  I've found that cloud-init has the best way to check, with instance ID.
[14:39] <smoser> well, and live migrate would surely have the same (virtual) hard drive id
[14:39] <natorious> so processing the ds detection and determining the same instance uuid should keep things going still
[14:39] <smoser> right.
[14:39] <smatzek> a quick check to see if HW changed would not trigger a deeper check if you took a VM snapshot and created a new VM using the snap on the same physical hardware.
[14:40] <smoser> smatzek, correct.
[14:40] <smoser> in which case, a user there would have to tell cloud-init that it should check every time.
[14:40] <smoser> i don't know how i feel about it.
[14:42] <smatzek> if we had a setting like that users would have to flip it on if they plan to snap and deploy snaps.  Also, wouldn't a hardware quick check require some hardware info in the base image for the public images you create?  I suppose you could put bogus hw info in there so as not to disclose the actual HW you're creating them on.
[14:42] <smoser> smatzek, well, to be fair, on a sane cloud.. if you did that, and your new instance landed on the same "host".
[14:42] <smoser> (or similarly configured host)
[14:42] <smoser> you'd still hope that you'd provide a new "hard drive" serial number to the virtual disk.
[14:42] <smoser> i guess if in fact you'd  handed the guest the same physical disk..
[14:43] <smoser> smatzek, well on the official published images, we'd clean whatever data is there.
[14:43] <smoser> but the ubuntu image build process never invokes cloud-init
[14:43] <claudiupopa> what about windows?
[14:44] <natorious> there could be instances that use vdi chains for image caching who share a common base
[14:44] <smoser> claudiupopa, thats why you're here :)
[14:44] <natorious> but I think the vdi snaps have unique sn's
[14:44] <claudiupopa> First of all, I don't think I know what's HW. :-)
[14:45] <smoser> i'd think they would, natorious. but you're right, they don't necessarily. but you'd think it would be desirable.
[14:46] <smoser> ok..
[14:46] <smoser> so lets say this:
[14:46] <smoser>  * cloud-init should have a cleanly documented way to explicitly disable or explicitly enable per-boot checking for instance id
[14:47] <smoser>  * cloud-init may provide an "auto" mode for that which does some heuristics-based checking or other quick checking
[14:48] <smoser>    the auto mode should generally be *very* quick and should be skewed to avoid false negatives (missing a change that it should have seen)
[14:48] <smoser> does that seem sane ?
[14:48] <natorious> * cloud-init cares not re v1 configdrive
[14:49] <smoser> unless natorious makes it care, which is ok
[14:49] <natorious> yea
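The slow-but-reliable check smatzek describes above (compare the datasource-reported instance id with an id cached on disk) might look roughly like this. The cache path and function names here are illustrative, not cloud-init's actual internals:

```python
import os

CACHE_PATH = "/var/lib/cloud/data/instance-id"  # illustrative location

def is_new_instance(datasource_instance_id, cache_path=CACHE_PATH):
    """Compare the datasource-reported instance id against the cached one.

    Returns True (treat as a new instance) when no cache exists yet or
    the ids differ; False when they match, in which case a full
    per-boot datasource search could be skipped.
    """
    if not os.path.exists(cache_path):
        return True
    with open(cache_path) as f:
        cached = f.read().strip()
    return cached != datasource_instance_id.strip()

def remember_instance(datasource_instance_id, cache_path=CACHE_PATH):
    """Persist the current instance id so later boots can compare."""
    with open(cache_path, "w") as f:
        f.write(datasource_instance_id.strip() + "\n")
```

Under the "auto" mode sketched in the bullets above, a cheap heuristic would run first and only a suspected change would trigger this full datasource query and comparison.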
[14:49] <smoser> one example i'd like to give of a very sane and quick check would be on lxd.
[14:49] <smoser> lxd will expose its datasource via a socket /dev/lxd.  it seems a sane "quick check" to do the unix-socket query 'get-instance-id' if /dev/lxd is present.
[14:50] <smoser> that suggests that the quick check should have access to the datasource that was used last... so that it would know it was lxd, and if previous was lxd and there is no /dev/lxd, then probably a change occurred.
[14:50] <natorious> interesting, I'd have to look into that and see about setting up a test env too
[14:51] <natorious> being socket I take it means not listed under block devs
[14:51] <smoser> yeah. its just a unix socket that lxd exposes an api on from outside.
[14:52] <smoser> i think that vmware has a similar thing.
[14:52] <smoser> did you have other question, natorious ?
[14:52] <natorious> think thats it
[14:54] <natorious> got what I need to get another request in.  its going to depend on the previous but should be alright
[14:56] <smoser> natorious, cool
[14:58] <smoser> claudiupopa, you going to stick around for a while ?
[14:58] <smoser> or are you EOD
[14:58] <claudiupopa> I'll be here for some time.
[14:59] <smoser> ok. i'll ping in a bit
[21:01] <stanguturi> Hi, I am working on cloud-init 0.7.6. I was able to execute './packages/bddeb' and successfully build the debian package. I installed the final debian package on an Ubuntu machine. Can anyone provide some instructions to test the changes? I mean, are there any log files which say that the package has been run, etc.?
[21:22] <natorious> hi stanguturi!
[21:22] <natorious> cloud-init --version should print out the installed version
[21:22] <stanguturi> hi natorious
[21:22] <natorious> if you're looking for some specific source change you made prior to building the pkg
[21:22] <natorious> you can check the installed source too
[21:23] <natorious> typically in /usr/lib64/pythonX.X/site-packages/cloudinit etc
[21:23] <natorious> X.X being 2.7 or 3.4 and whatnot etc
[21:24] <natorious> the targeted init system specified when building the pkg determines where and what init scripts or service files get installed too
[21:24] <stanguturi> great. Where are the log messages that get printed when cloud-init runs on machine boot?
[21:24] <natorious> but like if it's not running on boot, perhaps you targeted the incorrect init sys for your distro version
[21:25] <natorious>  /var/log/cloud-init.log or /var/log/cloud-init-output.log
[21:25] <natorious> one of the previous versions had a thing where most log msgs would go to syslog though
[21:27] <natorious> to change that you can comment the log line in /etc/cloud/cloud.cfg.d/05_logging.cfg for it
[21:27] <natorious> like # - [ *log_base, *log_syslog ]
[21:27] <natorious> leaving  - [ *log_base, *log_file ] uncommented etc
[21:28] <natorious> might not be an issue for you.  Thought worthy of a mention though
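Pieced together, the edit natorious describes to /etc/cloud/cloud.cfg.d/05_logging.cfg would look roughly like this. This is a sketch of just the relevant entries, assuming the stock 0.7.x layout of that file (the *log_base, *log_syslog, and *log_file anchors are defined earlier in the same file):

```yaml
log_cfgs:
# commented out so messages stop going to syslog:
# - [ *log_base, *log_syslog ]
 - [ *log_base, *log_file ]   # kept: writes to /var/log/cloud-init.log
```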
[21:30] <stanguturi> ok. What is the best approach to test the changes? I am thinking of following 'make changes, build deb, install deb and reboot'. Is there any other simple way to test the changes?
[21:39] <natorious> what init system are you using?
[21:40] <natorious> after making the change and reinstalling, rm your /var/lib/cloud dir and it'll be like it's brand new etc
[21:40] <natorious> don't think you necessarily need to reboot either
[21:41] <natorious> you could probably just fire off the init scripts again