[16:05] <chillysurfer> hey, all! what's the best way to test out data source changes without having to reprovision a machine in the cloud provider for every code change iteration?
[16:05] <chillysurfer> for instance, to change vendordata in the datasource, is the only way to test this out by patching cloud-init and then reprovision a machine with the patch?
[16:05] <chillysurfer> seems to make for a very long dev iteration loop
[16:38] <rharper> chillysurfer: you'll likely want 'cloud-init clean --logs' and possibly add --reboot; that will reset most of the cloud-init state and re-run like first boot
[16:39] <chillysurfer> rharper: ah cool! so basically just run that command and then reboot the machine?
[16:40] <rharper> chillysurfer: yes
[16:41] <chillysurfer> rharper: great thanks! i'll give it a try!
[16:42] <rharper> chillysurfer: you also don't _have_ to reboot; it depends on what you're testing; you can just call cloud-init like boot does: cloud-init init --local; cloud-init init; cloud-init modules --mode=config; cloud-init modules --mode=final
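The full cycle rharper describes might look like this (a sketch; verify the exact flag spellings against `cloud-init --help` on your version):

```shell
# Reset cloud-init state (and logs) so the next run behaves like first boot
sudo cloud-init clean --logs

# Then re-run the boot stages by hand instead of rebooting:
sudo cloud-init init --local           # local stage: crawl datasource metadata
sudo cloud-init init                   # network stage
sudo cloud-init modules --mode=config  # config stage
sudo cloud-init modules --mode=final   # final stage: runcmd etc. execute here
```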
[16:42] <chillysurfer> rharper: i'm really just testing out injecting vendordata
[16:43] <chillysurfer> i'm not sure exactly which part of cloud-init crawls the metadata (and therefore gets vendordata, i think?) and then handles and executes it
[16:44] <rharper> chillysurfer: local
[16:44] <rharper>  https://cloudinit.readthedocs.io/en/latest/topics/boot.html
[16:45] <rharper> chillysurfer: depending on the datasource, we like to create main functions in the DataSource .py files which just crawl the metadata and dump what they found (and you could merge it)
[16:45] <chillysurfer> rharper: i'm working with the azure datasource
[16:45] <rharper> chillysurfer: if you look at cloudinit/sources/DataSourceGCE.py for example
[16:46] <chillysurfer> i'll check out the gce example
[16:46] <rharper> chillysurfer:  so you can use the crawl_metadata()
[16:46] <chillysurfer> that would be a good way to see what i end up with for sure
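The crawl-and-dump approach rharper mentions can be run straight from a source checkout; whether DataSourceAzure.py carries an equivalent `__main__` block at your version is an assumption to check (the checkout path below is hypothetical):

```shell
cd cloud-init   # your cloud-init source checkout (hypothetical location)
# DataSourceGCE.py carries a main that crawls the metadata service and
# dumps what it found; run it as a module so the cloudinit imports resolve
python3 -m cloudinit.sources.DataSourceGCE
```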
[16:47] <chillysurfer> rharper: but to see the actual metadata (in this case, vendordata) applied to the machine then i should be running the entire local stage i think right?
[16:47] <rharper> chillysurfer: right
[16:47] <chillysurfer> perfect
[16:47] <chillysurfer> thanks so much! going to play around with this
[16:47] <rharper> well, and depending on what you put into the config; some things happen at different stages
[16:48] <chillysurfer> yep i'm just injecting some 'hello world' in vendordata runcmd
[16:48] <rharper> ok, runcmds will happen at final time
[16:48] <chillysurfer> ah i see
[16:48] <chillysurfer> so the final stage then?
[16:48] <rharper> also, generally vendoring runcmds is going to be problematic since users likely want to include those, and then merging rules apply
[16:48] <chillysurfer> yep totally understand about merging and that sort
[16:48] <rharper> chillysurfer: you'd need to run local, init, and final
[16:49] <chillysurfer> i need to do some research on the default merging of vendordata + userdata
[16:49] <rharper> https://cloudinit.readthedocs.io/en/latest/topics/merging.html
[16:49] <rharper> should help
[16:49] <rharper> particularly at the bottom; examples on merging multiple run commands
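The append-merge example from that merging page boils down to something like this (a sketch based on the docs; verify the exact `merge_how` syntax there):

```yaml
#cloud-config
# In the later config (e.g. user-data), request list-append semantics so
# runcmd entries from vendor-data and user-data are concatenated instead
# of the later list replacing the earlier one (the default behavior)
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - echo 'hello from user-data'
```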
[16:50] <chillysurfer> awesome
[16:50] <rharper> it's somewhat awkward to allow things to merge "the way you want"  which may be different than how users want
[20:17] <Odd_Bloke> https://bugs.launchpad.net/cloud-init/+bug/1837927 <-- this bug stems from a broken OpenStack metadata service.  On xenial we end up using Ec2, so instances are at least usable, but on bionic we don't (because we correctly detect we're on OpenStack, so only use that DS).  Is there anything the user can do when launching an OpenStack instance to work around the broken IMDS by selecting the Ec2 DS?
[20:17] <Odd_Bloke> Or will they need to bake their own image to select the working DS?
[20:18] <Odd_Bloke> (I think this bug is Invalid regardless, would just like to give them a bit more direction than "go bug your cloud admin" if possible.)
[20:31] <rharper> Odd_Bloke: huh
[20:31] <rharper> Odd_Bloke: how would we know that Ec2 would work? I guess you're suggesting we could try ...
[20:32] <rharper> Odd_Bloke: IIUC, on Xenial, the _get_data() returns False, but because we _search_ all known datasources, we'll try Ec2 last
[20:32] <rharper> on bionic, since ds-identify says, just OpenStack, then it won't try anything else
[20:33] <rharper> we would sort of want to allow cloud-init to try a list of maybe datasources
[20:34] <rharper> the user could modify ds-identify, with found=all, like so:  ci.di.policy=search,found=all,maybe=none,notfound=disabled;  which is equivalent to the Xenial defaults
[20:34] <rharper> s/ds-identify/ds-identify-policy
[20:36] <Odd_Bloke> Yes, on xenial we try OpenStack and it fails so we move on, eventually to Ec2.
[20:59] <Odd_Bloke> rharper: That should be notfound=enabled, right?
[20:59] <rharper> no
[21:00] <rharper> oh, well for fully xenial equivalent yes
[21:00] <rharper> but the goal is to fallback, not emulate xenial behavior
[21:00] <rharper> so, if we don't find any datasources, then we shouldn't enable
[21:01] <Odd_Bloke> But in this case, we aren't going to "find" the Ec2 DS, right?
[21:01] <Odd_Bloke> xenial also has mode=report.
[21:01] <Odd_Bloke> Which is I think why it works, because that means we try everything anyway?
[21:02] <rharper> we will find OpenStack
[21:03] <Odd_Bloke> Right, but OpenStack is broken.
[21:03] <rharper> and ds-identify returns a datasource_list like: ['OpenStack', None]
[21:04] <rharper> I think it will list all network sources as maybe since we're not in strict mode
[21:06] <Odd_Bloke> I think the problem is going to be that it will identify OpenStack as definitely found.
[21:09] <Odd_Bloke> So we might just want ci.di.policy=enabled, with OpenStack,Ec2 configured as the data sources in /etc/cloud/... ?
[21:09] <rharper> yeah; I think you're right;   I think we'd need to either disable ds-identify via the report mode
[21:09] <rharper> right
[21:10] <rharper> well, via command line for images booted on this broken openstack
[21:10] <rharper> either both configs on the kernel command line or in the image itself if someone is making modifications
[21:11] <Odd_Bloke> Oh, can you change the command-line when launching via Nova?  Or are you saying that the cloud admin could work around their broken cloud by changing the default command line that's used?
[21:12] <rharper> I was looking for image attributes
[21:12] <rharper> and I thought one could pass things through but I think I'm not right about that; the image has a boot loader
[21:17] <Odd_Bloke> Oh, looks like ci.di.datasource=OpenStack,Ec2 should DTRT.
[21:17] <Odd_Bloke> enabled/disabled would mean we couldn't configure _anything_ via ds-identify.
[21:18] <Odd_Bloke> But ci.di.datasource returns before the report/search behaviour is used.
[21:22] <rharper> yes, if you specify them; then we don't bother detecting them
[21:23] <Odd_Bloke> Right, which is what we want here.
[21:23] <Odd_Bloke> Right?
[21:23] <rharper> yes
[21:23] <rharper> effectively setting the list to OpenStack and then Ec2
[21:23] <rharper> though in this case they should just set Ec2
[21:23] <rharper> no point in trying a broken openstack
[21:24] <Odd_Bloke> Yeah; I put that in there so that they'll switch to OpenStack if the cloud is fixed.
[21:24] <Odd_Bloke> I'll include a note in my reply explaining that.
[21:25] <rharper> they can just use a non-modified image to verify
[21:25] <rharper> but yeah
[21:25] <rharper> so Ec2 doesn't do a check_instance_id
[21:25] <rharper> which means that if we use Ec2, on next boot if it found OpenStack, I think it would use that;  I wonder about the transition on upgrade
[21:25] <rharper> with such a setting
[21:26] <Odd_Bloke> OK, I'll just put Ec2 in there then.
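Concretely, the workaround settled on above could take either form (the drop-in filename is hypothetical; `ci.di.datasource=` is the kernel-command-line knob discussed above):

```
# On the kernel command line, if the image's bootloader config can be edited:
#   ci.di.datasource=Ec2

# Or baked into the image as a cloud.cfg.d drop-in, e.g.
# /etc/cloud/cloud.cfg.d/91-broken-openstack.cfg (hypothetical filename):
datasource_list: [ Ec2 ]
```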
[21:26] <Odd_Bloke> My guess is they'll either fix the cloud soon or never.
[21:26] <rharper> right
[21:53] <chillysurfer> anybody know how to hit a breakpoint with nosetests3? doing `import pdb; pdb.set_trace()` and then running nosetests3 doesn't seem to do it
[21:54] <chillysurfer> nor does the `--pdb` option (which breaks on failure and error, but not at a breakpoint)
[21:56] <chillysurfer> and the manpages for it don't mention a thing about breakpoints
[22:07] <rharper> chillysurfer: you want to pass -s
[22:07] <rharper> so you get standard input/output to console
[22:07] <rharper> well, I've not used breakpoints, but I've dropped an import pdb; pdb.set_trace() into various parts of code or tests; then used -s on the nose command line
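Putting that together (the test file path is hypothetical; on Ubuntu the binary from python3-nose is nosetests3):

```shell
# 1. Drop a breakpoint where you want to stop, in the test or the code
#    under test:
#      import pdb; pdb.set_trace()
# 2. Run nose with -s so stdout/stdin are not captured and pdb gets the
#    terminal; without -s the (Pdb) prompt never reaches you
nosetests3 -s tests/unittests/test_datasource/test_azure.py
```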
[22:12] <chillysurfer> rharper: that did it!!
[22:12] <chillysurfer> thanks so much!
[22:12] <rharper> \o/