openstackgerritMerged stackforge/cloud-init: Use an explicit absolute import for importing the logging module  https://review.openstack.org/21003509:38
Odd_Blokeclaudiupopa: Could you workflow +1 https://review.openstack.org/#/c/202743/ ?09:49
Odd_BlokeOh, is it short a +2, actually?09:49
claudiupopaIs Scott happy with it?09:50
Odd_Blokeclaudiupopa: I think so.09:51
Odd_Blokeclaudiupopa: And I think we said that I'd push forward with main stuff.09:51
claudiupopaThen I'm happy with it as it is.09:52
claudiupopaFor workflow.09:52
claudiupopaBy the way, could you take a look again at the plugin patch?09:52
claudiupopaI don't have tests, but I'll appreciate a comment regarding the direction.09:52
openstackgerritMerged stackforge/cloud-init: add cloud-init main  https://review.openstack.org/20274309:55
Odd_Blokeclaudiupopa: So with parallel discovery, we'd still load the code from the disk serially?09:58
claudiupopaGood question. I think it depends on the iterator's flavour.09:59
claudiupopaRight now the loading is serial.09:59
Odd_Blokeclaudiupopa: Should filtering by name be a strategy?10:02
claudiupopaIt could be.10:02
Odd_Blokeclaudiupopa: We don't actually have anywhere calling get_data_source with a list of strategies yet, right?10:03
Odd_Blokeclaudiupopa: How would a FilterByNamesStrategy be created?10:05
claudiupopawriting right now an example.10:07
claudiupopaSomething like this http://paste.openstack.org/show/412159/10:08
claudiupopaAlthough _names should be passed somehow to the strategy.10:09
Odd_BlokeYeah, that was the bit I couldn't quite work out.10:09
Odd_BlokeThe strategies could be instantiated, and have a method that does the filtering?10:09
claudiupopaYou mean a separate method?10:10
claudiupopaOne for loading the data sources and another one for filtering?10:10
claudiupopaMm, the idea is to combine multiple of them to do the filtering, since trying to see if a data source is available or not is still considered a filtering operation.10:11
claudiupopaI could instantiate them beforehand, in get_data_source.10:12
claudiupopaAnd I could pass names only to the FilteringByNameStrategy.10:13
Odd_BlokeSo BaseSearchStrategy.__init__ wouldn't take any parameters by default, and search_data_source would become search_data_sources(<list of data sources>).10:13
Odd_BlokeAnd you'd pass the return of that in to the next search_data_sources.10:13
Odd_Bloke(Rather than in to the constructor of the next strategy, as you do now)10:14
claudiupopaOh, that could work.10:14
Odd_BlokeSo I think you would instantiate them in get_data_source, yeah.10:15
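The chaining idea discussed above can be sketched roughly as follows. The class and method names (BaseSearchStrategy, search_data_sources, get_data_source, FilterByNameStrategy) come from the conversation, but the internals are illustrative assumptions, not the actual cloud-init 2.0 code:

```python
# Sketch of the strategy-chaining idea: each strategy takes the previous
# strategy's output as input, rather than wrapping the next strategy in
# its constructor.  Internals are assumptions for illustration only.
import abc


class BaseSearchStrategy(abc.ABC):
    """Narrows down an iterable of candidate data sources."""

    @abc.abstractmethod
    def search_data_sources(self, data_sources):
        """Return the subset of *data_sources* this strategy accepts."""


class FilterByNameStrategy(BaseSearchStrategy):
    """Keep only the data sources whose class name was requested."""

    def __init__(self, names):
        # Names are passed to this strategy's constructor specifically,
        # instead of threading them through every strategy.
        self._names = set(names)

    def search_data_sources(self, data_sources):
        return [ds for ds in data_sources
                if type(ds).__name__ in self._names]


def get_data_source(data_sources, strategies):
    # Strategies are instantiated beforehand and applied in sequence;
    # the return value of one feeds the next.
    for strategy in strategies:
        data_sources = strategy.search_data_sources(data_sources)
    return next(iter(data_sources), None)
```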
trueneuHi. How can I run a cloud-init script on an already-installed instance? I've found that I gotta trick cloud-init into thinking this is a fresh boot, but I can't understand where I should place my cloud-init file.11:22
Odd_Bloketrueneu: Why do you want to run cloud-init, rather than just running a shell script etc.?11:23
trueneuIt's in a neat cloud config form, and it failed to execute at boot somehow, so I need to re-do it.11:24
openstackgerritClaudiu Popa proposed stackforge/cloud-init: Add an API for loading a data source  https://review.openstack.org/20952011:31
smoserOdd_Bloke, or harlowja or claudiupopa your thoughts on my https://code.launchpad.net/~smoser/cloud-init/trunk.reporting/+merge/266578 (0.7) would be appreciated.12:02
Odd_Blokesmoser: Are registry and reporting copy-paste backports from 2.0?12:07
Odd_Blokesmoser: Oh, no, there's a WebHookHandler in there?12:15
Odd_Blokesmoser: Still don't know why you aren't getting stuff in to 2.0 so we can do a copy-paste backport.12:15
Odd_BlokeRather than doing a copy-paste backport, a change, and then a forward-port.12:15
smosercopy & paste + imports + http://bazaar.launchpad.net/~smoser/cloud-init/trunk.reporting/revision/115512:16
smoserand the webhookhandler.12:16
smoserOdd_Bloke, because of the timeline, is all.12:17
smoserand now that I think about it, I think the code in that one doesn't work.12:20
smoserthe goal of the change there is to re-initialize if different.12:21
smoserbut i think the check there is comparing a dict to a class.12:21
openstackgerritDaniel Watkins proposed stackforge/cloud-init: Fix running cloud-init with no arguments on Python 3.  https://review.openstack.org/21038112:52
Odd_Blokesmoser: claudiupopa: Minor fix to main. ^13:00
claudiupopaWhy doesn't parsed have the func attribute?13:01
smoserbecause it didn't have a subcommand.13:01
Odd_Blokeclaudiupopa: It's a bug in Python 3, I think.13:01
smosermaybe you can set_defaults on func to get it to call help?13:02
Odd_Blokesmoser: That works on Python 3, but not on Python 2.13:10
Odd_Blokesmoser: claudiupopa: So that change gives us consistent behaviour on Python 2 and 3.13:55
Odd_Blokesmoser: claudiupopa: Getting Python 2 to do something different will mean pre-empting the parser, because just parsing the arguments is what throws up the error.13:56
claudiupopaI see.13:57
claudiupopaThen it seems fine to me.13:57
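The no-subcommand problem being discussed can be reproduced minimally. On Python 2, argparse errors out when no subcommand is given; on Python 3, subparsers became optional, so parsing succeeds but `func` is never set. Checking for the missing attribute explicitly (a hedged sketch, not the actual patch under review) gives consistent behaviour on both versions:

```python
# Minimal sketch of the Python 2/3 no-subcommand inconsistency.  The
# 'search' subcommand and its handler are illustrative assumptions.
import argparse


def main(argv):
    parser = argparse.ArgumentParser(prog='cloud-init')
    subparsers = parser.add_subparsers()
    search = subparsers.add_parser('search')
    search.set_defaults(func=lambda args: 'searching')

    args = parser.parse_args(argv)
    if not hasattr(args, 'func'):
        # On Python 3, no subcommand means parsing succeeds but 'func'
        # was never set; fall back to printing usage.
        parser.print_usage()
        return 2
    return args.func(args)
```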
Odd_Blokesmoser: We have several different stages defined in cloudinit.shell, but I thought we were going to be running cloud-init as an agent (which would, presumably, only involve a single call to cloud-init).14:12
Odd_Blokesmoser: claudiupopa: harlowja: I'm trying to work out how to name things; I'm going to work on persisting a discovered data source to disk (so that future runs don't have to perform discovery).  What should I name the data that cloud-init has derived from its environment?14:47
Odd_BlokeIt's not metadata, vendor-data or user-data; those are all inputs.14:48
Odd_BlokeMaybe 'configuration', but that would seem to be more appropriate as the stuff in /etc that defines how cloud-init will run on an instance.14:48
Odd_BlokeAny thoughts?14:48
claudiupopapersisting data source to disk, as in caching?14:48
Odd_Blokeclaudiupopa: So one of the stub commands in cloudinit.shell is 'search', which will 'search available data sources'.14:49
smoserok. Odd_Bloke sorry, didn't respond before14:49
smoserso the stages... there are still stages that have to run in boot14:49
smoserthere might be a daemon that starts very early, and the stages communicate with that daemon. that is a possible implementation.14:50
smoseralso possible is that a daemon just starts later.14:50
smoserbut either way, as far as my vision can see, we'll have upstart or sysvinit or systemd jobs that run at points in boot14:50
smoserthat is what those stages are for.14:50
Odd_BlokeI think making it possible to not run a daemon would be good; I can imagine people who are happy with cloud-init as-is not wanting an extra process running.14:51
smoserwrt storing data, i think 'cache' sounds reasonable14:51
smoseryou'll never have to run the daemon14:52
smosereven if it ran in boot, that'd just be an implementation detail14:52
smoserand then it'd shut itself down.14:52
smoserbut we can worry about that later.14:52
Odd_BlokeI'm not sure it is, strictly speaking, a cache though; some data sources will only be able to fetch information a single time.14:52
Odd_Bloke(For example, CloudStack passwords can only be read once)14:52
claudiupopaso metadata, userdata and vendordata all represent the same thing: input data that's used to drive cloud-init.14:53
claudiupopaHow about drive data?14:53
claudiupopaOr execution data.14:53
Odd_BlokeActually, this is basically what would go in /var/lib/cloud/instance ATM; how about 'instance data'?14:54
claudiupopaYep, that sounds good as well.14:55
=== rangerpbzzzz is now known as rangerpb
Odd_Blokesmoser: Thanks for the info on the commands. :)15:05
Odd_Blokeclaudiupopa: smoser: So, next question: what do we want the data to look like when serialised on-disk?15:06
Odd_Blokeclaudiupopa: smoser: I'm thinking we could persist a dictionary as JSON, but I don't know if we have lessons from 0.7.x that suggest that's a bad idea.15:09
claudiupopawhy should it be a bad idea? I was thinking on JSON as well.15:09
Odd_Blokeclaudiupopa: Well, that's not how we do it in 0.7.x; I wasn't sure if that was intentional or not. :p15:12
smatzekJSON would be nice if there aren't gotchas from 0.7.x that Odd_Bloke refers to.15:13
claudiupopaby the way, is the cache persistent per cloud-init run, or is it always there?15:14
Odd_Blokeclaudiupopa: I would expect it to always be there.15:14
claudiupopabecause some portions of the data shouldn't stay there long-term, such as passwords.15:14
Odd_BlokePotentially the consumers of that data should be responsible for clearing it out?15:15
claudiupopabefore it's serialized on disk?15:16
Odd_BlokeIt would be good to be able to separate the "fetch all the data we need" step from the "use the data" step.15:17
Odd_BlokeNo, I think it would be serialised to disk.15:17
Odd_BlokeAnd then whatever handles passwords removes passwords from the serialised data.15:17
Odd_Bloke(Side note: If someone can read the password from the disk, they're probably already in a position to do whatever they want anyway. :p)15:18
claudiupopathat doesn't seem very good, since it's not separating the concerns properly.15:18
claudiupopaYeah, that's also true.15:19
claudiupopaBut anyway it's harder to read it from memory rather than from disk. ;-)15:19
Odd_BlokeI'm thinking that special-casing passwords isn't particularly useful.15:19
Odd_Blokeclaudiupopa: It's easier to just set it to whatever you want than read it from disk. ;)15:19
Odd_BlokeBecause there could be other private data that shouldn't be persisted long-term.15:20
claudiupopaMaybe having a way to specify that a piece of data should never be serialized?15:20
Odd_Blokeclaudiupopa: That does mean (e.g.) setting passwords in the same process as fetches the password from wherever the password is fetched from15:21
smoseragree with most of what is above.15:22
claudiupopain order to avoid IPC? If the agent is not involved, I would expect it to happen in the same process nevertheless.15:22
smoserJSON I think is fine with me. I used pickle in cloud-init 0.7 largely because it is simpler (it pickled the class).15:22
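The JSON approach being discussed could look something like the sketch below. The path and dictionary layout are assumptions for illustration; the point is that JSON (unlike the pickle used in 0.7.x) stays human-readable and doesn't tie the on-disk format to a particular class:

```python
# Sketch of persisting discovered "instance data" as JSON.  The write is
# atomic (temp file + rename) so a crash mid-write can't leave a
# truncated file behind.  Paths and schema are hypothetical.
import json
import os
import tempfile


def persist_instance_data(data, path):
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    try:
        with os.fdopen(fd, 'w') as f:
            json.dump(data, f, indent=2)
        os.rename(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise


def load_instance_data(path):
    with open(path) as f:
        return json.load(f)
```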
Odd_BlokeWhat if we just deprecate passwords in cloud-init 2.0 (and Ubuntu 16.04 cloud images)? :p15:24
smoserI think we kind of *have* done that15:24
claudiupopawell, on windows they're still somehow required.15:25
Odd_BlokeYou never know, perhaps 2016 will finally be the Year of Windows on the Cloud. ;)15:25
Odd_Blokesmoser: What are your thoughts on persisting passwords to disk?15:25
Odd_BlokeHmm, could we hash the passwords ourselves before putting them on disk?15:27
Odd_Bloke(This is, obviously, special-casing passwords like I said I didn't want to do :p)15:29
claudiupopahow about specific exemption?15:29
claudiupopaHaving a decorator that marks a particular piece of data as non serializable.15:29
Odd_BlokeRight, but that then means that we have to use that data before this particular process dies.15:30
smoserwell, you may need to persist them for some time15:30
smoserwe can do something like hashing; I don't think it's unreasonable.15:30
smoserif the perms on the data are correct, it's sane15:31
smoserand then after we consume it we can remove that data.15:31
smoserit obviously did get written... maybe we'd need to shred15:31
claudiupopaThe same thing happens with hashing: the password will not be available anymore after deserialization.15:32
claudiupopaas in we'll have a hash that can't be used.15:32
Odd_BlokeWhy couldn't it be used?15:32
Odd_BlokeAh, I'm guessing you can't use the hash of a password to set a password on Windows?15:33
claudiupopaNope. ;-)15:33
Odd_Bloke*buys a cheap Windows laptop on eBay, so he can throw it out of the window* :p15:33
Odd_BlokeOK, I think I can implement the first pass as 'serialise all the things' and then we can work out the nuance later.15:37
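The "never serialize this field" exemption floated above could take a shape like the following. The key names and API here are illustrative assumptions only, not a settled design:

```python
# One possible shape for the sensitive-data exemption: keep a set of
# sensitive keys and strip them from the instance data before it is
# written to disk.  Key names and function names are hypothetical.
import json

SENSITIVE_KEYS = {'password', 'chef_validation_key'}


def serializable_view(instance_data):
    """Return a copy of *instance_data* with sensitive keys removed."""
    return {k: v for k, v in instance_data.items()
            if k not in SENSITIVE_KEYS}


def persist(instance_data, path):
    # Only the sanitized view ever reaches the disk; consumers of the
    # sensitive values must use them before the fetching process exits.
    with open(path, 'w') as f:
        json.dump(serializable_view(instance_data), f)
```

Note the trade-off raised in the discussion: anything excluded from serialization has to be consumed in the same process that fetched it.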
smatzekwe still have operators that use password and may want it set.  I'm not defending the practice but it is still done.15:40
smatzekdo we know for sure we'll have separate processes serializing the data vs those that consume it?15:41
Odd_Blokesmatzek: Currently there are two different cloud-init sub-commands defined which would do each bit.15:41
smatzekas stated above I think there may be other cases of private or sensitive data that we may not want sitting around on disk, so the sensitive tag idea might be worth pursuing.15:42
smoserOdd_Bloke, this does go towards a larger thread.15:42
smoserwith the goal of cloud-init query15:43
smoserwhether that hits a daemon or hits a cache, we want user to be able to get some bits of data15:43
smoserand some bits to be privildged access only15:43
smatzekanother item that may be sensitive is the chef module's validation_key which is a private RSA key.  That might be good to delete/shred once the chef module is done running.15:44
Odd_BlokeSo my proposal is (1) we persist all the data to disk, and then (2) individual modules are responsible for shredding whatever data they consider sensitive (and no longer needed).15:47
Odd_BlokeActually, we could have data sources provide a way of fetching passwords.15:49
Odd_BlokeAnd then the modules that care about passwords use that.15:50
Odd_BlokeBut that doesn't solve the case where the password(s) are in user-data.15:50
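The "data sources provide a way of fetching passwords" idea could be sketched as below: the password is fetched lazily by whichever module needs it and never enters the persisted instance data. All class and method names here are hypothetical:

```python
# Sketch: passwords are fetched on demand rather than persisted.  The
# DataSource interface and CloudStackSource internals are assumptions.
import abc


class DataSource(abc.ABC):
    def get_instance_data(self):
        """Data that is safe to persist to disk."""
        return {}

    def fetch_password(self):
        """Fetch the (possibly read-once) password on demand; may be None."""
        return None


class CloudStackSource(DataSource):
    # CloudStack passwords can only be read once (per the discussion),
    # so the fetch is deferred until a module actually needs it.
    def fetch_password(self):
        return self._read_password_server()

    def _read_password_server(self):
        # Hypothetical stand-in for the one-shot password-server read.
        return 'secret-from-password-server'
```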
claudiupopawhy are they two steps?15:51
claudiupopadata retrieval and persistence, and execution?15:51
claudiupopaI think I'm missing context here.15:52
Odd_BlokeI don't see why they would be one step (except for the issue we are discussing now). :p15:52
Odd_BlokeI'm taking my lead from smoser having stubbed out 'search' and 'config' as separate subcommands.15:53
Odd_Bloke'search' need not necessarily encompass actual fetching of the data, I guess.15:54
Odd_BlokeWhich I have been assuming.15:54
Odd_BlokeI guess cloud-provided data can also change in the meantime.15:56
Odd_BlokeSo maybe we shouldn't be persisting much of this stuff at all...15:56
=== zz_natorious is now known as natorious
harlowjahmmmm, Odd_Bloke i suck at naming things :-P17:10
harlowjaput stuff into a little sqlite.db file , profit?17:10
harlowjathere u go17:11
* harlowja is brillant17:11
harlowjahonest question, why not just store it in some /var/cloud/persistence.db or something17:14
harlowjamight be nice to have a little sqlite thing17:14
harlowjai know i know the filesystem is currently used for this17:14
TogerHello, I am trying to use cloud-init on centos7, v0.7.5.  from cloud-init-0.7.5-10.el7.centos.1.x86_64.  I am using it to install chef, however the AMI I have is pre-hardened and has noexec set on /tmp.  The chef init script tries to download and run the installation out of /tmp which fails. The chef script honors tmpdir, so if I can reset the tmpdir environmental variable prior to the chef module then it'll work. Is there a way to do that in17:37
smatzekchef runs during cloud_config_modules.  Looking at that module list I don't see any module where you could run arbitrary commands or scripts before it runs.  You may be able to use bootcmd, which runs in cloud_init_modules, to change the system env so that the process running cloud_config_modules would pick it up, but I'm not sure if that would work.17:42
Togerbootcmds would run in a subshell though, wouldn't it?17:45
Togerso the env change would be lost17:45
TogerI was hoping there was a way in cloudinit natively to set environmental variables for commands17:46
TogerOr, changing       util.subp([tmpf], capture=False)      to       util.subp(['sh', tmpf], capture=False)18:07
=== rangerpb is now known as rangerpbzzzz
TogerFor the chef module, I'd like node_name to be something like 'prefix-$INSTANCEID' as opposed to a static prefix21:24
Togerand not just instance-id21:24
Togeris there any way to do that?21:24
Togerin other words, when using this in an autoscale group I can't use one single node-name w/chef, but it's not very friendly to use just i-a234tg names21:26
Togerso for each autoscale group I'd put something like groupname-instanceid21:26
=== natorious is now known as zz_natorious
Togerthe chef mechanism also needs a way to lay down the encrypted data bag key22:13
Togermm perhaps with write_files22:14
Togerbut it needs a way to at least specify the location22:35
=== hatchetation_ is now known as hatchetation
Togerand chef only seems to run if it's installed via gems?22:52

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!