[09:55] harlowja: Storing a database becomes problematic if there is more than one source of Python modules, surely?
[09:56] harlowja: Consider the case where I activate a --system-site-packages virtualenv for the first time in six months after a bunch of changes to the system Python packages.
[09:56] harlowja: Or the case where I have a Python library in my local directory.
[09:56] Or really any case where I modify PYTHONPATH. :p
=== rangerpbzzzz is now known as rangerpb
=== rtheis_ is now known as rtheis
[16:10] smoser: on my trusty image, the procps service applies the cloud-init ipv6 priv extensions after network config (as designed), and clears existing ipv6 ips. During boot we apply an ipv6 address to an interface (via udev event, call ifup on bond0.108 which sets the ipv6 ip); the sysctl from the procps service runs once the network is up; we switch from the default mode of tempaddr=2 to tempaddr=0, which ends up wiping the ipv6 address that was previously set; note the recent comment on the bug https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1068756 for why this doesn't break in xenial
[16:11] that is a pretty dense one-line irc statement
[16:11] yeah
[16:13] that seems busted.
[16:13] to apply the change (and wipe ip addresses) after they've been configured
[16:13] doesn't it?
[16:13] read the comments on the bug; there's some trusty kernel patch that triggers the wipe
[16:14] yeah
[16:14] so, from utopic on, we dropped it
[16:14] but good ol' trusty keeps on keeping on
[16:15] and here I was cursing at ifupdown when it wasn't its fault
[16:19] i have found this bug before
[16:20] pretty certain once when one of the maas team was doing ipv6 stuff i found it.
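[Editor's note: the "tempaddr" being toggled above is the kernel's IPv6 privacy-extensions knob, `net.ipv6.conf.*.use_tempaddr`. A sketch of what applying it via an early sysctl.d drop-in (rather than waiting for the procps service, as discussed later in the log) might look like; the filename is illustrative, not taken from an actual image:]

```
# /etc/sysctl.d/10-ipv6-privacy.conf  (illustrative path)
# Disable IPv6 privacy (temporary) addresses before interfaces are
# brought up, so a later sysctl run does not flip use_tempaddr and
# wipe already-configured addresses (the trusty behavior described above).
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
```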
[16:21] https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1352255
[16:21] rharper, there, comment 5 would have saved you a couple of days
[16:21] :-(
[16:22] i'm really sorry, rharper
[16:22] heh
[16:22] well, the comment in the sysctl file
[16:22] helped
[16:23] yeah, rharper, so it's even awesomer than you thought
[16:23] smoser: in #lxd, stgraber says that it's a mistake that it was dropped
[16:23] because if you switch the order of the stanzas, it can work i think.
[16:26] right; for some reason if I bring up bond0.208 (only v4) and then the 108 with v4+v6, that worked (without disabling the v6 change in tempaddr)
[16:27] but that's racy; possible if we can get the procps sysctl to re-trigger networking; I don't know...
[16:27] or if we could set the v6 tempaddr to 0 earlier (rather than in the procps service)
=== nacc_ is now known as nacc
[16:52] https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1379427 is mentioned from the other bug.
[16:53] key statement there:
[16:53] "The result is that things that were waiting on 'static-network-up', expecting that would provide them with expected networking are not actually guaranteed anything other than the first stanza for each interface."
[16:53] ugh...
[17:53] Odd_Bloke surely said things are solvable :-P
[17:53] and scanning all the things isn't exactly a good solution :-P
[17:53] least python could do is cache some stuff
[17:53] hash(sys.path) ---> make a file of entrypoints per hash?
[17:54] and use that hash if possible going forward, blah blah
[17:54] sorta like ld.so.cache
[17:58] ya, some crap like that
[18:19] well, but where would it put that?
[18:20] it does seem to sanely cache, but the first pass per python interpreter fills that cache.
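[Editor's note: a minimal sketch of the "hash the search path, cache entry points per hash" idea floated above. All names here are illustrative (this is not a real cloud-init or Python API), and it shares the weakness noted in the chat: installing a package under an unchanged path won't invalidate the cache.]

```python
import hashlib
import json
import os
import tempfile


def _path_key(paths):
    """Stable short hash of the module search path (order matters)."""
    return hashlib.sha256("\0".join(paths).encode()).hexdigest()[:16]


def load_or_build_cache(paths, build, cache_dir=None):
    """Return the entry-point mapping for `paths`, rescanning only when
    the search path changes -- roughly what ld.so.cache does for the
    dynamic linker.  `build` is the expensive full scan."""
    cache_dir = cache_dir or tempfile.gettempdir()
    cache_file = os.path.join(cache_dir, "entrypoints-%s.json" % _path_key(paths))
    if os.path.exists(cache_file):
        with open(cache_file) as fp:
            return json.load(fp)
    data = build(paths)  # first pass per interpreter fills the cache
    with open(cache_file, "w") as fp:
        json.dump(data, fp)
    return data
```

This answers the "where would it put that?" question with a per-user temp directory, which sidesteps the permissions problem that a system-wide cache like /etc/ld.so.cache would raise for unprivileged interpreters.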
[18:20] /sbin/ldconfig is fraught with failure paths too
[18:21] but it mostly just works
[18:22] we're not in new territory, so it's best to dig at previous proposals in this space
[18:22] google has shown me plenty of folks with the idea; not many actual attempts or details
[18:41] can anybody tell me what the "Ssh Authkey Fingerprints" plugin does? is that responsible for installing the public ssh keys for cloud-user (or whatever user you've chosen)?
[18:41] asking because we saw it fail unexpectedly
[18:41] (we're not injecting any ssh keys)
[18:42] I don't think it's a big deal, but I want to understand... and if one plugin fails, will cloud-init continue processing the rest?
[18:51] hm... reading the code, it looks rather like it's actually displaying the keys that were injected in pretty tables
[18:51] not actually injecting keys
[20:54] cn28h, i have seen a race condition in it, i think. but oftentimes you see those failures as a result of something else failing.
[20:55] if you saw it and can reproduce, i'd love to see a /var/log/cloud-init.log
[20:55] * smoser out
=== rangerpb is now known as rangerpbzzzz
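[Editor's note: as read above, the fingerprints module displays rather than installs keys. A hedged sketch of the kind of fingerprint it logs per authorized key (the function name is illustrative; cloud-init's actual module additionally parses authorized_keys and renders a table):]

```python
import base64
import hashlib


def fingerprint(pubkey_b64, hash_name="md5"):
    """Colon-separated hex fingerprint of a base64-encoded public key
    blob -- the classic ssh fingerprint style shown per key."""
    digest = hashlib.new(hash_name, base64.b64decode(pubkey_b64)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

Because this only reads and hashes key material, a failure here cannot break key installation, which matches the observation that the plugin's failure was "not a big deal".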