Nick | Message | Time |
---|---|---|
=== | superm1_ is now known as superm1 | |
=== | plars is now known as plars-away | |
xnox | cjwatson: in partman-crypto we default to not erasing data, yet I don't see a way to preseed that value in an automatic install such that it _does_ erase the disk =/ | 14:20 |
cjwatson | it's "skip_erase" in a recipe - if that's omitted then it'll erase | 14:22 |
cjwatson | that's the intent anyway | 14:23 |
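For illustration, a preseeded expert recipe with the flag set might look like the sketch below. The overall `expert_recipe` syntax follows the standard preseed examples; treating `skip_erase{ }` as an ordinary recipe tag is an assumption based on cjwatson's description, and the recipe name, sizes, and filesystem are made up. Dropping the `skip_erase{ }` line would restore the default of erasing the partition:

```
# Hypothetical recipe: the skip_erase{ } tag is assumed to suppress the
# erase; omit that line and partman-crypto should erase the partition.
d-i partman-auto/expert_recipe string             \
    crypted ::                                    \
        1000 10000 -1 ext4                        \
            method{ format } format{ }            \
            use_filesystem{ } filesystem{ ext4 }  \
            mountpoint{ / }                       \
            skip_erase{ }                         \
        .
```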
cjwatson | antarus: sorry for the delay. the best documentation I suspect is in doc/devel/ in the debian-installer source package. the "multiraid" and "condpart" strings you're referring to aren't really keywords, but are covered by: | 14:25 |
cjwatson | <debconf name> ::= <debconf template>" ::" | 14:26 |
cjwatson | The purpose of <debconf name> is to allow translation of the names of | 14:26 |
cjwatson | the recipes into different languages. | 14:26 |
cjwatson | (I suspect this isn't generally convenient, as it requires injecting a debconf template) | 14:26 |
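As a concrete instance of that form, the stock recipes shipped with partman-auto name themselves after a debconf template, e.g. `partman-auto/text/atomic_scheme` for the one-partition scheme; the partition stanza below is abridged and approximate:

```
partman-auto/text/atomic_scheme ::
        500 10000 1000000000 ext4
                $primary{ } $bootable{ }
                method{ format } format{ }
                use_filesystem{ } filesystem{ ext4 }
                mountpoint{ / }
        .
```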
xnox | cjwatson: hm, looking at ubiquity, it's using partman-auto-crypto, not manual partitioning (which would allow driving/toggling the erase flag). | 14:26 |
cjwatson | erase is still done by partman-crypto though | 14:28 |
xnox | and the default is hard-coded, not based on any debconf question (e.g. a boolean with low priority) | 14:28 |
cjwatson | yeah, but ubiquity can always just prod the partman flag directly - it has support for that kind of thing | 14:28 |
xnox | interesting, I don't think I've done that yet. I'll have a look. | 14:29 |
cjwatson | p.remove_part_entry(part_id, "skip_erase") or some such | 14:31 |
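Roughly what "prodding the flag" means: partman keeps per-partition settings as small files inside each partition's directory under its state tree, so clearing `skip_erase` comes down to deleting that file. A hypothetical Python sketch, not ubiquity's actual implementation (the helper name and path layout are illustrative):

```python
import os

PARTMAN_STATE = "/var/lib/partman"  # partman's state tree (assumed layout)

def remove_part_entry(part_dir, name):
    """Delete a per-partition flag file such as 'skip_erase'.

    part_dir is the partition's directory under the state tree; the
    exact layout here is an assumption for illustration.
    """
    flag = os.path.join(PARTMAN_STATE, part_dir, name)
    if os.path.exists(flag):
        os.unlink(flag)
```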
stgraber | xnox: am I misremembering, or was there some work being done to get sane swap allocation by default? a friend of mine just told me that on a system with 512GB of RAM and a 900GB SSD, an auto install produced a swap partition much larger than /... | 21:24 |
stgraber | since we don't care about suspend-to-disk anyway, we really should cap it at a sane value, say 4GB or so... | 21:25 |
stgraber | xnox: apparently on that box, partman decided that a 540GB swap was appropriate ;) | 21:27 |
CarlFK | I would not expect sane anything on a box with that much ram ;) | 21:40 |
xnox | stgraber: there have been strong demands to "appropriately allocate swap", but nobody has yet come up with a formula. | 21:44 |
xnox | stgraber: the most hilarious combination is when RAM >> "/": we allocate more swap than space for "/" and thus fail the install as we run out of disk space on "/" | 21:45 |
xnox | stgraber: 540GB swap with 512GB RAM sounds like a bug, since it should be ~= 1xRAM, no more than that. | 21:46 |
xnox | stgraber: I find 32GB of swap with 32GB of RAM on a 1TB spinning disk appropriate. | 21:46 |
stgraber | yeah, on large disks it's indeed not a huge deal; on expensive enterprise-grade SSDs it's a bit more annoying :) | 21:48 |
stgraber | I guess swap == RAM with an upper limit for RAM at 32GB and an upper limit for SWAP at 10% of the total space would be vaguely sane | 21:49 |
stgraber | that'd keep the 1 to 1 mapping for most systems but hopefully not kill small SSDs with a large swap and not allocate a completely insane amount of swap on systems with a ton of RAM | 21:50 |
xnox | stgraber: I would have thought the limit would be 5% of total disk space, not 10%. E.g. 4GB of RAM, 4GB of swap with an 80GB disk. | 21:52 |
stgraber | hmm, yeah, 5% should be fine too | 21:54 |
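A minimal sketch of the rule converged on above (swap equal to RAM, capped at 32GB and at 5% of the total disk), purely illustrative:

```python
def swap_size_bytes(ram_bytes, disk_bytes):
    """Swap = RAM, capped at 32 GiB and at 5% of the total disk."""
    GiB = 1 << 30
    return min(ram_bytes, 32 * GiB, disk_bytes * 5 // 100)

# stgraber's 512GB-RAM / 900GB-SSD box: 32 GiB instead of 540GB.
print(swap_size_bytes(512 * (1 << 30), 900 * 10**9) / (1 << 30))  # 32.0
```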
xnox | and I don't care much about VMs / cloud, because they either don't use the d-i installer, or the hypervisor knows how to suspend them. | 21:55 |
xnox | stgraber: whoops, we go for: 96 512 200% at the moment. 200% is in terms of RAM, not total disk space, hm.... | 21:57 |
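Those three numbers are the <minimal size>, <priority>, and <maximal size> fields of the swap stanza in the recipe syntax, so the current default is roughly:

```
96 512 200% linux-swap
        method{ swap } format{ }
        .
```

That is: at least 96MB and at most twice RAM, with the priority field steering how spare space is shared between stanzas, and no cap tied to total disk size, which is how the 540GB above can happen.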
stgraber | xnox: https://docs.google.com/spreadsheets/d/12Roi8YnMm2VDEtnitsr0fjRRnRJfSwXofM6JTAGzSpA/edit#gid=0 | 22:00 |
xnox | stgraber: that does not open for me =/ | 22:00 |
xnox | ah, works now, thanks. | 22:01 |
xnox | stgraber: looks good, I'll send that formula to debian-boot for discussion. | 22:07 |
stgraber | xnox: cool, thanks! | 22:10 |