=== devil is now known as Guest57965
=== Guest57965 is now known as devilz
=== devilz is now known as devil_
=== devil_ is now known as devil__
=== devil__ is now known as devilzz
=== devilzz is now known as devilz
=== cipi is now known as CiPi
=== Lcawte|Away is now known as Lcawte
=== devilz is now known as devil_
=== kickinz1_ is now known as kickinz1
[08:43] Hi All
[08:44] I am looking for an all-in-one OpenStack package
[08:44] can anyone please suggest an .iso?
[09:28] vikram_: given the enormous number of OpenStack components that is a very vague question
[09:29] vikram_: the "standard" way of doing OpenStack on Ubuntu server is via Juju "charms" though. Unfortunately both Juju and OpenStack tend to be amazingly buggy.
[09:44] Walex2: oh
[09:44] Walex2: I just want a basic openstack deployment model
[09:46] i remember one ubuntu package for icehouse was released earlier
[10:31] I'm setting up a home file server and was thinking of using ZFS to mirror the main data disks, but the machine hasn't got ECC memory. Should I still go with ZFS, or would it be better to use mdadm + ext4 or maybe btrfs?
=== dw2 is now known as dw1
[11:36] hello :)
[11:36] i configured sasl on postfix and dovecot, but my client can still use SMTP without logging in.. authentication works, but no login works too.. how can i restrict that?
[11:39] also, i configured mail to use Maildir/, doing MAIL=/home/myuser/Maildir/, but when i open an email with mail from mailutils, they get deleted and saved to mbox.. what's the variable to change that?
[11:50] anyone here?
[11:54] !patience | RickyB98
[11:54] RickyB98: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
[11:55] i'm not repeating your question..
[11:55] my question*
[11:55] "if nobody knows your answer, nobody will answer you"
[11:56] Sorry I can't tune the bot's reply for the exact situation. It's close enough and I hope still helpful.
[11:56] however, someone in #ubuntu told me to come here.. now i was being answered there, should i go back there?
[11:56] going to idle here anyway, not gonna run away after 5 minutes :P
=== cpaelzer is now known as cpaelzer_afk
[12:12] RickyB98: by client do you mean an external machine connecting to your postfix over SMTP and relaying?
[12:13] client as in a mail client, could be thunderbird or anything
[12:13] i'm using mac's default atm
=== cpaelzer_afk is now known as cpaelzer
[12:22] ducasse: I've been running ZFS without ECC for years and never noticed any problems. I'm not really qualified to judge the risk vs non-ECCed mdadm / btrfs, but personally I wouldn't consider it a factor in deciding between them.
[12:23] matt_dupre: OK, thanks. Is it a good choice for a small home fileserver?
[12:37] ducasse: I like it: I actually run on FreeBSD, but I hear ZFS on Linux has come on a long way. I think the biggest weakness is that it can be tricky to add disks later on.
[12:38] matt_dupre: I thought one of its strengths was that it was easy to expand by adding disks, but that might not be true for mirrors - I don't know.
[12:38] (For example, you can't add a disk to a RAIDZ, so you can't just buy an extra disk every couple of years. Big upgrades (e.g. doubling capacity) are usually simpler.)
[12:38] Yeah, mirrors are easier
[12:39] matt_dupre: that's what I will be doing. If I set up 2x3TB now I can expand with two new disks later, right?
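(An aside on the mirror setup being discussed: below is a minimal sketch of the zpool commands such a setup typically involves, wrapped in Python only to keep the code examples in this log in one language. The pool name and device paths are hypothetical, and the "-o ashift=12" spelling is the ZFS-on-Linux form, so check it against your platform's zpool(8) before relying on it.)

    import subprocess

    # Hypothetical pool name and disk paths -- adjust for the actual system.
    POOL = "tank"
    FIRST_MIRROR = ["/dev/disk/by-id/ata-disk0", "/dev/disk/by-id/ata-disk1"]
    SECOND_MIRROR = ["/dev/disk/by-id/ata-disk2", "/dev/disk/by-id/ata-disk3"]

    def create_mirror_pool():
        # ashift=12 matches drives with 4K physical sectors that report
        # 512-byte logical sectors (the ashift discussion further down).
        subprocess.check_call(["zpool", "create", "-o", "ashift=12",
                               POOL, "mirror"] + FIRST_MIRROR)

    def expand_with_second_mirror():
        # The "RAID1 -> RAID10" style expansion discussed below: a second
        # mirror vdev is striped alongside the first one.
        subprocess.check_call(["zpool", "add", POOL, "mirror"] + SECOND_MIRROR)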
[12:39] Yeah, you can go from a RAID1 to a RAID10
[12:41] matt_dupre: OK, thanks. I think I'll go with ZFS because there's a possibility I will move to either FreeBSD or FreeNAS later, and then I should be able to just export/import the pool, according to everything I've read... Besides, I'm not sure if btrfs is really a good alternative yet.
[12:42] Yeah, as long as you take care to create the pool at the minimum version supported across everything.
[12:44] Other bit of creation-time advice is to try to get the right ashift (google it). Your drives are probably 4k sectors internally, but report as being 512 byte.
[12:51] matt_dupre: I know about the ashift thing, all drives use 4K physical, 512 logical, so I'll use 12. Also I think FreeBSD is generally ahead of ZoL in what feature flags are supported, so that should be fine. Thanks a lot for your advice!
=== ztane_ is now known as ztane
[13:14] jamespage, I've been struggling for too long to get the hypothesis test failures sorted out, so I'm going to drop hypothesis from python-cryptography for now. It only affects one test function.
[13:14] coreycb, okay
[13:32] my ubuntu server doesn't boot. I've installed it via debootstrap on lvm and raid, and made sure to configure the network and grub in the chroot. now the server doesn't respond, and back in rescue mode I don't see network interfaces in dmesg. I've tried adding the ixgbe module to /etc/modules but this does not help. https://gist.github.com/sabcio/5a8dea26ecee26af5fa5
[13:46] ok, so i'm booting servers through PXE iSCSI root and using the same initrd.img for all servers. The problem is that i need a unique /etc/iscsi/initiatorname.iscsi in the initrd for each host. Any way to dynamically create it? i've seen ways to do it with sysconfig on a redhat-based system, but no clue for ubuntu/deb
[13:48] wojtczak: you may have to add them to the 'initrd'
[13:48] Walex2: was that meant for me or was someone else talking about initrd before i got in here?
[13:49] Walex2: thx, something to google about
[13:49] ah, guess you were. lol
[14:20] jamespage, can you sponsor my changes for xenial? https://git.launchpad.net/~corey.bryant/ubuntu/+source/python-cryptography/
[14:21] jamespage, it builds successfully
[15:00] magicalChicken, rharper: i tried to write down some of the things i think are next on the list for curtin at
[15:00] https://public.etherpad-mozilla.org/p/curtin-work-201512
[15:01] smoser: Nice
[15:01] smoser: I had started doing some work on separating logic between block and block meta last night
[15:02] smoser: https://code.launchpad.net/~wesley-wiedenmeier/curtin/partitioning-cleanup
[15:04] looking
[15:09] zul, can you sponsor this for xenial? https://git.launchpad.net/~corey.bryant/ubuntu/+source/python-cryptography/
[15:10] coreycb: yeah, lemme finish what i'm doing first
[15:10] zul, sponsoring it
[15:10] :)
[15:11] zul, I dropped hypothesis from python-cryptography for now because its tests don't run and I've had trouble getting them running successfully. It only affects one test function for python-cryptography.
[15:11] magicalChicken, why 'extra_init'?
[15:12] smoser: Ah, yeah, I know that's a little ugly
[15:12] smoser: The idea was that the BlockDevice class could be the parent of Disk as well as Partition
[15:12] well, i agree with that.
[15:12] smoser: And possibly the virtual filesystem layers as well, and I wanted to keep as much common functionality in the super class as possible
[15:13] but super() is the typical way of doing that. no?
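(For context, a rough sketch of what that super()-based layout could look like; the class and attribute names here are illustrative only and are not curtin's actual code.)

    # Illustrative sketch only -- not curtin's real class layout.
    class BlockDevice(object):
        def __init__(self, path=None, config=None):
            # State shared by disks, partitions, and other block devices.
            self.path = path
            self.config = config or {}
            self.synced = False  # e.g. whether udev has settled for this node

        def wipe(self, mode="superblock"):
            # Shared behaviour can live here and be reused by subclasses.
            raise NotImplementedError


    class Disk(BlockDevice):
        def __init__(self, path=None, config=None, partition_table="gpt"):
            # super() runs the shared __init__ instead of an extra_init hook.
            super(Disk, self).__init__(path=path, config=config)
            self.partition_table = partition_table
            self.partitions = []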
[15:13] smoser: Yeah, I guess that would probably be better, it's probably not great to try to just have one __init__
[15:15] and then generally, when you're doing something.. unless you have a reason to merge with a branch from somewhere other than trunk, you should avoid that.
[15:15] so as to just keep a branch doing 1 thing.
[15:15] smoser: Yeah that makes sense
[15:16] smoser: I was thinking I could add a Disk.wipe() function that could call the block.wipe_volume function
[15:16] smoser: But yeah, it would probably have been better to wait to merge that
[15:16] right.
[15:17] the other thing is that we have 2 types of block devices. one that is "to be done" and one that exists now.
[15:17] BlockDevice(dev="/dev/vda")
[15:18] smoser: yeah, that makes sense
[15:18] BlockDevice(config={'id': 'abc', 'type': xx})
[15:18] smoser: This code should handle that okay though, I think
[15:18] smoser: You can create a disk based on the path it already has
[15:19] smoser: maybe a function to determine the difference between a config and what exists would be good
[15:19] coreycb: done
[15:21] magicalChicken, right. essentially a factory
[15:22] smoser: reading
[15:23] smoser: I was also thinking that instead of going through config elements one at a time we could unflatten the config into a hierarchy of BlockDevice instances
[15:23] smoser: And apply the config from the top down
[15:23] i think that is generally necessary, yes
[15:24] smoser: That should be faster, because we could create all the partitions in one go and keep track of state on everything
[15:24] if we have the order, it's also possible to run some of those in parallel
[15:25] no reason I can't create a partition on 5 different devices at the same time either (though that's a performance enhancement for later)
[15:25] i didn't realize we were creating each partition individually.
[15:25] still
[15:25] rharper: yeah, that would be pretty nice
[15:25] yeah, ./split-disk.py /dev/vdb 32
[15:25] is taking a while :)
[15:25] hehe
[15:25] zul, thanks, appreciate it
[15:25] smoser: Yeah, the way the partitions are made right now is kinda slow...
[15:26] rharper, i'm not opposed to parallel, but i'm not happy with the predictability of the whole system as it is. i don't feel a strong need to add more races to it. :)
[15:26] smoser: magicalChicken, also, I think we need to look again at the udevadm settle bits, there are some extra flags like --exit-if-exists=/path/to/file/
[15:27] smoser: for sure; I'm highlighting some opportunity once we switch away from in-order handling
[15:27] rharper: yeah, that would definitely help the current devsync fix
[15:28] rharper: also, refactoring like this would mean that we could keep track of whether or not we've synced with udev for each device
[15:34] hm.
[15:38] Is there any debian-installer instruction that can be passed through a preseed file to tell anna-installer not to check for upgrade paths on the repo mirror?
=== cpaelzer is now known as cpaelzer_afk
[15:43] It's truly bizarre how this is only a problem when deploying new ubuntu machines and not vanilla debian ones...
[15:45] nat0, sorry, can't help.
[15:45] ;_;
=== arges_ is now known as arges
=== cpaelzer_afk is now known as cpaelzer
=== ddellav_ is now known as ddellav
[16:23] rharper, magicalChicken: so, i never recognized this before
[16:24] but there is a device node availability limitation that means more than 16 partitions is odd.
[16:24] ie, /dev/vda16 won't get created, nor will /sys/class/block/vda16 exist
[16:24] yeah, the minor number space, right?
[16:24] right
[16:24] yeah, hence lvm
[16:24] wow, I didn't know that
[16:25] so should we check if the partitioning config has more than 16 parts on a disk?
[16:25] or is that something that we should trust the config generator to do?
[16:26] would be more than lvm, should apply to all block devices
[16:26] such as putting >16 partitions onto a gpt disk
[16:28] http://paste.ubuntu.com/14029368/ <-- some info on it.
[16:28] patdk-wk, right. that's just what i did ^. i put 20 partitions on a gpt disk, and only 15 of the partitions have block devices.
[16:29] patdk-wk: lvm and devicemapper can work around it by allowing new device node space to map to other parts of the disk
[16:29] oh, he meant lvm as a solution :)
[16:29] that's why I mentioned lvm
[16:29] * patdk-wk doesn't like lvm
[16:29] patdk-wk: why not?
[16:30] it goes on top of the devicemapper stack
[16:30] and I have noticed nothing but write latency issues when I use that
[16:30] so what if it works?
[16:30] it seems to buffer my writes for a few seconds
[16:30] causing horrible issues on some of my servers, and on my workstation
[16:30] remove lvm/devicemapper, no more random stalls while it writes
[16:30] patdk-wk: then something is probably bad somewhere else - I'm using it on 150 servers and it works well for me ;)
[16:31] heh? you should know that is not the definition of *it works*
[16:31] the ONLY change was to remove lvm
[16:31] and the problems went away
[16:31] no hardware changes
[16:31] nothing else
[16:32] interesting. even if you worked around this limitation by having dm magically create device nodes named /dev/vdb16 that did what it *should* do, it wouldn't appear the same in /sys/class/block as /dev/vdb15 does. the reader would have to know about dm to understand it.
[16:32] had the issue here on ubuntu 10.04/12.04, and on rhel5
[16:32] patdk-wk: I've never seen 2-3 sec latency
[16:32] royk, there are kernel settings for it
[16:32] patdk-wk: what sorts?
[16:32] they might have changed; I had adjusted them, but never could find ones that seemed to work better
[16:33] patdk-wk: got any docs on this?
[16:33] looking
[16:33] back when I cared to fix this issue, I did :)
[16:33] but it's been a few years
[16:33] patdk-wk: it just seems strange, after all, centos/rhel has been shipping with lvm by default for years
[16:34] yes, and I was having all kinds of issues on a very busy webserver
[16:34] removed that lvm from it, issues went away
[16:35] strange
[16:36] but it seemed to me, if I remember right, it was getting buffered twice into the write buffer or something like that
[16:36] adjusting vm.dirty_writeback_centisecs would help to a point
[16:38] heh, cannot remember the right things
[16:38] but I had the issue on several different machines, and the result was always the same: it would feel like the machine just froze up for several seconds while it dumped a bunch of writes to disk, then go back to normal
[16:39] and it only happened when using devicemapper
[16:50] https://bugs.launchpad.net/curtin/+bug/1526437 <-- magicalChicken rharper
[16:50] Launchpad bug 1526437 in curtin "should refuse to partition disk with more than 15 partitions" [Low,Confirmed]
[16:51] smoser: Makes sense
[16:51] smoser: I can add a check for that in partition_handler sometime today
[16:55] magicalChicken, give yourself a name at $ sudo /tmp/even-partition /dev/vdb 20
[16:55] size=40960M numparts=20 partitions_of=2047M label=gpt dev=/dev/vdb
[16:55] curstart=2048 curend=4194304 part=0
[16:55] curstart=4194304 curend=8386560 part=1
[16:55] curstart=8386560 curend=12578816 part=2
[16:55] curstart=12578816 curend=16771072 part=3
[16:55] curstart=16771072 curend=20963328 part=4
[16:55] curstart=20963328 curend=25155584 part=5
[16:55] curstart=25155584 curend=29347840 part=6
[16:55] curstart=29347840 curend=33540096 part=7
[16:55] curstart=33540096 curend=37732352 part=8
[16:55] curstart=37732352 curend=41924608 part=9
[16:55] curstart=41924608 curend=46116864 part=10
[16:55] curstart=46116864 curend=50309120 part=11
[16:55] curstart=50309120 curend=54501376 part=12
[16:55] curstart=54501376 curend=58693632 part=13
[16:55] curstart=58693632 curend=62885888 part=14
[16:55] curstart=62885888 curend=67078144 part=15
[16:55] curstart=67078144 curend=71270400 part=16
[16:55] curstart=71270400 curend=75462656 part=17
[16:55] curstart=75462656 curend=79654912 part=18
[16:55] curstart=79654912 curend=83884032 part=19
[16:56] created 20 partitions on /dev/vdb
[16:56] $ ls -l /sys/class/block/vdb* -d
[16:56] lrwxrwxrwx 1 root root 0 Dec 10 18:58 /sys/class/block/vdb -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb1 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb1
[16:56] heh
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb10 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb10
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb11 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb11
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb12 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb12
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb13 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb13
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb14 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb14
[16:56] lrwxrwxrwx 1 root root 0 Dec 15 16:25 /sys/class/block/vdb15 -> ../../devices/pci0000:00/0000:00:04.0/virtio2/block/vdb/vdb15
[16:56] lrwxrwxrwx 1 r
[16:56] magicalChicken: I'd wait until we're building the config hierarchy, we'll instantly know then (rather than just checking if the current partition exceeds the limit)
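(A rough illustration of the check discussed for bug 1526437: count partitions per disk in a flat storage config and refuse anything over the device-node limit. The field names follow the general style of curtin storage configs, but this is a sketch, not the real partition_handler code.)

    from collections import Counter

    # Device nodes such as /dev/vdb16 are not created, so cap at 15.
    MAX_PARTITIONS = 15

    def check_partition_counts(storage_config):
        """Raise if any disk in the config carries more than MAX_PARTITIONS."""
        counts = Counter(
            item.get('device')
            for item in storage_config
            if item.get('type') == 'partition'
        )
        for disk_id, count in counts.items():
            if count > MAX_PARTITIONS:
                raise ValueError(
                    "disk '%s' has %d partitions; only %d are supported" %
                    (disk_id, count, MAX_PARTITIONS))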
[16:56] oh crap
[16:56] shame on smoser
[16:56] shame
[16:56] https://public.etherpad-mozilla.org/p/curtin-work-201512
[16:56] rharper: Yeah, that makes sense
[16:56] smoser: lol
[16:58] magicalChicken, so i think the first thing to do is to get "create a set of tests that run multiple storage configs serially"
[16:58] smoser: Yeah, that makes sense
[16:58] with minimal cleanup or improvement involved
[16:59] smoser: Maybe the best thing to do would be to just write a script to do multiple installs in a vm and communicate with it via http or something
[16:59] and then we will heavily rely on those tests to ensure that other things are working well.
[16:59] i think we specifically want to shortcut 'install'
[17:00] smoser: we also talked about running multiple curtin install commands
[17:00] yeah, 90% of the time is spent doing extract/curthooks right now
[17:00] it could be a payload of configs, and running curtin install on each one in sequence
[17:02] rharper, testing support for how subiquity calls curtin is important. i'll agree with that.
[17:02] I think that one platform could be used for both goals though
[17:02] i just want to test storage config specifically.
[17:02] we just need a way to start a vm, then from a test script pass in configs to partition or configs to install
[17:04] i'm not opposed at all to running 'nosetests tests/vm-storage-tests/' inside a vm
[17:04] yeah, that would work pretty well
[17:04] and we could just have a data partition like with vmtests where it could write its results as it runs
[17:05] anyone willing to sanity-check the nginx merge debdiff for me, out of curiosity? Hate to ask, but as I've been working on this since 11:00 yesterday (with an 8 hour break overnight for sleep!)... it could use another check
[17:05] right.
[17:06] I can get started on that now, I can't really find any decent way to identify a disk to udev on Trusty
[17:07] teward, if you pastebinit, i can say "that looks too long for me to review right now". ie, i can take a very cursory look.
[17:07] smoser: would an uploaded-to-launchpad link work? https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1510096/+attachment/4535172/+files/nginx-merge_debian-vs-ubuntu_nginx_1.9.6-2_1.9.6-2ubuntu1.debdiff
[17:07] Launchpad bug 1510096 in nginx (Ubuntu) "Please merge 1.9.6-2 (main) from Debian Unstable (main)" [Wishlist,In progress]
[17:07] SILENCE, bug bot >.<
[17:08] magicalChicken, let's try to separate out the copying of results somewhere from the running of the tests.
[17:08] and even separate the tests by number of disks required or something.
[17:08] smoser: it *should* be fine, the larger merge debdiff from 1.9.3 -> 1.9.6 is included on the bug
[17:08] sure, okay
[17:08] (but that debdiff there compares Debian to the merge itself, to see what's actually changed there)
[17:09] as ideally i'd like to be able to launch a vm somewhere, and then just type: nosetests tests/vm-storage-config
[17:09] and @skipIf the ones that aren't going to run for me.
[17:09] teward, i was afraid you were going to say 'merge' :)
[17:09] smoser: it is a merge :)
[17:10] always so hard to do that.
[17:10] smoser: i've got both debdiffs present, force of habit
[17:10] it builds. it runs. it IDs as the updated software.
[17:10] and because the Sec team mandates it, HTTP/2 is disabled
[17:10] the headache is that it expands the delta
[17:10] have you looked at all at rbasak's process for merges? it's really good, separating out the 'logical ubuntu delta' into a set of patches.
[17:10] https://github.com/basak/ubuntu-git-tools
[17:11] yeah i have, except I've never been able to get a handle on his process...
[17:11] and the 'logical ubuntu delta' is the one i did link here
[17:11] (all the stuff from the previous merges included, plus additional *new* mandates from the sec team)
[17:12] (i.e. the Ubuntu branding, ubuntu-core added, the apport hooks we had to add for Wily+...
[17:12] i should probably put this down for a day and relax :/
[17:12] i can wait for rbasak to once-over it
[17:12] though i can easily upload direct :P
[17:16] teward, you seem to have done a careful job, but i think i don't have time to review at the moment.
[17:23] magicalChicken, do you have a reason that you have used 'partprobe' rather than 'blockdev --rereadpt'?
[17:25] smoser: not really, I assumed they did pretty much the same thing
[17:25] smoser: I know partprobe triggers the udev events for creating the block device
[17:29] yeah. they are definitely different, and partprobe is smarter i think. watch 'udevadm monitor' and run each. partprobe generates a lot more interaction.
[17:29] i think the difference is that partprobe is actually reading the partition table itself and telling the kernel about it, where rrpart is just telling the kernel "hey, reread the table there".
[17:30] smoser: that makes sense
[17:30] smoser: I guess we don't actually need to read the table ourselves then
[17:31] smoser: I think once we stop doing everything in steps and write the whole partition table in one go then we won't really need to do that anymore
[17:33] magicalChicken, well, i just suspect we don't need to do it now.
[17:34] tjat
[17:34] *that's probably true
[17:34] tjat
[17:34] well said
[17:34] lol
=== cpaelzer is now known as cpaelzer_afk
=== CiPi is now known as cipi
=== cpaelzer_afk is now known as cpaelzer
=== cipi is now known as CiPi
=== cpaelzer is now known as cpaelzer_afk
[19:42] magicalChicken, one other thing i'll add to that pad.
[19:43] nicer subprocess execution output log
[19:43] http://paste.ubuntu.com/14025172/
[19:44] smoser, yeah that would be nice to do
[19:48] smoser, we don't need to show all stdout when curtin fails, we could always write stdout to a tmp file, but including it in the logs takes up a lot of space
[19:49] i think we want it in the log. i don't care about space. i just want to easily see the error.
[19:50] okay, yeah. it shouldn't take too much work to get it to print with proper formatting
=== Lcawte is now known as Lcawte|Away
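(To illustrate the "nicer subprocess execution output log" idea at the end: a small sketch of a wrapper that keeps command output in the log so a failure is easy to spot. This is an illustration only, not curtin's actual util.subp; the logger name is arbitrary.)

    import logging
    import subprocess

    LOG = logging.getLogger("curtin-sketch")

    def run_logged(cmd):
        # Run a command, keep its output, and put it in the log on failure
        # so the error is visible without hunting for a tmp file.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        LOG.debug("ran %s (rc=%d)", cmd, proc.returncode)
        if proc.returncode != 0:
            LOG.error("command %s failed (rc=%d)\nstdout:\n%s\nstderr:\n%s",
                      cmd, proc.returncode,
                      out.decode(errors="replace"),
                      err.decode(errors="replace"))
            raise subprocess.CalledProcessError(proc.returncode, cmd,
                                                output=out)
        return out, err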