[00:05] tomreyn: ok
[01:03] when using the ubuntu-server ISO, ubiquity is not used, only d-i?
[01:19] Hello! We are running ami-9e158c89 in us-east-1, which is an Ubuntu 14.04 image. We had a very strange issue where three servers running RabbitMQ in a VPC simultaneously started freaking out, claiming they could not write to disk until the kernel killed the processes. When we reviewed this with our Amazon account managers and an actual EC2 developer, we were
[01:19] told to reach out to Canonical and that there was a known bug in the enhanced networking drivers for Ubuntu 14.04. Is there a known bug?
[02:43] rfkrocktk, maybe pastebin some logs
[02:45] I'll try to do that, but I honestly think it was just AWS lying to us about some underlying problem 😈
[02:46] It was more of a general query regarding the enhanced networking driver in 14.04; are you aware of any significant (or otherwise) bug reports concerning it in 14.04 EC2 AMIs?
[02:50] rfkrocktk, if you didn't manually update the driver, yes
[02:50] I have a newer driver in my ppa
[02:51] it's documented from intel
[02:51] can you link to the bug? how long ago was it?
[02:51] the bug is linked to right on the aws ec2 page, where it documents the enhanced networking option
[02:51] trying to find
[02:52] it's not on here: https://help.ubuntu.com/community/EC2StartersGuide
[02:52] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sriov-networking.html
[02:54] thank you patdk-lap
[02:54] that ubuntu page doesn't talk about enhanced networking at all
[02:54] https://launchpad.net/~patrickdk/+archive/ubuntu/production/+sourcepub/4863517/+listing-archive-extra
[02:54] I made it a dkms package, so unlike the aws page's method, you won't have to screw with it again on each kernel update
[02:55] never had any issues, ran some very large and high-traffic mongo servers on it
[02:58] I should probably update my package
[03:12] I can't seem to find the actual bug
[03:12] what?
[03:13] like they mention that there is a bug but they don't clarify what it is
[03:13] unfortunately
[03:13] yes, it's MANY bugs
[03:13] 😬
[03:14] sriov support wasn't very good in versions < 2.14
[03:16] if you are expecting aws to file a bug against ubuntu, you're mistaken
[03:16] you would have to file that bug, aws couldn't care less
[03:16] aws already went and did the work and found that you need >= 2.14 to be stable
[03:17] the version in ubuntu 14.04 is 2.11.3-k
[03:20] I'm just wondering why Canonical hasn't published an updated version of the driver in their EC2 AMI images.
[03:21] If this is a known issue, then it would make sense for canonical to address it by publishing a fix.
[03:21] ask them
[03:22] I wouldn't assume they know about it though
[03:23] https://bugs.launchpad.net/cloud-images/+bug/1254930
[03:23] Launchpad bug 1254930 in cloud-images "AMIs do not have EC2 Enhanced Networking flag set" [Undecided,Confirmed]
[03:25] thank you, this is the bug
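For anyone checking their own instances, the driver version in use can be read from inside the guest. A minimal sketch, assuming the enhanced-networking interface is eth0 (adjust the interface name to match your instance):

    # driver name and version the interface is actually using
    ethtool -i eth0
    # version of the ixgbevf module available to the running kernel
    modinfo ixgbevf | grep -i '^version'

On a stock 14.04 kernel this reports the in-tree 2.11.3-k driver mentioned above; a dkms build such as the one in the PPA linked earlier reports its own, newer version instead.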
[09:00] Good morning
=== disposable3 is now known as disposable2
=== jamespag` is now known as jamespage
[12:13] Hello all
[12:14] o/
[12:14] What is the best strategy for mail server high availability?
[12:15] I have postfix and dovecot running on the same server and I want to create a backup (slave) to take control in case of a master failure
[12:15] having more than one, i guess.
[12:16] Hi chat
[12:16] Genk1: what level of HA are you aiming for?
[12:17] bhuddah, I didn't get the point, can you explain more please? thanks
[12:18] Genk1: it's not necessarily a fail-over setup you want. you can also have active-active setups. it all depends on what your goal is. and your budget.
[12:18] bhuddah, I have a remote cloud environment of 3 VMs
[12:19] bhuddah, my goal is to always have the service UP and I guess an active-active setup is not necessary in my case
[12:19] I don't have huge traffic
[12:19] Genk1: "always" is impossible.
[12:19] bhuddah, Ok let's say 99.99% availability then :)
[12:20] Genk1: it sounds like a simple backup will be enough.
[12:21] Genk1: depending on the size of the mail store you might need some time to restore though.
[12:21] bhuddah, you mean a secondary MX server?
[12:21] Genk1: i just mean a traditional data backup. regular and tested.
[12:21] bhuddah, OK, what about the cost of a system failure? do I have to operate manually?
[12:22] bhuddah, hmm
[12:22] as long as your downtime is shorter than a couple of days you won't lose any mail. so you just gotta make sure that you can restore quickly enough. (in a couple of hours)
[12:22] bhuddah, you mean to simply back up files and be able to bring up a server quickly?
[12:23] Genk1: you can get quite quick with that if you train it regularly.
[12:23] bhuddah, I see, but the problem is that the operators need to answer mails as fast as possible
[12:24] bhuddah, the corporate activity depends heavily on the email system
[12:25] the system will fail. sooner or later.
[12:26] a good single system will last years and years before you have unscheduled downtime.
[12:27] bhuddah, OK I see
[12:27] what if I want to go with an MX backup?
[12:27] having 2 servers operating, and if the master fails the secondary server takes control?
[12:28] of course you can grow your system to multiple mx servers
[12:28] cluster operation is necessarily a lot more complex than single server systems.
[12:29] bhuddah, you're absolutely right
[12:29] but what can you suggest for a multiple mx setup?
[12:29] it's a trade off where you might gain little and have a lot more risk to handle.
[12:30] i'd run with multiple active MXs then.
[12:30] especially if I have 2 systems to put in HA (Postfix and dovecot)
[12:30] they can throw mail into a centralized backend storage pool.
[12:30] hmm
[12:31] and users access that storage pool via the dovecot server(s)
[12:39] bhuddah, perfect, thank you a lot
[12:40] Genk1: good luck.
[12:41] bhuddah, Ah! one last question please. how about the storage pool? what can you suggest for a cloud environment? using Gluster, NFS.. for example?
[12:41] usually whatever you already have for storage.
[12:42] bhuddah, hmm I don't think that our hosting provider has a lot of things to offer in that area
[12:42] some might just use a NAS. others might have a larger SAN storage.
[12:42] bhuddah, what about rsync?
[12:43] no. it must be real-time. in that case.
[12:44] bhuddah, wow OK that's the difficult point then
[12:45] HA systems are complicated.
[12:45] bhuddah, true, and they cost a lot
[12:45] you can calculate how much a day or two of downtime costs.
[12:46] bhuddah, but I don't see the need for real-time stuff? I think that 1 min or more is tolerable in our case
[12:46] and then you know what you can invest to mitigate that.
[12:46] bhuddah, yes true
[12:46] Genk1: the point isn't the speed but the shared locking, because there are multiple paths through the system.
[12:47] bhuddah, you're right
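As a rough illustration of the multiple-active-MX idea bhuddah describes above (a sketch only; the names and addresses are placeholders, not anything from this channel): two MX records with equal preference let sending servers pick either host, and mail for an unreachable host is retried against the other.

    ; zone file fragment for example.com
    example.com.   IN  MX  10  mx1.example.com.
    example.com.   IN  MX  10  mx2.example.com.
    mx1            IN  A   192.0.2.10
    mx2            IN  A   192.0.2.11

Both MX hosts would then deliver into the shared storage pool that dovecot serves, which is where the real complexity (shared locking, real-time replication) lives.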
[12:48] bhuddah? your cheap vps provider will last years and years before you have any kind of outage? not true
[12:49] I don't know why you want to get all fancy attempting to make this HA
[12:49] coreycb: zul: hi, the qemu-triggered nova test on ppc64el failed again - did you happen to find what it really is?
[12:49] just use simple dovecot built-in HA
[12:49] cpaelzer: no i wasn't able to reproduce it
[12:49] http://wiki.dovecot.org/Replication
[12:49] patdk-lap: you get what you pay for. certainly. so the cheapo vps provider will fail earlier :)
[12:50] cpaelzer, zul: well for our failing deployment which hit a similar issue, it was due to needing a newer version of seabios backported to the cloud archive
[12:52] cpaelzer: i was thinking of getting back on a ppc64el machine and running autopkgtest
[12:56] zul: ok, the seabios in zesty is pretty new (4 weeks)
[12:57] zul: your access to the machine from last week should still be good
[12:57] zul: please let me know if I can help to resolve it
[12:58] k
[13:40] coreycb: I'm going to start on the rc1 candidates but not upload them
[13:41] zul, ok
[13:52] tomreyn: fyi, 'ufw disable' is good enough
[14:38] Help! i can not send mail from one pc to another which are on the same network!
[14:39] i only see the mail inside the sender pc /var/mail....lab1
[14:42] the sender pc is using exim4 and the receiver pc is using postfix
[14:42] helpp!!!!
[14:44] anyone in here to help!
[14:48] Help! i can not send mail from one pc to another which are on the same network!
[14:48] i only see the mail inside the sender pc /var/mail....lab1
[14:48] Helppp!
[15:22] Help! i can not send mail from one pc to another which are on the same network!
[15:23] i only see the mail inside the sender pc /var/mail....lab1
[15:37] !patience | jemoo
[15:37] jemoo: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
[16:16] i am installing webmin on ubuntu server.. but when i tried lsb_release -a it is showing me "No LSB modules are available. | Distributor ID: Debian | Description: Debian GNU/Linux 8.6 (jessie) | Release: 8.6"
[16:17] FYI: https://blog.sucuri.net/2017/02/content-injection-vulnerability-wordpress-rest-api.html
[16:23] !webmin
[16:23] webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
[16:24] also that
[18:12] I have a firewall on ubuntu... have set policy rules and zones... for an app on ubuntu itself, does it use loc or fw?
[18:13] what firewall software are you using?
[18:13] but normally anything on the machine itself is fw
[18:13] ok
[18:13] so what is 'loc' for?
[18:14] no idea, what did you configure loc as?
[18:14] loc = local ?
[18:14] if I make a very broad guess, loc might mean local, and stand for anything coming from the local network
[18:15] ah ofc 'facepalm'
[18:15] was thinking local = machine itself
[18:16] I don't use local in any of my firewall configs
[18:39] hi, trying to preseed some boxes where the OS disk already has some stuff on it. No matter what I try I keep being prompted about what to do with my disk
[18:39] I would like to simply tell the installer to nuke whatever is there and install as if it was a blank drive, ignoring all partitions
[18:40] anybody who has had that problem and has a working config?
[18:45] drab: Use the partitioning recipe section in the sample preseed file to go by. It has the other options given as well to automatically proceed and so on. https://help.ubuntu.com/lts/installation-guide/example-preseed.txt
[18:47] drab: The relevant section says "# This makes partman automatically partition without confirmation, provided
[18:47] # that you told it what to do using one of the methods above." with the d-i options to use below
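The relevant knobs from that example-preseed file look roughly like the following (a sketch, not the verbatim file; /dev/sda, the lvm method and the atomic recipe are placeholders to adapt, and the commented early_command line is the optional nuke-the-disk trick discussed a bit further down):

    d-i partman-auto/disk string /dev/sda
    d-i partman-auto/method string lvm
    # wipe any existing LVM or RAID metadata instead of asking
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-md/device_remove_md boolean true
    d-i partman-lvm/confirm boolean true
    d-i partman-lvm/confirm_nooverwrite boolean true
    d-i partman-auto/choose_recipe select atomic
    # answer the "write changes to disk?" prompts automatically
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    # heavy-handed fallback if autodetection still trips over old metadata:
    # d-i partman/early_command string dd if=/dev/zero of=/dev/sda bs=1M count=10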
[18:53] genii: yeah I already have all of that, and it works on a blank drive, but not on a drive which, for example, windows had been installed on
[18:53] or another version of ubuntu for that matter
[18:56] I've seen some people having similar problems if the drive in question had lvm on it and the autodetect would find the volumes and try to reuse them despite the options in the preseed
[18:57] some of those folks seem to have sort of abused the d-i early_command to delete the VGs and delete the MBR
[18:58] drab: I had an automatic install system with preseed before. Unfortunately I do not currently have access to the preseed options that were used. But when for instance it stalled I would examine the output of console 4 for what kind of input it was expecting, then alter the preseed accordingly
[19:11] genii: how do you check? I guess I'll test that later, I thought I cycled through all the terminals and don't remember a way to see exactly what questions it was asking
[19:11] if that was possible that'd be great
[19:13] With the server install it gives you 4 terminals, tty0 is the default you see, tty1 and tty2 you can use to gain a commandline, tty4 is where you can see output like what commands are currently being executed to produce what's on the first terminal
[19:13] tty3, rather
[19:23] genii: ok, thanks, I'll try to look at that output and see if I can recognize a question. Is there an obvious link between what shows on screen and a preseed option?
[19:24] drab: It should actually be showing you something like the actual d-i command which is currently running
[19:48] coreycb: updating openstack cruft in universe
[20:02] dannf: hey, testing out the smbios parameters in qemu-system-aarch64; can you test passing in '-smbios type=1,manufacturer="Foobar"' and then in the booted image see if this shows up in /sys/class/dmi/id/* ?
[20:09] rharper: checking..
[20:13] rharper: $ sudo grep -ir Foobar /sys/class/dmi
[20:13] $
[20:14] modprobe sysfs_dmi ?
[20:14] also, dmidecode
[20:15] I was on an arm64 cloud (beisner had one) which had /sys/class/dmi/* populated, Xenial image IIRC
[20:15] rharper: it is populated
[20:15] rharper: there just isn't any file that contains that string
[20:15] ok, that was what I saw as well
[20:15] so smbios on qemu aarch64 isn't working
[20:15] =(
[20:15] but it should be =)
[20:15] rharper: however, iirc, ARM may rely on a newer version of the spec
[20:15] was going to file a bug and have someone look at fixing qemu
[20:15] maybe type needs to be updated?
[20:15] not sure
[20:16] but it's been in qemu for almost 2 years
[20:16] lemme dig up a bug...
[20:16] cool
[20:16] dannf: the goal here is to have openstack nova pass the OpenStack Nova product name into the guest so cloud-init can know it's on an OpenStack cloud and do the right thing with datasources
[20:20] rharper: i don't think the bug i was looking at is relevant. yeah, doesn't seem to work.
[20:20] ok
[20:20] it's likely regressed; I suspect that some things work
[20:20] for example, -uuid still works
[20:20] 1:2.6.1+dfsg-0ubuntu8
[20:20] but other stuff doesn't
[20:48] dannf: if you file a new bug, can you add me to it? or do you want me to file one right now?
[20:51] rharper: i'd say go for it, but feel free to subscribe me in case upstream needs a quick test
[20:52] dannf: ok
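For context, the test being discussed looks roughly like this (a sketch with placeholder values; the image name and remaining boot options are elided, and "Foobar"/"OpenStack Nova" are just example strings): qemu's -smbios option sets SMBIOS type 1 (System Information) fields, and on a platform where it works they surface in the guest under /sys/class/dmi/id/.

    # host side: set the system manufacturer/product strings
    qemu-system-aarch64 -machine virt -m 1024 \
        -smbios type=1,manufacturer=Foobar,product='OpenStack Nova' \
        -drive file=guest.img,format=qcow2 ...

    # guest side: where those strings should show up
    cat /sys/class/dmi/id/sys_vendor      # expected: Foobar
    cat /sys/class/dmi/id/product_name    # expected: OpenStack Nova
    sudo dmidecode -s system-manufacturer

Bug 1662345, filed below, covers the aarch64 case where these strings never appear in the guest even though, as noted above, -uuid still works.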
=== Grapes is now known as Guest63940
[21:32] anyone into lacp/bonding ?
[21:32] just wondering what mode I should choose
=== Guest63940 is now known as Gr8pes
[22:06] dannf: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1662345
[22:06] Launchpad bug 1662345 in qemu (Ubuntu) "smbios parameter settings not visible in guest" [Undecided,New]
[22:08] rharper: cool
[22:14] blueking: fwiw I've just put a quad port nic into my machine and will get to that question shortly
[22:15] doesn't it boil down to the question, why do you want it? redundancy or throughput?
[22:33] throughput...
[22:41] still, afaik it's not as easy as x $num_of_nics
[22:41] that's not how LACP works
[22:41] I've been reading around and to achieve that sort of multiplier ppl seem to have done weird shenanigans with vlans etc
[22:45] really? I haven't seen any vlan shenanigans
[22:45] but if you've only got two computers involved and use only say two tcp connections between them, there's a 50% chance both connections will be sent over the same NIC..
[22:46] lacp doesn't give you throughput, only redundancy
[22:47] to get throughput with lacp requires a LOT of clients
[22:47] if you want throughput, you need to use roundrobin, not lacp, and switches don't like roundrobin
[22:47] they don't? oh :/
[22:50] I've always been under the impression there were three types available: active/passive, hash-based, and round-robin, and I've always had the impression that round robin was more expensive than hash, so no one used RR...
[22:52] I was aware that switches didn't like RR tho, but yeah, that was my impression too, lacp in the end doesn't really give you tput
[22:53] especially not for a single connection, which is what most people think of when wanting to use bonding
[22:53] ie cp a large file over nfs or something
[22:53] s/was aware/wasn't aware/
[22:54] yeah, "but it could do two of those at once" is often little solace when you're waiting forever for a file copy to finish :)
[22:58] sarnold, there are like 6 or 7 types
[22:58] rr is the best, but it only works on DIRECT links, server to server
[22:58] I use it for my HA links
[22:59] active/backup is fine if you just need simple failover and have simple switches or something
[22:59] lacp (hash) works well if you have a switch that does lacp also, but getting > single port speed is not a goal of lacp
[23:00] now, the other two, tlb and alb, were made to get > single port speeds, but they require the switch and the client machines to behave with it
[23:00] tlb normally works, and does so by sending packets out multiple links in a round-robin type way, but receiving only on a single link
[23:01] o_O
[23:01] that sounds crazy
[23:01] the issue is, it uses multiple mac addresses to send, and some clients find that confusing (mac based auth checks)
[23:01] heh
[23:01] so while it worked great for *normal* things
[23:01] I could not log in to my network switch using that link
[23:01] cause it would verify the source mac was the same as the one the user logged in on
[23:02] hah
[23:02] that even sounds like a good idea on the face of it..
[23:02] alb takes it a step further, and spoofs the arp to the clients to balance incoming traffic
[23:02] all this sounds like compelling reasons to just buy nicer hardware
[23:03] lacp can load balance from the hash, sure
[23:03] but it's VERY hard to maintain that balance and to balance it, unless you have a LOT of clients
[23:03] so for a home, lacp won't do crap for you
[23:03] unless you just want a more advanced active/backup
[23:03] how does it help with backup?
[23:04] does it automatically re-do the hashing alg if a link goes down?
[23:04] yes
[23:05] alright that's friendly enough
[23:05] as long as you don't set up static lacp, static lacp uses any active port, whether it's plugged into a lacp-configured thing or not
[23:05] dynamic lacp will use what is configured on the other side for lacp only
[23:05] so if you plug your laptop into a server's lacp-configured port by accident, everything doesn't go nuts
[23:06] but then you're trusting lacp to dtrt -- does it? :)
[23:06] it should, it's simple
[23:06] if not, your switch has issues
[23:06] yay
[23:06] hehe
[23:06] reminds me of my netgear switch, that sent broadcast packets across every vlan
[23:07] which returns to "buy nicer hardware"
[23:07] "you asked for broadcast"
[23:07] but I marked a vlan tag on it, not ALL vlans
[23:07] that caused some fun tcpdumps
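If it helps to see what the modes above look like in practice, here is a minimal ifupdown sketch for an 802.3ad (LACP) bond on Ubuntu with the ifenslave package installed; the interface names and address are placeholders, and the bond-mode line is the bit you would swap for balance-rr (direct server-to-server links only), active-backup, balance-tlb or balance-alb as discussed:

    # /etc/network/interfaces fragment
    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        # 802.3ad requires LACP to be configured on the switch side
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast
        bond-xmit-hash-policy layer3+4

    # check the negotiated state after bringing the bond up
    cat /proc/net/bonding/bond0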
[23:14] I just closed a server image but I want it to appear mostly unused, is there any semi-automated way to remove all of the log files?
[23:14] cloned*
[23:15] Pinkamena_D: try this on something unimportant first: for f in /var/log/* ; do > $f ; done
[23:17] so that looks like it would just truncate all of the files under /var/log ... does it do subdirectories too?
[23:17] no, just those files
[23:17] you could add /var/log/*/* if you wanted files in the subdirs
[23:18] I guess that should be good enough
[23:18] thanks!
=== jerrcs- is now known as jerrcs
=== lfrlucas_ is now known as lfrlucas
=== magicalChicken_ is now known as magicalChicken
=== v12aml_ is now known as v12aml
=== not_phunyguy is now known as phunyguy
=== arlen_ is now known as arlen
=== Dmitrii-Sh_ is now known as Dmitrii-Sh
=== petevg_ is now known as petevg
=== fyxim_ is now known as fyxim
=== wolsen_ is now known as wolsen
=== cargonza_ is now known as cargonza
=== AndyWojo_ is now known as AndyWojo
=== DalekSec_ is now known as DalekSec
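The glob-based one-liner above only empties files at the top level of /var/log; a sketch that also covers rotated logs and subdirectories, assuming GNU coreutils and findutils (and, as sarnold says, worth trying on something unimportant first):

    # delete rotated/compressed logs, then empty whatever remains
    sudo find /var/log -type f \( -name '*.gz' -o -name '*.[0-9]' \) -delete
    sudo find /var/log -type f -exec truncate -s 0 {} +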