=== Lcawte is now known as Lcawte|Away
=== InfoTest1 is now known as InfoTest
=== waspinator_ is now known as waspinator
[04:55] zul: ping, pls to comment on bug 1513367
[04:55] bug 1513367 in libvirt (Ubuntu) "qemu-system-x86_64/kvm-spice failed to boot a vm with appmor enabled" [High,New] https://launchpad.net/bugs/1513367
=== cpaelzer_ is now known as cpaelzer
[08:12] NFS client causes high load average with low cpu usage
=== Lcawte|Away is now known as Lcawte
[09:29] High IO-wait?
=== _ruben_ is now known as _ruben
[11:41] Hi!
[11:42] I'd like to back up my server with rsync and I'd like to know which folders I should back up before upgrading the server?
=== Downtime is now known as Uptime
[11:50] zingz0r: I would back up *everything*. Makes a rollback really easy.
[11:51] (well, relatively)
[11:51] zingz0r: see /proc/mounts and the rsync -x option.
[11:51] You'll want to grab every real filesystem on your system, but not the virtual ones.
[11:54] okay
[11:54] thank you
[12:19] zingz0r: fwiw, I tend to clone the entire disk (clonezilla) before making big changes. Or snapshot if virtual
[12:19] less headache than missing that one vital file
[12:21] rsync -aAXv --exclude={"dontneeded folders","1","2"} /* /backup
[12:21] it's okay, isn't it?
[12:44] zingz0r: You could also check out dirvish, it uses rsync underneath.
[13:01] jamespage, can you promote ceilometer from trusty-kilo-proposed? testing is complete.
[13:01] jamespage, can you also promote python-saharaclient from trusty-liberty-staging? testing is complete for that too.
[13:20] :)
[13:26] dw1.xyz
[13:44] coreycb, ack on it now
[13:47] when will Ubuntu get network teaming? I can't find packages for "libteam" or "teamd".. but I can create a team using ip tools..
[13:50] aww ye
[13:55] cpaelzer: in https://git.launchpad.net/~ubuntu-server/dpdk/commit/?h=ubuntu-xenial&id=b5b9a5d95a9ee17fff1642f41c78e112a0aabbc4 why add /usr/bin to the PATH if the goal is to avoid dependency on /usr?
[13:57] AtuM: back in April, apparently. https://launchpad.net/ubuntu/+source/libteam
[13:57] No idea if it works though.
[14:08] I've tried to install it on 14.04.3.. probably need to wait for the next lts
[14:11] Well, 14.04 was released before last April.
[14:12] If you don't want to update to the latest release (understandable for LTS-ness), then a consequence is that you don't generally get to enjoy the latest features.
[14:14] rbasak
[14:14] rbasak, sorry for prematurely pressing enter :-)
[14:15] rbasak, the intention after some discussion was to avoid the same bug showing up again in case one doesn't remember some day and adds a piece to the init script
[14:16] cpaelzer: is there any way we can test directly instead, say using dep8?
[14:16] rbasak, the reason why we did not "just add the path" alone was that others suggested /usr could not be mounted in rare cases
[14:18] rbasak, once we have a way to safely derive all binaries called from a shell script the rest would likely be easy
[14:19] cpaelzer, smb: would it be possible to have a dep8 test that requires virt isolation, checks that there is exactly one mount, restarts the service and checks that there is exactly one mount again?
[14:21] rbasak, maybe...
[14:26] rbasak, but that test would only cover one specific symptom of the underlying issue
[14:26] cpaelzer: what's the underlying issue?
[14:27] rbasak, that referring to /usr binaries needs a PATH set to there, and even if it is, it could sometimes break if /usr is not yet mounted at the time of execution
[14:27] cpaelzer: I disagree. I'd say that the underlying issue is that the service start is supposed to make sure that hugetlbfs is mounted, and it doesn't. That's the functionality expected, so we should test for that.
[14:28] cpaelzer: similarly we should test any other functionality we add in packaging if we can.
[14:28] rbasak, ok from a "function test" POV that's right
[14:28] cpaelzer: it's true that if /usr isn't mounted then that could fail
[14:28] As in false negative
[14:28] As the dep8 test won't be unmounting /usr
[14:29] And I admit that that is a case that probably isn't worth testing as it's too convoluted to test easily.
[14:29] But we should be able to test the basic functionality.
[14:29] rbasak, from your suggestion I'd even say start with no hugepages mountpoint and execute the init script two times
[14:29] it should be there after the first call
[14:29] and it should still be there but only once after the second
[14:29] Yeah that would be fine. It depends on how you orchestrate the test.
[14:30] If you add the package as a test dependency, then the test running framework will already have run the postinst and thus the init script once I think.
[14:30] OTOH you could choose not to list it as a test dependency and install it manually.
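A rough sketch of the DEP-8 test being discussed here, assuming the dpdk package is pulled in as a test dependency, that its init script is reachable as a service named "dpdk", and that debian/tests/control requests isolation-machine; the test actually uploaded may well look different:

    #!/bin/sh
    # Hypothetical debian/tests/hugetlbfs-mount: starting the service should
    # ensure hugetlbfs is mounted, and a restart must not add a second mount.
    set -e

    count_hugetlbfs() {
        # count hugetlbfs entries in /proc/mounts (prints 0 if there are none)
        grep -c ' hugetlbfs ' /proc/mounts || true
    }

    service dpdk restart
    [ "$(count_hugetlbfs)" -eq 1 ] || { echo "expected exactly one hugetlbfs mount"; exit 1; }

    service dpdk restart
    [ "$(count_hugetlbfs)" -eq 1 ] || { echo "hugetlbfs mount missing or duplicated after restart"; exit 1; }

    echo "hugetlbfs mount handling looks OK"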
[14:36] rbasak, to sum things up for the review of smb's upload request - do you want us to add such a test before accepting it?
[14:37] cpaelzer: yes please.
[14:37] bugger
[14:37] Does that hold anything up?
[14:38] coreycb, is the ceilometer update in vivid as well?
[14:38] rbasak, It holds up my getting rid of it for Xenial
[14:38] it holds up smb getting rid of it :-)
[14:38] and depending on how long it takes, the MIR processing
[14:38] but I guess they trust us when we say the dependency gets removed
[14:38] cpaelzer, Also it becomes a little more complicated to properly do now that git is pushed with tags
[14:39] smb: don't worry about the tag. It's nowhere official yet. You can delete the tag with git push --delete
[14:39] cpaelzer, More or less open a new version and create the upload in a way containing both version changelogs
[14:39] rbasak, also in lp git?
[14:39] smb, I really think this can just be another spin of ..ubuntu2
[14:40] smb: even in lp git. It's just a random repo currently, not official anything. Nothing in git is officially tied to packages yet anyway.
[14:40] jamespage, no. arges, can you promote ceilometer from vivid-proposed today?
[14:40] rbasak, for some reason I assumed lp git makes it hard to delete tags
[14:41] coreycb, I normally gate on the main SRU process completing first...
[14:41] rbasak, if that is possible then it might be just a respin
[14:41] jamespage, yep, I'll ping you when it's in vivid-updates
[14:41] smb, cpaelzer: everything else looks fine to upload in the current tree, assuming it all works. I haven't tried a test build to see the result of https://git.launchpad.net/~ubuntu-server/dpdk/commit/?h=ubuntu-xenial&id=0c85a8e0d245f7d0d32999489b088b559c40153e so I'm assuming it's OK too.
[14:41] rbasak, I did build the tree version
[14:42] rbasak, in both xenial and wily
[14:42] smb: it's really easy to delete git tags. So yes you should be able to update the proposed ubuntu2.
[14:42] though I won't backport the font change to wily
[14:42] rbasak, normal git yes, I just was not sure about lp's implementation there
[14:43] lp doesn't seem to object to any kind of force push.
[14:43] It seems to work as if I had a remote ssh server with no surprises.
[14:44] Though it would be nice if I could restrict force push to team admins or something to prevent accidents.
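Concretely, the respin rbasak and smb are talking through would amount to something like the following; "origin" and the tag name are invented for the example, not taken from the actual lp git repository:

    git tag -d ubuntu/2.2.0-0ubuntu2                  # drop the local tag
    git push --delete origin ubuntu/2.2.0-0ubuntu2    # remove it from lp git
    # ...rework the proposed ..ubuntu2 upload, then tag and push again
    git tag ubuntu/2.2.0-0ubuntu2
    git push origin ubuntu/2.2.0-0ubuntu2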
[14:45] rbasak, ok, have not tried. maybe we use a stricter set of rules for the kernel. just remember hearing it being said to be hard. Not tried that either
[14:46] * smb wonders whether cpaelzer would volunteer for the dep8 thing since he did all the discussion on it already (and I am currently tied up in something else)
[14:49] * cpaelzer is willing to start a battle with smb over who is more tied up
[14:49] smb is there an online app for drawing straws?
[14:49] * smb checks the appstore
[14:51] smb it seems it isn't today or tomorrow for either of us - let's discuss monday morning
[14:51] cpaelzer, maybe we can quickly sync on the busy state tomorrow and see
[14:51] or that
[14:51] we can even make remote hangout straw drawing if we want
[14:51] coreycb, saharaclient -> proposed for liberty
[14:52] jamespage, thanks
[14:54] roaksoax: it looks like freeipmi quite badly needs a merge this cycle. I know MAAS has been involved with it. Will this impact you?
[14:54] Or do you want to take on the merge?
[15:00] rbasak: no shouldn't impact me at all
[15:00] roaksoax: OK thanks
=== Piper-Off is now known as Monthrect
[15:16] matsubara: around? I'm looking for the test case reviews I was asked to do but I can't seem to find them. The URLs from the meeting 404.
[15:22] rbasak, they might have been deleted.
[15:22] rbasak, would have to ask psivaa and om26er
[15:24] rbasak, I asked psivaa in #ubuntu-devel.
[15:46] Hey guys, good morning. I'm trying to install a specific version of nginx with 'nginx=version' but I get a bunch of unmet package dependencies; it will always try to install the most recent candidate version for dependencies. Any way of telling apt to grab the necessary versions to meet these dependencies without doing it manually?
[15:51] jge: installing old versions means that you're effectively opting out of security updates and installing a vulnerable deployment. Is that really what you want?
[15:52] jge: I'm not sure how exactly to get apt to do that, but adjusting pinning and scores might be able to achieve it, I'm not sure.
[15:56] rbasak: well, what I do is install the version I want then bring this version up to the latest security version out there.
[15:56] I only do security updates
[15:56] You won't get security updates if you have to force apt around.
[15:57] ^ that
[16:00] hi - in order to get mkhomedir with freeipa-client working in Ubuntu I have to edit the file -> /etc/pam.d/common-session and add the line - session required pam_mkhomedir.so skel=/etc/skel umask=0022
[16:00] this is ok, however is there a danger my change will be overwritten ?
[16:00] (on an update, etc)
[16:00] it's odd though - that file was brought in via the freeipa-client package (or dependency) but dpkg -S /etc/pam.d/common-session shows no package ...
[16:00] dpkg-query: no path found matching pattern /etc/pam.d/common-session
[16:01] why is that ?
[16:01] rbasak: so just to be clear, if I use apt-get install package=version and then try to use unattended-upgrades with only security updates allowed it won't work?
[16:01] same for -> dpkg -S /etc/sssd/sssd.conf
[16:01] dpkg-query: no path found matching pattern /etc/sssd/sssd.conf
[16:01] and by working I mean, will no longer get security updates
[16:01] (these packages are in the default Ubuntu 14.04 repo)
[16:02] jge: I can only say that it may not work. I can't say for certain that it won't. But it isn't a supported path to use anything but the latest version of a package visible to apt.
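For what it's worth, the pinning rbasak hedges about ("might be able to achieve it") would look roughly like this; the package names and version glob are placeholders for whatever jge actually runs, and as noted above it opts the box out of anything newer than the pin:

    sudo tee /etc/apt/preferences.d/nginx-pin <<'EOF'
    Package: nginx nginx-common nginx-core
    Pin: version 1.4.6-1ubuntu3*
    Pin-Priority: 1001
    EOF
    sudo apt-get update
    apt-cache policy nginx    # confirm which version is now the candidate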
[16:02] jge: if you have some reason to use an older version, then we should address that, rather than trying to plaster over it.
[16:03] Yossarianuk: policy says that upgrades should never stomp on changes you make manually in /etc. However there could be a bug in the implementation of course.
[16:04] Yossarianuk: you may need to manually merge changes during an update though, since scripts can't generally automatically work out what you mean and apply that to a newer version of the file.
=== rattking_ is now known as rattking
[16:04] Yossarianuk: not all files in /etc will be known by dpkg. There is default handling, but packages can also generate and manage files themselves in maintainer scripts and in that case dpkg doesn't see them.
[16:05] jge: put another way, security updates bump the version number to one higher than all previously published in a given series. So if there is some reason to have an old version, that is already lower than a future security update.
[16:06] jge: so it makes no logical sense to have an older version and also expect security updates.
[16:06] Security updates are applied on the latest version for a given series.
[16:10] rbasak: hmm ok, so how come I'm seeing this security update on the last version? https://zerobin.net/?a5b3111921fb5a1e#ovjqbTtQT0x62l64nqvirXQEVFVCGcNVGUrkEuIqTY4=
[16:10] ehehehehe
[16:11] jge: trusty-security is the security updates
[16:11] trusty-updates is the 'updates' that happen to fix bugs
[16:11] (non-security in nature)
[16:11] if you want only security updates then you should not have -updates enabled
[16:11] but you will miss bug fixes and other issues
[16:11] * teward would know the nature of that package since he is the 'maintainer' of it in Ubuntu now
[16:11] Yeah I'm aware of this
[16:12] i do not have updates enabled, I use unattended-upgrades with security origins only allowed
[16:13] maybe I'm not explaining myself all that well :D
[16:13] let me try it again..
[16:13] jge: start with explaining why you are installing an older version.
[16:13] rbasak: thanks for the explanation
[16:14] still unsure why the line isn't added by default - it is in the Fedora/rhel packages.
[16:14] I guess backing up the files regularly will be a good plan.
[16:15] Yossarianuk: it's reasonable to expect that installing a PAM module will enable it automatically. I'm not sure that's necessarily a good idea though; it's fraught with danger.
[16:15] Also you might be installing a PAM module for a particular case but not want it in the general case, in which case adjusting common-session would be the wrong thing to do.
[16:16] For example I use libpam-google-authenticator but only with ssh and not common-session.
[16:17] rbasak: I would like to keep a consistent version across all servers, I wouldn't like someone to build a server and just install the latest out there. This would cause different versioning of software across our fleets, so my idea was to install a base version shipped with 14.04 LTS and then bring this version to the latest security version
[16:17] hope that makes better sense :(
[16:17] jge: that's a reasonable thing to want to do.
[16:17] rbasak: cheers again !
[16:17] Yossarianuk: no problem! I hope that was helpful.
[16:18] jge: an easier way might be to install without -updates or -security enabled at all.
[16:18] jge: and *then* enable -security only if you wish.
[16:18] yep that's what I currently do
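A sketch of the sources.list layout rbasak describes - release pocket plus -security, with -updates left out; the mirror URLs are the stock ones and may differ on a given box:

    deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu trusty-security main restricted universe multiverse
    # trusty-updates deliberately not listed, per the discussion above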
[16:18] aside from having to modify that and /etc/sssd/sssd.conf (to add sudo to services) the ipa-client package works fine in the default ubuntu package (in 14.04 at least)
[16:18] So then you shouldn't need apt to force versions?
[16:19] rbasak: thing is that when I build a new box, and use apt-get install nginx, it will always install the latest (candidate) version
[16:19] jge: not if it doesn't have -updates or -security enabled.
[16:19] jge: then it'll install the release pocket version only.
[16:20] jge: when I say "enabled", I mean "visible to apt via sources.list".
[16:22] rbasak: that's what I thought too, I did a fresh install last night and checked the candidate version and it has 3.3 as candidate
[16:22] which is the latest
[16:22] maybe because I did an apt-get update?
[16:22] I don't think you have tuned your sources.list
[16:22] Look at the output of "apt-cache policy" and it'll tell you where it is picking up 3.3 from.
[16:23] 3.3 is in trusty-updates only, therefore you must have it enabled.
[16:23] ^ that
[16:23] (which is what I was saying xD)
[16:24] hmm ok, I see what you're saying
[16:24] let me check
[16:25] though I strongly recommend using the version *in* updates... if only because there's a fairly huge initscript pidfile extraction fix
[16:25] it didn't qualify as a security bug, but it was a fairly huge issue
[16:26] (lots of bugs on it)
[16:26] damnit, I thought during the installation there was a prompt to turn updates off
[16:26] i have them enabled :*
[16:27] jge: there's the problem then :)
[16:28] though keep in mind what I did just say - there's a pidfile extraction fix in the initscript, so if you have complex regex or such in the nginx configurations it can completely fail
[16:28] YESSSS now i see the candidate only coming from trusty-security which is the same version I have in production
[16:31] i'm wondering now why you would use unattended-upgrades with only security updates enabled, when you can just disable regular updates in your sources.list?
[16:31] You might want an attended update from -updates :)
[16:33] that's true.
[16:34] so now that only security updates are allowed, if I run "apt-get upgrade" on this box it will only do security updates correct?
[16:34] or in this case the candidate version from trusty-security
[16:36] right
[16:36] but you won't be able to install from -updates, now
[16:36] even manually
[16:36] (because the system now doesn't realize there are items in that repository)
[16:36] understood
[16:37] rbasak, teward: you guys are great, thanks for your help.
[16:39] No problem.
[16:40] that's what we're here for :)
[16:40] rbasak: FYI: nginx merge stalled, i'm running into package conflicts that are headaches (the fact I have to do it from source packages directly rather than a nice VCS / UDD approach for it is causing headaches)
[16:41] manual pushes later won't be an issue, it's just the initial merge to the 1.9.x branches that're giving headaches :/
[16:41] teward: I use git: http://www.justgohome.co.uk/blog/2014/08/ubuntu-git-merge-workflow.html
[16:41] (for merges)
[16:41] *steals*
[16:42] rbasak: thank you kindly! (bzr != option because the Xenial code branches aren't available... which hampers those of us who use the UDD process)
[17:09] jcastro: ^
[17:19] teward: let me know if you need any help with that
[17:19] teward: the future will be dgit I think. See https://lists.ubuntu.com/archives/ubuntu-devel/2015-November/039010.html
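Going back to jge's setup, the security-only origin filter he mentions would look roughly like this in /etc/apt/apt.conf.d/50unattended-upgrades on trusty; the exact stock contents of that file may differ:

    Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
    //  "${distro_id}:${distro_codename}-updates";    // left commented out on purpose
    };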
=== Monthrect is now known as Piper-Off
[17:29] so now I'm stuck with an ansible playbook which does not support "apt-get upgrade", only aptitude. Someone suggested using the "hold" parameter to achieve the same behavior, but I'm not familiar with aptitude. Anyone know how this can be done?
=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
[23:18] ok im trying to set up an email server..."zimbra" to be exact...i apparently have totally lost all understanding of how dns works?
[23:19] the zimbra server is behind a linux software router using iptables. I have opened ports 110, 25 and 995 and forwarded those ports to the zimbra server.
[23:20] I have pointed my mx record at netsol to my ip address
[23:20] mx has to point to a name
[23:20] then the name points to an ip address
[23:21] when i run the config for zimbra it complains about DNS ERROR - none of the MX records for mail.mydomain.com resolve
[23:21] ok..so if my domain is mydomain.com...and it points to the correct ip..
[23:21] MX records cannot point to an IP. They must point to an A or AAAA record.
[23:21] grendal_prime: DNS entry: mail.mydomain.com A yourip
[23:21] MX points to mail.mydomain.com
[23:21] but note if the IP is dynamic and on a residential provider you may get blacklisted
[23:22] so then mail isn't sent/received
[23:22] (and your ISP may block as well)
[23:22] (almost definitely.)
[23:22] oooo ok im pointing it to mail.mydomain.com and i just have an A record of mydomain.com
[23:22] grendal_prime: yeah, wherever the MX points must resolve
[23:22] for example..
[23:22] no its b2b comcast
[23:22] Also, an MX record cannot point to a CNAME. It MUST be an A record.
[23:22] 'b2b' = ?
[23:22] quantic: or AAAA
[23:22] comcast built 4 business..
[23:23] teward: I figured that was sort of implicit. :P
[23:23] sorry b4b
[23:23] ah
[23:23] quantic: :P
[23:23] if i run a test from http://www.websitepulse.com/help/testtools.mx-lookup-test.html it does resolve...weird
[23:23] grendal_prime: what's the domain name in question?
[23:24] quantic sent to you prvt
[23:25] cause its a secret...just kidding...
[23:25] grendal_prime: current records look OK.
[23:26] ya i just changed them
[23:26] local dns cache may be outdated then
[23:26] now here is the thing, it used to be a gmail hosted domain
[23:27] if i send something to that account now though and i log into it, it never comes through so im assuming that's not working anymore
[23:30] ok
[23:31] so the zimbra install is trying to resolve booksnmore.com but it is apparently unable to do so because it just comes back to say it can't do this.
[23:32] if i log into the box in a second ssh session and ping from there it resolves correctly ..
[23:32] the host name of the email server would need to be "booksnmore.com" correct?
[23:35] so the hosts file would be.... first line 1127.0.0.1 localhost.localdomain localhost second line 192.168.100.100 booksnmore.com
=== IdleOne- is now known as IdleOne
[23:46] grendal_prime: 1127.0.0.1 seems like a rather weird address ;)
[23:46] sorry too many 1s...that is, it's really just 127
[23:47] zimbra gives examples of zimbra.booksnmore.com
[23:47] grendal_prime: do you have a DNS entry for that server?
[23:47] i have one for booksnmore.com
[23:48] grendal_prime: then you should not need a hosts entry
[23:48] RoyK: iirc, zimbra demands that they exist.
[23:48] do i need to create another A record of like...zimbra.booksnmore.com and then point the mx record to that?
[23:48] quantic: yeah
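In zone-file form, quantic's earlier advice comes out roughly as below; the IP is a documentation placeholder and mail.mydomain.com stands in for whichever A record the MX ends up pointing at:

    mydomain.com.        7200  IN  A    203.0.113.10
    mail.mydomain.com.   7200  IN  A    203.0.113.10
    mydomain.com.        7200  IN  MX   10 mail.mydomain.com.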
[23:49] does zimbra demand that forward and reverse lookups need to match?
[23:49] sarnold: no
[23:49] * RoyK uses zimbra without a reverse
[23:49] aha
[23:49] ok RoyK
[23:50] grendal_prime: there may be more help in #zimbra - last I checked, zimbra isn't packaged with ubuntu
[23:50] so if i have booksnmore.com and it resolves for all other services..i should be able to make an MX record for just booksnmore.com correct?
[23:51] booksnmore.com. 7200 IN MX 10 booksnmore.com.
[23:51] looks ok
[23:51] looks ok to me
[23:51] ok
[23:51] what do you use to check that by the way..
[23:52] dig mx yourdomain.com
[23:52] im using a service but i would like to just ping it somehow
[23:52] oh it is dig ok thanks
[23:56] ok so at my router then i need to forward ports...25, 110 to the zimbra server
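A rough sketch of those router-side forwards; eth0 is an assumed WAN interface name, 192.168.100.100 is taken from the hosts-file line above, and 995 is included since it was among the ports opened earlier:

    # on the iptables router, forward SMTP/POP3/POP3S to the zimbra box
    for port in 25 110 995; do
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport "$port" \
            -j DNAT --to-destination 192.168.100.100:"$port"
        iptables -A FORWARD -d 192.168.100.100 -p tcp --dport "$port" -j ACCEPT
    done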