=== markthomas is now known as markthomas|away
=== zz_DenBeiren is now known as DenBeiren
=== alai is now known as alai-vacation
=== stiv2k_ is now known as stiv2k
[05:38] Hello, what is the proper method to allow tcpdump in apparmor when it is in enforcing mode?
[05:39] I tried aa-genprof, but that doesn't work. I'm still getting permission denied when it goes back in enforcing mode.
[06:16] nilesh: what does the DENIED message in the logs look like?
[06:16] grep DENIED /var/log/syslog
[06:16] or
[06:17] dmesg | grep DENIED
[06:30] jjohansen: it's denying all kinds of sockets
[06:30] and this happens for mysqld, tcpdump, dhclient
[06:30] currently I've put all of them in complain mode
[06:31] even checked apparmor_parser, and the final profile file does contain the relevant declarations (net_raw, inet, etc)
[06:31] nilesh: right but what are the messages, can you pastebin them? I could then tell you what rules you need to add
[06:31] one second
[06:34] jjohansen: here https://gist.github.com/anonymous/17aed11fde3276b44449
[06:38] nilesh: what release are you running? And did you replace the profile?
[06:38] The profile looks like it has the necessary rules
[06:41] it's natty with 3.2 kernel... (this is used for some custom software, so we are unable to upgrade to a supported ubuntu version right now)
[06:41] the work is in progress
[06:42] nilesh: okay, so this seems to be familiar, I'll see if I can dig up a bug.
[06:43] * nilesh fingers crossed
[06:49] nilesh: hrmm, I didn't find it, would you be willing to try a newer compiler?
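For reference, the "relevant declarations (net_raw, inet, etc)" being discussed normally look like this inside an AppArmor profile. This is a generic sketch of a usr.sbin.tcpdump fragment, not nilesh's actual profile:

```
# Sketch of typical network rules in /etc/apparmor.d/usr.sbin.tcpdump.
# The capability/network lines are standard AppArmor syntax; the exact
# set needed depends on the DENIED messages in the log.
/usr/sbin/tcpdump {
  capability net_raw,
  capability setuid,
  capability setgid,
  network raw,
  network packet,
}
```

After editing a profile, it has to be reloaded (e.g. with apparmor_parser -r, as suggested below in the log) before the new rules take effect.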
[06:50] nilesh: actually just try
[06:50] sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump
[06:50] first and check that the profile was properly reloaded
[07:04] nilesh: okay its looking like I can't build a natty package in the ppa anymore, I will have to see if I can't setup locally to do it
[07:07] nilesh: the other thing you could paste me is the output of
[07:07] apparmor_parser -QT -S /etc/apparmor.d/usr.sbin.tcpdump | hexdump -C
[07:11] nilesh: I am going to step away for a few hours and will look at this more when I get back
[08:07] jjohansen: https://nileshgr.com/tcpdump_apparmor_hexdump
=== Lcawte|Away is now known as Lcawte
=== Lcawte is now known as Lcawte|Away
[08:53] Good morning
=== Lcawte|Away is now known as Lcawte
[09:35] jodh, morning
[09:35] jamespage: hi!
[09:36] jodh, so.... is doko the only foundations team person who deals with python? I have a core language problem I need resolving in 12.04
[09:36] but I'm not 100% comfortable doing the update myself
[09:36] https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/1081022
[09:36] Launchpad bug 1081022 in python2.7 "logging.SysLogHandler doesn't close UNIX socket when connection failed" [Undecided,Fix released]
[09:39] jamespage: not just doko - we have barry of course. But he's on vac now.
[09:39] hmm I can't find doko on irc either
[09:42] jamespage: I think doko should be around later. If not, then slangasek/stgraber might be the best bet.
[09:55] jodh, ack - I'll ping doko later
[09:55] I'm happy todo the SRU - just wanted a expert review of my patch pre-upload
[10:00] Am I right in thinking that the ordering of dependencies in /etc/init.d/.depend.* isn't significant (i.e. a: b c is equivalent to a: c b)?
[10:04] Hi! Would you suggest using the ubuntu LTS server image as a workstation base to get rid of ubuntu-specific GUI applications?
[10:04] For Android and Chromium development.
[10:04] Or rather the desktop version and manually uninstall all unrequired software?
[10:06] qis: I'd use the server iso or the mini iso, indeed.
[10:07] lordievader: Thanks.
[10:07] I'll go with the server iso since I need a postgresql database anyway.
[10:17] hi all. im in a process of purchasing a new hp proliant dl380p server, and hp is pushing me to get a HP 512MB P-series Smart Array Flash Backed Write Cache for caching most accessed files from the raid array on to the OS ssd. is there a software/free way of doing this without loosing much performance?
[10:19] from what i understand, its possible to achieve this using LVM with 1 vg and 2 lvs - 1 for "slow" raid and 1 for the SSD raid array...
[10:19] but is it as good as using the smart array option=
[10:23] jadesoturi: zfs + l2arc on ssd
[10:23] sheptard, ok. zfs as filesystem for the OS then ?
[10:23] have to googel l2arc. :P heh e
[10:24] it's the caching part of zfs
[10:25] ok. and this would give me the same results as using that smart cache controller?
[10:25] well
[10:25] and i am still able to use LVM ?
[10:25] if you want something simple that just works, go with what you were thinking and the 'smart' cache controller
[10:25] cause im thinking of setting up a encrypted LVM on top of the raid(will be handeled by the built in raid controller)
[10:26] ok.. l2arc is complicated setup ?
[10:26] zfs is different
=== Lcawte is now known as Lcawte|Away
[10:27] l2arc is part of zfs
[10:27] I'd just go your route
[10:27] and haggle HP down on the price of the controller
[10:27] ok. never used ZFS, so know little about it.. :)
[10:28] this is a "Barebone" server that we will be expanding later on. but performance is crucial, so if a little more work gets me the extra horsepower im willing to do it.:P hehe
[10:28] if performance is critical, why are you encrypting
[10:29] I'm trying to use ubuntu-server as a proxy with iptables, I'm using these rules but it doesn't seem to be working https://gist.github.com/jameshilliard/45d4bcf840533116bf07
[10:30] well, as i understand, that does not affect performance, since its decrypted on boot, or does it actually encrypt all the time. safety is important since it will be handling personal client information..
[10:31] jadesoturi: you dont really want to use any raid card with ZFS as it messes with it, so i suggest using the cache, though the 512MB seems very small
[10:31] yeah, thats why i wanted to use a part of the os ssd for that instead..
[10:31] Eggs_, so software raid would be better then ?
[10:32] jadesoturi: if you're going to spend all that money on a hw raid controller, might as well use it as a hw raid controller
[10:33] well. the HW raid controller is part of the proliant dl380p chassie.. so cant really opt out it, but i can choose not to use it if l2arc is safer/better..
[10:33] jadesoturi: with the raid controller you will have more headaches than its worth on ZFS
[10:33] that smart array cache is just an addon..
[10:33] Eggs_, ok..
[10:33] jadesoturi: does it have battery backup at least?
[10:34] so how does one secury one self from drive failure on ZFS ?
[10:34] sheptard, the smart array controller or the built in raid controller?
[10:34] can't you set the controller to JBOD?
[10:34] mardraum, dont have it in hand yet, so not sure, but most likely..
[10:35] jadesoturi: you choose a raid level you are comfortable with in terms of redundancy
[10:35] jadesoturi: ZFS is designed to be used with direct access to the disks for fault tollerance, you `scrub` the disks and it checks the checksums against a table and corrects any bit rot
[10:37] is there a mail server that I can use with ubuntu-server. I was looking at postfix but that requires creation of ubuntu accounts per each email account. I'd rather keep the two seperate
[10:38] ok. so what happens when a drive fails then? dataloss? how is redundancy handeled?
[10:38] the hp raid "flash-backed" raid controller is a raid controller with 512MB write-back RAM cache
[10:38] the flash comes in as the fallback when power is lost, then it gets flushed to flash
[10:39] (only need small caps instead of the big batteries of old raid controllers with write-back cache)
[10:39] it is not accessable as a disk from the OS for zil or so
[10:40] ok.. thanks for the info:) so the "caching part is handeled by the raid controller then, without ubuntu ever knowing about it then?
[10:41] i just want a safe and redundant solution wthout having to cash out all of my money.
[10:44] jadesoturi: you can set drives as hotswappable, if a drive fails the new one picks up, or you can manullly replace the drive without rebooting, very similar to what raid controller does but its all software based
[10:45] ok.. so it is a software raid with a different name then:) ehhe
[10:45] how many drives can fail before there will be dataloss?
[10:45] jadesoturi: the reality is though, since you are ponying up the cash for a decent server, it does have decent hardware RAID
[10:45] jadesoturi: depends on your RAID level, every time
[10:46] and this is coming from a big ZFS guy (me, though not on ubuntu).
[10:46] jadesoturi: you can set up RAID5,6,10 etc
[10:46] jadesoturi: i use ZFS as a large store, 20TB
[10:47] ~140'ish TB here
[10:47] ok. let me get this straight.. if i use ZFS, then i should NOT use the built in raid, but use software raid instead? (ref. your comment about ZFS being picky with hw raid)? is this correct?
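The ZFS layout being discussed (disks presented as JBOD, two-disk redundancy like RAID6, an SSD partition as L2ARC read cache) would be created roughly like this. All device and pool names here are placeholders, not taken from the conversation:

```
# Hypothetical sketch: a raidz2 pool (survives two disk failures, the
# ZFS analogue of RAID6) across six JBOD disks, plus an SSD partition
# added as L2ARC read cache.
zpool create tank raidz2 sdb sdc sdd sde sdf sdg
zpool add tank cache sda4
zpool status tank
# the periodic integrity check mentioned above:
zpool scrub tank
```

A failed disk would be swapped with `zpool replace tank <old> <new>`, which is the software equivalent of the hardware-RAID rebuild being compared against.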
[10:48] jadesoturi: ZFS is your entire RAID solution, you present the disks to it as JBOD
[10:48] we are going for raid 6 to begin with, and over to raid 60 later, as it has better write performance( just cant afford all the drives to begin with).. starting with 3tb and the plan is to extend to 15TB by q3 2015..
[10:48] you don't sound like you know much about this or are willing to research (no offense). If you don't want to get fired, go with the HP RAID, which is perfectly good.
[10:49] mardraum, thanks you, exactly what i was wondering about..
[10:49] well. i know enough about raid, nut not zfs,, got a little confused here..
[10:50] I would suggest get this into production and in your spare time play with ZFS and understand it more
[10:50] and i am willing to do the research, this being part of it.. always best to talk to people who deal with it on a regular basis, isnt it?
[10:50] jadesoturi: zfs has only recently become native to linux, it used to be fuse and its much better now, its defintatly worth a test
[10:51] most likely is what im gonna do - start off with hd raid and lvm cache, and move one to ZFS once i know more about it.. :)
=== Lcawte|Away is now known as Lcawte
[10:52] Eggs_, i will play with it a little and see. thank you so much for the help.. just one last question: the LVM cache feature - anyone used it or have any performance info on it? tried google but cant seem to find much in terms of comparison.
[10:53] jadesoturi: i guess any kind of caching system depends entirly on the data your accessing, what is it used for?
[10:54] document archive with php processing, pdfs, docx, jpegs etc..
[10:55] i know i can use dmcache, bcache etc also, but not sure witch one is best suited for the setup we are doing here..
[10:56] anyways. thank you for great answers.. gonne stick around in here:) but have to run to a meeting.. be back later to bug you guys some more if its ok ;)
=== MeltedDed is now known as MeltedLux
[11:25] checked up on it, the built in controller does not support JBOD, solution is single raid0 drives, but that does really work in the long run... looks like raid+lvm it is untill i get a LSI sas controller and learn zfs some more,, shame.. i liked what i read about it...
[11:29] Is there a guide how to install cloud9 on a local ubuntu 14.04 server?
[11:29] Yes, I tried googling for it. A lot of noise!
[11:31] Hmm, I'll just make a VM snapshot and try the instructions on https://github.com/ajaxorg/cloud9/
[11:33] hi
[11:33] someone can help me plz?
[11:33] i've reinstalled dhcp-server, but /etc/dhcp is empty.
[11:33] i need to configure dhcp.conf in order to customize my net
[11:34] but folder is empty :(
[11:40] noone can help me?
[11:42] http://ubuntuserverguide.com/2012/06/how-install-dhcp-server-on-ubuntu-server.html
[11:42] Is there a real ubuntu documentation?
[11:42] Like FreeBSDs handbook.
[11:43] Or can I just use the debian documentation for server configuration and management?
[11:43] At least the pre-systemd one.
[11:45] qis: i dont think so, ubuntu messes with the networking a lot so its not going to be the same as debian
[11:45] qis: but i dont really know more than that sorry
[11:45] Ok.
[11:47] qis, if you have allready checked the documentation on help.ubuntu.com then forums/google is your best bet i guess
[11:48] jadesoturi: Forums have a long response time. Google and forum search usually return a lot of noise in form of outdated or inept answers.
[11:49] valid point.. but guess thats the reality of the internets now-a-days.
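The file the dhcp-server question above is looking for is /etc/dhcp/dhcpd.conf (on Ubuntu, purging and reinstalling isc-dhcp-server normally restores the default one). A minimal sketch, with an entirely made-up subnet:

```
# Minimal /etc/dhcp/dhcpd.conf sketch. The 192.168.1.0/24 values are
# placeholder examples, not taken from the conversation.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 8.8.8.8;
}
```

After editing, the service is restarted with `service isc-dhcp-server restart`.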
=== Bilge- is now known as Bilge
=== Lcawte is now known as Lcawte|Away
=== a1berto_ is now known as a1berto
[13:54] jamespage: https://bugs.launchpad.net/cinder/+bug/1403068
[13:54] Launchpad bug 1403068 in cinder "Tests fail with python 2.7.9" [Undecided,New]
[13:54] \o/
[13:56] jamespage: same with neutron
=== Noskcaj_ is now known as Noskcaj
=== yokel_ is now known as yokel
=== ValicekB_ is now known as ValicekB
[15:12] what filesystem would you recommend for a relativly large raid6+lvm setup ? 3-4tb to begin with and will be growing uo to 15-20tb during the next year. files are mostly pictures and docx/pds with normal size.
[15:13] im thinking btrfs or XFS, (ZFS will be considered later, not supported on current hardware due to lack of JBOD) or will a regualar ext4 do ?
[15:14] * patdk-wk just uses ext4
[15:14] due to lack of xfs knowledge
[15:14] I have ran ext4 systems upto 26tb
[15:14] btrfs, heh, has not been good to me
[15:14] hehe ok:) isnt the limit 16TB on ext4 ?
[15:14] many years ago
[15:14] okok..
[15:15] well iknow btrfs might be a little unsafe, but supposted to perform very nice, right? and is heading to replace ext4 as the "main" fs in distros?
[15:16] that can be very much debated :)
[15:16] sure every thing someone makes they would love to be the default
[15:16] thing is every file that will be written will first be preprocessed in php, do this might be a bit CPU intensive need a FS that is not so resource hungry, but still able to perform nicely:)
[15:16] but the btrfs teams priories are not inline with that goal :)
[15:16] eheh that is true, just read some noise on the internets about it..
[15:17] yes, they dream about that
[15:17] but they fail to *do the dirty work* to make btrfs usable enough for that goal
[15:17] and instead put effort into pretty things
[15:17] ok. so what you are saying is that btrfs is a nogo in a production system ?
[15:17] personally? use btrfs on data you don't care about
[15:18] that leaves xfs and ext4 - where xfs is the best thing for big files, but how does it go with files from 1 to maybe 25mb ?
[15:18] cause thats the regular size of the files for that server..
[15:18] why is xfs best for big files?
[15:18] last benchmark I did, ext4 and xfs for large files where the same
[15:19] apperntly redhat did some test etc and measured very nice performance on 200mb+ filesizes..
[15:19] except when deleting LOTS of LARGE files, xfs would be craploads faster
[15:19] but ext4, heh, most of that was fixed
[15:19] hmm ok. so from what you are saying, not much to gain from xfs ?
[15:19] there are gains with xfs
[15:19] but there are issues too
[15:20] so you have to pick what your willing to deal with
[15:20] for me, it's lack of knowledge to fix it, when it goes wrong
[15:20] rbasak: howdy
[15:20] xfs plays better with raid
[15:20] ppetraki, really ? this will be a raid6/60 setup with lvm on top and ssds in raid 1 for the os...
[15:21] rbasak: know much cloud-init? got any ideas what's going on here? https://bugs.launchpad.net/maas/+bug/1402861
[15:21] Launchpad bug 1402861 in maas "cloud-config-url ignored, install fails" [Undecided,New]
[15:21] rbasak: i think cloud-init is timing out trying to reach cloud-config-url and is silently failing, but i'm not sure how to prove it
[15:21] jadesoturi, https://raid.wiki.kernel.org/index.php/RAID_setup#XFS
[15:21] thank you. ill have a look :)
[15:22] you'll want a wide stripe, like 256K.. and your SSDs, you should over provision them by 20% to give the garbage collector headroom e.g. only allocate 80% of the disk
[15:25] ok... this will be a hw raid, using the builtin p420i smart array controller from hp... but should i also use the xfs on the os drives? these drives will just run the os and be used for caching from the raid array.
[15:26] jadesoturi, that would come down to preference, ext4 would be fine for OS.
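Aligning XFS to the RAID geometry, as the linked RAID wiki page covers, is done with the su/sw options to mkfs.xfs. A sketch for the setup above, where the 256K stripe unit comes from the discussion but the device name and the six-disk RAID6 layout (hence four data-bearing disks) are assumptions:

```
# Hypothetical example: XFS aligned to a hardware RAID6 stripe.
# su = stripe unit written to each disk, sw = number of data-bearing
# disks (6 drives in RAID6 leaves 4 carrying data).
mkfs.xfs -d su=256k,sw=4 /dev/sdb1
```

Matching su/sw to the controller's actual stripe settings matters more than the specific numbers; mismatched alignment costs read-modify-write cycles on parity RAID.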
[15:27] jadesoturi, you should also just test it with fio, see how the array behaves wrt that filesystem
[15:27] jadesoturi, RAID60 is a little overkill IMHO, customers tend to put off spare replacements...
[15:29] well. my boss wants safety, so this gives him 2 drives fail before shit goes down... and this being a big array and all..
[15:29] also, i read it has better write performance then raid 6
[15:29] what size disks?
[15:29] jadesoturi, you could install a cache to get better read performance
[15:29] 900gb SAS 10k drives..
[15:29] jadesoturi, like rapid disk
[15:30] ppetraki, im going to go for lvm caching to the ssd drives..
[15:30] hp wants me to buy smart cache though.. but its so pricy..
[15:30] what is rapid disk ?
[15:30] what are you optimizing for?
[15:30] writes? reads?
[15:31] and what kinds of writes/reads do you have?
[15:31] and what size is your working set?
[15:31] well. preferebly both...
[15:31] you can't have both :)
[15:31] jadesoturi, http://www.rapiddisk.org/
[15:31] unless both are equal, then it's not optimized :)
[15:32] hehe. appertly zfs has both if i understand it correctly, but thats another story..
[15:32] jadesoturi, it's a writethrough mem cache, so all your reads will come from memory
[15:32] normally, I only ever optimize for writes, as my servers only do writes
[15:32] reads are all cached in ram
[15:32] zfs optimizes HEAVY on writes
[15:32] jadesoturi, so... you won't need the 0 in RAID 60
[15:32] and causes read hell
[15:32] this is why zfs needs lots of ram
[15:33] * ppetraki can't stand zfs, can't wait to get a new hd in laptop to give it the boot
[15:33] * ppetraki correction btrfs :)
[15:33] * patdk-wk hugs his zfs systems
[15:33] hehe so split opinions on this matter:P
[15:33] I'm sure it works, btrfs, not so much
[15:34] well, the issue with zfs, and btrfs, is it optimizes writes, cause it's cow
[15:34] so all writes are APPENDED to the end of the disk
[15:34] nice stream writes, from your random writes
[15:34] so it's fast
[15:34] well. the system will be running a php/mysql web app where users will be uploading files(mostly jpegs and some pdfs) and doing some revieing of this files and editing the db.. so i guess we need to optimize for writes..
[15:34] the problem is, when you read that data back in, it becomes purely random reads
[15:35] this will cause HUGE performance issues, if you attempt to do a *streaming read* backup job
[15:35] jadesoturi, so back to parity, yeah, RAID5/6 whatever, just make sure you actually have hotspares. RAID 0+ anything increases your liability because if you lose a member, your data is gone. With the failure rate of drives these days it's easy to lose both your parity drives in RAID6 in the span of a few days
=== bilde2910|away is now known as bilde2910
[15:36] hmm ok. so you saying go for raid6 instead of 60 ? also, still better to have 2 drive failover then 1.....
[15:37] you should never use raid5 :)
[15:37] yeah. sort of got that - got a long as email from dell the other day where they officially recommended to move away from raid5 .. lol
[15:37] raid6 + several hot spares + read cache like rapiddisk or equiv
[15:38] and test test test with fio :)
[15:38] ok. can i use lvm cache(dmcache) instead of rapiddisk ? since rapiddisk cache to dram right? and we are a little low on our specs to start with..
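The "lvm cache" asked about here (dm-cache driven through LVM) is set up roughly as follows. The volume group, LV names, and SSD partition are all made up for illustration:

```
# Hypothetical lvmcache sketch: attach an SSD-backed cache pool to an
# existing LV sitting on the slow RAID array. Names (vg0, data,
# /dev/sda4) are placeholders.
lvcreate -n cache0 -L 100G vg0 /dev/sda4       # cache data LV on the SSD
lvcreate -n cache0meta -L 1G vg0 /dev/sda4     # cache metadata LV
lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

The cached LV keeps its name (vg0/data), so the filesystem on top is unaffected; the cache can later be detached with `lvconvert --uncache vg0/data`.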
[15:39] I dunno about hotspares
[15:39] it's better to make them a parity disk than a spare if you can
[15:40] so if you went raid6 + 3 spare, I would do raid60 +1spare instead
[15:40] sure you can use the lvm version. Though you might want to add more RAM, especially if you're using SSDs. it's cheaper than NAND flash that's for sure and last longer :)
[15:41] only lasts as long as the power does :)
[15:41] well. we will be starting with 6 drives, need 4 to build the array,, so 2 can go for spare... power is on UPS and emergency generator, we are strting with 32gb and planing to up it to 128 during next year..
[15:41] well, I just assume people have the power thing figured out by now...
[15:42] ppetraki, you were talking about raid0+ something being not safe as if you loose a member your out of luck, but isnt thats why raid60 is nice? cause it adds that "safety" option..
[15:43] allowing for 2 drives to fail before your out of luck ?
=== bilde2910 is now known as bilde2910|away
[15:44] anyways.. got to run now, be back tomorrow or later tonight. cots to catch that bus. thanks for great insight guys:) i think we will go for XFS on the raid array and ext4 on the OS drives with raid6 to start with and then move on to 60 once we get some more drives inside:)
[15:47] jadesoturi, *if* you keep up the RAID6 yeah, it works. but memory is cheaper than disk and you're not caching the world
[15:48] ppetraki, your forgetting about a huge issue though
[15:48] one large raid6 is fine and good
[15:48] * ppetraki little distracted by work
[15:48] except when 1 disk is bad
[15:48] then your performance goes down to 20% what it was
[15:48] till the rebuild completes
[15:48] patdk-wk, that's what hotspares are for
[15:48] no
[15:48] it has to rebuild
[15:48] TILL it is rebuilt, your screwed
[15:48] well, sure, it took a fault
[15:49] doing a raid60, will mitigate that, only 70%
[15:49] it's all about context and the availability of the application, if you need something that's nonstop then buy an equalogic
[15:49] cause half of it, is stil lfull speed
[15:49] sure, with twice the disks
[15:49] SSDs aren't cheap
[15:49] equalogic?
[15:49] you have a valid point
[15:51] ours kept going down, about 3times a year, they are cold storage mode now, we don't even power them up anymore
[15:51] anything that needs good reliability and speed we put onto pure systems
[15:52] 3years now, and 0 failures from over a dozen of those
[15:52] I don't know what's up with your deployment. I work with a storage dev formally from EQL so that sort of uptime comes as a surprise to me
[15:52] or well, several failures, but nothing service impacting, and rarely performance impacting
[15:53] it was just a unit connecting via 8g fc, to a vmware cluster
[15:53] just had problem after problem after problem
[15:54] or rather, it worked fine, till stressed, then would fall over
[15:54] I don't doubt your experience, it just strikes me as odd.
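The repeated "test test test with fio" advice above can be made concrete with a small fio job file. This is a sketch for the mixed upload/review workload described (jpegs and pdfs in the 1-25 MB range); the directory, sizes, and mix ratio are illustrative assumptions, not from the discussion:

```
# Hypothetical fio job sketch for a mixed read/write test on the array.
# Run with: fio randrw.fio
[randrw]
directory=/mnt/array
size=4g
bs=256k
rw=randrw
rwmixread=70
ioengine=libaio
direct=1
iodepth=16
numjobs=4
group_reporting
```

Running the same job against ext4 and XFS on the same array is the comparison patdk-wk and ppetraki are suggesting; direct=1 bypasses the page cache so the array itself is what gets measured.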
[15:55] the whole thing seemed odd, have some others without issues
[15:55] but they never get stressed
[15:55] and the unit that was having issues, without production data now, runs fine
[15:55] but distrust has been built
[16:00] it is odd, usually they work hard to rectify stuff like that
[16:03] jhobbs: sorry, otp since you pinged
=== bilde2910|away is now known as bilde2910
=== Guest29894 is now known as hxm
=== DenBeiren is now known as zz_DenBeiren
=== Lcawte|Away is now known as Lcawte
=== markthomas|away is now known as markthomas
[17:00] guys, i wrote a upstart script and when I call it with sudo service my-service start it does not back to prompt, it keeps running forever. How can i made it back to prompt?
[17:03] &
[17:04] i try it without success :(
[17:04] i also try to use nohup with &
=== genii is now known as ChristmasPresent
=== ChristmasPresent is now known as genii
=== martinst is now known as martins-afk
=== martins-afk is now known as martinst
=== martinst is now known as martins-afk
=== genii is now known as ChristmasPresent
=== ChristmasPresent is now known as genii
=== genii is now known as ChristmasPresent
=== ChristmasPresent is now known as genii
[19:46] my pptp client is adding a route rule before it connects, how can I disable that feature
[19:47] HellMind: Most likely it add the route after it connects, doesnt it?
[19:48] bekks: no, thats the default route, before it adds some route to not loss connectivity to the server, here is an example: 190.210.182.172 dev lo scope link src 192.168.0.1
[19:48] 190.210.182.172 is the pptp server, 192.168.0.1 is the client box ip,
[19:49] A route using the loopback interface?!
[19:54] yes, it seems if for avoid using the new defaultroute to connect to the pptp server
[19:54] ugly but it does that
[19:56] HellMind: Could you pastebin netstat -rn please?
[19:57] http://pastebin.com/uLRiQznM
[19:57] bekks
[19:57] HellMind: You are using xen as well.
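The hang described at 17:00 is typically an upstart job whose `expect` stanza does not match how the process actually forks, so `service my-service start` blocks waiting for a pid transition that never happens. A sketch of a plausible /etc/init/my-service.conf (the job name and command are placeholders, and this diagnosis is an assumption since the actual job file was never shown):

```
# Hypothetical /etc/init/my-service.conf sketch.
# 'expect' must match the process's forking behaviour: omit it for a
# process that stays in the foreground, use 'expect fork' for a single
# fork, 'expect daemon' for a double fork. A wrong 'expect' makes
# 'service my-service start' hang tracking the wrong pid.
description "my-service (placeholder)"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/local/bin/my-service --foreground
```

With no `expect` stanza and a foreground `exec`, upstart itself does the supervision and the `service ... start` command returns immediately, so `&` and nohup are unnecessary.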
[19:58] yes
[19:58] HellMind: And that route you are talking about is a host route, not a defaultroute.
[19:58] yep, the pptp server ip
[19:58] when I do pon thatpptp it adds it
[19:58] if I remove it it got added again
[19:59] But thats no defaultroute.
[19:59] isnt added using ip.up ip.down cuz it never executes those
[19:59] the default route is 192.168.0.1 which is the same ip as the server
[19:59] I use ip rule (marks) to choose routes
[20:00] But you dont have a defaultroute, no matter which tool you use.
[20:00] Which?
[20:06] ?
=== fridaynext is now known as golftdi
=== golftdi is now known as fridaynext
=== markthomas is now known as markthomas|away
=== martins-afk is now known as martinst
[20:51] Hi, i've a problem with apt-get on kernel ... someoune could "save me"?
[20:52] linux-generic : Depends: linux-headers-generic (= 3.2.0.69.82) but 3.2.0.74.88 is to be installed
[20:54] redfox: perhaps your mirror has an inconsistent set of packages; it'd be a little bit of work to add new servers for e.g. germany-based servers, but it is likely to let you continue
=== _thumper_ is now known as thumper
[20:55] there was an error on /boot partition, was too little, so I deleted old kernel (not the one that is "incriminated")
[20:55] I made autoremove options and after made so
[20:56] sarnold: what do you mean?
[20:57] redfox: well, your /etc/apt/sources.list will probably include something like it.archive.ubuntu.com, right? if you add similar lines for de.archive.ubuntu.com, you'll download package lists from two mirror networks, and there's a good chance the other mirror will be updated by now
[20:58] redfox: ... hmm, is the real problem that you're out of drive space to install them? I thought it gave different errors for that..
[20:59] Does a bind9 dns server use alot of hardware resources on a network with ~ 3 pc's
[20:59] Just to set up some A records on my network
[21:00] for*
[21:00] sarnold: I made the change on source, without success, the errors are: dpkg: dependency problems prevent configuration of linux-generic:
[21:00] linux-generic depends on linux-image-generic (= 3.2.0.69.82); however:
[21:00] Package linux-image-generic is not configured yet.
[21:00] linux-generic depends on linux-headers-generic (= 3.2.0.69.82); however:
[21:00] Version of linux-headers-generic on system is 3.2.0.74.88.
[21:01] redfox: did you re-run apt-get update first?
[21:01] sure
[21:01] kevindf: feel free to use the weakest, oldest machine you own for that job
[21:01] redfox: hmm. maybe you've got worse problems then :/
[21:02] I know .... any suggest to reinstall old modules ?
[21:02] redfox: how much free space do you have on /boot ?
[21:03] sarnold: That's kind of the problem :) I'm using weak old machines, one of them is running a Zabbix, webmin, apache, mysql server and other one apache, teamspeak, openvpn server
[21:03] 236M 187M 37M 84% /boot
[21:03] df
[21:03] i wonder if i could add a dns server up to that on one of the servers
[21:03] sarnold: this is output on apt-get dist-upgrade
[21:03] The following packages have unmet dependencies.
[21:03] linux-generic : Depends: linux-headers-generic (= 3.2.0.69.82) but 3.2.0.74.88 is installed
[21:03] linux-image-generic : Depends: linux-image-3.2.0-69-generic but it is not installed
[21:04] redfox: yeah, cleaning some older kernels might help; you can apt-get purge linux-headers-generic and linux-generic, apt-get purge older kernels, then apt-get install linux-generic and linux-headers-generic again (to make sure you keep getting security updates)
[21:05] sarnold: the problem is that apt-get before want to solve conflict
[21:05] redfox: that's why I like to remove the packages that introduce the specific version requirements..
[21:06] any ideas?
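For kevindf's use case above (a handful of A records for ~3 PCs), bind9 is indeed very light. A minimal zone file sketch, with entirely made-up names and addresses:

```
; Hypothetical zone file, e.g. /etc/bind/db.home.lan, referenced from a
; zone block in named.conf.local. All names and addresses are placeholders.
$TTL    604800
@       IN      SOA     ns.home.lan. admin.home.lan. (
                        1         ; serial
                        604800    ; refresh
                        86400     ; retry
                        2419200   ; expire
                        604800 )  ; negative cache TTL
@       IN      NS      ns.home.lan.
ns      IN      A       192.168.0.2
pc1     IN      A       192.168.0.10
pc2     IN      A       192.168.0.11
```

For three hosts, /etc/hosts entries or the dnsmasq package would be even lighter-weight alternatives.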
[21:09] redfox: any luck with apt-get purge linux-generic linux-headers-generic ?
[21:10] sarnold no it wants to install before linux eneric .. .that has unmet dependance
[21:10] That might be painful if you don't reinstall linux-image-generic immediately again afterwards
[21:10] genii: indeed, that's the last bit of advice from earlier :) hehe
[21:10] it don't want to install
[21:11] there's a way to "tell" to ignore the depends?
[21:11] only for this 2 package for now?
[21:12] redfox: not easily; purging packages until apt is content with its database is often the easiest way out
[21:13] but I can't purge it
[21:13] apt-get purge linux.... can't execute before it don't solve the pgrade problems
=== markthomas|away is now known as markthomas
[21:14] redfox: can you pastebin the results from the most recent apt-get purge linux-generic linux-headers-generic attempt?
[21:15] http://pastebin.com/jq7qDY5E
[21:16] this is apt-get -f install
[21:16] http://pastebin.com/WHWYC0ve
[21:17] redfox: okay, now try this: apt-get purge linux-generic linux-headers-generic linux-image-generic
[21:17] redfox: it'll probably throw another error message..
[21:18] NO ERROR
[21:18] now I'll reinstall it?
[21:18] apt-get install linux-generic linux-headers-generic linux-image-generic
[21:19] sarnold you save my life :)
[21:19] redfox: now check out dpkg -l 'linux*' | grep ii -- prune that list as needed
[21:20] sarnold: what you mean?
[21:21] redfox: chances are good you've got four to six kernels installed; I suggest deleting all but two kernels. keep the kernel you're running, keep the newest kernel, and if they're the same, keep one more :)
[21:21] sarnold: http://pastebin.com/aCHtqUcQ
[21:22] redfox: and what does uname -r show?
[21:22] 3.2.0-37-generic
[21:22] I didnt reboot
[21:22] it's been a while, hehe :)
[21:23] yes
[21:25] sarnold: so I can do easily: apt-get remove linux-headers-3.2.0-68 linux-headers-3.2.0-68-generic linux-image-3.2.0-68-generic
[21:25] redfox: yeah, and keep going; I think this ought to do what you want: http://paste.ubuntu.com/9544128/
[21:27] sarnold: Wonderful, now I have only 2 kerner. 74 last and 37. should I reboot and purge 37 too?
[21:28] redfox: not yet
[21:29] redfox: reinstall the metapackages, first, apt-get install linux-generic linux-headers-generic linux-image-generic
[21:29] for all say .... is already the newest version.
[21:30] redfox: okay, I -think- you're put back together again :)
[21:30] fiuuu let's try to reboot?
[21:31] redfox: don't delete the -37 right away, though, it's served you well lately, you might as well hold on to it while testing the -74 :)
[21:31] sarnold: ok so reboot? or there are other check that you want me to do .. to be sure
[21:32] It's always good to keep at least one other previous working kernel around
[21:33] redfox: I think you're good for rebooting
[21:33] I'm doing
[21:35] sarnold: WOW i'ts up
[21:36] redfox: :D
[21:36] and the kernel is the last
[21:36] redfox: you might want to take the german mirrors back out of your apt sources, maybe just comment them out; you ddon't really need to download all the lists twice, and out-of-sync mirrors doesn't happen all that often
[21:37] bitfury: test
[21:37] sarnold: yeah I put back the original mirror list ... I must thank you .... you save me :)
[21:37] redfox: great :) have fun!
[21:37] bitfury: test
=== beisner- is now known as beisner
=== martinst is now known as martins-afk
=== a1berto_ is now known as a1berto
=== martins-afk is now known as martinst
=== Lcawte is now known as Lcawte|Away
=== paralle21_ is now known as parallel21
=== martinst is now known as martins-afk
=== martins-afk is now known as martinst
=== martinst is now known as martins-afk
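sarnold's pruning rule above ("keep the kernel you're running, keep the newest kernel") can be sketched as a small shell helper. The function name and the sample versions are invented for illustration; it only prints candidates, it does not purge anything:

```shell
#!/bin/sh
# Hypothetical helper: reads installed kernel versions on stdin (one per
# line, e.g. extracted from dpkg -l 'linux-image-*'), keeps the running
# version (first argument) plus the newest remaining one, and prints the
# rest -- the candidates for apt-get purge.
kernels_to_remove() {
    running="$1"
    grep -vx "$running" | sort -V | head -n -1
}

# Example: running -37 with -37/-68/-74 installed -> only -68 is removable,
# matching what redfox ended up purging above.
printf '3.2.0-37-generic\n3.2.0-68-generic\n3.2.0-74-generic\n' \
    | kernels_to_remove 3.2.0-37-generic
```

The version-aware `sort -V` keeps 3.2.0-74 ordered after 3.2.0-68, and `head -n -1` drops the newest remaining kernel from the removal list.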