[00:37] sudo apt-get update gives this on lucid [00:37] *** glibc detected *** /usr/lib/apt/methods/http: munmap_chunk(): invalid pointer: 0x00007fb1bd37b460 *** [00:37] any suggestion? [00:37] just rebooted and still same error [00:38] anything in dmesg? [00:39] no.. I thought the glibc bug that was introduced on Aug 4th was fixed [00:50] hi all. i need to install and load the reiser4 kernel module on an ubuntu server. can somebody tell me how this can be done? i have already formatted a partition by installing the needed tools like reiser4progs and using mkfs.reiser4, but i am not able to mount it. it always says unknown filesystem. i guess this is because i need to load the kernel module for this filesystem as well. [00:51] xperia: try "modprobe reiserfs" [00:54] https://bugs.launchpad.net/ubuntu/lucid/+source/eglibc/+bug/1352504 worked [00:54] Launchpad bug 1352504 in eglibc "Regression in 2.11.1-0ubuntu7.14; segfault in getservbyname" [Critical,Fix released] [00:55] sarnold: thanks a lot, will try it out and report back! [00:58] axisys: oh I'm sorry, I forgot all about that one :( [01:03] sarnold: modprobe reiserfs loads the old module instead of the new one, reiser4. because of this i am still getting the error message "unknown filesystem »reiser4«" [01:03] when i try to mount the reiser4 partition. [01:04] xperia: oh, I'm sorry. I didn't realize reiser4 hadn't been merged: "Reiser4 has a questionable future and no immediate plans for merging into the Linux kernel." http://en.wikipedia.org/wiki/Reiser4 [01:07] sarnold: okay, thanks for the info. looks like i need to compile it to get it to load on 14.04 [01:08] xperia: yeah. oh well, that shouldn't be too horrible. check out the kernel-package package -- it can help building kernel debs for easy dpkg -i :) [01:12] sarnold: cool, will just do it. i really want this reiser4 kernel module [06:35] I have postfix using mysql as a backend for some information, but postfix is jailed to /var/spool/postfix so it can't access the mysql socket. I've fixed this by creating a mount point "/var/run/mysqld /var/spool/postfix/var/run/mysqld bind defaults,bind 0 0" in fstab and this mounts fine and works like a charm [06:36] but, when rebooting, it complains that /var/run/mysqld doesn't exist and halts booting [06:36] when I skip it, and manually say 'mount /var/spool/postfix/var/run/mysqld', it works fine again [06:36] /var/run/mysqld is created after the file system mounts [06:37] /var is also "ram" as I recall, rather than an actual disk [06:37] hmm, that would explain it, any decent way of fixing this without hacking around? [06:37] well /var isn't in ram, there is persistent log storage there [06:37] maybe /var/run [06:38] /var/run is, I think [06:38] tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) [06:38] so the '/run' outside of var is, yeah [06:38] if you look though, /run is a symlink to /var/run [06:39] looks like a normal folder to me [06:39] I've not got an ubuntu box at hand to verify, I'm working from memory [06:39] I could be wrong on that [06:39] drwxr-xr-x 16 root root 600 Sep 12 08:29 run [06:39] in . [06:39] er, in / [06:39] but anyway, a way to 'late' mount this?
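A minimal sketch of the "late mount" idea that comes up next: an upstart job that bind-mounts the socket directory into the chroot once mysql is up, instead of relying on fstab at boot time. The paths are the ones quoted above; the job file name and exact stanzas are illustrative assumptions, not something taken from this discussion.

    # /etc/init/postfix-mysqld-bind.conf  (hypothetical job name -- sketch only)
    description "bind-mount the mysqld socket dir into the postfix chroot"
    start on started mysql
    task
    script
        mkdir -p /var/spool/postfix/var/run/mysqld
        # only bind-mount if it is not already mounted
        mountpoint -q /var/spool/postfix/var/run/mysqld || \
            mount --bind /var/run/mysqld /var/spool/postfix/var/run/mysqld
    end script

Alternatively the fstab entry can stay, with an option that stops a missing source directory from halting boot, which is where the conversation below ends up (nobootwait on 12.04).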
[06:39] or should I then make an upstart script that depends on mysql [06:40] weird thing is that this works fine on another ubuntu 12.04 box, which is the same version as this box but might be a few packages behind [06:41] oh wait, I added 'nobootwait' there [06:41] I'm not sure of the best way I'd handle this [06:41] nobootwait - I don't think that will work, try it though [07:41] can anyone say whether there is an official recommendation regarding using ubuntu's own apt repositories for 14.04 to install puppet-common (http://packages.ubuntu.com/trusty/admin/puppet-common) as opposed to using puppet labs' release repository (as described here: https://docs.puppetlabs.com/guides/install_puppet/install_debian_ubuntu.html)? any pros/cons for either option? [07:42] Good morning. [07:43] thejoecarroll: Standard Ubuntu policy would be to pull everything from their repo, as they might apply Ubuntu specific patches. However the puppet version from the puppet repo works fine on Ubu Trusty. [07:44] thanks lordievader [08:25] hi all. how can i upgrade the kernel easy to 3.15 on a ubuntu 14.04 kernel ? i really need to run 3.15 Kernel instead 3.13 [08:28] !info linux-image-generic [08:28] linux-image-generic (source: linux-meta): Generic Linux kernel image. In component main, is optional. Version 3.13.0.35.42 (trusty), package size 2 kB, installed size 28 kB [08:29] xperia: For Trusty that is not possible. Besides 3.15 is EOL. [08:37] lordivader: hmm is not 3.15 the new kernel that will be used in the comming ubuntu version 14.10 ? what do you mean with EOL End Of Life? [08:39] xperia: Yes, EOL = End Of Life. 14.10 uses 3.16. See https://www.kernel.org/ [08:39] !info linux-image-generic utopic [08:39] linux-image-generic (source: linux-meta): Generic Linux kernel image. In component main, is optional. Version 3.16.0.14.15 (utopic), package size 2 kB, installed size 28 kB [08:40] Perhaps if 3.17 is released before the kernel freeze of oct 9 that one will be included. [08:45] In terms of stability would it be wise to upgrade to 14.10 if you are currently running 14.04? [08:46] on 14.04, is there any firewall on by default? [08:47] No. It is never adviced to upgrade an development release. === Mogwai is now known as Guest57803 [08:47] i'am getting "upnphttp.c:1043: warn: HTTP Connection closed unexpectedly" when running minidlna [08:47] brontosaurusrex: Yes. UFW is installed per default, UFW is a frontend to IPTables. [08:48] brontosaurusrex: I don't think that is a firewall problem. [08:48] lordievader: ok, any guesses? [08:49] "ufw status verbose" says inactive [08:49] brontosaurusrex: The server/client program. Something closes the connection. A firewall will prevent the connection from ever being setup (if instructed to do so...) [08:50] lordievader: okay thanks. [08:58] well, the gupnp-tools are showing my server [08:58] among many others on this network [09:02] also xbmc is finding/playing my stuff, so i guess its working somehow [09:04] lordievader: i really need to get the ubuntu kernel sources for 3.15. how can i obtain them on a 14.04 system ? i looked for ppa but could not find anything. [09:04] xperia: You know that compiling your own kernel is unsupported here? [09:06] this is new to me as i am on a ubuntu 14.04 ubuntu server and thinked here i am right to ask that simple question. where should i go then ? [09:07] okay will try #ubuntu-devel then! 
lets see [09:07] xperia: The sources of the 3.15 kernel can be found here: https://launchpad.net/ubuntu/utopic/amd64/linux-image-generic/3.15.0.6.7 [09:08] xperia: What I ment was that compiling your own kernel is not supported by Ubuntu. [09:08] xperia: #ubuntu-devel is for the development of Ubuntu itself, I don't think you'll get much help there. [09:08] xperia: Why do you need 3.15 specifically? [09:08] ahh okay thanks a lot will just try it out. [09:08] xperia: #ubuntu-kernel would be your best bet. [09:08] But I suppose they will tell you the same. [09:08] Odd:Bloke: for reiser4 support [09:09] Odd_Bloke: [09:09] reiser4 is still a thing? [09:10] Odd_Bloke: for One Billion small Files and more there is no better File System Choise or do you think else ? [09:11] Sorry, that wasn't a very constructive thing to say. [09:12] xperia: Have you tried http://ubuntuhandbook.org/index.php/2014/06/install-upgrade-linux-kernel-3-15/ ? [09:13] Odd_Bloke: reiser4 just got new features especially for ssd drives only a few days ago and this patches work only with kernel 3.15 versions. thanks for the link will just check it out. [09:15] Odd_Bloke: ahh i need to build the 3.15 Ubuntu Kernel from sources to enable resier4 support. the link lordievader posted looks very good. will help me a lot. [09:15] xperia: If you are going to build a kernel, build 3.16. Like I said 3.15 is EOL. [09:16] xperia: And when things break and you are asking for support mention that you are running a custom kernel. [09:19] lordivader okay then i will try to build the 3.15 Kernel Version. I hope it will work with the reiser4 3.15 patches. [09:19] ehh sorry 3.16 [09:30] lordievader: sorry to ask this stupid question but what ppa do i need to add on a 14.04 system to be able to fetch the ubuntu kernel sources ? [09:32] xperia: Since you don't have the utopic sources simply download them from launchpad. [09:36] thanks. you just gave me the idea to add the utopic sources to the /etc/apt file :-) [09:37] xperia: No don't do that. [09:37] xperia: That's a really bad idea. [09:38] xperia: Please read what I type. === kickinz1 is now known as kickinz1|afk [10:46] anyidea about a 3Tb RAID1 read as 800Gb RAID1 via 12.04.3 LTS / Uefi install ? === kickinz1|afk is now known as kickinz1 [11:09] Is there a non X boot configuration manager that can handle ubuntu's divided boot configuration? [11:09] hi guys. i want to upgrade my running production servers to the supported version of ubuntu. i only have ssh access to my servers and was wondering what is the best way to go about it [11:09] 13.10 at the moment and wish to upgrade to 14+ [11:10] i understand 14.4.1 is prefferable in terms of suport period [11:16] anyone here? [11:16] icu [11:16] ubuntunoob: That surely is possible. [11:16] !install [11:16] Ubuntu can be installed in lots of ways. Please see https://help.ubuntu.com/community/Installation for documentation. Problems during install? See https://wiki.ubuntu.com/CommonProblemsInstall - Don't want to use a CD? See http://tinyurl.com/3exghs - See also !automate [11:17] morning [11:18] For some reason ubuntu is the only distro that is a pain in the boot. I really hope that migration to systemd is going to be fast. Else we still have upstart junk together with legacy sysvrc and systemd unit files to manage... [11:18] what about, sudo do-release-upgrade? [11:18] is it safe to do it over ssh? [11:19] !eolupgrade | ubuntunoob, you'd have to do an eolupgrade. 
[11:19] ubuntunoob, you'd have to do an eolupgrade.: End-Of-Life is the time when security updates and support for an Ubuntu release stop, see https://wiki.ubuntu.com/Releases for more information. Looking to upgrade from an EOL release? See https://help.ubuntu.com/community/EOLUpgrades [11:19] yes it is however it is recommended to at least have means of "physical" access [11:20] ubuntunoob, how remote is your "remote" [11:20] another country [11:20] ubuntunoob, also use screen [11:20] i have email support from the vps company and they are fairly responsive [11:21] ubuntunoob, sounds like a solution is possible. can your server be taken off-line for the upgrade? [11:21] yes [11:23] ubuntunoob, peaches & cream! read the install alternatives link. Never done one myself, but I'm pretty sure you can enable an EOLupgrade via ssh. [11:23] ubuntunoob, do make sure you use screen or another type of multiplexer that has reconnect functionality [11:24] cfhowlett, ubuntunoob: Saucy is still present on the main archive, so I don't think the special provisions for an upgrade from a release which has been demoted to old-releases need apply [11:24] maxb, you almost certainly know more than I - help? [11:24] cfhowlett, yep i have done it nothing special about it [11:24] I'd say "sudo do-release-upgrade" will still work at this time. [11:24] ok, i will go read the eol guide now. thank you [11:25] Don't worry about the EOL guide, 13.10 is not that old. You are simply in the position of doing a normal upgrade [11:26] (though remote) [11:27] I believe you just need a standard run of 'do-release-upgrade', paying attention to the things it tells you about running the upgrade in a screen session, and having some sort of emergency plan in place for if the worst happens and boot fails [11:28] do_release_upgrade opens a screen itself. [11:28] maxb, what's the command flag to dry run??? [11:28] dry run? [11:29] maxb, "don't execute" but show output [11:30] cfhowlett: It has a sandbox mode, but I don't know if that is what you mean. [11:30] ubuntunoob, actually, do-release-upgrade will show the upgrade plan and await one more authorization initialize so do that. [11:30] Does do-release-upgrade have one of those? I can't see that it would be particularly helpful, as a release upgrade is sufficiently complex that a dry run won't be very representative [11:31] ok so i am doing it on a test server first and running do release upgrade without screen (because you said it opens up its own screen) [11:33] ubuntunoob: Err, do open your own. It will probably detect it. But better be safe than sorry. [11:35] ubuntunoob: Also when you want to stay on LTS releases edit /etc/update-manager/release-upgrades (and set prompt to lts) [11:37] than you, i am doing it now [11:39] after editing to lts, closing all important running processes, backing up important stuff, opening a screen and running d--release-upgrade it says - Checking for a new Ubuntu release No new release found [11:40] ubuntunoob, eol upgrade option then [11:40] tried running - apt-get-upgrade, still no go [11:40] cfhowlett, what do you mean by that? [11:40] !eolupgrade [11:40] End-Of-Life is the time when security updates and support for an Ubuntu release stop, see https://wiki.ubuntu.com/Releases for more information. Looking to upgrade from an EOL release? See https://help.ubuntu.com/community/EOLUpgrades [11:42] ok, gonna do some more reading now and try it out. 
thanks [11:59] in the requirements section it says i should have some sources, such as this one: ## EOL upgrade sources.list # Required [11:59] deb http://old-releases.ubuntu.com/ubuntu/ CODENAME main restricted universe multiverse [12:00] ubuntunoob, CODENAME = trusty for 14.04 [12:00] and that i should replace CODENAME to my release, so is that the new or the old release? [12:00] so the new, ok [12:01] ubuntunoob, errrr false. oldrelease = name of your old distro [12:01] !13.10 [12:01] Ubuntu 13.10 (Saucy Salamander) was the 19th release of Ubuntu. Support ended on July 17th, 2014. See !eol, !upgrade and http://ubottu.com/y/saucy [12:02] ubuntunoob, "saucy" in your case. [12:02] thank you, i am looking it up now [12:05] i looked for them here: /etc/apt/sources.list and the necessary sources are not present. should i edit the sources.list file and write those sources myself? [12:05] ubuntunoob, paste your sources.list [12:05] !paste [12:05] For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. [12:07] http://paste.ubuntu.com/8326629/ [12:09] ubuntunoob, OK, as I stated : never done an EOL upgrade but as I read it, you have point all those debs to the old release repos. [12:10] so this i what i intend to add [12:10] http://paste.ubuntu.com/8326642/ [12:10] ubuntunoob, looks like nope [12:10] ubuntunoob, those do NOT point to the OLD RELEASE repos [12:12] ubuntunoob, look at the EOL wiki. the /etc/apt/sources.list ##EOL upgrade sources.list must be edited so CODENAME = saucy [12:14] i think i get it, something like this - deb http://old-releases.ubuntu.com/ubuntu/ saucy main restricted universe multiverse -----should work then [12:14] ubuntunoob, make this easy. rename your current sources.list to SAVEDsources.list then save the EOL upgrade sources.list as the new /etc/apt/sources.list [12:25] i added the sources to the new sources.list file and when i ran "aptitude update && sudo aptitude safe-upgrade" it had many errors. had 404 page not found errors and W: Failed to fetch [12:29] i found this "https://help.ubuntu.com/community/TrustyUpgrades" and i think it is going to work. gonna try now [12:30] same as before "No new release found" [12:31] Hi Folks - nfs-kernel-server stops working shortly after clients make a connection. I saw this in dmesg http://pastebin.com/tECnbRWj - any ideas? Thanks [12:38] ubuntunoob, do not ADD the new sources - replace the sources. [12:39] yeah, i did that [12:40] i am now following http://askubuntu.com/questions/453883/upgrade-from-13-10-to-14-04-no-new-release-found [12:40] i am also trying it as root so now i am trying as a non root [12:45] Phibs, did you get sorted ? [12:54] i followed a guide and performed the following steps: http://paste.ubuntu.com/8326879/ [12:54] as a non root user with sudo privelages [12:55] and yoooooooohooooooooooooo it works! [12:55] No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty [12:57] i am still a bit concerned about how this was done because i do not understand what happened, which is fine for my test server but can you please reassure me so i can reproduce these steps in my production server? [12:58] ubuntunoob, line 13 - correct command = sudo do-release-upgrade [12:59] that is what i thought, so i can safely do this on my production server? 
there is nothing that is not kosher here? [13:00] ubuntunoob, no other errors on your test box, right? [13:01] seems fine [13:01] i am gonna check the main functions like lamp etc before i continue [13:01] ubuntunoob, then put a brew in the freezer, launch your update, retrieve said brew and watch the magic [13:02] yeah the magic happened!!! [13:02] and the brew was delicious [13:02] :) congrats on a job well done. [13:02] thank you [13:03] ubuntunoob, happy2help [13:03] i have another question though [13:03] ask [13:03] i do not want to be locked out of my production server that is protected by a key file [13:04] should i accept password authentication and maybe some other stuff i don't know of before upgrading? [13:04] ubuntunoob that stuff is completely over my head. ask in the main #ubuntu channel. [13:04] ok thank you for all of your help [13:05] btw you can fix the answer given in ubuntu forums to include do- instead of do if you like http://askubuntu.com/questions/453883/upgrade-from-13-10-to-14-04-no-new-release-found [13:22] lordievader: just wanted to tell you that i was finally able to recompile the Linux kernel with reiser4 support. I also created a wiki page about this on the ubuntu help site => https://help.ubuntu.com/community/reiser4#preview Thanks a lot for the help! [13:23] xperia: No problem. Have fun ;) [13:24] xperia: any reason you are using reiser4 ? [13:25] ikonia: storing over one billion data files with a size of 500 to 800 bytes leaves no other choice than using reiser4 as the file system on linux. everything else sucks! [13:26] xperia: despite it not being part of the kernel, you still think that it's an option for you ? [13:28] ikonia: it is actively developed. it got support for ssd trim just a few days back, and i will join the developer team and donate money to reiser4. So yes, it is the only option for me if i want to have a fast and unlimited file system. [13:29] no critique, just wondered how you felt about basically using a "dead" file system as far as the kernel's native support is concerned [13:31] xperia: out of interest, have you tried ZFS for that? [13:31] * cfhowlett is too polite to mention that the namesake murdered his wife [13:33] dasjoe: i thought about using XFS, but from what i have read, when it comes to handling very small files reiser4 beats every other filesystem. [13:33] Hello all. I've got a 3 drive RAID 5 with mdadm. One of these drives is starting to give SMART errors, so I've added another drive to the RAID. It is adding as a spare; Can I force it to sync, because I know this other drive is going to die soon... [13:34] korylprince: mark the other one as failed [13:34] korylprince: and the spare will take over [13:36] Thanks! [13:43] xperia: Would be interested to know if the amount of time spent actually processing will outweigh the amount of time spent working out how to get reiser4. ;) [13:51] Odd_Bloke: indeed, let's see. this billion-file storage and access file system on linux is something that has been a work in progress for years. See the red hat research about this here => http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf [13:55] xperia: A lot has changed since 2011; btrfs is (pretty much) production-ready. [13:58] Odd_Bloke: ohhh i had a very bad experience with btrfs on ubuntu! i can not recommend btrfs even though i used the latest version on the newest ubuntu kernel version, as recommended. the throughput was very, very bad compared to ext4. i talked with the btrfs people and they could not understand it. i used a simple loop to store hundreds of millions of files on an ssd disk and btrfs was absolutely unusable... [13:58] ...for such a task.
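Roughly the kind of "simple loop" described above for exercising a filesystem with huge numbers of tiny files. This is an illustrative sketch only: the target directory, file count and 600-byte payload are assumptions, not the poster's actual test, and at hundreds of millions of files it will run for a very long time.

    #!/bin/sh
    # create COUNT small files (~600 bytes each) under a target mount point
    TARGET=/mnt/testfs/smallfiles   # assumed mount point of the filesystem under test
    COUNT=1000000                   # scale up towards 10^8..10^9 for a real run
    mkdir -p "$TARGET"
    i=0
    while [ "$i" -lt "$COUNT" ]; do
        # spread files across subdirectories so no single directory gets too large
        dir="$TARGET/$((i / 10000))"
        [ -d "$dir" ] || mkdir -p "$dir"
        head -c 600 /dev/urandom > "$dir/f$i"
        i=$((i + 1))
    done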
[13:59] btrfs also uses LOTS of memory compared to other file systems. [14:00] xperia: are you going to profile reiser4 v. xfs? [14:04] xperia: also, since I did not see it mentioned, the reiserfsprogs pkg is for reiserfs3, reiser4progs is the pkg for reiser4 [14:04] xperia: and the kernel module is there by default. [14:05] jrwren: do you mean doing a benchmark by "profile"? that is not my plan, as i would need to erase the reiser4 ssd partition and replace it with xfs to make such a profile. phoronix made a benchmark testing the latest reiser4 filesystem and it did pretty well. they said however to avoid XFS for small files and instead use reiser4, see => http://www.phoronix.com/scan.php?page=article&item=reiser4_linux35&num=1 [14:06] yeah, reiserfs is supported but not the newer reiser4 version. every time i try to mount a reiser4 partition on ubuntu it gives me the error message FileSystem not supported. [14:06] xperia: i highly recommend comparing the two yourself. Those benchmarks are rarely if ever accurate. [14:07] xperia: ah, i see now you are correct. Sorry about that. [14:08] yes, a lot of people have told me to use xfs. i will try it for sure at some point. ext4 and so on can not be used because of the inode limitation. So only reiser4 and XFS are options. and from what i know XFS has big disadvantages when it comes to dealing with lots of subdirectories [14:12] That has not been my experience. [14:12] When you say "lots of" do you mean on the order of millions, or billions? [14:31] Gotta leave for a bit now, but look forward to discussing nfsd with you folks when I return. Thanks! [14:43] jrwren: millions. your experience with XFS makes me really eager to try out XFS as well. looks like you are very happy with XFS! did you also try reiser4 before, to compare the experience with these two file systems? [14:51] xperia: sadly no. after losing all my data to reiser3 many many years ago, I never touched reiserfs again. [14:59] jrwren: ahh, understandable. losing all data is never good. [15:36] huh [15:36] finch seems to have disappeared from my system [15:48] wow it really is gone [15:48] not just the package but all configs too [16:30] smoser: no 0; [16:30] Phibs, can you explain more ? [16:30] you're on precise ? [16:31] yeah [16:32] so we're installing packages from the ubuntu cloud archive [16:32] and if I have the cloud keyring installed... they fail saying untrusted [16:32] if I remove the cloud keyring pkg, they install... [16:37] Phibs: Do you have an example package that causes it? [16:38] Yesterday I was able to successfully set up an ubuntu file server, i then tried to set it up as a webserver, following the official ubuntu guide, and i was able to get as far as installing and configuring squid. then today, for some reason, none of my home computers (mac and windows) see the shared samba folder. what could be the reason? the server is visible but attempting to browse it gives an error saying "windows cannot access the s [16:38] TJ-: python-mock [16:38] 1.x with 'cloud' in the name [16:39] Phibs: And which repository/ies are added to sources.list?
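For the cloud archive "untrusted packages" thread: a sketch of how the Ubuntu Cloud Archive and its keyring are normally set up on precise, and how to check which signing keys apt actually trusts. The repository line matches the one pasted below; the sources.list.d file name is arbitrary, and none of this is necessarily what Phibs actually ran.

    # enable the Ubuntu Cloud Archive (icehouse) on 12.04 and install its keyring
    echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main' | \
        sudo tee /etc/apt/sources.list.d/cloud-archive.list   # file name is an assumption
    sudo apt-get install ubuntu-cloud-keyring
    sudo apt-get update
    # list the keys apt currently trusts, to see whether the cloud archive key is present
    apt-key list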
[16:39] 1s [16:39] Phibs, i can't reproduce that here. [16:39] http://paste.ubuntu.com/8328347/ [16:40] hmm [16:40] yeah I tried it in a separate box and it worked fine [16:40] I really have no idea why it was being dumb :( [16:40] how do I figure out what key is needed for a specific .deb? [16:41] repo was this [16:41] deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main [16:41] Phibs: "pastebinit < <(sudo apt-get install )" [16:42] also, I am hitting this repo, via another box where I used apt-mirror [16:42] woudl that cause any issues ? [16:42] TJ-: getting another box spun up to paste error [16:43] Yesterday I was able to successully set up a ubuntu file server, i then tried to set it as a webserver, following the official ubuntu guide. [16:43] i was able to get as far as installing and configuring squid. [16:43] what does squid have to do with a web server [16:43] then today, for some reason none of my home computers(mac and windows) see the shared samba folder. what could be the reason? [16:44] it was in the guide [16:44] this if my first time, i was just following the guide [16:44] Today the server is visable but attempting to browse it gives a error saying "windows cannot access the server" "check the spelling of the name." any ideas what may be the issue? or a search term i should use? Im probably searching wrong because im not finding anything. thanks. [16:46] !patience | minus1999 [16:46] minus1999: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/ [17:00] TJ-: still getting box set up to break ;0 === ideopathic_ is now known as ideopathic [17:06] I would think this bug affects a few more people than just me :P https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1364091 [17:06] Launchpad bug 1364091 in mdadm "Possible RAID-6 corruption" [Undecided,New] === TDog_ is now known as TDog === matsubara-lunch is now known as matsubara [17:12] hey all, trying to use vmbuilder to build a saucy (14.04) VM on precise (12.04) base… process hangs on "Base system installed successfully", tried updating to latest python-vm-builder via precise-proposed still nothing [17:12] anyone else have an issue like this? [17:16] Free95: The way I create/install VM's remotely is by starting them with virt-install and then connecting to their vnc server. [17:17] lordievader, what if you're trying to make a minimal OS install? aka JeOS? [17:17] Same applies. [17:18] If that minimal OS goes to a vm that is. [17:18] Oops, confused girt-manager with girt-install haha [17:18] *g-> v [17:19] hallyn: ping [17:19] hallyn: what does lxc-start --share-net do? [17:20] zul: never heard ot it [17:20] oh. [17:20] hallyn: its in the man page :) [17:20] so it'll grab the /proc/pid/ns/net of target pid and connect the container to that namespace [17:21] so two containers can share a netns [17:21] hallyn: so if i create an netns using the ip command before starting the container and specify that pid it will use that netns? [17:22] yeah, well so lon gas you have a pid sitting in that netns, yes [17:22] hmm. interesting [17:23] lordievader, the thing that stinks is that I can't specify what packages I want to install right on the command line [17:23] Free95: I suppose you are trying to install Ubuntu. Take the mini iso or do a debootstrap install. 
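A sketch of the debootstrap route just suggested, for building a minimal Ubuntu tree that can serve as a VM root filesystem. The target directory and mirror are placeholders, and the extra chroot setup (kernel, bootloader, fstab, users) is deliberately left out.

    # build a minimal trusty root filesystem in ./trusty-root (placeholder path)
    sudo apt-get install debootstrap
    sudo debootstrap --arch=amd64 trusty ./trusty-root http://archive.ubuntu.com/ubuntu
    # then chroot in to add whatever extra packages the guest needs
    sudo chroot ./trusty-root apt-get install --no-install-recommends openssh-server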
[17:23] hallyn: so if i presetup somethign like 50 netns namespaces, i should just able to start an lxc-container from one of the random 50 netns containers i started [17:24] yeah [17:24] if that's what you wanted :) [17:24] hallyn: do you need to configure anything in the lxc config file? [17:25] lordievader, vmbuilder is basically a frontend to debootstrap.. I guess the real question is, how can I easily provision VMs without building the ubuntu cloud infrastructure first? === Adri2000 is now known as Guest24333 [17:25] zul: not sure [17:26] checking whether all the network setup is ignored in thta case [17:26] Free95: I have no experience with cloud things. I don't think I can help you with that. [17:26] me neither haha [17:27] well I guess I'll just have to setup a basic ubuntu image and just clone it any time I want to make a new VM quickly :-/ [17:28] zul: i think you'll want to set lxc.network.type = none in the config file [17:28] Hey Guys, Does anyone know how to limit the number of tunnels through sshd? [17:28] ok cool thanks [17:31] We have a chrooted sftp connection. We're getting ddos by the user when he can't access a file, it crashes his service without terminating the sftp connection then launches another sftp connection. [17:31] TJ-: yeh can't repro anymore... [17:31] TJ-: question is, why would the ubuntu cloud stuff install if the key isn't installed, is that stuff signed w/ the normal ubuntu key? [17:33] Since its ftp though a ssh tunnel, limiting the number of session doesn't work. We had over 1000 open sftp connctions earlier today. [17:37] zzxc: http://nixcraft.com/showthread.php/13764-SSH-limit-number-of-simultaneous-logins-attempts-for-each-user something like that? [17:42] RoyK: Hmmm thats actually an interesting idea. I don't know if it will work since they don't actually have an session assoicated with them. [17:43] * RoyK tries [17:46] zzxc: works for me [17:46] RoyK: Really? Awesome! [17:47] erm [17:47] no [17:47] it doesn't [17:47] I tried to ssh in twice (limit 2), and then sftp failed [17:47] but several sftp sessions don't count [17:49] Hahaha, yeah thats what I was worried about. Thank you for you're help though. I'm going to see if there is a configation on the sftp side to limit the number of concurrent connections. Otherwie I'm going to have to use something like tcpserver to create a firewallish like thing. [17:49] But Seriously, thank you for the effort. [17:50] :) [17:52] zzxc: it might be easier to do it in iptables [17:53] zzxc: but that'll mean maximum connections from an address, not a user [17:53] zzxc: MaxSessions should stop this, though, but seems it doesn't [17:54] RoyK: Yeah, the ip address is used by multple users. [17:54] bingo [17:56] zzxc: Could you use pam_limits with maxlogins ? [17:56] RoyK: The issue with using maxSession is that sftp doesn't use a session. [17:56] zzxc: ah [17:57] zzxc: could it be possible to configure it to use one? [17:58] TJ-: You mean the /security/limits.conf bit? [17:59] zzxc: Yes [18:00] zzxc: e.g/ "username hard maxlogins 2" [18:00] TJ-: Yeah, actually thats what Royk suggested as well. You can still use as many sftp connections as you want. [18:00] can I use /etc/network/interfaces.d for multiple network configuration files? [18:00] zzxc: really? they share a log-in? [18:01] RoyK: Huh I didn't think of that. Thats worth looking into [18:02] zzxc: please share if you find a solution :) [18:02] TJ-: On my system yes. RoyK was actually the one who tested it though (and thank you again for that). 
I've been digging though some things. [18:03] RoyK: Yep, gladly. [18:04] TJ-: tested it - see above - set limit to 2, logged in twice with ssh, after that sftp failed, but with 10ish sftp sessions, no problems if the number of ssh logins weren't too high === RobertLaptop_ is now known as RobertLaptop [18:32] I’ve been struggling to keep nfsd up and running, and when it fails, clients who have nfs shares mounted from this server eventually hang. The problem *seems* to go away if I require both server and client to use vers3. Have any of you had this issue? [18:32] Using 14.04.1 on both server and client. [18:33] Mip: Those issues are common when you use either nointr and/or hard as mount options. [18:33] soft is your friend [18:34] bekks: thanks - I have been using soft as an option, and still had the problem. Same options now, only I’ve added vers=3 to server and client, and it seems more stable. Any other thoughts? [18:34] Mip: Enforce "intr" too. [18:35] bekks: how/where would I do this? [18:35] Mip: Just use intr in addition to soft. [18:36] bekks: ok, in fstab on client. Am I losing much by requiring my nfs mounts to be made using vers=3? [18:37] Mip: Since you are asking the answer is no ;) [18:38] bekks: - thanks! I’m going to continue to test, but I could reliably get tainted kernels using nfs4, but could not for the life of me find a solution. I’m surprised the problem isn’t more widespread. [18:38] Mip: Tainted kernels? Can you please pastebin that? [18:39] bekks: sure - I pasted this one this morning, but I’ll do it again here: http://pastebin.com/tECnbRWj [18:40] bekks: last line shows the kernel tained (after 2nd crash) [18:41] Are you using ACL over NFS? [18:41] Not intentionally [18:42] note line 8 says 'Not tainted' -- it could be that the first null-pointer exception taints the kernel for future BUGs.. [18:42] .. to save people from wasting time trying to troubleshoot the second or third problem, when the first one is probably the source of the issues [18:43] zzxc: rssh might be the answer [18:43] zzxc: that is, if you don't need to allow logins for the users [18:43] zzxc: I'm actually using rssh. Is there a limt feature in that? [18:44] RoyK: ^^ For some reason I typed my own name... [18:44] hehe [18:45] zzxc: sec [18:46] RoyK: Huh, I found a guy using a very simalar set up with a very simalar situtation. [18:46] A chrooted sftp jail using rssh with 1000+ concurrent sftp connections which caused our system to crash. [18:47] this one? http://sourceforge.net/p/rssh/mailman/message/22190380/ [18:49] Haha yep [18:52] sarnold: to what do you attribute the cause in line 8? [18:52] It’s definitely nfsd, right? [18:54] Mip: alomst certainly, yes. please do file a bug on that if you haven't yet :) [18:57] sarnold: though I’ve been using linux for several years, I’m not sure I *know* how to file a bug. Where would be the appropriate place? [18:58] lordievader, so I got virt-install working…. but I'm doing the install manually. How do I pressed and stuff? For instance, I'd like to setup the disk partitions automatically [18:59] Free95: I have no experience with preseed or kickstart. But I'm sure someone else here can assist you with that ;) [18:59] aw man, you got me this far and that's it? :P haha [18:59] Mip: the easy thing to do is trigger the bug again and then run "ubuntu-bug linux" [19:00] zzxc: seems like most of the world wants to find a solution to this :P [19:00] Free95: what's your question? [19:01] sarnold: I’ll look into that. 
Any way I could use a copy of dmesg that I created after I saw the bug? I cat’d it to a file. [19:01] RoyK: Yeah, I've noticed that too. I mean it seems like a pretty simple thing. [19:01] hey pmatulis, I'm used to using vmbuilder to make my VMs, including with essential packages etc… but there's a serious debootstrap bug somewhere that has recently appeared and I cannot create trusty VMs [19:02] Mip: if you don't want to go -that- far, you could just file a bug at launchpad against linux and paste in the dmesg .. the bots iwll nag you for more details and if you're still running that kernel, you can probably placate them well enough [19:02] so I'm trying to find functionality that's similar in nature to debootstrap… basically automate as much of the install as possible [19:02] Mip: https://bugs.launchpad.net/ubuntu/+source/linux/+filebug [19:03] zzxc: there's #openssh which may have some smart heads [19:03] sarnold: okay - excellent! THanks so much, and if I can’t get far enough, I can get it to happen again. I’ve seen it happen on this server, as well on a dell pc that was using as a test box to rule out hardware on this server. [19:04] Mip: very nice ) [19:05] sarnold: Man, I am *so* happy I’ve been able to make this stop happening. So, “intr” as an option in addition to “soft” will help to keep my clients from hanging if my nfsd stops running? [19:06] so pmatulis do you have any experience with something like automating installs? [19:06] Mip: at least you'll be able to ^C or kill the things. it might not be awesome but I always missed it when I forgot to include it on my mount lines.. :) hehe [19:08] sarnold: Cool. Well, I’ll test that too, so I can see what I’m up against. I had been mounting the nfs share, and then using that share as my rsnapshot root. I’d like to be able to trust that again…. [19:09] zzxc: I can see you're in there - did you ask there? [19:11] Free95: you can combine preseed with either virt-install or pxe. it's a fairly big topic so you'll need to study these main pieces yourself and then come back with questions with whatever doesn't work [19:16] Free95: it works well, except preseed is exacting and requires a lot of tests/troubleshooting to get right === Guest24333 is now known as Adri2000 [19:32] RoyK: Sorry I missed that [19:33] pmatulis, this seems like a pretty good tutorial: http://agilesysadmin.wordpress.com/2013/09/09/simple-unattended-ubuntu-install-in-a-vm/ [19:33] RoyK: Yeah I already asked. their solution was to use tcpserver, and that it couldn't be solved simply using just sftp. [19:34] I already asked a couple of days ago but maybe the crowd is different at this time of day: Does anyboady here run Ubuntu for ARM64 on an actual ARM64 machine? [19:34] RoyK: But Honestly with everything else I have on my plate I don't want to have to work through adding an additional piece to the stack. [19:35] sarnold: Not sure if you’re still around, but I just submitted the bug. Thanks again for your help! [19:35] SturmFlut: Are theere already market ready arm-64 servers out there? [19:36] zzxc: Yes, by different manufacturers. 
I am currently doing a bit of market research for my employer and would like to talk to other people running ARM64 [19:36] Mip: great, thanks :) [19:36] zzxc: Sadly it looks like the first round of ARM64 machines will be about twice as expensive as comparable machines by Intel or AMD [19:37] zzxc: tcpserver is rubbish - use iptables instead [19:37] issue: finished installing a vm via virt-install, but now aptitude is complaining that everything is broken! [19:38] that is, no resolution to dependencies [19:39] SturmFlut: Yeah the price should fall pretty quickly. But honestly the idae of using Arm for a server seems a little bit weird. [19:40] RoyK: Yeah, I know but either way, what we need is a user based firewall. This same vendor has multiple user's on our site. [19:40] zzxc: Why? ARMv8-A is a pretty decent architecture, the CPUs should be quite fast and you can use most PCI Express devices. Even NVIDIA GPUs and stuff like that. [19:44] oh weird.. ran aptitude for the first time and it had preselected a bunch of crap I didn't want. Once I told it to deselect everything, the "unmet dependentcies" issue vanished [19:45] SturmFlut: Honestly its because its just becuase its a different architecture. I've played around with a few distros on an arm chip, and while most packages are pretty good every so often you run into something that just doesn't have some dependency and you can't build it. [19:45] zzxc: the solution is to fix it in sshd :P [19:46] RoyK: Hahaha. My experience is that my fixes in a big opensource project like that usually break something else. [19:47] zzxc: obviously [19:47] you can't make an omelette without breaking eggs [19:50] zzxc: That's no suprise. Linux distributions had over 20 years to get everything right on x86/amd64, most ARM stuff is pretty recent. ARMv8 was released just over two years ago. [19:52] Hello all [19:53] I have a bind question if there's anyone who's good with it? [19:53] jsmith-argotec: What type of bind? [19:54] Bind 9.8.1 [19:54] named not bindings [19:54] !ask | jsmith-argotec [19:54] jsmith-argotec: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience [19:55] OK got it [19:56] Bind 9.8.1 on Ubuntu 12.04. Trying to add KMS server entry and getting checkname error for having an underscore in entry. I've found articles that say to fully qualify the entry to get around the issue without success. Entry: _vlmcs._tcp.domain.com. SRV 10 0 1688 p53sw01.domain.com. Error: named.db:16: _vlmcs._tcp.domain.com: bad owner name (check-names) [19:57] SturmFlut: Yeah, I know. It will get better, but its currently some what of a chicken and egg issue. Arm will needs to mature more, which would happen quickly if there is enough people using it but a lot of people won't use it until more mature. [19:59] funny thing is, my pi works well on Raspbian [20:00] zzxc: You are probably talking about ARM on the desktop, in servers or in notebooks. The rest of the world is already dominated by ARM and it works extremely well. [20:01] SturmFlut: Good point. [20:02] zzxc: And there are ARM notebooks, ChromeBooks by Google. Seem to work well enough for millions of people. [20:05] SturmFlut: Actually thats what I have as a reference point. Third party distros on a chromebook. Some packages just haven't been ported over yet. 
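Going back to the earlier bind question about the _vlmcs SRV record: one common workaround for that check-names error is to relax check-names for the affected zone rather than renaming the record. A hedged sketch, assuming the zone is declared in named.conf.local and the zone file paths shown in the error; the asker ultimately solved it differently (below) by letting the KMS host register itself via dynamic DNS.

    // /etc/bind/named.conf.local -- zone stanza (file names/paths assumed)
    zone "domain.com" {
        type master;
        file "/etc/bind/named.db";
        check-names ignore;    // accept underscore labels such as _vlmcs._tcp
    };

    ; in the zone file itself, the fully qualified SRV record quoted above:
    _vlmcs._tcp.domain.com.    IN    SRV    10 0 1688    p53sw01.domain.com.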
[20:07] apropos ARM https://www.youtube.com/watch?v=Jq5nrHz9I94 [20:30] RoyK: 64 Raspberry Pis do not actually qualify as a supercomputer ;) [21:08] SturmFlut: well, the design of a supercomputer is the thing [21:08] SturmFlut: you don't need good processors to learn about the design [21:58] nevermind, I got it to work by allowing the KMS server to update DNS itself [22:28] Guys, is there a way to roll back a server a day or 2? [22:28] in case i've deleted files by mistake [23:35] Okay, I'm kind of stumped. I added a column to a table in MySQL, and while local on the server I see the column, but any remote connection shows no such column. I've run FLUSH TABLES, I've restarted the service . . . no change. [23:40] Oh riiiiight, that server was being crashy and so I moved MySQL to a different server. Durrr.
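For that last puzzle, a quick sanity check that would have caught it: ask each connection which server it is actually talking to before comparing schemas. The host, credentials, database and table names here are placeholders.

    # run the same check locally and from the remote client (placeholder host/credentials)
    mysql -h db1.example.com -u appuser -p -e "SELECT @@hostname, @@port, @@datadir"
    # then confirm the new column really exists on that server
    mysql -h db1.example.com -u appuser -p -e "SHOW COLUMNS FROM mydb.mytable"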