=== Lightsword_ is now known as Lightsword
=== justizin_ is now known as justizin
[02:34] how do you properly create a backup of an Ubuntu server? /var/log and /etc? anything else?
[02:37] phuh: depends heavily upon what the server does; /home if it serves as a shell host for your users, or if you run a database server you'll want to find out how to get a consistent database dump out of it..
[02:37] sarnold: what about system-specific ones?
[02:38] are /etc and /var/log the only ones important for backup purposes?
[02:38] phuh: for every service that your server runs, figure out what you'd be sad to lose if it disappeared, and go track it down. :)
[02:39] phuh: I keep /home/ /etc/ /usr/local/ -- I'm not too worried if I lose my logs.
[02:39] sarnold: why isn't log important for you?
[02:39] phuh: I only ever investigate them if something seems broken and I want to find out why. I wouldn't care to restore them.
[02:40] phuh: if you want to keep your logs safe from intruders or something similar, probably a separate log host would make more sense
[02:41] sarnold: do you at least save them somewhere like S3 just in case you'll be interested?
[02:41] sarnold: what's the benefit of a separate log host?
[02:41] phuh: no, but that makes sense..
[02:42] phuh: it's updated real-time, so you can examine logs immediately when problems happen -- if you only back up the logs every six hours, you'll probably miss the most important logs when something does happen
[02:43] phuh: I'm off, have a good night :)
[02:44] thank you sir sarnold
[02:44] phuh: you're welcome :) have fun!
=== elliotd123_ is now known as elliotd123
[04:19] bacula is wonderful!
[04:33] Hello all! I'm having a slight issue getting vnc4server working on Ubuntu 14.04
[04:33] I have the VNC server running, but when I access it I just get 3 check boxes saying: "Accept clipboard from viewers"
=== sevensixtwo is now known as Seven_Six_Two
[06:41] Anyone around who has used bind in the not too distant past?
[06:43] InFierno: be better if you just stated what you needed help with
[06:44] True dat; configs for a client's site that is just pointed to my nameservers (i set up mine just fine) but the client's gives errors - i've forgotten something really obvious
[06:45] For the client I created @ and mail A records, ftp and www CNAME records, and MX records
[06:45] what did i forget
[06:46] and for NS IN records i created mynameservers IN NS domain
[06:46] I think it's on the NS records that i have the problem
=== jibel_ is now known as jibel
[06:57] InFierno: Added the zone file to the main config file, configured any allowed slaves, etc?
[06:57] yup
[07:07] Found it
[07:07] forgot to add an exception for that domain to the firewall
[07:07] (hardware one)
[07:31] Good morning.
[08:00] TJ-, can you take a look at the ns records and see if you can spot the issue
[08:01] InFierno: Sure
[08:03] rtrproject.com
[08:07] can anyone help me install samba client on ubuntu server?
[08:07] You're missing the ns1.rtrproject.com A record
[08:08] it's pointed to ns1.blacksail.software
[08:08] so i shouldn't need its own NS record?
[08:09] You have the SOA pointing to "ns1.rtrproject.com" which has no A record
[08:09] Ah cheers
[08:09] it shouldn't be :)
[08:09] maybe you want that to be "ns1.blacksailsoftware.com."
[08:10] yea :)
[08:10] Gonna swap something else at the same time --> just seen some stuff in the code of zpanel i don't like
[08:10] And I usually have the SOA contact being the hostmaster@ the actual domain that manages the SOA
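A minimal zone file along the lines discussed above might look like the sketch below; the names, addresses, and serial are placeholders rather than the real rtrproject.com data. The point is that the host named in the SOA and NS records must itself resolve, either through an A record in this zone or because it lives in another zone such as ns1.blacksailsoftware.com.

    # hypothetical /etc/bind/db.example.com -- every name and address is a placeholder
    cat > /etc/bind/db.example.com <<'EOF'
    $TTL 3600
    @       IN  SOA ns1.blacksailsoftware.com. hostmaster.example.com. (
                    2014051001 ; serial
                    3600       ; refresh
                    900        ; retry
                    604800     ; expire
                    300 )      ; negative caching TTL
            IN  NS     ns1.blacksailsoftware.com.
            IN  NS     ns2.blacksailsoftware.com.
            IN  MX  10 mail.example.com.
    @       IN  A      203.0.113.10
    mail    IN  A      203.0.113.11
    www     IN  CNAME  example.com.
    ftp     IN  CNAME  example.com.
    EOF
    # sanity-check the zone before reloading bind
    named-checkzone example.com /etc/bind/db.example.com

If the nameserver host were inside this zone instead, it would need its own A record here, which is exactly what was missing above.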
[08:11] You tried vesta cp by any chance?
[08:11] I have 2 clients who need a CP so any ideas
[08:11] Unless you can guarantee the DNS administrator will see emails to the email address currently set
[08:11] CP?
[08:11] control panel
[08:11] oh... I prefer terminal shells
[08:12] me also
[08:12] But they are pretty dumb and need a cp
[08:13] dw1: that's interesting, can you open a bug number and pass it to me?
[08:44] k1ng, What do you mean?
[08:49] k1ng, : http://www.debian-administration.org/article/340/Using_Samba_on_Debian_Linux
=== RoyK^ is now known as RoyK^Work
[09:00] TJ-, u here?
[09:05] xnox: ok https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1316989
[09:05] Launchpad bug 1316989 in initramfs-tools "update-initramfs not finding UUID-less raid arrays in mdadm.conf" [Undecided,New]
[09:12] dw1: thanks, i'll correct the package though =)
[09:14] dw1: do you have control over /etc/mdadm/mdadm.conf? a long time ago ubuntu switched to UUID-based assembly.
[09:14] xnox: yeah I can add the UUIDs
[09:14] dw1: i would do backups of the config
[09:15] dw1: and then do /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
[09:15] dw1: however, just adding UUIDs to the existing stanzas should also work.
[09:17] xnox: I was told the explicit non-uuid declarations would "help it autodetect, if you use older version metadata" so I felt more comfortable keeping them. but if it doesn't matter, well i can go to what mkconf suggests
[09:17] xnox: kind of new to raid stuff :)
[09:17] dw1: it is weird that we complain though.
[09:18] dw1: both should be equivalent.
[09:23] xnox: error gone when using either the mkconf one or adding UUID= parameter to end of others
[09:24] xnox: as expected
[09:24] dw1: good. if you want to keep all options, then by all means just add UUID= parameter at the end.
[09:24] xnox: sounds good, thanks
[09:38] Hi, I destroyed my system by accidentally erasing all kernels and rebooting. I have found a nice help page http://to.ly/xgZ4 , but i ran into a problem.
[09:39] i have a software raid setup
[09:39] so mounting the partitions as described doesn't work
[09:40] DenBeiren: you have to assemble the raid in the livesystem too
[09:40] DenBeiren: with mdadm and then mount the partitions from the raid
[09:40] DenBeiren: check out the manpage for mdadm
[09:41] ok will try
[09:41] thanks for the tip!
[09:41] mdadm --assemble --scan might be the command you are looking for
[09:49] seemed to work out
[09:50] only the boot partition raid array has an error
[09:51] when i try to mount, the device doesn't exist
[10:09] nm, i made a syntax error
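The recovery path sketched above (boot a live system, assemble the arrays, chroot in, reinstall the kernel) roughly comes down to the commands below; the md device names and the separate /boot array are assumptions, so adjust them to whatever /proc/mdstat reports.

    sudo mdadm --assemble --scan      # assemble every array described in the config/superblocks
    cat /proc/mdstat                  # confirm which /dev/mdN devices came up
    sudo mount /dev/md0 /mnt          # root filesystem (assumed here to be md0)
    sudo mount /dev/md1 /mnt/boot     # separate /boot array, if there is one
    for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
    sudo cp /etc/resolv.conf /mnt/etc/resolv.conf   # so apt can resolve mirrors inside the chroot
    sudo chroot /mnt apt-get update
    sudo chroot /mnt apt-get install linux-image-generic
    sudo chroot /mnt update-grub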
[10:14] http://pastie.org/9148436 what am i doing wrong here?
[12:14] guys, my crypted devices are not getting booted
[12:15] I mean not getting mounted during boot without manual intervention
[12:15] boot complains that "serious errors were found while checking the disk drive" for all encrypted devices
[12:15] has anyone seen such errors?
[12:15] how did you overcome it?
=== Lcawte|Away is now known as Lcawte
[13:43] hey folks, how can I upgrade a 13.10 64 bit server to a 14.10 64 bit server?
[13:43] I mean 14.04
[13:44] fun wait until point release #1 in July. then sudo do-release-upgrade
[13:45] point?
=== liam_ is now known as Guest5040
[13:45] fun 14.04.1
[13:46] how come?
[13:46] i thought 14.04 was already pretty good
[13:46] fun OR download 14.04, make a USB, boot USB and install
[13:47] cfhowlett: You shouldn't need to wait if you already have 13.10
[13:47] what usb lol it's a remote server
[13:47] Pici do-release-upgrade unavailable until point release
[13:47] cfhowlett: Sure it is. Just not for 12.04 to 14.04
[13:48] Pici NOT true. LTS to LTS is allowed.
[13:48] Pici so how do I do it? just run do-release-upgrade?
[13:48] nope there is more to it
[13:49] http://ubuntuhandbook.org/index.php/2014/01/upgrade-ubuntu-13-10-saucy-to-ubuntu-14-04-trusty/#comments
[13:49] bingo :D
[13:50] fun: you shouldn't need to use the -d switch at all.
[13:50] cfhowlett: 13.10 to 14.04 should work just fine right now using do-release-upgrade. 12.04 to 14.04 will not work until 14.04.1 is released.
[13:50] Pici so do-release-upgrade will make saucy 14.04?
[13:51] Pici ah! good to know!
[13:51] and do I need to edit /etc/update-manager/release-upgrades ?
[13:51] to change normal to lts?
[13:51] fun: It should work as long as prompt is not set to none.
[13:55] :) running update
[13:55] yay
[14:02] some minor warnings however overall seems fine
[14:02] :)
[14:02] unable to delete some old dirs
[14:03] lol grub now
[14:03] saying local version modified but it has new version
[14:06] I'm having some trouble with cloud-init. Is there a list somewhere of the steps cloud-init goes through and somewhere where I can see what data it's all using?
[14:07] I'm using a local installation of OpenStack. If I launch a VM manually with my own cloud-config script, everything works as intended. If I launch the VM using Heat (OpenStack's orchestration tool) with the same cloud-config file, stuff breaks.
[14:07] no idea
[14:08] Most notably, if I specify a custom username for cloud-init, the created home directory when using Heat is owned by root:root.
[14:09] Pici: ok upgrade done, some minor warnings for some software, overall seems fine :D
[14:10] huzzah
[14:10] I am surprised OVH don't have 14.04 in its CP yet :D I could prompt them to add it :)
[14:12] Ubuntu 14.04 LTS haha
[14:12] done :D
[14:13] surprised people use ovh
[14:13] it's great
[14:13] who do you use?
[14:13] if you don't care about being blacklisted
[14:13] blacklisted where?
[14:13] lol
[14:14] most mailservers have most of ovh blacklisted
[14:14] they made a huge statement, about fixing the issue
[14:14] but nothing ever came of it
[14:16] patdk-wk: simply use sendgrid
[14:16] :)
[14:16] http://www.spamhaus.org/sbl/listings/ovh.net
[14:17] re-read above :D
[14:17] aws similar well many ips :)
[14:17] also who do u use then :)
[14:17] aws blocks you, if you send more than just a few emails per day
[14:18] personally?
[14:18] I bought my own hardware, ip address space, and asn
[14:18] so here u go
[14:18] :)
[14:18] also aws allows you to send tons of emails if u pay
[14:18] ;)
[14:19] spammers aren't into paying
[14:19] patdk-wk: lol well aws is a well-known company
[14:19] and ovh same
[14:20] but the reputations of both are widely different
[14:20] to you yes
[14:21] ovh is going on a roadshow worldwide, feel free to visit and voice your concerns there :)
[14:21] hehe
[14:33] I'm having some trouble with cloud-init. Is there a list somewhere of the steps cloud-init goes through and somewhere where I can see what data it's all using?
[14:33] I'm using a local installation of OpenStack. If I launch a VM manually with my own cloud-config script, everything works as intended. If I launch the VM using Heat (OpenStack's orchestration tool) with the same cloud-config file, stuff breaks.
[14:33] Most notably, if I specify a custom username for cloud-init, the created home directory when using Heat is owned by root:root.
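For reference, the kind of cloud-config being passed here usually looks something like the sketch below (the user name and key are invented, not taken from the questioner's setup). When cloud-init itself processes the user-data, its user-creation step makes the home directory owned by the new user, so a root:root home suggests Heat delivered the data in a form cloud-init didn't recognise; the per-stage record of what ran and which datasource/user-data was consumed ends up in /var/log/cloud-init.log and /var/log/cloud-init-output.log on the instance.

    # hypothetical user-data file; "deployer" and the key material are placeholders
    cat > user-data.yaml <<'EOF'
    #cloud-config
    users:
      - name: deployer
        gecos: Deployment user
        shell: /bin/bash
        sudo: "ALL=(ALL) NOPASSWD:ALL"
        ssh-authorized-keys:
          - ssh-rsa AAAA...example-key-material
    EOF
    # after boot, these logs list each cloud-init stage and the data it used:
    #   /var/log/cloud-init.log
    #   /var/log/cloud-init-output.log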
[16:34] anyone have any ubuntu 12.04 servers with netextreme broadcom cards? trying to figure out a speed issue
[16:36] I need to install dropbox on my headless 12.04 Server for multiple users - does anyone have experience with this? I can't find a tutorial focused on using dropbox with more than 1 user...
[16:38] davidwebb: does it need to be dropbox? maybe "owncloud" might work
[16:39] budman_: that looks interesting. My clients already have dropbox, but it looks like setting this up would be pretty easy too...
[16:40] I know someone that just set it up and is loving it, I think the only downfall he said was you have to pay for the apple/android app, $1 or so
[16:40] for dropbox or owncloud?
[16:40] owncloud
[16:41] so does this mean all the data stays on my server?
[16:41] I'm pretty sure one of my clients will want it sync'd elsewhere as well...
[16:41] as long as you don't download it
[16:41] or back it up somewhere
[16:42] Does it work just like dropbox, where if he's running it on his desktop, it'll download / sync files to his desktop automatically?
[16:42] yea it might not meet your needs, just throwing it out there, while I wait for help :) hoping someone runs ubuntu with broadcom cards
[16:42] the world attempts not to run anything with broadcom cards
[16:45] yes i know, it just happens to be what I have to work with. i'm thinking about just ordering some 4-port intel cards
[16:45] but I'm seeing my options if I can fix it with a different driver or similar; has to be 1 lone ranger out there ha.
[16:50] owncloud has packages for xUbuntu - are they the same as what I'll need for Ubuntu?
[16:51] (I don't know the differences between xUbuntu and Ubuntu....)
[16:54] davidwebb: just skimming their opensuse build service page, I actually think they mean 'xUbuntu' as a placeholder for 'ubuntu, kubuntu, xubuntu, lubuntu, edubuntu, etc.'
[16:54] gotcha. thanks sarnold
[16:56] (and probably it'll work for derived distributions such as mint..)
[17:03] is there any reason whatsoever for a ubuntu 14.04 server - freshly installed - not to reply to icmp requests coming from certain ip addresses?
[17:06] pseudo: probably it won't reply to broadcast packets
[17:07] sarnold - it is seeing, but not responding to icmp echo packets coming from a certain ip address from the 10.x subnet
[17:08] that is a pretty large subnet
[17:08] * patdk-wk blames routefilter
[17:08] the server is on 10.30.0.0/24, but another box connected to the same switch is on 10.10.10.40. it responds to about 10% of the icmp packets coming from that ip
[17:09] well, that won't work
[17:09] not without a router
[17:09] same switch
[17:09] i.e. layer 2 packets
[17:09] no reason it shouldn't work
[17:09] ya, should not work
[17:09] sure it does
[17:09] it's not in the same subnet
[17:10] not in same subnet == must be routed
[17:10] did you skip networking 101?
[17:10] apparently i did
[17:10] i guess that makes sense
[17:10] give me a sec to rethink a few things
[17:15] okay, you are right about the layer 2 thing - i believe that the network is set up such that the 10.10.10.40 router connects to an upstream router and then hairpins back into the 10.30.0.0/24 network.
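For this kind of "requests arrive but no replies leave" problem, a few read-only checks can narrow things down; the interface name and network below are examples rather than values from the machine in question.

    sudo tcpdump -ni eth0 icmp        # watch requests and (any) replies on the wire
    ip route show                     # is there a route back to the source network, e.g. 10.10.0.0/16?
    ip neigh show                     # an incomplete ARP entry for the gateway would block replies
    # reverse-path filtering can silently drop traffic whose source doesn't match
    # the route the kernel would use to reach it -- the "routefilter" guess above
    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter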
[17:15] the tl;dr is that packets originating from 10.10.10.40 make it to the 10.30.0.x machine
[17:15] i can see the icmp requests come into that box with tcpdump
[17:15] but it never sends out a reply
[17:16] i have other ubuntu 14.04 machines in the same subnet that respond just fine to icmp requests
[17:17] pseudo: compare their routing tables, perhaps that one machine doesn't know how to route to 10.10/16 ?
[17:17] the routing tables are identical
[17:19] again - the packet makes it to the box. the box just never tries to send a reply
[17:21] wb pseudo_, you missed nothing while you were gone
[17:21] how many nics does that box have?
[17:22] ty :)
[17:22] one nic
[17:22] where are you running tcpdump?
[17:22] on the node that i am trying to ping
[17:24] omg, this is so frustrating - it just started working again. literally nothing has changed.
[17:25] arp issues?
[17:25] o_O
[17:25] that would be bloody annoying to track down :)
[17:25] that's what i thought, but i flushed my arp tables earlier, plus - i was seeing EVERY icmp request packet
[17:25] I wonder if those packets are arriving with broken checksums?
[17:26] request doesn't mean anything
[17:26] if the server doesn't know the arp entry for the gateway/router
[17:26] therefore it never makes a reply
[17:26] patdk-wk: there is one gateway router and networking was fine everywhere else
[17:26] everywhere else != this switch port
[17:26] i could ping out through that router
[17:27] ok
[17:27] patdk-wk: actually, i lied. i don't know that for sure
[17:27] thinking back - i was able to ping within my subnet, but i don't think that i tried pinging upstream
[17:29] patdk-wk: although. this node is just a kvm image whose nic is a bridged card. and other vms were working fine
[17:29] oh?
[17:29] could be lots of fun then
[17:30] yeah, there are a lot of moving parts - i still think it needs to be an ubuntu/kernel thing though - the fact that tcpdump saw a request and did not respond says that networking is all right
[17:30] at the same exact time that it was not responding to that ip, it would respond to a ping from a local ip
[17:30] does tcpdump automatically report back packet checksums?
[17:31] and you are doing it inside the vm right? not on the host?
[17:31] yes, inside a vm
[17:31] and sarnold, i think it will report a bad checksum by default, but i'm not sure about that
[17:33] aha :) --dont-verify-checksums
[17:33] bugger. i liked that theory.
[17:33] idk what to say guys, it seems to be working now - i guess i will chalk it up to some sort of arp caching issue. i really don't think that was the problem, but i'm out of theories and it unfortunately just started working again
[17:36] pseudo_: good luck, and if you figure it out I'd be curious to learn why :)
[17:37] sarnold: yeah, if i figure it out i will definitely report back. thanks for taking the time to help. you too patdk-wk.
[18:05] sarnold: i have a theory - i think that our upstream firewall/switch was trying to prevent arp spoofing by disallowing unsolicited arp replies. before this problem started, i took down a vm that had the same ip address and brought a new one (with a different MAC) up in its place.
[18:07] pseudo_: ooh, seems plausible enough. the best part is you've got something you can test now :)
=== justizin_ is now known as justizin
[18:53] ubuntu server liveCD doesn't include fdisk
[18:53] anyone know if this is intentional? is there a substitute?
[18:53] zealj: I suppose parted is available.
[18:54] Fdisk for gpt disks is useless.
[18:54] does parted allow you to view the partition table without changing it?
[18:54] Yes.
[18:54] cool, thanks
[18:54] zealj: man parted ;)
[18:54] yep :)
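parted only changes anything when you ask it to; listing is safe. A couple of read-only examples:

    sudo parted -l                    # print the partition table of every detected disk
    sudo parted /dev/sda print        # inspect one disk; "print" only displays, never writes
    # for GPT disks, "gdisk -l /dev/sda" is another read-only option, if the
    # gdisk package happens to be installed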
[19:10] Hi, VMs created with uvt-kvm are not persistent, correct? What are possible use cases for uvtool, is there a way to make a VM persistent?
[19:12] oh here it goes
[19:12] finally
[19:12] I tried to join this channel like 5 times
[19:12] it wouldn't join
[19:15] stoned: What ended up actually working? Going through GUI buttons rather than "/join"?
[19:19] Yes!
[19:19] Jordan_U, my friend yes
[19:19] I got join to work now. The command was broken for /
[19:19] I am on ZNC so I restarted my client and now it works
[19:19] Thanks
[19:20] You also have to be identified with NickServ to join here, I believe.
[19:20] Huh, nevermind actually.
[19:30] Hi! I would like to create an sftp user for a www folder. This works if the folder belongs to root, but with root as the owner the webserver cannot read it. Isn't it possible to have an sftp user which doesn't need the folder to be owned by root?
[19:36] sure
[19:36] addgroup web
[19:36] adduser www-data web
[19:36] adduser root web (optional)
[19:37] chown -R www-data:web /var/www
[19:37] chmod -R ug+srw /var/www
[19:37] 1) you create a group called 'web'
[19:38] 2) you put the apache www-data user into that group (and since files in /var/www belong to root, you do #3)
[19:38] 3) you add root to group web
[19:38] 4) you make everything inside /var/www owned by www-data (apache user) and owned by the group 'web' you just created
[19:38] 5) you set permissions for user and group to be able to read and write ug+rw
[19:39] 6) permission 's' is the setgid bit. New files created in /var/www will inherit the parent directory's group
[19:39] for example after all this
[19:39] touch /var/www/tmp
[19:39] ls -l /var/www/tmp will report the same permissions as /var/www minus d and x
[19:39] I hope that helps/explains.
[19:40] replman, ^^
[19:45] sticky bit is so nice, let me quickly drop files from my user account into /var/www
[19:54] stoned: thank you very much! I will try it this way...
[19:55] replman, you are most welcome
[20:07] stoned: I always get a "fatal: bad ownership or modes for chroot directory component" in auth.log when trying to log in through sftp
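That "bad ownership or modes for chroot directory component" message is sshd enforcing its ChrootDirectory rules rather than anything about the group recipe: the chroot directory and every directory above it must be owned by root and must not be group- or world-writable. A common workaround is to chroot to a root-owned parent and let the sftp account write only in a subdirectory; the sketch below assumes an invented "sftponly" group and "sftpuser" account, while www-data and the web group come from the recipe above.

    # sshd_config (hypothetical Match block):
    #   Match Group sftponly
    #       ChrootDirectory /var/www
    #       ForceCommand internal-sftp
    #       AllowTcpForwarding no
    sudo chown root:root /var/www          # the chroot itself must be root-owned...
    sudo chmod 755 /var/www                # ...and not group/world writable
    sudo mkdir -p /var/www/site
    sudo chown -R www-data:web /var/www/site
    sudo chmod -R ug+rwX /var/www/site
    sudo chmod g+s /var/www/site           # new files inherit the "web" group
    sudo adduser sftpuser sftponly         # "sftpuser" is a placeholder account name
    sudo adduser sftpuser web
    sudo service ssh restart               # after editing sshd_config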
=== Lightsword_ is now known as Lightsword
[21:17] http://d.pr/i/2r6y - wth, is it just ubuntu 14.04 doing something special? I even tried a 3.11 kernel on the 3.2 kernel boxes.
[21:32] what's the proper location for a subdomain, if I currently have /home/fridaynext/public/site.com/public, and I want sub.site.com?
[21:33] where does the 'sub' folder go?
[22:12] hello guys!
[22:13] I was trying to download the ubuntu server for pc-64bit version but all I found was the amd64, is that all right if I install it on an intel processor?
[22:14] webfox: yup, that's expected :)
[22:14] sarnold: why amd64 then?
[22:14] (the long story is, intel's 64 bit instruction set was the "itanium" cpu, which took off nearly nowhere and not at all. amd invented the 64 bit extension to the x86 instruction set, so it got called amd64, and then intel capitulated and made clone chips...)
[22:16] right, thank you.
[23:04] Hi, I have a server with two network interfaces, eth0 and eth1. I have installed Ubuntu 12.04. eth1 is connected to the internal network and the server acts as a dhcp server on that interface. I'd like to configure the dhcp server so that any time the server reboots it removes the lease file and starts again. Is this possible? thx
[23:05] rostam: I'm curious why you want that.. it feels like the kind of thing that will cause all kinds of confusion, so I'm hesitant to suggest anything..
[23:06] sarnold, agree with you. This is a condition that was asked of me, and if you think it is not correct then I will not follow up on it. Thanks
[23:07] rostam: well, until I know more about it I don't want to say it is wrong -- but I think the clients that have been handed addresses might not expect their lease to be revoked when the server reboots.
[23:09] sarnold, in our test bed we are experimenting with network configuration, so any time they change something they expect the dhcp request to be honored, but the system does not honor it since it has already leased the available IPs. I hope this is clear. However, if I want to just test this condition (renew lease time) is there an option? thx
[23:10] rostam: okay, that's good enough for me :) hehe
[23:10] rostam: the first thing I'd try is adding some lines to the dhcpd upstart or sysv-init file to delete the leases database
[23:11] sarnold, any pointer on how I can investigate this? thx
[23:12] rostam: sure, check the manpage for dhcpd for the path to the leases database; if it doesn't say, look around in /var/lib/, it probably lives there.
[23:13] rostam: then look in /etc/init/dhcpd or /etc/init.d/dhcpd or something similar for the script that starts the service -- just add a 'rm /var/lib/whatever || true' command in the right place -- the || true will make the command exit with success even if the file doesn't already exist, which might be handy
[23:13] sarnold, thanks
=== sarnold_ is now known as sarnold
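Following up on that suggestion: on 12.04 the server package is isc-dhcp-server, and its leases database normally lives at /var/lib/dhcp/dhcpd.leases (the dhcpd.leases manpage documents the path). A sketch of the change, treating the file locations as assumptions to verify first:

    ls -l /var/lib/dhcp/dhcpd.leases        # confirm where the database actually is
    man dhcpd.leases                        # documents the path and format
    # then add something like the following inside the pre-start script stanza of
    # /etc/init/isc-dhcp-server.conf (or near the top of the sysv init script),
    # so the database is wiped and recreated empty on every service start:
    #   rm -f /var/lib/dhcp/dhcpd.leases || true
    #   touch /var/lib/dhcp/dhcpd.leases
    # (dhcpd expects the file to exist, hence the touch; keep a backup of the
    #  original job file, and match the original file's ownership if dhcpd runs
    #  as an unprivileged user)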