[00:20] <sweettea> what does one use on a linux iptables masquerade router
[00:20] <sweettea> to do QOS
[00:22] <sarnold> sweettea: I've used this before: http://lartc.org/wondershaper/
[00:24] <sweettea> tc?
[00:24] <sarnold> traffic control
[00:24] <sweettea> that is the first hit on google for
[00:24] <sweettea> nice
[00:24] <sweettea> how was the experience?
[00:25] <sarnold> sweettea: quite good :) interactive stayed interactive...
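For context, a minimal tc-based shaper in the spirit of wondershaper might look like the sketch below; the interface name and rates are placeholders (cap slightly below your real uplink), and it needs root on the router:

```shell
# Egress shaping with HTB on the WAN interface (eth0 and 1 Mbit/s uplink
# are assumptions). Keeping the modem's queue empty is what keeps
# interactive traffic responsive.
tc qdisc del dev eth0 root 2>/dev/null    # clear any existing qdisc
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 900kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 900kbit prio 1
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 900kbit prio 2
# steer low-delay (TOS "minimize delay", e.g. ssh) into the fast class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip tos 0x10 0xff flowid 1:10
```

Bulk traffic falls into class 1:20 by `default 20`; both classes can borrow up to the parent ceiling when the link is idle.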
[06:57] <vedic> I have my server located at a remote place. I would need to take real time backup of data coming to that server, as another server at another location would need that. Is it good to use OpenVPN and then do rsync? or is rsync itself good enough? Security is important as it is customer's sensitive data. Also, for regular maintenance, is it advisable to use OpenVPN compared to ssh?
[06:59] <melmoth> rsync can be easily tunnelled in ssh.
[06:59] <melmoth> much easier to set up compared to setting up a vpn
[07:05] <vedic> melmoth: Would rsync + ssh provide almost real time data transfer? Data should not remain on the first server for more than 5 seconds
[07:05] <melmoth> forget about it
[07:06] <melmoth> you are looking for real time mirroring or something, rsync is not the tool you are looking for
[07:06] <melmoth> maybe drbd, i never used it
[07:08] <vedic> melmoth: Any other method for data transfer in real time? Each file is just 100 to 150 KB but there are about 30 files every minute
[07:09] <melmoth> none that i am aware of (but there may be , this is not a problem i was ever confronted with)
[07:10] <vedic> melmoth: Over a LAN, NFS can handle it but over internet, I don't have experience
[07:10] <melmoth> well, not really.
[07:10] <melmoth> if you set up a nfs server, your data will still be hosted only in 1 place
[07:10] <melmoth> so there will be no replication of data, which is what you say you were looking for.
[07:11] <melmoth> if what you need is shared storage, then yes, nfs is one way to go
[07:11] <bradm> vedic: maybe something like a clustered filesystem could do what you want
[07:11] <vedic> melmoth: It will do if 2nd remote server is able to access the data available on first remote server but eventually it should transfer. If that is possible, transfer can be delayed for about 1 or 2 minutes
[07:13] <vedic> melmoth: NFS at least will provide access to the resource on another server so I said NFS on LAN can handle that.
[07:14] <melmoth> nfs will solve the 'several systems can see the same data at the same point in time' problem, not the "live backup of data" one.
[07:15] <melmoth> (plus i'm not 100% sure flock(2) is implemented in nfs. If you need concurrent access on files locked with flock, you'll probably need a shared block device and clustered file system)
[07:15] <vedic> melmoth: ok
[07:18] <vedic> melmoth: From your experience, can you comment on how fast rsync can be made in starting a data transfer? What would be the typical delay if that is minimized?
[07:18] <melmoth> it depends :-)
[07:19] <vedic> melmoth: Consider that you have the situation I mentioned
[07:19] <melmoth> if i understand correctly, it is designed to compare both files (source and destination), and only transfer the delta between the 2.
[07:19] <melmoth> so the first time you launch it, it takes some time... Then it's a bit faster
[07:19] <vedic> melmoth: yea, ideally
[07:19] <melmoth> but it's not designed to be a live thing
[07:20] <vedic> melmoth: But in my case, it's all new files every time
[07:20] <melmoth> you run it every once in a while, and i guess you dont want to backup a live database file storage this way
[07:20] <melmoth> vedic, i dont think rsync is what you are looking for, really.
[07:20] <vedic> melmoth: Basically I want to transfer files stored in a directory as soon as possible and then delete that file from directory
[07:21] <melmoth> no idea how to do that.... not with rsync, and even less with nfs.
[07:21] <vedic> melmoth: So basically there are no such easy tools then, short of looking at a cluster file system
[07:22] <melmoth> a clustered file system is a file system you put on a _shared block device_ so that several nodes can access it directly
[07:22] <bradm> vedic: even cluster file systems won't really do that, they're usually for keeping multiple copies of something, not one in a different location
[07:22] <melmoth> it's kind of like nfs, except what is shared is not a _filesystem_, but a _block device_
[07:22] <vedic> melmoth: I see
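Given vedic's actual requirement (ship each new file within seconds, then delete it), one workable sketch outside what was discussed is an inotify watch feeding rsync over ssh. Paths, user and host below are placeholders, and it assumes inotify-tools is installed:

```shell
# Watch a spool directory; push every file as soon as it is fully written,
# and delete the local copy only after a successful transfer.
SPOOL=/var/spool/outgoing                           # placeholder path
DEST=backup@remote.example.com:/var/spool/incoming/ # placeholder host

inotifywait -m -e close_write --format '%w%f' "$SPOOL" |
while read -r file; do
    # --remove-source-files removes the file only if the copy succeeded
    rsync -az --remove-source-files -e ssh "$file" "$DEST"
done
```

At ~30 files of 100-150 KB per minute this stays well within what a single ssh-tunnelled rsync can handle.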
[07:30] <Ravi> Hi My name is RaviTeja. I have a query..Could anyone please help... I had installed ubuntu 12.04 64-bit desktop edition on a machine. It's an HP machine with an i5 processor, 4 GB RAM and a 250-GB hard disk. I installed the lamp-stack on the machine and configured a mysql database. This is for a local office purpose and a survey form is included in the document root of the apache. it will have
[07:30] <Guest14403>  Now the question is: is there any limit for concurrent connections for the machine? If so, how do i increase the concurrent connections?
[07:31] <Guest14403> hi..Could anyone please help regarding this? It's a bit urgent
[07:31] <Myrtti> you got cut off at it will hav
[07:36] <Guest14403> Some users reply back saying that the server is unresponsive.. There is no bandwidth restriction
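For what it's worth, the usual knobs behind this kind of "unresponsive under load" report are the Apache MPM limits and MySQL's max_connections; a hedged sketch (the values are examples, not recommendations, and must fit in the machine's RAM):

```apache
# /etc/apache2/mods-available/mpm_prefork.conf (Apache 2.4) or apache2.conf
<IfModule mpm_prefork_module>
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxRequestWorkers    150   # called MaxClients on Apache 2.2
</IfModule>
```

On the database side, MySQL defaults to max_connections = 151 in /etc/mysql/my.cnf, which may need raising to match.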
[07:50] <vedic> Why DSA is limited to 1024 on ubuntu? in FIPS 183-3, it is mentioned that it can go upto 3072
[07:53] <andol> vedic: In what context? The default /etc/ssh/ssh_host_dsa_key?
[07:54] <vedic> andol: yea, key gen
[07:56] <andol> vedic: My *guess* is that it being a ssh thing, clients expect a dsa host key to be 1024 bits. Not that I think there are many ssh clients around who aren't capable of at least preferring the rsa key instead.
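The practical workaround is exactly that: prefer RSA, which isn't size-capped in practice the way ssh DSA keys are. A non-interactive example (the filename is arbitrary):

```shell
# Generate a 4096-bit RSA keypair with no passphrase (-N '') into the
# current directory; example_host_rsa_key is just an example name.
ssh-keygen -t rsa -b 4096 -N '' -f ./example_host_rsa_key
ls -l ./example_host_rsa_key ./example_host_rsa_key.pub
```

For a host key, the generated pair would replace /etc/ssh/ssh_host_rsa_key and its .pub.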
[08:31] <Masshuu> So I have a random issue. I setup a server on a spare computer I had here, I set a static IP but sometimes it reverts to DHCP for no reason.
[08:32] <Masshuu> doing a ifdown -a && ifup -a fixes it
[08:34] <jpds> Masshuu: /etc/network/interfaces ?
[08:36] <Masshuu> relevant bits http://pastebin.com/nGV6kqUu
[08:37] <Masshuu> though since I can't figure out how to modify the resolv.conf since it simply says it will be overwritten, I'm going to add this
[08:37] <Masshuu>         dns-nameservers 8.8.8.8 8.8.4.4
[08:37] <Masshuu> what happened to the simpler times
[08:37] <jpds> Masshuu: dns-nameservers is what you're supposed to do.
[08:37] <Masshuu> oh lol
[08:44] <Masshuu> perhaps for some reason it's still renewing every XX time
[08:45] <jpds> Masshuu: Did you kill dhclient ?
[08:49] <Masshuu> I dunno but this should work if that's the case: apt-get remove isc-dhcp-client
[08:51] <Masshuu> Lets get to breaking stuff. Its what I do best
[09:02] <jpds> Masshuu: Removing the package doesn't mean that it's stopped running.
[09:02] <jpds> Masshuu: ps aux | grep dhclient
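For reference, a full static stanza for ifupdown with the resolvconf hook Masshuu ended up using might look like this (addresses are placeholders):

```text
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```

With the resolvconf package installed, the dns-nameservers line is what populates /etc/resolv.conf on ifup, which is why editing resolv.conf directly gets overwritten.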
[09:39] <jamespage> ivoks: ping
[09:40] <ivoks> jamespage: pong
[09:40] <jamespage> ivoks: hey
[09:40] <ivoks> jamespage: i have to relocate to my office right now; let me ping you in 30 minutes
[09:40] <ivoks> is that ok?
[09:40] <jamespage> ivoks: sure - I'll be around
[09:41] <ivoks> great... brb
[10:06] <ivoks> jamespage: back
[10:08] <lwizardl> hello
[10:09] <cfhowlett> lwizardl, greetings
[10:10] <lwizardl> i'm looking at starting my own server for hosting a few small sites. but the issue comes to the dns nameservers. I was always told it was a bad idea to host those on your own
[10:11] <cfhowlett> lwizardl, as I understand it, using the google dns is the preferred default ...
[10:12] <jamespage> ivoks: hey
[10:13] <ivoks> jamespage: i just need some ceph expertise, if you are willing to help :)
[10:14] <jamespage> ivoks: I am
[10:14] <ivoks> jamespage: for ceph 0.56, if one uses cephx auth, certificates need to be established, right?
[10:15] <lwizardl> cfhowlett, so using the google dns is looked at as the proper way, but does that also handle the nameservers, as in ns1.blah.tld?
[10:16] <cfhowlett> lwizardl, sorry, I'm not qualified to give this level of advice.  please stay in channel and ask someone more informed ...
[10:17] <lwizardl> cfhowlett, thanks :)
[10:19] <jamespage> ivoks: yes
[10:20] <ivoks> ok, let me revert back to none then, just to speed things up
[10:26] <rbasak> lwizardl: I'm not sure what cfhowlett means by google dns. For a small number of servers, I'd just use whatever dns servers your dns registration provider gives you to use.
[10:27] <lwizardl> rbasak, so like the godaddy nameservers and then point the site to the small server? it has been a few years since I last did this.
[10:31] <rbasak> lwizardl: correct. Then set up an A record with godaddy with your server's address. But I (personally) wouldn't use godaddy. I've heard lots of people complain that it's hard to get anything done through their interface without having to say no to buying lots of extras
[10:32] <lwizardl> rbasak, yeah i have switched almost all my domains from them. only have 3 left on them and they are going to be transferred in the next 2 months to my new registrar
[11:22] <Daviey> jamespage: In Austin, we hit bug 1123998 .. did we fix it locally, or just chmod?
[11:23] <jamespage> Daviey: hrm - maybe
[11:23] <jamespage> lemme dig out the bug - hallyn was working on it - some sort of udev regression
[11:24] <Daviey> jamespage: don't panic, i'm sure hallyn will pick it up.
[14:04] <hallyn> jamespage: do you still see that?  If it's urgent I can work around it with manual getfacl in postinst.
[14:04] <hallyn> But I need to talk more with pitti about the core udev bug
[14:05] <hallyn> (i can't actually test right now - netinst images are not working for me)
[14:06] <hallyn> but bug 1103022 is the one that really needs to get fixed
[14:06] <hallyn> I guess I'll dig through udevacl code today
[14:08] <zul> smoser:  saw that bug you opened looking at it now
[14:10] <jamespage> hallyn: agreed re the bug that needs fixing
[14:11] <jamespage> it just stuffed my first raring+grizzly deployment
[14:11] <hallyn> jamespage: do you want me to add getfacl to postinst for now?
[14:11] <hallyn> in fact maybe I'll do that and get rid of the ugly udev rule - in postinst will be prettier
[14:11] <hallyn> breakfast -biab
[14:11] <jamespage> hallyn: its not mega urgent; I'd prefer to wait for the right fix assuming that's days and not weeks
[14:11] <jamespage> hallyn: have a nice breakfast!
[14:23] <vedic> How to ensure that apt-get doesn't upgrade the kernel? I have a few libraries that are compiled for the current kernel headers.
[14:23] <vedic> security updates should be installed though
[14:33] <vedic> How to ensure that apt-get doesn't upgrade the kernel? I have a few libraries that are compiled for the current kernel headers. security updates should be installed though
[14:35] <cwillu_at_work> vedic, might consider setting those packages up in dkms
[14:36] <vedic> cwillu_at_work: It is for a remote server. I would prefer that security updates happen automatically every week or less, but package upgrades should not happen automatically
[14:37] <cwillu_at_work> so don't include -updates in your package list
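Another commonly used option, sketched here with the usual metapackage names (they vary by kernel flavour), is to hold just the kernel packages so everything else, including security updates, still flows:

```shell
# keep apt from touching the kernel while other upgrades proceed
apt-mark hold linux-image-generic linux-headers-generic linux-generic
apt-mark showhold                       # verify what is pinned
# older equivalent: echo linux-image-generic hold | dpkg --set-selections
# to resume kernel upgrades later:
# apt-mark unhold linux-image-generic linux-headers-generic linux-generic
```

The dkms suggestion is still the cleaner fix, since held kernels also stop kernel security updates.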
[15:08] <BrEphraim> Here's my situation: I'm trying to migrate between hosting providers. It's an ecommerce site, so I need to have an instantaneous db changeover. In order to avoid DNS TTL-related lag time, I need the old server to automatically forward all requests to the new server. My problem is: the new server runs on a load balancer, and so is a CNAME record rather than a static IP. I am at a loss as to how to set up the
[15:08] <BrEphraim> forwarding in this case.
[15:10] <resno> BrEphraim: hmm, thats intersting. and im not the person to answer. but thats just interesting
[15:13] <JesterJ85> quick question: I'm running ubuntu server in virtualbox. Is there a way I can just have it automatically start with the latest headers, instead of just giving me the choices with the countdown?
[15:14] <MagicFab> JesterJ85, make countdown zero, it will go to default (latest)
[15:14] <MagicFab> BrEphraim, why not schedule downtime? Check activity, there could be an easier solution even if you have a small window.
[15:15] <alimj> BrEphraim: First go with MagicFab's solution. Downtimes are good for this kind of job. Also why don't you use two CNAMEs?
[15:15] <resno> i was going to suggest that too
[15:15] <JesterJ85> is there a configuration file that I can edit? basically I'm running it headless with an automatic start script that starts it when I start my computer...but sometimes it doesn't give a countdown...so it never starts
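The file MagicFab is alluding to is /etc/default/grub; on Ubuntu the "sometimes no countdown" behaviour is typically the recordfail logic after an unclean shutdown. A sketch (run update-grub after editing):

```shell
# /etc/default/grub
GRUB_DEFAULT=0              # first menu entry = newest installed kernel
GRUB_TIMEOUT=0              # boot immediately, no countdown
GRUB_RECORDFAIL_TIMEOUT=0   # don't wait at the menu after an unclean shutdown
```

Without GRUB_RECORDFAIL_TIMEOUT, a headless VM that was hard-reset can sit at the grub menu indefinitely, which matches JesterJ85's symptom.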
[15:15] <BrEphraim> how would I use two CNAMEs?
[15:16] <ikonia> BrEphraim: just set two CNAME records
[15:16] <BrEphraim> accomplishing what, though?
[15:16] <BrEphraim> the point is I don't want data going to two dbs simultaneously
[15:16] <BrEphraim> need a clean break
[15:17] <alimj> CNAME 1 record: example1.com -> example2.com, CNAME 2 record example2.com -> finalexample.com, final record: finalexample.com-> real IP
[15:18] <BrEphraim> alimj: sorry, I'm still confused about how that would solve my problems in the case of people using stale DNS records
[15:19] <alimj> Easy
[15:19] <alimj> BrEphraim. 1st step. Set up new server. Do not touch NS records
[15:20] <alimj> BrEphraim. Set CNAME on old server to forward all requests to new server, still do not touch main NS records on registrar
[15:21] <alimj> BrEphraim. Finally modify NS records on registrar and update them to point to new server
[15:22] <BrEphraim> alimj: so will setting the CNAME on the old server take effect immediately, even if ISPs have cached the old records, eg A record for the subdomain pointing to old server's static IP?
[15:22] <alimj> BrEphraim: Yes. In order to be sure you can reduce TTL to a really short value such as 120
[15:23] <alimj> BrEphraim. Change the TTL to 120 now, do it tomorrow
[15:23] <alimj> Your server will be down just for two minutes
[15:23] <alimj> Actually you can even avoid it
[15:24] <alimj> But Two minutes should be OK
[15:24] <alimj> Just to be on the safe side
[15:24] <BrEphraim> alimj: ok. my knowledge is scanty, so I didn't know if the old server's local DNS would override the registrar's
[15:24] <BrEphraim> thanks everybody
[15:24] <zul> yolanda/jamespage: https://code.launchpad.net/~zulcss/python-swiftclient/final/+merge/148211
[15:25] <vedic> Need advice. Currently I have one server that is an 8 core Xeon with 8GB RAM. I can't get another server for at least the next 8 months. I need to run a database for my application. Is it good to run the database inside VirtualBox (command line, no gui, as it's on a remote server) and keep application and database separate? or is it safe to run it along with the application but allow access from localhost only? I already have ssh locked down on the server.
[15:25] <yolanda> ok
[15:26] <yolanda> zul, done
[15:26] <alimj> BrEphraim. Just in the worst case scenario, if you decrease the TTL to just two minutes, you will have enough time to switch everything while the DB on the first server is down
[15:27] <BrEphraim> alimj: yeah, two minutes downtime is no problem
[15:28] <alimj> BrEphraim. I still recommend you wait 1 day until your old DNS records on all DNS caches around the world are expired.
[15:28] <alimj> Then the new short-lived records with just two minutes will be effective
[15:30] <alimj> BrEphraim. and the next day increase the TTL to 14400 again. The very old value of 86400 is not recommended anymore
[15:30] <BrEphraim> alimj: fortunately, I already reduced the TTL on saturday, so I should be good to go
[15:30] <alimj> Then good luck
[15:31] <alimj> BrEphraim. You can still query some famous nameservers to check the current status of DNS records on them
[15:32] <alimj> BrEphraim. But if you reduced it yesterday, you are good to go
[15:32] <BrEphraim> alimj: thanks very much, very grateful for all the help
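A quick way to sanity-check propagation before and after the switch is to ask a public resolver directly; the remaining cached TTL is the second column of the answer (the domain below is a placeholder):

```shell
# what does this resolver currently have cached, and for how long?
dig +noall +answer shop.example.com @8.8.8.8
# answer format:  shop.example.com.  87  IN  A  203.0.113.10
#                 ^name              ^TTL remaining (seconds)
```

When that TTL column shows your new short value everywhere you check, the cutover window is as small as it is going to get.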
[15:32] <zul> yolanda: one more https://code.launchpad.net/~zulcss/python-novaclient/final/+merge/148214
[15:32] <alimj> Anytime
[15:39] <yolanda> zul, done
[15:39] <zul> thanks
[16:58] <hallyn> stgraber: on bug 1121917, when you tried to reproduce, did you create bridges extbr0 and intbr0?  how did extbr0 relate to bond0?
[16:59] <hallyn> just bridge_ports bond0 in extbr0,
[16:59] <hallyn> while intbr0 had no ports by default?
[16:59] <stgraber> hallyn: nope, I went the lazy way, setup eth0 and eth1 in a bond, then created a container with two network interfaces attached to lxcbr0 and then installed ifenslave-2.6 in the container
[16:59] <stgraber> hallyn: that was based on the assumption that the problem is with eth0 and eth1 in the container having the same name as those on the host was somehow triggering the removal from the bond due to uevents
[17:00] <RoyK> anyone here that knows how I can place a bridge on top of wlan0 with wlan0 attached to a network with wpa2? wlan0 works well, but with the bridge, it fails badly
[17:00] <hallyn> stgraber: ok, thx
[17:00] <hallyn> RoyK: last i knew, wlan0 could not be bridged
[17:00] <RoyK> oh
[17:00] <RoyK> why not?
[17:02] <hallyn> not sure
[17:03] <qhartman> RoyK, hallyn, http://serverfault.com/questions/152363/bridging-wlan0-to-eth0
[17:04] <hallyn> yeah http://kerneltrap.org/mailarchive/linux-ath5k-devel/2010/3/21/6871733 (linked from there) seems to give the best explanation
[17:04] <qhartman> tl;dr - "Bridging doesn't work on the station side anyway because the 802.11
[17:04] <qhartman> header has three addresses (except when WDS is used) omitting the
[17:04] <qhartman> address that would be needed for a station to send or receive a packet
[17:04] <qhartman> on behalf of another system."
[17:04] <RoyK> qhartman: not that sort of bridging - I want to create br0 and set that as the primary interface for virtualisation use, just like I do with eth0
[17:04] <qhartman> RoyK, even so, I think it's the same root problem
[17:06] <hallyn> RoyK: i'd recommend looking at how libvirt and lxc set up virbr0 and lxcbr0
[17:06] <hallyn> but if you want the better perf of a real bridge...  can't
[17:07] <RoyK> heh
[17:07] <RoyK> guess I'll get a longer tp cable, then
[17:07] <Guest-1119> Hello, does anyone know what 1001 is?
[17:08] <RoyK> that's a number, 1000 + 1
[17:08] <qhartman> RoyK, could get a physical wireless bridge device and hook that to the eth on the server
[17:08] <Guest-1119> Guest:x:1000:
[17:08] <Guest-1119> ftpusers:x:1001:Guest
[17:08] <RoyK> qhartman: no big deal
[17:08] <Guest-1119> what does that mean ?
[17:08] <RoyK> that's the UID
[17:08] <RoyK> numeric user id
[17:08] <Guest-1119> Ah
[17:09] <Guest-1119> For some reason the user set up on vsftpd has access to root o_o
[17:09] <RoyK> well, most users can access the root filesystem
[17:09] <RoyK> unless they're chrooted
[17:09] <Guest-1119> I chromed the acc
[17:09] <Guest-1119> chrooted*
[17:09] <Guest-1119> but it still accesses root :s
[17:10] <RoyK> then you didn't chroot it
[17:10] <RoyK> what ftp server?
[17:10] <Guest-1119> a private one.
[17:10] <RoyK> well, vsftpd, ncftpd, proftpd, ... ?
[17:10] <RoyK> good old ftp, or sftp?
[17:11] <pndemc> sftp
[17:12] <RoyK> then read up on rssh
[17:12] <Guest-1119> vsftpd
[17:12] <RoyK> used as a shell to chroot users
[17:12] <RoyK> vsftpd can chroot - it's in the config
[17:13] <RoyK> Guest-1119: vsftpd can chroot users, but it won't stop them from logging in with ssh unless you're careful
[17:13] <RoyK> ssh/sftp/scp/etc
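The vsftpd side of that is roughly a one-line config change (newer vsftpd additionally refuses a writable chroot root unless told otherwise):

```text
# /etc/vsftpd.conf
chroot_local_user=YES
# needed on vsftpd 2.3.5+ if the user's home directory is writable:
# allow_writeable_chroot=YES
```

As RoyK says, this only confines FTP sessions; blocking the same account's ssh logins is a separate sshd-side change.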
[17:19] <Guest-1119> RoyK, how do i stop a user from having access to anything other than his own directory and subdirectories?
[17:19] <Guest-1119> as in, he can't click '..' in filezilla, he can't ssh
[17:21] <RoyK> Guest-1119: rssh
[17:24] <smoser> this is a pretty awesome bug
[17:24] <smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-initramfs-tools/+bug/1123220
[17:24] <escott> RoyK, Guest-1119 vsftpd has nothing whatsoever to do with ssh
[17:24] <smoser> it's a 100% legitimate heisenbug.
[17:24] <Guest-1119> escott, I didn't say it did
[17:25] <smoser> if you attach a serial device to try to see what's going wrong, then it won't go wrong.
[17:25] <utlemming> smoser: I am thinking that we probably should back out ttyS0 change
[17:25] <escott> Guest-1119, the fundamental conflict in your question is this: the computer does not know what a "user" is. all it knows is that a process runs with a particular uid. if you block a user's access to /usr/bin (and other folders) there are no binaries for them to run, and there is no way for them to exist on the system
[17:26] <smoser> utlemming, both of those bugs are un-related to any change
[17:26] <smoser> we've had ttyS0 in the images since lucid
[17:26] <smoser> the change was to add 'tty0'
[17:26] <smoser> which actually changed nothing.
[17:26] <utlemming> ah, sorry, that's what I meant, tty0
[17:26] <smoser> (changed nothing with respect to these bugs)
[17:26] <smoser> at least i'm pretty sure.
[17:26] <escott> Guest-1119, you can chroot or (the more popular choice in today's world) use virtual machines/containers and drop a limited set of binaries in their lap, but you cannot deny them access outside of a folder without giving them something like a full system in that folder
[17:27] <pndemc> chroot is a jail?
[17:27] <Guest-1119> escott, ah, thanks, probably easier to use rash?
[17:27] <Guest-1119> rssh*
[17:28] <escott> Guest-1119, if you just want them to be able to upload/download files things like sftp can be easily chrooted (http://www.minstrel.org.uk/papers/sftp/). similar capabilities exist in ftp daemons
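The sftp-chroot setup escott links to boils down to an sshd_config Match block along these lines (the group name is an example; the chroot target must be root-owned and not group/world-writable):

```text
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

ForceCommand internal-sftp is what answers the "can they still log in via ssh?" worry: members of the group get file transfer and nothing else, no shell required in the chroot.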
[17:28] <escott> Guest-1119, i would not trust rssh unless you are very very careful in the construction of your whitelist
[17:29] <Guest-1119> escott, so if I only want them to be able to ftp with vsftpd , if i chroot them they won't be able to log in via ssh?
[17:30] <escott> Guest-1119, vsftpd has nothing whatsoever to do with ssh. its like asking "if i chroot them with ftp will they be able to brush their teeth?"
[17:30] <escott> Guest-1119, i dont know?! Do they have a toothbrush?
[17:30] <Guest-1119> .. But with the same username and password, can they login via ssh
[17:31] <escott> Guest-1119, if you don't block them by making an independent modification in your ssh config they could login with a username/password via ssh
[17:31] <escott> Guest-1119, some of these ftp daemons support shadow accounts or fake accounts. where it is not put in /etc/passwd but instead in /etc/daemon.conf
[17:32] <RoyK> escott: he was talking filezilla and ssh, meaning sftp
[17:32] <escott> Guest-1119, a fake account like that is not a real account so other applications like ssh/login don't know anything about it
[17:33] <RoyK> escott: rssh works well with chrooting users for scp/sftp/rsync
[17:34] <RoyK> and blocking ssh logins on the way
[17:34] <escott> RoyK, i would not trust rssh. and i fail to see how rssh is related to vsftpd
[17:34] <smoser> utlemming, i can confirm that boot fails without 'tty1' on the cmdline just as if it is there.
[17:35] <smoser> the issue is that /dev/console gets assigned to a non-existant device, and any writes to stdout ('echo HIMOM') fail
[17:36] <RoyK> escott: it's not related to vsftpd, but he said he's connecting with filezilla, which supports ssh. I would trust rssh, though, I'm using it in a 20k user environment
[17:39] <RoyK> escott: better use rssh with ssh tunneling than using plaintext auth with ncftpd
[17:39] <RoyK> ftp over ssh is mature, and secure
[17:39] <RoyK> so is rssh
[17:40] <escott> RoyK, i wouldn't disagree about sftp. i guess the question is what is Guest-1119's question
[17:40] <RoyK> well, ask him (or her). I think I understood what (s)he said quite well
[17:51] <Blinkiz> Hello. I have a new server with Supermicro motherboard, Network card Intel 82574L and 82579M. Problem is that I can not see ethX. It does not exist. shows up fine in lspci. lsmod shows no usage of e1000e driver. dmesg has info about ethX but nothing that seems alarming. Using 12.10 server
[17:51] <Blinkiz> Someone that can help me troubleshoot this?
[17:52] <Blinkiz> ifconfig ethX up says the device does not exist. ethtool -i ethX can not see any devices. (ethX = eth1, eth0, eth2 and so on)
[17:55] <RoyK> Blinkiz: some boards have newer PCI IDs for commonly known cards. it might be the issue
[17:56] <RoyK> Blinkiz: http://blog.krisk.org/2013/02/packets-of-death.html ?
[17:56] <Blinkiz> RoyK, Hmm, interesting.
[17:57] <Blinkiz> RoyK, Where can I see the PCI IDs for my network cards? lspci?
[17:57] <RoyK> lspci -v iirc
[17:58] <RoyK> lspci -vn
[17:58] <sarnold> Blinkiz: http://communities.intel.com/community/wired/blog/2013/02/07/intel-82574l-gigabit-ethernet-controller-statement
[17:58] <RoyK> Blinkiz: you can override the pci ids allowed for a driver in /sys somewhere
[18:00] <RoyK> Blinkiz: ok, but I think it was related to supermicro - see the comments on that blogpost
[18:00] <Blinkiz> Yeah, I want to find my PCI IDs and put it into a google search, if this is the problem, others will have it
[18:00] <escott> Blinkiz, in newer kernels there is a new naming scheme for interfaces (coming from redhat) /dev/p#p# or some such that udev might map back to eth something
[18:00] <Blinkiz> escott, It is? did not know that
[18:00] <RoyK> Blinkiz: pastebin ifconfig -a
[18:01] <escott> Blinkiz, thats my understanding https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html
[18:01] <Blinkiz> escott, Does this apply for me running 12.10?
[18:01] <RoyK> Blinkiz: quite possibly
[18:01] <escott> Blinkiz, i believe so.
[18:02] <Blinkiz> Aaa, cool. I have a p4p1 interface!
[18:02] <RoyK> :)
[18:02] <Blinkiz> ifconfig -a did the trick, thanks RoyK
[18:03] <Blinkiz> Did not know about that. Thanks escott  :)
[18:03] <Blinkiz> Need to go, thanks for the help guys!
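For anyone hitting the same thing, the lookup RoyK described goes roughly like this; the [8086:xxxx] vendor:device pairs are what you would search for or compare against the driver's ID table:

```shell
# show PCI vendor:device IDs for the NICs
lspci -nn | grep -i ethernet
# show which kernel module (if any) claimed each NIC
lspci -k | grep -A 2 -i ethernet
# list the interface names the kernel actually created (ethX, p1p1, em1, ...)
ip -o link show
```

In Blinkiz's case the driver had bound fine; the interface simply came up under the newer p#p# naming scheme rather than ethX.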
[19:22] <pythonirc1011> is there a good partition size recommendation somewhere that I could use for partitioning a 4TB drive for ubuntu (it's a raid 10).
[19:24] <xnox> pythonirc1011: use lvm, use manual partitioning -> in there choose to partition free space automatically. it will give you a few options to do all in one or split.
[19:25] <xnox> pythonirc1011: lvm will allow you to later adjust the sizes, manual partitioning will allow you to see the sizes and change them.
[19:25] <xnox> pythonirc1011: if the box has loads of RAM the default swap size may be way too large for typical use cases.
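The later resizing xnox mentions is the main payoff of LVM; e.g. growing the root LV and its filesystem in one step (the VG/LV path is a placeholder, since it depends on what the installer named them):

```shell
# add 50G to the root logical volume and grow the filesystem with it (-r)
lvextend -r -L +50G /dev/mapper/myvg-root
```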
[20:13] <plars> jamespage: can you (or anyone else here who might be set up for it) try the maas and iscsi tests on the 12.04.2 candidate images?
[20:23] <hallyn> ivoks: hey, on https://blueprints.launchpad.net/ubuntu/+spec/servercloud-r-qemu, i was going to delete your TODO regarding vhost_net given bug 1029430.  were you planning on investigating that regardless?
[21:18] <dexterboy1106> I just got owncloud set up and put the share directory on my samba share as a folder. I can't access it through the share, only with owncloud. is there a way to add a 2nd user and group
[21:19] <dexterboy1106> I want to be able to move files in and out of the owncloud directory with filezilla or a mapped network drive share on windows. is that possible
[21:45] <foo> I just deployed new ubuntu server. authorized_keys for ssh isn't working. This is fairly basic, and I think it should work. Any tricks on ubuntu?
[21:51] <mikal> foo: do you have encrypted home directories turned on?
[22:05] <hackeron> hey, quick question - I need to install ubuntu server on about 50 hard drives or so. I have a little custom script to do it with debootstrap: http://pastie.org/6157544 and some other magic - but I'm stuck with automating the hard drive partitions. Anyone know of an easy way to create just 2 partitions, 1 swap that is 1GB and 1 root that is the rest of the available space?
[22:21] <foo> mikal: hm, not sure - this was a rackspace cloud server install. How can I check?
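Besides encrypted home directories, the other classic cause is permissions: sshd's StrictModes silently ignores authorized_keys when ~/.ssh or the file itself is group/world-accessible. A typical fix (and /var/log/auth.log on the server will say which check failed):

```shell
# tighten the permissions sshd's StrictModes insists on
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```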
[22:29] <dexterboy1106> I just got owncloud set up and put the share directory on my samba share as a folder. I can't access it through the share, only with owncloud. is there a way to add a 2nd user and group
[22:29] <dexterboy1106> I want to be able to move files in and out of the owncloud directory with filezilla or a mapped network drive share on windows. is that possible
[22:58] <hackeron> anyone? - I need to install ubuntu server on about 50 hard drives or so. I have a little custom script to do it with debootstrap: http://pastie.org/6157544 and some other magic - but I'm stuck with automating the hard drive partitions. Anyone know of an easy way to create just 2 partitions, 1 swap that is 1GB and 1 root that is the rest of the available space?
[22:59] <hackeron> maybe the code that does the "guided partitioning" in the ubuntu installer? - any ideas where I can find this code?
[23:12] <p201> hackeron, what about FAI? http://manpages.ubuntu.com/manpages/precise/man8/fai.8.html
[23:12] <adam_g> zul: ping
[23:12] <zul> adam_g: yo
[23:13] <adam_g> zul: did those precise rebuilds w/o python3 dependencies ever get sorted out?
[23:13] <zul> adam_g: yeah i think i uploaded them
[23:13] <hackeron> p201: interesting, thanks for that
[23:14] <genii-around> hackeron: I used to do this in a preseed file with a gparted "recipe" but it's been a while
[23:14] <adam_g> zul: ok. im gonna do some more rebuilds to get http://status.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/grizzly_versions.html more green
[23:14] <hackeron> genii-around: gparted recipe?
[23:14] <zul> ack
[23:15] <zul> hackeron: why not do one hard drive and just dd the hard drive to copy it
[23:15] <genii-around> hackeron: They explain better than I can at http://askubuntu.com/questions/129670/how-do-i-modify-this-preseed-snippet-to-partition-my-hard-drive
[23:16] <hackeron> zul: that takes hours and there are different size hard drives
[23:17] <hackeron> genii-around: they used a fixed size there, I want to use all available space
[23:18] <hackeron> p201: fai looks way, way overkill - I just want to automate partitioning, debootstrap does everything else I need
[23:19] <hallyn> stgraber: so I'm thinking the lxcpath option will be '-P|--lxcpath'.  sound reasonable?
[23:19] <hackeron> any ideas where I can find the code used in the ubuntu guided partitioning during installation? - that does everything I need
[23:19] <genii-around> hackeron: A more detailed explanation of the "recipes" is at https://help.ubuntu.com/10.04/installation-guide/i386/preseed-contents.html#preseed-partman
[23:20] <stgraber> hallyn: yep, sounds good
[23:32] <adam_g> zul: requests was the package that needed modification for precise build, right?
[23:42] <zul> adam_g:  no it was one of the dependencies i forgot which
[23:43] <zul> adam_g: i think it was oauthlib
[23:44] <adam_g> zul: also, how do the packages from openstack-ubuntu-testing-bot end up getting pusehd to https://launchpad.net/~openstack-ubuntu-testing/+archive/grizzly-trunk-testing/+packages ?
[23:44] <adam_g> zul: https://launchpad.net/ubuntu/+source/requests looks like this is the one that has all the python3 additions
[23:45] <zul> adam_g: then thats is the one
[23:45] <zul> adam_g: when the builds pass it gets dput locally and in the ppa
[23:46] <adam_g> zul: where did you upload your modified requests package?
[23:46] <adam_g> zul: that is true for the openstack packages that git triggered via git, but what about the dependency rebuilds ?
[23:47] <zul> adam_g:  its done manually afair
[23:47] <adam_g> erm
[23:48] <zul> adam_g: requests is at https://code.launchpad.net/~zulcss/ubuntu/precise/requests/requests-ca
[23:48] <Ben64> hey i have multiple ip addresses. when i try to route traffic from a certain address, it starts flooding arp requests, how can i stop that?
[23:48] <adam_g> zul: oh, ok
[23:49] <zul> adam_g: afair there is a jenkins that rebuilds sources and puts them in the local ppa
[23:52] <zul> adam_g: erm...local archive rather than ppa
[23:52] <adam_g> zul: puts em in the local archive, yeah. guess it doesn't push to ppa