=== james_ is now known as Guest98563
=== wedgwood is now known as wedgwood_away
[00:20] what does one use on a linux iptables masquerade router
[00:20] to do QOS
[00:22] sweettea: I've used this before: http://lartc.org/wondershaper/
[00:24] tc?
[00:24] traffic control
[00:24] that is the first hit on google for
[00:24] nice
[00:24] how was the experience?
[00:25] sweettea: quite good :) interactive stayed interactive...
=== Sophism is now known as Guest1406
=== erichammond1 is now known as erichammond
=== koolhead11|away is now known as koolhead17
[06:57] I have my server located at a remote place. I would need to take a real time backup of data coming to that server, as another server at another location would need that. Is it good to use OpenVPN and then do rsync? or is rsync itself good enough? Security is important as it is customer's sensitive data. Also, for regular maintenance is it advisable to use OpenVPN compared to ssh?
[06:59] rsync can be easily tunnelled in ssh.
[06:59] much easier to set up compared to setting up a vpn
[07:05] melmoth: Would rsync + ssh provide almost real time data transfer? Data should not remain on the first server for more than 5 seconds
[07:05] forget about it
[07:06] you are looking for real time mirroring something, rsync is not the tool you are looking for
[07:06] maybe drbd, i never used it
[07:08] melmoth: Any other method for data transfer in real time? Each file is just 100 to 150 KB but there are about 30 files every minute
[07:09] none that i am aware of (but there may be, this is not a problem i was ever confronted with)
[07:10] melmoth: Over a LAN, NFS can handle it but over the internet, I don't have experience
[07:10] well, not really.
[07:10] if you set up a nfs server, your data will still be hosted only in 1 place
[07:10] so there will be no replication of data, which is what you say you were looking for.
[07:11] if what you need is shared storage, then yes, nfs is one way to go
[07:11] vedic: maybe something like a clustered filesystem could do what you want
[07:11] melmoth: It will do if the 2nd remote server is able to access the data available on the first remote server, but eventually it should transfer. If that is possible, transfer can be delayed for about 1 or 2 minutes
[07:13] melmoth: NFS at least will provide access to the resource on another server, so I said NFS on a LAN can handle that.
[07:14] nfs will solve the 'several systems can see the same data at the same point in time' problem, not the "live backup of data" one.
[07:15] (plus i'm not 100% sure flock(2) is implemented in nfs. If you need concurrent access on files locked with flock, you'll probably need a shared block device and a clustered file system)
[07:15] melmoth: ok
[07:18] melmoth: From your experience, can you comment how fast rsync can be made in starting data transfer? What would be the typical delay if that is minimized
[07:18] it depends :-)
[07:19] melmoth: Consider that you have the situation I mentioned
[07:19] if i understand correctly, it is designed to compare both files (source and destination), and only transfer the delta between the 2.
[07:19] so the first time you launch it, it takes some time... Then it's a bit faster
[07:19] melmoth: yea, ideally
[07:19] but it's not designed to be a live thing
[07:20] melmoth: But in my case, it's all new files every time
[07:20] you run it every once in a while, and i guess you dont want to backup a live database file storage this way
[07:20] vedic, i dont think rsync is what you are looking for, really.
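A minimal sketch of the wondershaper usage discussed above (the packaged script wraps tc for you); the interface name and the rates are placeholders, not values from the channel:

    # shape eth0 to roughly 8 Mbit down / 512 kbit up (both values in kbit/s, adjust to the real uplink)
    sudo wondershaper eth0 8192 512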
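And a minimal sketch of what melmoth describes (rsync tunnelled over ssh); the user, host and paths are placeholders:

    # push the spool directory over ssh; -a preserves permissions/times, -z compresses on the wire
    rsync -az -e ssh /var/spool/outgoing/ backupuser@backup.example.com:/srv/incoming/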
[07:20] melmoth: Basically I want to transfer files stored in a directory as soon as possible and then delete those files from the directory
[07:21] no idea how to do that.... nor with rsync, and even less with nfs.
[07:21] melmoth: So basically there are no such easy tools then; looking at cluster file systems
[07:22] a clustered file system is a file system you put on a _shared block device_ that several nodes access directly
[07:22] vedic: even cluster file systems won't really do that, they're usually for keeping multiple copies of something, not one in a different location
[07:22] it's kind of like nfs, except what is shared is not a _filesystem_, but a _block device_
[07:22] melmoth: I see
[07:30] Hi, my name is RaviTeja. I have a query.. Could anyone please help... I installed ubuntu 12.04 64-bit desktop edition on a machine. The hardware configuration of the machine: it's an HP with an i5 processor, 4 GB RAM and a 250 GB hard disk. I installed the lamp-stack on the machine and configured a mysql database. This is for a local office and a survey form is included in the document root of the apache. it will have
=== Ravi is now known as Guest14403
[07:30] Now the question is, is there any limit on concurrent connections for the machine? If so, how do i increase the concurrent connections?
[07:31] hi.. Could anyone please help regarding this? It's a bit urgent
[07:31] you got cut off at it will hav
[07:36] Some users reply back saying that the server is unresponsive.. There is no bandwidth restriction
[07:50] Why is DSA limited to 1024 on ubuntu? In FIPS 186-3, it is mentioned that it can go up to 3072
[07:53] vedic: In what context? The default /etc/ssh/ssh_host_dsa_key?
[07:54] andol: yea, key gen
[07:56] vedic: My *guess* is that it being a ssh thing, clients expecting a dsa host key to be 1024 bits. Not that I think there are many ssh clients around who aren't capable of at least preferring the rsa key instead.
=== smb` is now known as smb
[08:31] So I have a random issue. I set up a server on a spare computer I had here, I set a static IP but sometimes it reverts to DHCP for no reason.
[08:32] doing a ifdown -a && ifup -a fixes it
[08:34] Masshuu: /etc/network/interfaces ?
[08:36] relevant bits http://pastebin.com/nGV6kqUu
[08:37] though since I can't figure out how to modify the resolv.conf, since it simply says it will be overwritten, im going to add this
[08:37] dns-nameservers 8.8.8.8 8.8.4.4
[08:37] what happened to the simpler times
[08:37] Masshuu: dns-nameservers is what you're supposed to do.
[08:37] oh lol
[08:44] perhaps for some reason it's still renewing every XX time
[08:45] Masshuu: Did you kill dhclient ?
[08:49] I dunno but this should work if that's the case: apt-get remove isc-dhcp-client
[08:51] Let's get to breaking stuff. It's what I do best
[09:02] Masshuu: Removing the package doesn't mean that it's stopped running,
[09:02] Masshuu: ps aux | grep dhclient
[09:39] ivoks: ping
[09:40] jamespage: pong
[09:40] ivoks: hey
[09:40] jamespage: i have to relocate to my office right now; let me ping you in 30 minutes
[09:40] is that ok?
[09:40] ivoks: sure - I'll be around
[09:41] great... brb
[10:06] jamespage: back
[10:08] hello
[10:09] lwizardl, greetings
[10:10] i'm looking at starting my own server for hosting a few small sites. but the issue comes down to the dns nameservers. I was always told it was a bad idea to host your own
[10:11] lwizardl, as I understand it, using the google dns is the preferred default ...
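Nothing in the channel answered vedic's "transfer new files within seconds, then remove them" requirement, but one common way to approximate it is an inotify watch driving rsync; it needs the inotify-tools package, and every path, user and host below is a placeholder:

    # ship each file as soon as it is fully written, then delete the local copy
    inotifywait -m -e close_write --format '%w%f' /var/spool/outgoing | \
    while read -r f; do
        rsync -az -e ssh --remove-source-files "$f" backupuser@backup.example.com:/srv/incoming/
    done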
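Masshuu's pastebin is not reproduced here, but a typical static stanza in /etc/network/interfaces with the dns-nameservers line jpds mentions would look roughly like this (all addresses are placeholders):

    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4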
[10:12] ivoks: hey
[10:13] jamespage: i just need some ceph expertise, if you are willing to help :)
[10:14] ivoks: I am
[10:14] jamespage: for ceph 0.56, if one uses cephx auth, certificates need to be established, right?
[10:15] cfhowlett, so using the google dns is looked at as the proper way, but does that also handle the nameservers as in ns1.blah.tld?
[10:16] lwizardl, sorry, I'm not qualified to give this level of advice. please stay in channel and ask someone more informed ...
[10:17] cfhowlett, thanks :)
[10:19] ivoks: yes
[10:20] ok, let me revert back to none then, just to speed things up
[10:26] lwizardl: I'm not sure what cfhowlett means by google dns. For a small number of servers, I'd just use whatever dns servers your dns registration provider gives you to use.
[10:27] rbasak, so like the godaddy nameservers and then point the site to the small server? it has been a few years since I last did this.
[10:31] lwizardl: correct. Then set up an A record with godaddy with your server's address. But I (personally) wouldn't use godaddy. I've heard lots of people complain that it's hard to get anything done through their interface without having to say no to buying lots of extras
[10:32] rbasak, yeah i have switched almost all my domains from them. only have 3 left on them and they are going to be transferred in the next 2 months to my new registrar
=== edamato is now known as edamato-afk
[11:22] jamespage: In Austin, we hit bug 1123998 .. did we fix it locally, or just chmod?
[11:22] Launchpad bug 1123998 in qemu "Can't launch VMs from virt-manager or virsh: Could not access KVM kernel module: Permission denied" [Undecided,New] https://launchpad.net/bugs/1123998
[11:23] Daviey: hrm - maybe
[11:23] lemme dig out the bug - hallyn was working on it - some sort of udev regression
[11:24] jamespage: don't panic, i'm sure hallyn will pick it up.
=== edamato-afk is now known as edamato
=== wedgwood_away is now known as wedgwood
[14:04] jamespage: do you still see that? If it's urgent I can work around it with a manual getfacl in postinst.
[14:04] But I need to talk more with pitti about the core udev bug
[14:05] (i can't actually test right now - netinst images are not working for me)
[14:06] but bug 1103022 is the one that really needs to get fixed
[14:06] Launchpad bug 1103022 in udev "70-udev-acl.rules needs to put g+rw on /dev/kvm" [High,Confirmed] https://launchpad.net/bugs/1103022
[14:06] I guess I'll dig through the udev acl code today
[14:08] smoser: saw that bug you opened, looking at it now
[14:10] hallyn: agreed re the bug that needs fixing
[14:11] it just stuffed my first raring+grizzly deployment
[14:11] jamespage: do you want me to add getfacl to postinst for now?
[14:11] in fact maybe I'll do that and get rid of the ugly udev rule - in postinst will be prettier
[14:11] breakfast -biab
[14:11] hallyn: it's not mega urgent; I'd prefer to wait for the right fix assuming that's days and not weeks
[14:11] hallyn: have a nice breakfast!
[14:23] How to ensure that apt-get doesn't upgrade the kernel? I have a few libraries that are compiled for the current kernel headers.
[14:23] security updates should be installed though
[14:33] How to ensure that apt-get doesn't upgrade the kernel? I have a few libraries that are compiled for the current kernel headers. security updates should be installed though
[14:35] vedic, might consider setting those packages up in dkms
[14:36] cwillu_at_work: It is for a remote server. I would prefer that security updates happen automatically every week or less, but package upgrades should not happen automatically
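Besides the dkms route cwillu_at_work suggests, one common way to get what vedic asks for (no kernel upgrades, security fixes still allowed) is to hold the kernel metapackages; the package names below assume the stock -generic flavour:

    # pin the currently installed kernel flavour in place
    sudo apt-mark hold linux-image-generic linux-headers-generic linux-generic
    # (older equivalent: echo "linux-image-generic hold" | sudo dpkg --set-selections)
    # release the hold when ready to move to a newer kernel
    sudo apt-mark unhold linux-image-generic linux-headers-generic linux-generic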
[14:37] so don't include -updates in your package list
[15:08] Here's my situation: I'm trying to migrate between hosting providers. It's an ecommerce site, so I need to have an instantaneous db changeover. In order to avoid DNS TTL-related lag time, I need the old server to automatically forward all requests to the new server. My problem is: the new server runs on a load balancer, and so is a CNAME record rather than a static IP. I am at a loss as to how to set up the
[15:08] forwarding in this case.
[15:10] BrEphraim: hmm, that's interesting. and im not the person to answer. but that's just interesting
[15:13] quick question: I'm running ubuntu server in virtualbox. Is there a way I can just have it automatically start with the latest headers, instead of just giving me the choices with the countdown?
[15:14] JesterJ85, make the countdown zero, it will go to the default (latest)
[15:14] BrEphraim, why not schedule downtime? Check activity, there could be an easier solution even if you have a small window.
[15:15] BrEphraim: First go with MagicFab's solution. Downtime is good for this kind of job. Also why don't you use two CNAMEs?
[15:15] i was going to suggest that too
[15:15] is there a configuration file that I can edit? basically I'm running it headless with an automatic start script that starts it when I start my computer... but sometimes it doesn't give a countdown... so it never starts
[15:15] how would I use two CNAMEs?
[15:16] BrEphraim: just set two CNAME records
[15:16] accomplishing what, though?
[15:16] the point is I don't want data going to two dbs simultaneously
[15:16] need a clean break
[15:17] CNAME 1 record: example1.com -> example2.com, CNAME 2 record: example2.com -> finalexample.com, final record: finalexample.com -> real IP
[15:18] alimj: sorry, I'm still confused about how that would solve my problems in the case of people using stale DNS records
=== Sophism is now known as Guest40919
[15:19] Easy
[15:19] BrEphraim. 1st step. Set up the new server. Do not touch NS records
[15:20] BrEphraim. Set a CNAME on the old server to forward all requests to the new server, still do not touch the main NS records at the registrar
[15:21] BrEphraim. Finally modify the NS records at the registrar and update them to point to the new server
[15:22] alimj: so will setting the CNAME on the old server take effect immediately, even if ISPs have cached the old records, eg the A record for the subdomain pointing to the old server's static IP?
[15:22] BrEphraim: Yes. In order to be sure you can reduce the TTL to a really short value such as 120
[15:23] BrEphraim. Change the TTL to 120 now, do it tomorrow
[15:23] Your server will be down just for two minutes
[15:23] Actually you can even avoid it
[15:24] But two minutes should be OK
[15:24] Just to be on the safe side
[15:24] alimj: ok. my knowledge is scanty, so I didn't know if the old server's local DNS would override the registrar's
[15:24] thanks everybody
[15:24] yolanda/jamespage: https://code.launchpad.net/~zulcss/python-swiftclient/final/+merge/148211
[15:25] Need advice. Currently I have one server that is an 8 core Xeon with 8GB RAM. I can't get another server for at least the next 8 months. I need to run a database for my application. Is it good to run the database inside VirtualBox (command line, no gui, as it's on a remote server), and keep application and database separate? or is it safe to run it along with the application but allow access from localhost only. I already have ssh locked down on the server.
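For JesterJ85's headless VirtualBox question: on a stock server install the countdown setting lives in /etc/default/grub; a sketch of the usual change (run update-grub afterwards), assuming nothing else overrides the menu behaviour:

    # /etc/default/grub
    GRUB_DEFAULT=0      # first entry, i.e. the newest installed kernel
    GRUB_TIMEOUT=0      # boot it immediately, no countdown
    # then apply it:
    sudo update-grub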
[15:25] ok
[15:26] zul, done
[15:26] BrEphraim. Just in the worst case scenario, if you decrease the TTL to just two minutes, you will have enough time to switch everything while the DB on the first server is down
[15:27] alimj: yeah, two minutes downtime is no problem
[15:28] BrEphraim. I still recommend you wait 1 day until your old DNS records in all the DNS caches around the world have expired.
[15:28] Then the new short-lived records with just two minutes will be effective
[15:30] BrEphraim. and the next day increase the TTL to 14400 again. The very old value of 86400 is not recommended anymore
[15:30] alimj: fortunately, I already reduced the TTL on saturday, so I should be good to go
[15:30] Then good luck
[15:31] BrEphraim. You can still query some famous nameservers to check the current status of the DNS records on them
[15:32] BrEphraim. But if you reduced it yesterday, you are good to go
[15:32] alimj: thanks very much, very grateful for all the help
[15:32] yolanda: one more https://code.launchpad.net/~zulcss/python-novaclient/final/+merge/148214
[15:32] Anytime
[15:39] zul, done
[15:39] thanks
=== wedgwood is now known as wedgwood_away
=== xnox is now known as foxtrot
=== foxtrot is now known as xnox
=== wedgwood_away is now known as wedgwood
[16:58] stgraber: on bug 1121917, when you tried to reproduce, did you create bridges extbr0 and intbr0? how did extbr0 relate to bond0?
[16:58] Launchpad bug 1121917 in libvirt "guest removes interface from host bonding interface when "infenslave-2.6" is installed in the guest" [High,New] https://launchpad.net/bugs/1121917
[16:59] just bridge_ports bond0 in extbr0,
[16:59] while intbr0 had no ports by default?
[16:59] hallyn: nope, I went the lazy way, set up eth0 and eth1 in a bond, then created a container with two network interfaces attached to lxcbr0 and then installed ifenslave-2.6 in the container
[16:59] hallyn: the assumption was that eth0 and eth1 in the container having the same name as those on the host was somehow triggering the removal from the bond due to uevents
[17:00] anyone here that knows how I can place a bridge on top of wlan0, with wlan0 attached to a network with wpa2? wlan0 works well, but with the bridge, it fails badly
[17:00] stgraber: ok, thx
[17:00] RoyK: last i knew, wlan0 could not be bridged
[17:00] oh
[17:00] why not?
[17:02] not sure
[17:03] RoyK, hallyn, http://serverfault.com/questions/152363/bridging-wlan0-to-eth0
[17:04] yeah http://kerneltrap.org/mailarchive/linux-ath5k-devel/2010/3/21/6871733 (linked from there) seems to give the best explanation
[17:04] tl;dr - "Bridging doesn't work on the station side anyway because the 802.11
[17:04] header has three addresses (except when WDS is used) omitting the
[17:04] address that would be needed for a station to send or receive a packet
[17:04] on behalf of another system."
[17:04] qhartman: not that sort of bridging - I want to create br0 and set that as the primary interface for virtualisation use, just like I do with eth0
[17:04] RoyK, even so, I think it's the same root problem
[17:06] RoyK: i'd recommend looking at how libvirt and lxc set up virbr0 and lxcbr0
[17:06] but if you want the better perf of a real bridge... can't
=== Note is now known as Guest-1119
[17:07] heh
[17:07] guess I'll get a longer tp cable, then
[17:07] Hello, does anyone know what 1001 is?
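A short sketch of the check alimj mentions above (querying well-known nameservers to see what is still cached); the record name and the registrar's nameserver are placeholders:

    # what a public resolver currently has cached, with the remaining TTL in the second column
    dig @8.8.8.8 shop.example.com A +noall +answer
    # what the authoritative server answers right now
    dig @ns1.registrar-example.com shop.example.com A +noall +answer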
[17:08] that's a number, 1000 + 1
[17:08] RoyK, could get a physical wireless bridge device and hook that to the eth on the server
[17:08] Guest:x:1000:
[17:08] ftpusers:x:1001:Guest
[17:08] qhartman: no big deal
[17:08] what does that mean ?
[17:08] that's the UID
[17:08] numeric user id
[17:08] Ah
[17:09] For some reason the user set up on vsftpd has access to root o_o
[17:09] well, most users can access the root filesystem
[17:09] unless they're chrooted
[17:09] I chromed the acc
[17:09] chrooted*
[17:09] but it still accesses root :s
[17:10] then you didn't chroot it
[17:10] what ftp server?
[17:10] a private one.
[17:10] well, vsftpd, ncftpd, proftpd, ... ?
[17:10] good old ftp, or sftp?
[17:11] sftp
[17:12] then read up on rssh
[17:12] vsftpd
[17:12] used as a shell to chroot users
[17:12] vsftpd can chroot - it's in the config
[17:13] Guest-1119: vsftpd can chroot users, but it won't stop them from logging in with ssh unless you're careful
[17:13] ssh/sftp/scp/etc
[17:19] RoyK, how do i stop a user from having access to anything other than his own directory and subdirectories?
[17:19] as in, he can't click '..' in filezilla, he can't ssh
[17:21] Guest-1119: rssh
[17:24] this is a pretty awesome bug
[17:24] https://bugs.launchpad.net/ubuntu/+source/cloud-initramfs-tools/+bug/1123220
[17:24] Launchpad bug 1123220 in cloud-initramfs-tools "cloud-image VM causes kernel panic if image is resized" [Low,Triaged]
[17:24] RoyK, Guest-1119: vsftpd has nothing whatsoever to do with ssh
[17:24] it's a 100% legitimate heisenbug.
[17:24] escott, I didn't say it did
[17:25] if you attach a serial device to try to see what's going wrong, then it won't go wrong.
[17:25] smoser: I am thinking that we probably should back out the ttyS0 change
[17:25] Guest-1119, the fundamental conflict in your question is this: the computer does not know what a "user" is. all it knows is that a process runs with a particular uid. if you block a user's access to /usr/bin (and other folders) there are no binaries for them to run, and there is no way for them to exist on the system
[17:26] utlemming, both of those bugs are unrelated to any change
[17:26] we've had ttyS0 in the images since lucid
[17:26] the change was to add 'tty0'
[17:26] which actually changed nothing.
[17:26] ah, sorry, that's what I meant, tty0
[17:26] (changed nothing with respect to these bugs)
[17:26] at least i'm pretty sure.
[17:26] Guest-1119, you can chroot or (the more popular choice in today's world) use virtual machines/containers and drop a limited set of binaries in their lap, but you cannot deny them access outside of a folder without giving them something like a full system in that folder
[17:27] chroot is a jail?
[17:27] escott, ah, thanks, probably easier to use rash?
[17:27] rssh*
[17:28] Guest-1119, if you just want them to be able to upload/download files, things like sftp can be easily chrooted (http://www.minstrel.org.uk/papers/sftp/). similar capabilities exist in ftp daemons
[17:28] Guest-1119, i would not trust rssh unless you are very very careful in the construction of your whitelist
[17:29] escott, so if I only want them to be able to ftp with vsftpd, if i chroot them they won't be able to log in via ssh?
[17:30] Guest-1119, vsftpd has nothing whatsoever to do with ssh. it's like asking "if i chroot them with ftp will they be able to brush their teeth?"
[17:30] Guest-1119, i dont know?! Do they have a toothbrush?
[17:30] .. But with the same username and password, can they login via ssh?
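The sftp-chroot approach behind escott's minstrel.org.uk link, sketched for Guest-1119's case (the group name and directory are placeholders; the chroot target must be root-owned and not writable by the user):

    # /etc/ssh/sshd_config  -- then: sudo service ssh restart
    Subsystem sftp internal-sftp

    Match Group sftponly
        ChrootDirectory /srv/ftp/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

Users in the sftponly group then get chrooted sftp and no interactive shell, which covers the "can't click '..' in filezilla, can't ssh" requirement without rssh.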
[17:31] Guest-1119, if you don't block them by making an independent modification in your ssh config, they could login with a username/password via ssh
[17:31] Guest-1119, some of these ftp daemons support shadow accounts or fake accounts, where it is not put in /etc/passwd but instead in /etc/daemon.conf
[17:32] escott: he was talking filezilla and ssh, meaning sftp
[17:32] Guest-1119, a fake account like that is not a real account, so other applications like ssh/login don't know anything about it
[17:33] escott: rssh works well with chrooting users for scp/sftp/rsync
[17:34] and blocking ssh logins on the way
[17:34] RoyK, i would not trust rssh. and i fail to see how rssh is related to vsftpd
[17:34] utlemming, i can confirm that boot fails without 'tty1' on the cmdline just as if it is there.
[17:35] the issue is that /dev/console gets assigned to a non-existent device, and any writes to stdout ('echo HIMOM') fail
[17:36] escott: it's not related to vsftpd, but he said he's connecting with filezilla, which supports ssh. I would trust rssh, though, I'm using it in a 20k user environment
=== yofel_ is now known as yofel
[17:39] escott: better to use rssh with ssh tunneling than plaintext auth with ncftpd
[17:39] ftp over ssh is mature, and secure
[17:39] so is rssh
[17:40] RoyK, i wouldn't disagree about sftp. i guess the question is what is Guest-1119's question
[17:40] well, ask him (or her). I think I understood what (s)he said quite well
[17:51] Hello. I have a new server with a Supermicro motherboard, network cards Intel 82574L and 82579M. The problem is that I cannot see ethX. It does not exist. shows up fine in lspci. lsmod shows no usage of the e1000e driver. dmesg has info about ethX but nothing that seems alarming. Using 12.10 server
[17:51] Someone that can help me troubleshoot this?
[17:52] ifconfig ethX up says it does not exist. ethtool -i ethX can not see any devices. (ethX = eth1, eth0, eth2 and so on)
[17:55] Blinkiz: some boards have newer PCI IDs for commonly known cards. it might be the issue
[17:56] bitfury: http://blog.krisk.org/2013/02/packets-of-death.html ?
[17:56] RoyK, Hmm, interesting.
[17:57] RoyK, Where can I see the PCI IDs for my network cards? lspci?
[17:57] lspci -v iirc
[17:58] lspci -vn
[17:58] Blinkiz: http://communities.intel.com/community/wired/blog/2013/02/07/intel-82574l-gigabit-ethernet-controller-statement
[17:58] Blinkiz: you can override the pci ids allowed for a driver in /sys somewhere
[18:00] Blinkiz: ok, but I think it was related to supermicro - see the comments on that blogpost
[18:00] Yeah, I want to find my PCI IDs and put them into a google search; if this is the problem, others will have it
[18:00] Blinkiz, in newer kernels there is a new naming scheme for interfaces (coming from redhat), /dev/p#p# or some such, that udev might map back to eth something
[18:00] escott, It is? did not know that
[18:00] bitfury: pastebin ifconfig -a
[18:01] Blinkiz, that's my understanding https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html
[18:01] escott, Does this apply for me running 12.10?
[18:01] Blinkiz: quite possibly
[18:01] Blinkiz, i believe so.
=== Sendoush_ is now known as Sendoushi
[18:02] Aaa, cool. I have a p4p1 interface!
[18:02] :)
[18:02] ifconfig -a did the trick, thanks RoyK
[18:03] Did not know about that. Thanks escott :)
[18:03] Need to go, thanks for the help guys!
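A few commands that condense what RoyK and escott walked Blinkiz through, for anyone hitting the same renamed-interface surprise; nothing here is specific to that Supermicro board:

    # list every interface the kernel created, whatever udev named it (p4p1, em1, ethN, ...)
    ifconfig -a          # or: ip -o link show
    # PCI vendor:device IDs of the NICs, to compare against what the e1000e driver claims to support
    lspci -nn | grep -i ethernet
    modinfo e1000e | grep alias | head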
=== ^^rcaskey is now known as rcaskey
=== alamar is now known as beastlygoat
[19:22] is there a good partition size recommendation somewhere that I could use for partitioning a 4TB drive for ubuntu (it's a raid 10).
[19:24] pythonirc1011: use lvm, use manual partitioning -> in there choose partition free space automatically. it will give you a few options to do all in one or split.
[19:25] pythonirc1011: lvm will allow you to later adjust the sizes, manual partitioning will allow you to see the sizes and change them.
[19:25] pythonirc1011: if the box has loads of RAM the default swap size may be way too large for typical use cases.
=== wedgwood is now known as wedgwood_away
[20:13] jamespage: can you (or anyone else here who might be set up for it) try the maas and iscsi tests on the 12.04.2 candidate images?
=== wedgwood_away is now known as wedgwood
[20:23] ivoks: hey, on https://blueprints.launchpad.net/ubuntu/+spec/servercloud-r-qemu, i was going to delete your TODO regarding vhost_net given bug 1029430. were you planning on investigating that regardless?
[20:23] Launchpad bug 1029430 in nova "KVM guests networking issues with no virbr0 and with vhost_net kernel modules loaded" [Undecided,Confirmed] https://launchpad.net/bugs/1029430
=== jhulten_ is now known as jhulten
[21:18] I just got owncloud set up and put the share directory on my samba share as a folder. I can't access it through the share, only with owncloud. is there a way to add a 2nd user and group
[21:19] I want to be able to move files in and out of the owncloud directory with filezilla or a mapped network drive share on windows. is that possible
[21:45] I just deployed a new ubuntu server. authorized_keys for ssh isn't working. This is fairly basic, and I think it should work. Any tricks on ubuntu?
[21:51] foo: do you have encrypted home directories turned on?
=== hatch_ is now known as hatch
[22:05] hey, quick question - I need to install ubuntu server on about 50 hard drives or so. I have a little custom script to do it with debootstrap: http://pastie.org/6157544 and some other magic - but I'm stuck with automating the hard drive partitions. Anyone know of an easy way to create just 2 partitions, 1 swap that is 1GB and 1 root that is the rest of the available space?
[22:21] mikal: hm, not sure - this was a rackspace cloud server install. How can I check?
[22:29] I just got owncloud set up and put the share directory on my samba share as a folder. I can't access it through the share, only with owncloud. is there a way to add a 2nd user and group
[22:29] I want to be able to move files in and out of the owncloud directory with filezilla or a mapped network drive share on windows. is that possible
=== lickalott is now known as lickalottalottap
=== lickalottalottap is now known as lickalottapuss
=== lickalottapuss is now known as lickalott
[22:58] anyone? - I need to install ubuntu server on about 50 hard drives or so. I have a little custom script to do it with debootstrap: http://pastie.org/6157544 and some other magic - but I'm stuck with automating the hard drive partitions. Anyone know of an easy way to create just 2 partitions, 1 swap that is 1GB and 1 root that is the rest of the available space?
=== james_ is now known as Guest49482
[22:59] maybe the code that does the "guided partitioning" in the ubuntu installer? - any ideas where I can find this code?
[23:12] hackeron, what about FAI? http://manpages.ubuntu.com/manpages/precise/man8/fai.8.html
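mikal's encrypted-home question is one classic cause of foo's symptom; the other usual suspect is permissions, which sshd checks strictly. A hedged checklist rather than a diagnosis of that particular server:

    chmod go-w ~                        # home directory must not be group/world-writable
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
    # then watch the server side while reconnecting:
    sudo tail -f /var/log/auth.log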
[23:12] zul: ping
[23:12] adam_g: yo
[23:13] zul: did those precise rebuilds w/o python3 dependencies ever get sorted out?
[23:13] adam_g: yeah i think i uploaded them
[23:13] p201: interesting, thanks for that
[23:14] hackeron: I used to do this in a preseed file with a gparted "recipe" but it's been a while
[23:14] zul: ok. im gonna do some more rebuilds to get http://status.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/grizzly_versions.html more green
[23:14] genii-around: gparted recipe?
[23:14] ack
[23:15] hackeron: why not do one hard drive and just dd the hard drive to copy it
[23:15] hackeron: They explain better than I can at http://askubuntu.com/questions/129670/how-do-i-modify-this-preseed-snippet-to-partition-my-hard-drive
[23:16] zul: that takes hours and there are different size hard drives
[23:17] genii-around: they used a fixed size there, I want to use all available space
[23:18] p201: fai looks way, way overkill - I just want to automate partitioning, debootstrap does everything else I need
[23:19] stgraber: so I'm thinking the lxcpath option will be '-P|--lxcpath'. sound reasonable?
[23:19] any ideas where I can find the code used in the ubuntu guided partitioning during installation? - that does everything I need
[23:19] hackeron: A more detailed explanation of the "recipes" is at https://help.ubuntu.com/10.04/installation-guide/i386/preseed-contents.html#preseed-partman
[23:20] hallyn: yep, sounds good
[23:32] zul: requests was the package that needed modification for the precise build, right?
[23:42] adam_g: no, it was one of the dependencies, i forgot which
[23:43] adam_g: i think it was oauthlib
[23:44] zul: also, how do the packages from openstack-ubuntu-testing-bot end up getting pushed to https://launchpad.net/~openstack-ubuntu-testing/+archive/grizzly-trunk-testing/+packages ?
[23:44] zul: https://launchpad.net/ubuntu/+source/requests looks like this is the one that has all the python3 additions
[23:45] adam_g: then that's the one
[23:45] adam_g: when the builds pass it gets dput locally and in the ppa
[23:46] zul: where did you upload your modified requests package?
[23:46] zul: that is true for the openstack packages that get triggered via git, but what about the dependency rebuilds ?
[23:47] adam_g: it's done manually afair
[23:47] erm
[23:48] adam_g: requests is at https://code.launchpad.net/~zulcss/ubuntu/precise/requests/requests-ca
[23:48] hey i have multiple ip addresses. when i try to route traffic from a certain address, it starts flooding arp requests, how can i stop that?
[23:48] zul: oh, ok
[23:49] adam_g: afair there is a jenkins that rebuilds sources and puts them in the local ppa
[23:52] adam_g: erm... local archive rather than ppa
[23:52] zul: puts em in the local archive, yeah. guess it doesn't push to ppa
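The preseed/partman recipes genii-around links are the debian-installer way of doing this; since hackeron is already scripting around debootstrap, a plain parted sketch may slot in more easily. The device name is a placeholder, and it assumes BIOS/MBR disks of 2 TiB or less (larger disks would need GPT plus a BIOS boot partition for grub):

    DISK=/dev/sdX    # placeholder - set per machine

    # 1 GB swap at the front, root filling the rest of the disk
    parted -s "$DISK" mklabel msdos
    parted -s "$DISK" mkpart primary linux-swap 1MiB 1025MiB
    parted -s "$DISK" mkpart primary ext4 1025MiB 100%

    mkswap "${DISK}1"
    mkfs.ext4 "${DISK}2"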