[00:05] upstart, by design, fires everything up simultaneously as soon as it can, unless configured dependencies preclude it [00:05] so if you depend on something having already been done, you need to configure that in your script [00:07] samuelkadolph: startup is emitted at the very very very beginning of pid 1's existence [00:07] samuelkadolph: you need to 'start on filesystem' .. not startup [00:11] That fixed it, thanks qman__ and SpamapS. I am curious about why trying to redirect the io breaks it but not launching the server itself. [00:17] hello everyone, my box has many domains and subdomains on it, I have jenkins running on port 8081. I want jenkins to be accessible only using jenkins.domain.com so that I can set up authentication on this subdomain. I am using apache proxy module for this. now I want to disable accessing it using any other domain by using 8081 port. how can I do this? [00:19] If you are using ProxyPass then just block 8081 with your firewall or bind jenkins to 127.0.0.1 so only apache can access it [00:20] how can I do any of these? I tried many things but still failing [00:21] Stop unblocking the port with your firewall and if you don't have a firewall, get one. [00:24] I prefer the second solution [00:24] when I bound jenkins to 127.0.0.1, it became not accessible even from apache [00:25] Then you are using the wrong url [00:25] If it's http://localhost:8081/ change it to http://127.0.0.1:8081/ [00:26] this isn't what I want to do. I want it to be accessible only by using subdomain.domain1.com and not by anything else like domain2.com:8081 [00:28] Then you have to block external access to 8081 and ProxyPass to localhost:8081 from that domain only [00:31] in my virtual host file of subdomain.domain1.com, I have ProxyPass / http://127.0.0.1:8081, and jenkins is bound to 127.0.0.1.
now, I see a blank page when requesting subdomain.domain1.com [00:31] the good news is domain2.com:8081 is giving nothing [00:31] Having a trailing slash with ProxyPass is very important [00:33] I have it on file, sorry for forgetting it here [00:36] What does the error_log say? === medberry is now known as med_out === kentb is now known as kentb-out === Refael is now known as FernandoTertiary === RudyValencia- is now known as RudyValencia [03:59] Is it possible to have mlocate scan the local filesystems daily, but only scan remote (i.e. NFS) filesystems on a Sunday, when the bandwidth spike won't be noticed? === cerber0s is now known as cerberos [05:57] i was unable to configure my dchp during the install as i was offline. how can i do that now that i am connected? [06:14] hi === cerber0s is now known as cerberos === smb` is now known as smb [08:20] hi guys . i have a dual nic system .. one nic is connected to a network with multiple vlans, and other nic is the internet connection .. when the request comes without vlan then it gets to intenret fine, but requests from vlan are not working on most sites.. [08:21] i believe most sites are not responding to requests with vlan id .. so we need to strip it for requests going out to internet .. how do i do that ? [08:25] where do i set vlan=no for internet connection ? [08:55] c [09:16] New bug: #816313 in openssh (main) ""ssh -b x.x.x.x" or "ssh -o BindAddress=x.x.x.x" does not work." [Undecided,New] https://launchpad.net/bugs/816313 [09:30] * twb bets triangle routing [09:30] Oh, or he's just not untagging correctly.. [10:49] hi, just wondering if anybody has used iRedMail? have any opinions on it ? [10:53] as i am struggling to setup mailserver a little, i mean its running fine but i want virtual users etc [11:12] anyone around that can help with taking a server snapshot ? [11:39] tixo5: what do you mean by that? 
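The Jenkins-behind-Apache setup discussed above ([00:17]–[00:33]) can be sketched roughly like this; the vhost name and port are taken from the conversation, everything else is a plausible assumption, and the trailing slashes matter, as noted in the channel:

```apache
# Hypothetical vhost. Requires mod_proxy and mod_proxy_http
# (sudo a2enmod proxy proxy_http) and Jenkins listening only on
# 127.0.0.1:8081, so nothing outside the box can reach it directly.
<VirtualHost *:80>
    ServerName jenkins.domain.com
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
```

With Jenkins bound to loopback, requests to any-other-domain.com:8081 get nothing, which is exactly the behavior reported at [00:31].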
[11:40] like some shared hosting providers, allow a server snapshot, like a full image of the server [11:40] im setting up a VPS for first time, and would like to do something similar [11:40] possible to take a full image via shell ? [11:46] I am going to install a minimal x server with fluxbox on my server - is there any way i can prevent apt-get from pulling all the xorg-drivers except the one i really need (intel)? Is that even benefitial in order to keep thinks as light as possible or should i just do the old sudo apt-get install xserver-xorg xserver-xorg-core fluxbox [11:50] zul, Whenever you get online, could you ping me? [11:51] smb: ping i saw the depwait for ipxe i just added it to the seeds [11:52] zul, Ah ok, then that get sorted already. The other thing I wanted to chat about is your thoughts on the grub config idea [11:52] smb: sure [11:53] smb: what was the idea again [11:53] whats the best packaged backup solution for ubuntu ? [11:53] Well basically to have two distinct sets of command line arguments for dom0 kernels and "normal" kernels [11:54] an not the same being used for both as currently [11:54] ok.. [11:54] At least I (not sure that is common though) have the problem of using two different console definitions for both [11:55] When I start a normal kernels console=ttyS1 and for xen dom0 its console=hvc0 [11:55] oh this is the serial console stuff? [11:55] right [11:56] right [11:56] yeah im all for it, if you can give me a debdiff for it :) [11:56] zul, That should be possible. :-P [11:57] Have not prepared one yet. But basically running it in locally modified environment [11:58] smb: sweet....go for it then [12:16] best backup solution for ubuntu server? [12:17] rsnapshot? [12:18] http://duplicity.nongnu.org/ [12:23] ok thanks ill take a look [12:24] beta software? 
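For the backup-over-SSH use case tixo5 describes, duplicity's command line looks roughly like this (host and paths are placeholders, not from the channel):

```shell
# Incremental, GPG-encrypted backup of /etc and /home to another
# machine over SFTP. The --exclude '**' drops everything not listed
# by an --include, so only the named trees are backed up.
duplicity --include /etc --include /home --exclude '**' \
    / sftp://backupuser@backuphost//srv/backups/myserver

# Restore a single file from the most recent backup set:
duplicity restore --file-to-restore etc/fstab \
    sftp://backupuser@backuphost//srv/backups/myserver /tmp/fstab.restored
```

duply, mentioned below, just wraps invocations like these in named profiles so you don't retype the URL and file selection every time.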
[12:25] ideally i would like to take the backup via SSH to my local machine [12:25] they have a stable release [12:26] and it supports ssh, ftp, DAV, etc [12:26] ok [12:26] tixo5: you want a snapshot of the vps? [12:26] you can also use it with duply http://duply.net/ which is a console frontend [12:26] yes basically [12:27] you could just use lvm snapshots or depending on the vps technology you use vzdump/vzsnapshot for example if it is openvz [12:27] before i wipe it and start again, incase i mess up etc [12:27] yes its openvz [12:27] doesnt my provider need to support that ? [12:27] my router says Primary DNS Server 119.159.255.37 Secondary DNS Server 203.99.163.240 , how can i know which public dns the ip refers, whats the name of that dns 2. how can i make my own dns and get the list of all the websites of the world? [12:27] tixo5: .. I thought you were the provider [12:27] jane-: #dns [12:28] jane-: A list of all the websites in the world? [12:28] no alamar i have a VPS container, i have setup everything else myself [12:28] but being the first time i am worried i have done a few things a little messy, so want to start from scratch [12:29] i am unable to take a snapshot or use the snapshot without my provider supporting that right ? [12:32] jpds yes. webs and ips, thats what dns do. [12:32] jane-: Why would you want that? [12:32] JanC: [12:32] ermm [12:32] i want to make my own dns [12:32] jane, you need BIND DNS running [12:32] with a master zone [12:32] jane-: whois 119.159.255.37 for ex. ? [12:33] then add A records and such [12:33] jane-: But you want your own DNS records for every website in the world? [12:34] jpds yes. [12:34] what are you on about [12:34] lol [12:36] that's "you're crazy" territory. usually for sites you don't handle, your dns server would query upstream & cache. 
trying to take a snapshot of every site in existence would be exceptionally difficult (even if you're google) [12:37] WinstonSmith: that duplicity is meant to backup local systems to another server? [12:37] how public dns work then. they have a list. dont they? [12:38] not sure if im right, but all zones are hosted by many servers all over the world [12:38] so com will be hosted [12:38] jane-: No, they query other DNS servers. [12:38] tixo5: not the whole system ( well not partitions) only files. and it can backup locally or remotely (ssh, ftp, dav, etc) [12:38] then the (.) [12:39] WinstonSmith: i sort of want to take a snapshot, is this impossible without support from the VPS provider? [12:39] tixo5: can't help you there, never used VPS. [12:39] jane-: They don't have their own copies for every DNS record in existence. [12:40] jpds then who does [12:40] jane-: No one. [12:40] Daviey: ack a sync for me? (https://bugs.launchpad.net/ubuntu/+source/tomcat7/+bug/816393) [12:40] Launchpad bug 816393 in tomcat7 "Sync tomcat7 7.0.19-1 (universe) from Debian unstable (main)" [Wishlist,New] [12:41] if my router states pri dns and sec dns. that means it goes to that ip and fetches ips of websites according to their names.... doesnt it ? [12:41] jane-: Yes. [12:41] Hi [12:41] jane-: And those DNS servers, will forward requests they don't know the answers to, to other DNS servers. [12:42] ya [12:42] each zone jane- is hosted by another server [12:42] (www)(.)(domain)(.)(com) [12:42] not always but u get the idea [12:42] jpds ok. and those others dns servers can be any in world.. ? isnt there a main one dns server that has all ? [12:43] jane-: No, they send DNS requests down the chain. [12:43] jpds how many are there any way. any guess? [12:43] jane-: . nameservers, go to .com nameservers, which go down to ubuntu.com nameservers, etc.
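The delegation chain jpds describes at [12:43] can be watched directly with dig (assuming the dnsutils package is installed):

```shell
# Resolve www.ubuntu.com starting from the root servers, printing each
# referral along the way: . -> com. -> ubuntu.com. No single server
# holds every record; each zone's nameservers hand you to the next.
dig +trace www.ubuntu.com

# Normal resolution instead asks the recursive resolver configured on
# the router or in /etc/resolv.conf, which walks that chain for you
# and caches the answers:
dig www.ubuntu.com +short
```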
[12:43] Hi, I have a server which is shared between me and others, (my server), I allow them http://mydomain/theirsite, but how can I stop the users from looking at eachothers files, uploading a php directory listings allows access to others files /site1/ /site2/ etc. [12:44] jane-: What exactly are you trying to accomplish? [12:44] GreenDance: are you using virtualhosts? [12:44] tixo5 yes [12:45] each user is a unix system user? [12:45] or all the same user [12:45] as thats probably your issue [12:45] jpds just studing, and may be ill make my own dns [12:45] small one [12:45] tixo5, same users [12:46] are they all legitimate dns servers, and what if i want to make my own, bind dns resolver ? ill need a list of ip names and corresponding names , how can i get it ? [12:46] tixo5, same user* [12:46] jpds ^ [12:46] apache:apache [12:46] i think [12:47] tixo5: if I create a unique linux user for each person, would that work? [12:47] well if you want different permissions i would use different users per/site [12:47] yes, there is probaably other ways [12:47] you could create virtual users using a mysql backend [12:47] really? [12:47] :o [12:47] :D [12:48] well my mailserver's users are stored in a database [12:48] so i dont see why not [12:48] jane-: Install bind9 on a machine somewhere? [12:49] jpds that was my original reply [12:49] jane-: https://help.ubuntu.com/10.04/serverguide/C/dns.html [12:49] jane-: #dns will me more help [12:50] hm [12:51] jane-, i am running BIND on my server [12:51] i host my own DNS records [12:51] somebody else hosts the (.) and (com) [12:52] you may need your domain registrar to add a 'glue record' to the (.) [12:52] Hey guys! Maybe somebody in here can help me with this: When logging in to my ubuntu server, the appearing statistics summary page shows the /home directory instead of the root directory... unfortunately I have no clue how to change that... does anyone know where I can do that? 
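One concrete way to get the separation GreenDance is after, without reworking the unix accounts, is to confine each vhost's PHP to its own tree; a sketch, with placeholder names and paths:

```apache
# With mod_php, open_basedir stops scripts in one vhost from reading
# another site's files, even though Apache runs everything as one user.
<VirtualHost *:80>
    ServerName site1.mydomain.com
    DocumentRoot /var/www/site1
    php_admin_value open_basedir /var/www/site1:/tmp
</VirtualHost>
```

The per-user route tixo5 suggests (one unix account per site) needs something like apache2-mpm-itk's AssignUserID, or suexec/FastCGI, so that each vhost actually executes as its owning user rather than the shared apache user.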
[12:53] tixo5 is it possible to make a new domain , e.g not .com but .moc ? [12:54] no [12:54] as far as i am aware [12:54] there are 'bodies' that govern things like that [12:54] the internet would become a crazy place is that was possible [12:55] ICANN has allowed generic TLDs to be registered, but it costs a prohibitive amount of money to do so. [12:55] money is the solution to most things, in this context ill stick with it not being possible [12:55] tixo5 if i make a list of .moc and supose some people make my server as their pri dns. then they can see a different google.moc ? [12:56] again jane, for 5th time your better off asking in #dns [12:56] k [12:56] nobody knows where to change the information on the login summary page? [13:06] lynxman, around ? [13:15] guys, how to add a system user on command line? [13:15] Kurisutian, you want to change th motd that displays on login? [13:17] adac: man adduser [13:17] adac: Correction: man useradd [13:18] both should work [13:36] New bug: #816414 in nut (main) "[MIR] nut (nut-doc, nut-client, nut-server)" [Undecided,New] https://launchpad.net/bugs/816414 [13:43] smoser: here [13:43] rsync'ing / root is a bad idea? [13:45] lynxman, i put one question in the merge proposal, but then i had some others. [13:45] smoser: shoot [13:45] tixo5: it normally is [13:46] tixo5, rsyncing it to another system, or carefully to another drive will work, though capturing the state of a live filesystem is less than ideal. [13:46] lynxman, i had intended that "include_once" would be really just "download_once" [13:46] i was trying to backup to my local machine over ssh but obviously cant use root as ive disabled login, so need to setup the daemon [13:46] but you implemented as "include_once". [13:46] is there any other better backup solutions to images the partition of my VPS [13:46] guess that impossible [13:47] smoser: it is download once as far as I see it [13:47] smoser: yeah, wasn't that what we agreed the naming convention would be? 
:) [13:47] tixo5, rsync is probably reasonable. you *can* still use rsync as root. [13:47] smoser: wait what.. you are advocating using rsync as root? [13:48] can from the system [13:48] but not remotely [13:48] in order to read files that are root-protected that is generally required. [13:48] i was looking more for a VPS snapshot [13:48] this cant be done from within the system right ? [13:49] tixo5, you can do it, you just have to have rsync client tell the server to use a different "rsync client" [13:49] see man page [13:49] --rsync-path=/home/smoser/my-rsync [13:49] rsync is totally different to snapshot though [13:49] where my-rsync has something like: exec sudo rsync "$@" [13:49] i want an image of my VPS ideally [13:49] smoser: i think we need a rsync-rootwrap. [13:49] we do indeed. [13:50] we need more setuid executables i think [13:50] lynxman, hold on [13:50] smoser: holding on :) [13:50] smoser: if i rsync'ed /, installed fresh OS and restored [13:50] everything wouldnt work right ? [13:51] tixo5, well, probably would. or very close. [13:51] but i would be surprised if there werent some issues. [13:52] so, i am looking for a solution like VPS snapshot [13:52] or at least would not be surprised if there were some issues [13:52] is that impossible without admining the dedicated server my VPS is on ? [13:52] tixo5, you'd run into the same sets of issues (at least some of them) with block level [13:52] some would be different [13:53] you need to sync the filesystem to the block device (fs_freeze) before you snapshot [13:53] most VPS providers alllow snapshot images to be created, and restored [13:53] but then, you still get a "live" filesystem. [13:53] hmm [13:53] when you start form that live filesystem, at very least that volume will be dirty and need fsck (maybe fs_freeze woudl handle that... 
i don't know) [13:53] its the same as if you lost power [13:53] which *normally* is fine [13:54] so your basically saying theres not much different between using rsync, and a VPS providers snapshot of the system [13:55] wish my provider allowed for offsite images to be taken [13:55] dont see why that would be so difficult [13:55] lynxman, if os.path.isfile(includeonce_filename): continue [13:55] thanks for help anyway [13:56] block level snapshots are more complete than filesystem level [13:56] more complete == safer [13:56] and its impossible for me to do that from my VPS? [13:56] for instance, some things that rsync would not pick up are your filesystem UUID or LABEL [13:56] which may exist in /boot/grub/grub.cfg [13:56] if not restored, your system might not reboot. [13:57] leaving it pretty painful to backup/restore [13:58] smoser: that's what I do, I just name it a bit differently [13:58] block level is going to be safer. filesystem level is going to be smaller. [13:58] neither are perfect. [13:58] perfect is shutdown, snapshot, start [13:58] i can do block level? [13:58] (imo) [13:58] well, you can... [13:59] shutdown, snapshot is what i want to do [13:59] dd if=/dev/sda | ssh system 'dd of=disk.img' [13:59] but i have not got those features in my client panel so i cant do it right? [13:59] i'm really not very familiar with vps's, but it sounds reasonable that they do not have block level snapshots exposed to you. [14:00] most do, but mine is cheap :) [14:02] lynxman, ^ [14:07] smoser: replied to you in the middle of the rsync thread :D [14:08] lynxman, right. you 'continue' [14:08] so you never process that include again [14:09] smoser: hence include-once [14:09] right.
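Putting smoser's --rsync-path trick from [13:49] together: pulling root-only files from a box where direct root logins over ssh are disabled. The wrapper path matches the example in the channel; the sudoers rule is an assumption about how you'd grant the access:

```shell
# On the server: a tiny wrapper that re-runs rsync under sudo.
printf '#!/bin/sh\nexec sudo rsync "$@"\n' > /home/smoser/my-rsync
chmod +x /home/smoser/my-rsync
# Needs a sudoers entry along the lines of:
#   smoser ALL=(root) NOPASSWD: /usr/bin/rsync

# On the backup machine: tell the rsync client to invoke the wrapper
# on the remote side instead of the plain rsync binary.
rsync -a --rsync-path=/home/smoser/my-rsync \
    smoser@server:/etc/ /backups/server/etc/
```

The remote rsync then runs as root and can read root-protected files, while the ssh login itself stays an ordinary user.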
[14:09] i would have preferred "download-once" [14:09] to suffice for the one time url [14:09] i dont see a real-need for process-once [14:09] as most things are 'per-instance' anyway [14:09] (ie, your mcollective stuff is per-instance) [14:09] smoser: for certs for example makes sense, that's what I had in mind [14:10] why does it make sense? [14:10] you have other controls over whether or not to act on the data more than once. [14:10] i think the data should be present to cloud-init so it *could* act on it [14:10] if it was a module or soemthing that should be acted on every time [14:10] smoser: hm okay let me paint you an scenario [14:11] smoser: you get temporary certs, which mcollective uses to connect to a provisioning collective, then it gets fed new certs for the "global" collective once its authenticated and properly provisioned [14:11] smoser: that is actually the scenario I had in mind, in this scenario I just need these certificates once, and that data should not be acted upon ever again, if so it'll reset the machine status [14:15] lynxman, thats fine [14:15] that situation works fine [14:15] you can process that hunk multiple times [14:15] because mcollective module only runs per_instance [14:15] which means once (the first boot) [14:15] no side effects will occur the second time [14:17] zul: ping...I got 10 pandaboards...how many you need for OpenStack on ARM testing :P [14:17] robbiew: one would be good [14:19] * RoAkSoAx would like one if there's one to spare :) [14:19] zul [14:19] cool [14:19] RoAkSoAx: yeah...I think I can swing that [14:19] lynxman, i have this suggested change: http://paste.ubuntu.com/652442/ [14:19] robbiew: cool thanks ;) [14:20] zul: so is it possible to have OpenStack installed across multiple pandaboards...with only LXC [14:20] i.e. 
multiple compute nodes [14:20] robbiew: it should be able...maybe two then [14:20] zul: ;) [14:21] smoser: well if you wish that feature instead of mine, sounds good :) [14:21] robbiew: can I have one? I promise to feed it nicely [14:21] i think its generally a superset of function [14:21] robbiew: :D [14:21] smoser: fair enough :) [14:21] the other question i had, lynxman [14:22] the private key should be 0600 right? [14:22] smoser: hm I think so, wasn't sure so I didn't implement it yet [14:22] smoser: wanted to test it first once merged and then make the change if needed [14:22] smoser: but it makes sense [14:23] lynxman, you know, you are allowed to test things *before* i merge them [14:24] its even generally smiled apon [14:24] smoser: lol ;) I normally do [14:24] smoser: this one is highly experimental though [14:25] lynxman, [14:25] well, can yo udo 2 things for me [14:25] i will push a brnach with some changes to your code [14:25] can you build it and test it? [14:26] lynxman, lp:~smoser/cloud-init/include_once_and_mc_cert [14:26] smoser: will give it a shoot [14:27] RoAkSoAx: hey...interested in getting cobbler to work with ARM images? [14:29] robbiew: though that zul already had that working.. [14:29] zul: ^^ [14:29] RoAkSoAx: nope [14:30] robbiew: yeah why not [14:30] RoAkSoAx: the ground work is there already just needs to be followed through [14:30] zul: cool ;) [14:31] * smoser re-reads above, and for the 4.23e8'th time he realizes he may have sounded rude. [14:31] sorry, lynxman [14:31] smoser: no worries, really [14:32] RoAkSoAx: cool, thx...then will send you 2 boards ;) [14:32] robbiew: hehe ok ;) [14:33] zul / RoAkSoAx: they need to work with u-boot PXE loader, and be as ready as possible to work with native PXE booting. [14:33] Daviey: agreed [14:33] (might need to use NCommander for support on that) [14:42] oh, NCommander has been package for Ubuntu? 
[14:42] :-) [14:42] RoAkSoAx: send me an email with your mailing address and phone, and I'll handle the rest [14:42] smoser: interested in a panda board? [14:42] * NCommander apt-get instakk's himself [14:43] heh. BTW, NCommander, thank you for rooting my android, works perfectly :-) [14:43] robbiew: will do [14:44] RoAkSoAx, i'm not un-interested. [14:44] i would plug one in and give it a try. [14:44] smoser: cool [14:45] smoser: can you shoot me an email with your address and phone...I can take care of the rest [14:45] * robbiew notes he should have this...but won't "go there" [15:07] The heartbeat + pacemaker in ubuntu-server 10.04.. is this a long-term cluster plan for ubuntu or are they moving elsewhere? [15:09] utlemming, https://gist.github.com/1100458 [15:10] smoser: nice :) [15:18] kirkland, hey ... i have an external usb drive on a server that i'd like to be a luks encrypted volume, got any experience of monting those during boot ? and i'd like to have it === med_out is now known as medberry [15:24] apw: I don't, but kees does [15:24] apw: kees uses a usb drive just like that [15:37] Kind attention please , im getting shell booters and host booters hitting my server , how can i trace whos doing it and how can i stop them [15:38] and is their some kind of special support for such problems , that im willing to be greatful in paying the sum of hes help knowledge === andreas__ is now known as ahasenack [15:44] lsheeba: what are shell and host booters? [15:44] It consists of some php flooding shells and a gui. [15:44] the gui pings the shells and gives them a command to flood a certain IP [15:44] since the shells can be on servers with high bandwidth connections, it can be a powerful flooding method. [15:45] it hurts my servers badly , all of my local network cant ping the server then [15:45] er...turn it off then? 
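For apw's LUKS-USB-at-boot question at [15:18], the usual route on Ubuntu is a crypttab entry plus a matching fstab line; a sketch with placeholder names:

```text
# /etc/crypttab -- map the LUKS device to /dev/mapper/backup at boot.
# 'none' prompts for the passphrase on the console; point the third
# field at a key file to unlock without a prompt. Using the UUID keeps
# the mapping stable when USB devices enumerate in a different order.
backup  UUID=<uuid-from-blkid>  none  luks

# /etc/fstab -- mount the opened volume; 'nofail' lets boot continue
# when the USB drive happens to be unplugged.
/dev/mapper/backup  /srv/backup  ext4  defaults,nofail  0  2
```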
[15:45] so block their ips and try to get upstream to nullroute them [15:45] their offline [15:45] im on my personal pc running ubuntu 11.04 desktop [15:45] Howdy folks, Ubuntu cloud days (day-2) starting in #ubuntu-classroom on the hour .. see you there [15:45] ive blocked all oversea's ip's and allowed local ip's only and still how do they get access ? [15:45] (and in addition contact the hoster's abuse department of the infected servers) [15:46] im running a small GSP [15:46] with a static ip that i have purchased [15:47] booters [15:47] arent meant not to be tracked [15:47] they use shells hosted on other servers that have been hacked [15:47] * tixo5 shows his blackhat side :( [15:47] cyber crime department didnt help much , well didnt help at all [15:48] i gave you your answer [15:48] any solution? [15:48] tixo5, [15:48] a WAF? [15:48] or ddos module for apache [15:48] Hello, is anyone here familiar with mdadm raid arrays? I just got the mdadm alert email saying a drive was removed from the array. It is now marked as faulty. I was curious if there is any way to see a log of when/where/how this happened? [15:49] do u know of any specialized guru who will accept a payment to perform our security liabilities for this GSP , personal-aid not an organizational request because then we couldve gone for expensive firewall hardwares [15:50] Further, I am curious if anyone can help with replacing the failed drive with a new drive. The new drive will be a new model and likely new make. Which specs are necessary to be consistent among hard drives across mdadm raid arrays? [15:50] i do penetration testing, securing servers it not my area, although i know a decent amount [15:50] i have work shortly but i will add you incase i can help [15:50] Thanks a bunch [15:50] thisismygame: one thing at a time. Review the system logs at the time of the alert [15:51] lsheeba: Rate-limit network traffic on a per-IP basis? 
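jpds's per-IP rate-limiting suggestion can be sketched with the iptables hashlimit match; the numbers are illustrative, not tuned for any real load:

```shell
# Accept new connections to port 80 only while a source IP stays under
# 20 new connections/second (burst 50); drop the excess. Limits are
# tracked per source address, so one flooder can't starve everyone.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m hashlimit --hashlimit-name http --hashlimit-mode srcip \
    --hashlimit 20/second --hashlimit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j DROP
```

Running this on the server itself, rather than a small router, spreads the filtering cost onto hardware that can absorb it, which is the CPU problem lsheeba ran into below.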
[15:51] i theory the problem of other gsp's here attacking me because of my price range in cost per slot [15:52] jpds, ive tried that , and it puts pressure in hogging the router's cpu " 100% " [15:53] StevenR: I just looked at dmesg and /var/log/messages. dmesg has nothing related, and /var/log/messages, is, from what I can tell, empty. :o [15:53] thisismygame: /var/log/syslog [16:00] tixo5: what do you use? openvas? [16:00] for pentests? [16:00] yep [16:00] many many many tools [16:01] i specialise in web app security [16:01] ok [16:01] trying to go for a niche :) [16:01] any tool in particular? [16:01] not being a large company i cant afford the larger tools like webinspect [16:01] * RoyK has a scan running with openvas against his office computers [16:02] openvas and such are only good for finding outdated packages rly, [16:02] thats all it really does [16:02] compares versions [16:02] I know [16:02] that's why I wondered about other tools, specifically for probing webapps [16:03] webapps, opensource like w3af [16:03] not a bad framework, quite buggy atm though [16:03] k [16:03] Daviey: ping [16:03] there is a distro called 'Samurai' that has some really nice tools, just waiting on their 11.04 ubuntu update, as jaunty is pain for me [16:03] Daviey: /win 22 [16:03] argh [16:04] how can i merge 2 folders in ubuntu? [16:05] tiphares: rsync? [16:05] or what do you mean 'merge'? [16:05] RoAkSoAx: o/ [16:05] unison perhaps [16:05] Daviey: i'll ping you again after the meeting ;) [16:06] i have 2 folders, 1 contains a,b,d, folder 2 contains b,c,d, and i want to marge them into ONE folder, containing a,b,c,d [16:06] RoAkSoAx: cool [16:07] tiphares: rsync -avP folder1/ folder2/ newfolder [16:07] iirc [16:07] tiphares: that won't help you with collisions, though, the data from folder2 will overwrite whatever came from folder1 (or was it the other way around?) [16:08] doesn't matter which ones overwrite the other [16:08] StevenR: oh, they moved it. 
yea this has some relevant info. Buffer I/O error, dev sda, sector 0 [16:08] as they both contain some of the identical files [16:08] why cant you just copy [16:08] move* sorry [16:08] tiphares: then just rsync, or as tixo5 said, copy or move - but rsync may be easier [16:08] depending on size rsync will be better [16:09] tiphares: if you have f1 and f2 and you want all in f2, cd f2; rsync -avP ../f1/. . [16:09] hm [16:09] oke, never used rsync, i'll check it out thanks [16:10] tiphares: it sounds like your over compicating somwething :P [16:10] somethign* [16:10] o god [16:10] yeah i don't know [16:10] i'm a noob [16:10] tixo5: rsync isn't really complicated, though :) [16:10] coming from windows; when i have 2 folders named pictures, with some of the same files in them, i can just drag and drop either folder and "merge" it with the other one [16:11] that's what i want to do [16:11] though, using a shell, of course [16:11] tiphares: mv f1/somedir f2 [16:11] what happens when some files conflict then? [16:12] just use rsync [16:12] it's the easy way [16:12] i'm confused :( [16:12] or cp -R f1/* f2 [16:12] tiphares: the unix way: There's More Than One Way To Do It [16:13] copying files wont really merge though [16:15] tiphares: cd f2; rsync -a ../f1/. .; cd .. [16:15] StevenR: I think we can call this drive deceased. http://pastebin.com/4JW5qT4j [16:15] tiphares: then remove f1 [16:16] or something [16:16] thing is i have limited space, and the folders i want to merge are pretty big [16:16] but ye [16:16] trying out some of the stuff now [16:16] :p [16:16] kind of overwhelmed with alternatives [16:17] rsync man page is like a book though [16:17] just use -a [16:17] that'll cover most of what you need [16:18] add -v to make it verbose [16:18] i'd like to know what it does before i use it:p [16:18] -P isn't needed [16:18] -a == --archive => keeps all sorts of attributes, ownership etc [16:19] ah i see [16:19] -v = verbose = ? 
[16:19] yes [16:19] -P is --partial --progress [16:19] dno what verbose means :( [16:19] --partial won't be needed unless working with BIG files locally, but -P is short and adds verbosity :P [16:20] verbose == noisy [16:20] verbose != quiet [16:20] yeah alright now you lost me completelyt [16:20] verbose means it is going to tell you about every action it does. [16:21] ah like logging [16:21] thisismygame: looks kinda that way, yes. [16:21] thisismygame: I think you need to tell mdraid to remove it, and then add another drive. [16:22] rsync man page is 2642 lines [16:22] that's madness [16:22] tiphares: why is it madness? [16:23] Dora:~ roy$ man rsync | wc -l 3562 [16:23] that is - wc -l returned 3562... [16:23] tiphares: there's no need to read it all [16:24] no that's why i'm here [16:24] heh [16:24] again, -avP will be quite sufficient [16:24] yeah [16:24] i get you [16:24] cd /target/dir/whereever/it/is and rsync -avP /source/dir/ . [16:24] just trying to figure out what it's actually doing [16:25] make sure you add the / at the end of source dir - otherwise it'll create the sourcedir in your dir [16:25] you can move that out later, of course... [16:26] so /source/dir/ means /source/dir/* (except /source/dir/* won't move 'hidden' files starting with .) [16:50] Daviey: ok [16:50] RoAkSoAx: Hey! [16:50] Daviey: howdy ;) [16:50] RoAkSoAx: What is the status of redhat-cluster? [16:51] Daviey: as in? [16:51] Daviey: redhat-cluster is soon to be dead [16:51] It seems it might uninstallabale [16:51] Daviey: is there a bug # [16:52] Daviey: cause last time I check it was [16:52] RoAkSoAx: no, i litterally just checked the REPORT [16:52] Daviey: link [16:52] wow, i can't spell today [16:52] http://cdimages.ubuntu.com/ubuntu-server/daily/current/report.html [16:53] Daviey: will take a look at it [16:54] Daviey: just installed it and didn't receive any failures [16:55] RoAkSoAx: same here, best i can think is main/universe mistmatch? 
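The folder merge tiphares was after, end to end, using f1/f2 as in the channel and tiny stand-in files so the effect is visible:

```shell
# Demo of the merge in a scratch directory.
cd "$(mktemp -d)"
mkdir f1 f2
echo one > f1/a; echo f1-version > f1/b      # f1 holds a, b
echo f2-version > f2/b; echo three > f2/c    # f2 holds b, c
# -a keeps permissions/ownership/timestamps, -v lists each file.
# The trailing slash on f1/ means "the contents of f1", not the
# directory itself. Where both sides have a file, the source wins.
rsync -av f1/ f2/
ls f2          # now a, b, c
rm -rf f1      # once f2 is verified, drop the source to reclaim space
```

This matches the caveat raised above: rsync does not prompt on collisions, so be sure which side you want to win before merging.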
packages could not be authenticated [16:56] Daviey: maybe it is a sources mismatch as when I first tried to update it showed something that some packages could not be authenticated [16:56] Daviey: but was resolved by sudo apt-get update [16:56] interesting [16:56] RoAkSoAx: lets spend no more time on it, and see what the cdimage shows tomorrow [16:56] Daviey: yeah [16:56] Daviey: anyways, I wanted to talk about bug #789266 [16:56] Launchpad bug 789266 in cobbler "Cobbler: Missing yum-utils & other cobbler related utils" [Wishlist,Triaged] https://launchpad.net/bugs/789266 [16:57] Daviey: according to what I can see, yum-utils Depends on yum [16:57] Daviey: do we really want to install yum in our systems? [16:57] RoAkSoAx: ok [16:57] Daviey: when deploying cobbler? [16:57] Daviey: (note that for reference I'm checking the spec file for yum-utils which depends on yum) [16:57] makes sense [16:58] Daviey: do we really want that? [16:58] do we really need yum to be installed? [16:59] Daviey: and packaging yum-utils will also mean packaging python-kitchen [17:00] RoAkSoAx: oh golly. [17:01] RoAkSoAx: Do we really need yum to be entirely installed for this basic support of it? [17:01] Daviey: as I can see in the "reposync" binary, yes we do: [17:01] from yum.misc import getCacheDir [17:01] from yum.constants import * [17:01] from yum.packageSack import ListPackageSack [17:01] import rpmUtils.arch [17:03] RoAkSoAx: Have you taken a sniff to see how much effort is involved in just the python bindings? [17:04] I suspect they will suck without the world() available, but i wonder if they provide enough just for basic support? [17:05] Daviey: no I haven't but from what I can see, there's lots of stuff that access yum modules [17:05] and databases and stuff [17:06] Daviey: so my wild guess is that it would need a great deal of tweaking for basic support [17:06] RoAkSoAx: I'm hesitant to suggest just ripping out the rpm support.
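For thisismygame's failed-drive thread earlier, the remove-then-add cycle StevenR describes looks like this as commands. Device names are placeholders; the replacement disk only needs to be at least as large as the array's per-device size, so make and model are free to differ:

```shell
mdadm --detail /dev/md0              # identify the faulty member
mdadm /dev/md0 --remove /dev/sdb1    # drop it from the array
# (if the drive is not already marked faulty, fail it first:
#  mdadm /dev/md0 --fail /dev/sdb1)
# ...swap the disk, partition it to match the other members, then:
mdadm /dev/md0 --add /dev/sdb1       # the rebuild starts automatically
cat /proc/mdstat                     # watch the resync progress
```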
[17:06] Daviey: I can just go ahead and finish packaging yum-utils to have it on archives [17:06] Daviey: make it depend on yum [17:06] I don't think Orchestra should just provide ubuntu/debian support :( [17:06] Daviey: and then see what happens [17:06] RoAkSoAx: sounds good to me. [17:07] I think time investigating viability is worth it. [17:07] at least we've tried to support it that way [17:07] Daviey: yeah and it doesn't really hurt having yum-utils in the archives, since we have yum already [17:09] RoAkSoAx: GPWM [17:11] Daviey: for our first rev, i think we need to get ubuntu/debian support "right" and working well [17:12] kirkland: totally agreed. [17:12] Daviey: and i think we can do that without being evil or hostile toward other distros [17:12] Daviey: s/can/should/ :-) [17:12] kirkland: which is what we are doing :) [17:12] Daviey: \o/ === andreas__ is now known as ahasenack [17:47] anyone using TACACS+? [17:47] I am having trouble compiling from source [17:47] and also it has long since been removed from the repos [18:04] altice: I am. [18:05] altice: ftp://ftp.shrubbery.net/pub/tac_plus/tacacs%2B-F4.0.4.19.tar.gz [18:05] and I just did ./configure to prepare it for install.. but this was a long time ago -- there is a chance that I slightly changed the source and do not remember. [18:06] yea that's what someone suggested [18:06] I saw someone else's insights on that [18:06] however, they did not apply to the errors I was getting [18:07] I'm talking now with developers to see about getting this put into the repo after I get it figured out and working [18:07] altice: what errors? [18:07] errors building or running? [18:09] building [18:09] I upgraded some in house servers to ubuntu 10.04 LTS [18:09] and I have to compile from source again for TACACS [18:10] fullstop: here's a pastebin of the output from the makefile [18:10] http://pastebin.com/rsqRMefT [18:16] altice: one moment.. let me see if mine still builds.
[18:17] altice: here is my full build output: http://pastebin.com/dEebuV4k [18:18] I am x86_64 [18:19] Also 10.04 LTS [18:20] I believe mine are Xeon cores, i686 [18:20] what version of tacacs did you use? [18:20] The same version I sent in the link above.. [18:20] tac_plus version F4.0.4.19 [18:21] You are not trying to make -j 4 or anything, right? [18:21] you sent a link? [18:21] or you mean my link? [18:21] no, I sent a link to the tac_plus source [18:21] that's the one I am using [18:21] ohhh, psht gah, completely missed that [18:21] yea I'm using the same ver [18:22] can you make clean and pastebin the output from a fresh make? [18:22] from the same source (shrubbery) [18:22] sure thing [18:22] I went through the trouble of setting up tac_plus purely so I could restrict access to the ASA for the rancid process. [18:23] Other than that, I just have to trust myself with the ASA. ;-) [18:23] lol, to be honest with you fullstop [18:23] I have no idea what you just said ;) [18:23] I know tacacs+ purely from an AAA standpoint and cisco gear [18:24] authorization, access, accounting [18:24] (authentication) [18:24] I wanted to set up RANCID (also from shrubbery), but I wanted to restrict the rights of the RANCID user. [18:24] never read into that, what is it used for? [18:25] rancid periodically pulls the running configurations of network equipment and puts them in version control. [18:25] mine is all based on access to network equipment. Who can log in, what commands they can use, and keeping a record of what config changes were done [18:25] It lets you keep track of changes [18:25] o0o0o really? [18:25] :) I might want to look into that [18:26] Yes, that's what I use tac_plus for as well, but just to restrict access for the process that gets the configurations. [18:26] I'll write that down, RANCID might be useful in the future. [18:26] There's a fork of RANCID which will let you use git as your backend if that's your thing.
[18:26] my punch list is starting to get huge....... [18:26] http://www.shrubbery.net/rancid/ [18:26] honestly, I don't do enough development work to be sold on using git [18:33] fullstop: okay I have the make output pasted, the whole one [18:33] fullstop: http://pastebin.com/DRFmEbjt [18:34] altice: try just "make" instead of "make tac_plus" [18:36] df [18:36] whoops [18:37] okay [18:38] ;) no way it was really that simple [18:38] hahaha [18:38] hahaha [18:38] wtf mate [18:38] cheers [18:41] thanks for your help [18:41] no problem. Have fun! [18:41] I'm still going to push to have this included in the repos [18:42] It wouldn't be a bad idea. It took me a while to find the source. [18:45] can rsync only copy stuff from a to b, and not move stuff? [18:45] it "synchronizes" [18:45] If you're moving stuff that's not really synchronising.... [18:46] tiphares: have you read through the man pages and examples for rsync? [18:46] it should explain it [18:46] man pages for rsync are massive, so i thought i'd ask [18:46] it's kind of like updating backups of files, you only care about recent stuff [18:46] sure sure [18:46] what's the point of this channel if people can't ask about stuff [18:46] hey hey, don't get offended [18:47] just wanted to mention that the resource was available [18:47] i'm not, just sayin :> [18:47] i found the mv tool insufficient [18:47] so looking for alternatives [18:47] tiphares: I generally start with questions, get them answered in the manpages, get new questions from reading the manpage, then experiment and ask for help [18:48] tiphares: what are you doing that mv is not sufficient? [18:48] i'd like the option to exclude stuff from moving [18:48] couldn't figure out how to do that with mv [18:49] I am trying to forward several ports on a VM server to specific VMs (running ubuntu 10.04).
I found some IPTables notes and came up with the following, but ufw seems to fail when I put this in before.rules and restart ufw: -A PREROUTING -i br0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.20 [18:49] tiphares: this is unix, you combine tools [18:49] so use find or something (to create the list you want) and then run through xargs with mv [18:49] or write a script to do your dirty work [18:49] OR, create a list of files in txt, and cat this to xargs [18:49] true [18:50] you can use grep to filter out things you want [18:50] eg, you can do all sorts of things here :) [18:50] also true [18:50] small utilities to do specific things, combined in the ways that you need [18:50] power of unix tools [18:50] amen [18:50] yeah i'm aware that i can make this happen with scripts [18:51] you don't need scripts [18:51] but i'm sorta new to nix, and wondered if there are predefined tools to do this [18:51] cat list.txt | xargs mv ... [18:51] done. [18:51] create that list however you need to [18:52] yep, listen to Gir, that's a good method to approach this [18:52] hm [18:53] i'm confused :( [18:53] make a list of the file names you're trying to move [18:53] manually? [18:53] hehe yes that or do it a more elegant way [18:53] may i ask for some input there [18:53] okay so.......first things first, where are the files located [18:53] all in one folder? [18:54] yeah [18:54] k good [18:54] we can generate a list [18:54] easier since it's in one folder [18:54] are there similar strings of letters that you want to move and some you don't? [18:55] yeah [18:55] i.e.........all the files that begin with 'erg'? [18:55] give me an example? [18:55] T*R* is the stuff i want to move into another folder [18:56] so begin with TR ? [18:56] or actually, i want to move anything but T*s.A* [18:56] but yeah, can start off simple [18:56] okay so are you using regular expressions? do you understand those character combinations?
[18:56] not using regex, just using * for wildcard [18:57] i'm awfully worthless at regex [18:57] if you do an "ls T*R*" does it give you what you want? [18:57] regex is powerful, ESPECIALLY for what you're trying to do now [18:57] I'd highly suggest reading up on it, even though there is a steep learning curve at first [18:57] yeah i know, i have it on my bucket list:P [18:58] i am familiar with it [18:58] haha, should be a little more important than a "kick the bucket" list [18:58] heh [18:59] so basically you can use "ls" and wildcards [18:59] ls -lad T*R* [18:59] gets me the dirs [18:59] i want [18:59] perfect [18:59] now pipe that into a txt file [19:00] can i do all of that with a command [19:00] ls -lad T*R* > output.txt [19:00] yep [19:00] what is it you want to achieve? [19:00] now you have a new file named output.txt right? [19:00] everything in there you need? [19:00] cool that worked out nicely altice [19:00] excellent [19:01] now use Gir's method [19:01] cat list.txt | xargs mv [19:01] linux 101 for dummies atm alamar :P [19:01] xargs = ? [19:01] and then mv where you want [19:02] http://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/ [19:02] you could just use find for folders (-type d) and -exec mv the {} to the destination [19:02] I'm not familiar with that, alamar, go ahead and walk through that === aurigus_ is now known as aurigus [19:03] find searchpath/ -type d -iname *matchme* -exec mv "{}" destination/ \; [19:03] or -name if it shall be case sensitive [19:04] so many things in there i have absolutely no clue what they are [19:04] :D [19:04] or -regex if you want to use regular expressions for matching [19:04] well OR you just stick to what you've just been told by altice ;) [19:05] i'll write your version down in my notes:p === skrewler_ is now known as skrewler [19:24] I am trying to forward several ports on a VM server to specific VMs (running ubuntu 10.04).
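The find/-exec variant alamar describes can be sketched like this (the T*R* pattern comes from the example above; the directory names and paths here are invented):

```shell
#!/bin/sh
# Move only the directories matching T*R* into dest/, leaving the rest alone.
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p TR01 TxRy other dest

# -maxdepth 1 keeps find from descending into subdirectories;
# quoting 'T*R*' stops the shell from expanding the glob before find sees it
find . -maxdepth 1 -type d -name 'T*R*' -exec mv "{}" dest/ \;
```

This sidesteps the intermediate list file entirely; use -iname instead of -name for case-insensitive matching, as noted in the channel.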
I found some IPTables notes and came up with the following, but ufw seems to fail when I put this in before.rules and restart ufw: -A PREROUTING -i br0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.20 [19:31] CrazyGir: can you paste your before.rules file? [19:32] what is this rule supposed to do? [19:37] sorry, got kicked [19:37] jdstrand: it's got a lot more in it than I understand [19:37] then I'll ask again. what is this rule supposed to do? [19:38] alamar: all I want to do is forward tcp to port X on the br0 interface to a specific IP [19:38] I do not see any destination nor that you are using the nat table [19:39] isn't that the DNAT --to-destination part? [19:39] I could also rephrase my question.. [19:40] oh sorry I didn't see it when scrolling in my backlog [19:40] what should my iptables entry look like to ensure port X goes to a specific IP? [19:40] but -t nat is missing [19:40] is what I have correct? [19:40] ok, so I should add -t nat [19:41] anything else? [19:41] iptables -t nat -A PREROUTING -i br0 -p tcp --dport 9000 -j DNAT --to-destination 1.2.3.4 [19:41] I'm adding this to before.rules, from what I have read this is the place to do so? [19:42] 1.2.3.4:9000 [19:42] if you want to work with ufw it probably is [19:43] (but I don't know what format/syntax/whatever works in there) [19:43] as I said... layering something above iptables shoots yourself in the foot when you want something more than "port open/closed" ;p [19:45] jamespage: ping [19:45] kirkland: pong [19:45] jamespage: just wanted to touch base with you one more time on hadoop/cdh [19:46] kirkland: sure [19:46] jamespage: I was asked earlier today if we should target our hadoop packages for Canonical Partner instead of the Ubuntu Archive [19:46] jamespage: I didn't know if you had any plans to improve upon the latest state of packages from iamfuzz and negronjl, and try to push them to Universe ...?
[19:46] jamespage: if not, we're going to be relegated to pushing them to Partner [19:47] jamespage: I'd like to think that Ubuntu users would benefit from them in Universe [19:47] jamespage: but at this point, we'd need a Platform champion to help push that [19:49] kirkland: so I want to pickup hadoop/cdh longer term but we need to sort out how we work with upstream first [19:49] kirkland: so I think that for this release partner is really the only realistic choice [19:49] jamespage: is that attainable for Oneiric? [19:49] jamespage: okay [19:49] jamespage: that gives me something I can work against, schedule-wise [19:50] jamespage: we'll target Partner and/or a PPA for Oneiric [19:50] kirkland: I think that is the only choice ATM [19:51] kirkland: are you going to go with the packages your team has produced or use the upstream distribution packages? [19:51] from CDH === CrazyGir is now known as Guest40059 [19:52] jamespage: we haven't made a firm decision yet, but I think we were leaning toward our packaging [19:52] alamar: fyi, ufw uses plain iptables-restore syntax in its rules files [19:52] jamespage: do you have an opinion or information to add? [19:52] jdstrand: from what it looks like there are also different things going on in the files [19:53] kirkland: I think working with the upstream CDH packages will give you a smoother line for support/bugs etc... [19:53] kirkland: but I have not looked at that packaging [19:53] jamespage: interesting, okay [19:53] no. these are just fed into iptables-restore.
granted, various chains are set up, etc, but the rules files are no more than straight iptables [19:53] jamespage: yeah, i was looking for specific information on why one might be better than another [19:56] kirkland: well you get better support for older releases but nothing newer than maverick ATM [19:56] kirkland: so that might actually answer your question [19:56] jamespage: ah, yeah [19:57] kirkland: that said they do publish a full suite of hadoop plus friends - http://tinyurl.com/3mkyqtw === koolhead17 is now known as koolhead17|afk [20:18] so, can someone tell me where i screwed up the syntax here; cat filename | xargs mv TARGET [20:23] tiphares: try mv -t TARGET [20:23] xargs appends the input to the command string and mv, without specifying it further, treats the last input word as destination [20:25] didn't change much === CrazyGir_ is now known as Guest62894 [20:27] tiphares: what's the exact problem? [20:28] still working on my previous problem [20:28] moving certain stuff into a specific folder [20:28] I meant with the cat X | xargs mv -t Y [20:28] right [20:28] it returns this [20:28] mv: invalid option -- 'r' [20:28] Try `mv --help' for more information. [20:30] tiphares: try cat foo | xargs mv -t TARGETDIR -- [20:30] when adding the iptables line to before.rules, and then stopping/starting ufw, it freaks with: ERROR: problem running ufw-init [20:30] Guest62894: can you use paste.ubuntu.com and paste your before.rules file? [20:30] bah. I should be CrazyGir.. [20:31] Guest62894: do logs tell you anything more specific? also it would be recommendable to paste your before.rules file somewhere [20:31] that worked alamar [20:31] :S [20:31] that seems confusingly random [20:31] tiphares: pardon me? [20:31] adding '--' worked [20:32] there we go :) [20:32] bah!
[20:32] tiphares: "--" prevents anything afterwards from being interpreted as command-line options [20:32] this works with every command [20:32] more or less [20:32] let's say most commands [20:32] oh === Guest62894 is now known as CrazyGir [20:33] probably with all commands using getopt* [20:33] there we go :) [20:33] i learn something new every time i'm here :P [20:33] awesome [20:33] more, less, and most all support that. [20:33] <.< [20:33] jdstrand: my before.rules (written by someone else) is quite long, and works fine by itself [20:34] CrazyGir: well, I need to see what you added and where to see what the problem is [20:34] when I add this line, it fails: -A PREROUTING -i br0 -p tcp -t nat --dport 9000 -j DNAT --to-destination 192.168.1.20:9000 [20:34] CrazyGir: a diff of before and after is likely good enough [20:34] I added it at the end [20:34] before COMMIT [20:35] Pici: well as I said most do. but it's probably related to the use of the getopt-family of functions for command-line parsing [20:35] CrazyGir: that is your problem. the before.rules only has the *filter table [20:35] alamar: I know, was just playing with the words you chose to use to describe that. [20:36] jdstrand: ah, so I'm a bit confused [20:36] CrazyGir: see 'man ufw-framework', the 'Port Redirections' section [20:36] CrazyGir: as I understand there are different sections like *nat and *filter [20:36] where should I be putting port redirections?
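A minimal sketch of the list-then-move pipeline with the two fixes that came up above: -t names the target directory up front (since xargs appends the filenames last), and -- stops mv from parsing a name starting with a dash as an option. The file names here are invented; also note the list should hold names only (plain ls or find output), not "ls -l" long listings, whose permission fields like drwxr-xr-x would themselves be fed to mv.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir dest
touch a.txt b.txt ./-dashed.txt

# names only, one per line -- not "ls -l" output
printf '%s\n' a.txt b.txt -dashed.txt > list.txt

# -t dest: target directory named first; --: everything xargs appends
# after it is treated as a filename, never as an option
xargs mv -t dest -- < list.txt
```

Without the `--`, mv would reject `-dashed.txt` with "invalid option", which is the class of error seen in the channel.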
[20:36] within the *nat section [20:36] okies [20:36] okies [20:37] jdstrand: this is by the way what I meant with other stuff in the file ;) [20:37] alamar: it is still all iptables-restore [20:37] you can't mix and match rules for different tables [20:37] CrazyGir: when you put it in the nat section you will probably not need to refer to the nat table [20:37] and this is why I love pf [20:37] you need a *filter table, and a *nat table and the right rules need to go in the right places [20:37] CrazyGir: you could use iptables directly [20:38] it's iptables that is nuts :) [20:38] no it isn't [20:38] alamar: the 'nat section' you are referring to.. is this in before.rules? [20:39] CrazyGir: read the ufw-framework man page like I said :) it has what you need, I promise :) [20:39] EXAMPLES, then Port Redirections [20:39] jdstrand: yea, I'm there [20:39] I see there are 2 things i need for this to work [20:40] not just the one line I had [20:40] * jdstrand nods [20:40] :) [20:40] well, this is an example [20:40] it is assuming the firewall is mostly closed, which is why the filter table part is there (ie, as documented, it will work with ufw) [20:41] anyhoo, gotta run [20:41] not sure I follow you there, but thanks [20:41] I would call this "mostly closed" [20:41] before.rules could be added to man 5 [20:43] jdstrand: TIL about iptables-save & iptables-restore; thank you [20:44] kirkland: ooh, qemu v0.15.0-rc0 was tagged [20:45] updating my main virt laptop to oneiric today, then i'll try a sync and see how it goes [20:46] when starting ufw, and it fails, is there a way to get a specific line number that it errored on? [20:47] ERROR: problem running ufw-init <--- not helpful [20:47] unfortunately, no [20:47] seriously? [20:47] you can run ufw-init manually [20:47] ufw disable; ufw-init ? [20:47] /lib/ufw/ufw-init reload [20:48] kks [20:48] CrazyGir: yes, disable fine.
then you will want to update /etc/ufw/ufw.conf manually to 'enable' it, then use ufw-init manually [20:49] d'oh, there i go again, confusing the trees [20:49] 0.14.1 it is [20:49] ah so now I understand why you did not like me badmouthing ufw :) [20:49] hallyn: fyi, I uploaded a new qemu-kvm today [20:50] jdstrand: what do you mean by this? then you will want to update /etc/ufw/ufw.conf manually to 'enable' it [20:50] i saw the push. [20:50] hallyn: not sure if 0.14.1 has the fixes or not... [20:50] yeah, not sure, but i'll be checking of course [20:50] CrazyGir: ufw-init will short circuit if the firewall is disabled [20:50] still hoping 0.15.0 comes out before freeze :) [20:51] CrazyGir: since 'ufw enable' is not working for you, you need to stop the short circuit. that is done by setting ENABLED=yes in /etc/ufw/ufw.conf [20:55] ah, yes, I have that [20:57] hrm, ufw-init doesn't like the *nat I included per the manpage [20:57] before COMMIT [20:58] you probably really should paste it somewhere [20:58] !pastebin [20:58] For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. [20:58] CrazyGir: the *nat is after COMMIT [20:58] CrazyGir: see, each table starts with: [20:58] * [20:58] rules for the table [20:58] COMMIT [20:59] so, you need: [20:59] *filter [20:59] your regular rules [20:59] COMMIT [20:59] *nat [20:59] ah [20:59] ok [20:59] your new rules from ufw-framework for PREROUTING [20:59] COMMIT [21:00] does ufw have NAT in recent versions of ubuntu? [21:00] RoyK: not via the cli, no [21:00] imho the lack of nat in ufw is a major drawback [21:00] only via the gui?
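Putting jdstrand's layout together, the before.rules addition sketched in this exchange would look roughly like this (br0 and 192.168.1.20 come from the example above; this is a sketch of the ufw-framework man page pattern, not a complete file):

```
*filter
# ... ufw's existing filter rules ...
COMMIT

*nat
:PREROUTING ACCEPT [0:0]
# no "-t nat" here: the *nat header already selects the table
-A PREROUTING -i br0 -p tcp --dport 9000 -j DNAT --to-destination 192.168.1.20:9000
COMMIT
```

Each table is its own block ending in COMMIT; iptables-restore rejects a rule for one table placed inside another table's block, which is the error CrazyGir kept hitting.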
;) [21:00] hah [21:00] hallyn: hah, uuh, no :) [21:01] adding nat support to ufw must be a rather tough task - I guess an hour or three for a decent programmer :P [21:01] hallyn: neat [21:02] hallyn: you merging? [21:02] RoyK: the feature is planned, and I hear what you are saying. it is something I would like myself. that said, the primary audience is for bastion hosts/desktops/servers, not for routing firewalls [21:02] kirkland: i'll take 0.14.1 tomorrow at least [21:02] RoyK: patches welcome and all that :) [21:02] jdstrand: still, quite a lot of people would like to use a server for NATing [21:02] yep [21:02] jdstrand: I don't have that need atm, so I don't think I'll spend much time on it [21:03] :) [21:03] jdstrand: this is going better, but iptables is not happy with the following, is there a way to get more specifics on what it doesn't like? -A PREROUTING -p tcp -i br0 -t nat --dport 9000 -j DNAT --to-destination 192.168.1.20:9000 [21:03] CrazyGir: the "-t nat" is not necessary in this setup [21:03] CrazyGir: get rid of the '-t nat' [21:03] didn't you tell me to put it in there alamar? [21:03] :P [21:03] CrazyGir: for iptables yes [21:03] CrazyGir: you already specified the table via '*nat' [21:03] I said iptables .... [21:03] hah [21:03] :) [21:04] ok, much better [21:04] jdstrand: that makes more sense _now_ ;) [21:04] slowly piecing together my understanding of iptables here [21:04] I appreciate the patience [21:04] CrazyGir: if you are going to be fiddling a lot with before.rules, I recommend reading the iptables man page [21:04] I'm hoping to limit it to this one set of port forwards [21:05] * jdstrand nods [21:05] these servers are all set up (and actually someone else's responsibility) [21:05] so stop messing with his fw setup!!!!
;) [21:05] I'm responsible for the VMs running on these servers [21:06] but I'm responsible for the VMs, and he's floating somewhere in some water somewhere in greece [21:07] CrazyGir: I wasn't serious ;) [21:07] hallyn: FYI, I just turned off my email subscription to ~ubuntu-virt's monitored packages (kvm, libvirt, and friends) [21:07] hallyn: please explicitly subscribe me to any bug that you'd like my attention to [21:07] hallyn: it's been ages since I've needed to do anything on any of those bugs beyond the excellent work that you, mdeslaur, and jdstrand already do [21:08] hallyn: so I turned off that swath of bugmail (so I can focus on other swaths of bugmail) :-) [21:08] Daviey: ^ [21:14] kirkland: Yeah, i think we have it covered in ~ubuntu-server, thanks for letting us know. [21:14] Daviey: np; never hesitate to subscribe me, if I can help [21:16] * Daviey subscribes kirkland to all bugs :) [21:16] * kirkland runs for cover [21:24] hah [22:05] kirkland: thx for the heads-up. (just left the faraday cage^W^Wporch for a minute :) [22:29] hallyn: heh, cool [22:30] hallyn: i think my airstream is a faraday cage [22:38] kirkland : Close to it .. Aluminum isn't known for being radio-transparent [22:38] kirkland: What's got you in an airstream? [22:39] martyn: fun thing to have sometimes [22:39] Well, sure :) I was wondering if you were travelling... [22:40] I've gone to various Burning Man related events in an airstream ... it was 60's retro fun :) [22:40] martyn: ah, no, not at the moment [22:40] martyn: nice; mine's a 1968 [22:40] Hoo .. that's nice [22:41] Hard to keep the aluminum skins in perfect condition.. but they are wonderful trailers [22:41] Got a kitchen in yours? [22:41] (some had 'em, many didn't .. 
beautiful mini kitchenettes though) [22:41] martyn: yup [22:43] martyn: it's pretty nice === medberry is now known as med_out [23:51] oh ffs, i go to all the trouble to install windows so i can install a firmware update, and the update fails to install [23:51] * hallyn hates firmware junk