[00:00] xen's live migration sounds pretty awsome... [00:01] kvm's live migration sounds equally awsome... now I'm going to have to set something up so I can try it! [00:25] anyone use mt-daap (aka firefly)? i installed the version from the repo's, and that failed when it tried to add a file. so i figured i would compile the newest from source, and i can't seem to compile it without libid3tag dependency... [00:25] is there a place where we can send recommends for next puppy version? [00:25] puppy? what does this room have to do with puppy? [00:26] I think a search bar in the package manager would be steller [00:26] ah heck I am in the wrong tab [00:26] sorry [00:26] i was confused there for a second lol [00:27] using puppy to rescue my server and irc has like 10 tabs and everyone in each has had something to offer [00:27] I am so glad for irc [00:29] does anyone here have certifications? If so, which ones? [00:29] still not sure what that has to do with ubuntu-server... or even ubuntu. [00:30] ubuntu has server certification I believe [00:30] doubt it. at least not an 'official' cert. [00:31] nothing like RHEL or SLES. [00:31] kees: jdstrand: what's your opinion on bug 293258? [00:31] Launchpad bug 293258 in mysql-dfsg-5.0 "mysql user has home directory writable by mysqld" [Undecided,Confirmed] https://launchpad.net/bugs/293258 [00:32] anybody use mt-daap or firefly on their ubuntu-server? [00:32] arrrghhh: what about http://www.ubuntu.com/training/certificationcourses [00:33] olcafo, yes, but i don't think those are like the RHEL or SLES certs. [00:35] arrrghhh: I suppose not, now than I'm looking at it. Still would be interesting to hear about. [00:36] certainly. but they won't have the same type of clout the other certs will (unless you KNOW the company wants ubuntu certs, which, i have never run into unfortunately.) [00:36] New bug: #351254 in mailman (main) "Need version bump - 2.1.11 broken with Python 2.6 (dup-of: 351648)" [Undecided,New] https://launchpad.net/bugs/351254 [00:52] mathiaz: FILE privs are often considered inherently insecure in the first place, but it might be better for the mysql user to have "/nonexistent" as its home directory to at least prevent the dotfile attack vector. [00:53] kees, jdstrand: ^^ === espacious_ is now known as espacious [02:08] Strange issue: I can access my server from a remote network and my server can access all devices locally but not any of the external ip addresses? Is this a gateway setup issue? [02:10] LumpToe: maybe a DNS issue? [02:10] LumpToe: oh, wait, can;t acces IP addresses- sorry, ignore me [02:11] LumpToe: can you traceroute or mrt from the server to see where it fails? [02:11] Yeah dig manages to resolve the addresses but tracepath stops at the route [02:12] that does sound like a gateway issue [02:13] This is a brand new install and a DHCP issued address from the router [02:13] How can I see the gateways used on my ubuntu box [02:14] I was just wondering that [02:15] LumpToe: route -n [02:15] 0.0.0.0 [02:15] brb nature calls [02:16] LumpToe: my server has 2 lines: [02:16] 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 [02:16] 0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 eth0 [02:51] how can you tell which raid controllers are supported for doing installs? [03:07] back [03:07] goofey: My server has the same two lines [03:16] goofey: It was my router. Strange. I went through some of the route tables and wiped out anything with the same IP address. [03:53] anyone here ever use the dhcp3 package w/ ldap patch? 
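[Editor's note: a minimal sketch of the default-gateway check walked through above; the interface name eth0 and the 192.168.1.1 gateway come from the example output and will differ on other setups.]

    # Show the kernel routing table; the 0.0.0.0 line with flags UG is the default gateway
    route -n
    ip route show
    # If the default route is missing, add it by hand (non-persistent)
    sudo ip route add default via 192.168.1.1 dev eth0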
[03:55] !anyone [03:55] A large amount of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? [03:56] lol [03:57] so... when i give it credentials for its account on ldap, it breaks with the error of "success" ... when i give it the wrong password, it seems to re-bind anonymously and works except it can't write to the ldap dn, so it fails. [03:57] binding with ldapsearch with or without credentials works fine [03:58] Sam-I-Am: does libnss talk to LDAP? [03:58] That is, does your /etc/nsswitch use ldap? [03:58] not on the ldap server [03:58] i dont think the patch uses libnss [03:59] seems to have all of its config in dhcpd.conf [03:59] Hmm, OK. [03:59] the debug mode doesnt provide any useful information... and the documentation is scarce. pretty much had to reverse engineer the schema file to figure out what i should have in my ldap tree. [04:00] I hates LDAP [04:00] i love ldap... [04:00] so, if i could figure out how to make the package even work... i'd consider writing useful documentation for it. [04:00] might need to find its maintainer... [04:00] the guy who wrote it is nowhere to be found [04:01] its a universe package [04:03] hi :) [04:05] how can I troubleshoot my settings if I did everything as "How-To: Set up a LAN gateway with DHCP, Dynamic DNS and iptables on Debian Etch" said. and I still cannot ping the NIC that connects to internet ??? [04:14] aranyik: Etch is not Ubuntu [04:15] ok [04:15] but its debian [04:16] aranyik: this is not a Debian support channel. [04:16] aranyik: try #debian on irc.debian.org (OFTC). [04:16] ok.. [04:16] then how can i make the same in ubuntu [04:16] ? [04:17] i know dhcp3-server is supported [04:17] how about bind9? [04:17] i think it is also [04:18] aranyik: I don't know what you're trying to achieve. [04:18] aranyik: for Ubuntu you probably want to read the Server Guide. [04:19] i tried it at forst [04:19] at first [04:20] and it wasnt working [04:20] then every thread in ubuntu will show similar ways [04:20] twb: doing some digging... seems like intrepid grabbed a buggy package version from debian... might be fixed in lenny... which means it'll probably be in jaunty [04:20] but its still not working [04:21] !enter [04:21] Please try to keep your questions/responses on one line - don't use the "Enter" key as punctuation! [04:22] a router is a very easy thing to set up, but i never had so much problem since i tried on ubuntu [04:22] Get:2 http://au.archive.ubuntu.com hardy-updates/main ubuntu-docs 8.06.1 (tar) [42.5MB] [04:22] ...ugh, forty megabytes? [04:23] What, did some jackass forget to "make clean"? [04:23] lol [04:23] or gzip [04:25] Sam-I-Am: no, it's gzippe [04:25] However I don't understand why Ubuntu 8.04's version is 8.06-1... [04:25] They ought to call it 8.04.1-1 or something. [04:25] yeh [04:26] probably a typo [04:26] Sam-I-Am: er, not likely [04:26] Sam-I-Am: more likely that they version the docs based on when they are released, and 8.06 is from -updates [04:27] Hmph. 
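[Editor's note: a hedged sketch of the ldapsearch bind test mentioned above for debugging the dhcpd/LDAP setup; the server URI, bind DN and base are hypothetical stand-ins, and the dhcpServer object class assumes the dhcp schema is loaded.]

    # Anonymous simple bind
    ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(objectClass=dhcpServer)"
    # Authenticated simple bind with the DHCP service account (-W prompts for its password)
    ldapsearch -x -H ldap://localhost -D "cn=dhcp,dc=example,dc=com" -W -b "dc=example,dc=com" "(objectClass=dhcpServer)"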
The source directory of the tarball is ubuntu-docs-8.04.2~hardy [04:27] I bet lintian doesn't like that [04:28] lol [04:33] apt-cache policy ubuntu-serverguide says: [04:33] 8.06.1 0 500 http://mirror.internode.on.net hardy-updates/main Packages [04:33] 8.04.2~hardy 0 500 http://mirror.internode.on.net hardy/main Packages [04:45] OK, so what happens in that file, AFAICT, is that there are canonical English .xml files, and then .po files that contain each English paragraph and its translation (i.e. each paragraph occurs 1 + 2*(number of translations) times). Then on top of *that*, I think the autogenerated translated .xml files are reproduced in the source? === ^law^41 is now known as _law === _law is now known as _law_ [04:54] Check this shit out: [04:54] fgrep -rl 'You can use the CVSROOT environment variable to store the CVS root' * | wc -l [04:54] 83 [04:54] huh... [04:54] That's right *eighty three* copies of the same text in the source tarball [04:54] well aint that special [04:54] I'm sure someone is just being lazy. That can't be the only way to do it [04:55] Maybe it's something horrible like Canonical builds its docs using its unpublished internal code, so the source package actually contains postprocessed files. [05:09] is there a package from repo i can install for VMWare Tool on Ubuntu Server? [05:09] Or do I have to manually install it [05:13] oh_noes: do you mean so you can run ubuntu-server as a guest inside vmware? [05:17] oh_noes: on Debian there is open-vm-tools, I can't see it in 8.04 [05:17] oh_noes: that it the "install VMware Tools" thing built properly in a .deb [05:17] s/it/is/ [05:19] its not, atm it's all manually, and you have to install your kernel source [05:19] I wasnt sure if Ubuntu had package for it, but thats ok [05:19] It's in 8.10 [05:19] http://packages.ubuntu.com/open-vm-tools [05:19] I guess you could backport it [05:22] and the 8.10 package works flawlessly [05:22] although vmware is a little weird on detecting if the guest is running vmware tools... [05:23] I'd certainly be more inclined to trust backporting the intrepid package over using the shitty virtual CD full of scripts and sharballs that vmware-server itself mounts. [05:33] yeah, also known as... binary blobs [05:34] and m-a just makes it so easy to get modules in open-vm-tools [05:37] Sam-I-Am: actually, no [05:38] Sam-I-Am: the vmware tools .iso contains pre-compiled .ko files only for RHEL kernels [05:38] Sam-I-Am: there's also the module source in there, which would be used on an ubuntu guest [05:38] and some others... like sles i think [05:38] true [05:38] but their builds break on 'modern' systems [05:38] thanks to module build dependencies [05:39] i made a patch some time ago for it... [05:40] other folks seem to combine vmware-tools and open-vm-tools [05:40] almost seems like ubuntu is getting along better with vmware these days than redhate since they're pushing their own vm system [05:42] Eh, vmware can FOAD as far as I'm concerned. [05:42] i really like virtualbox [05:42] qemu -curses along makes qemu beat it, not to mention stuff like -tftp [05:42] but trying to sell it to management is difficult [05:42] "what, we dont have to pay for it?" 
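[Editor's note: a sketch of the open-vm-tools route discussed above, on a release that ships the package (8.10 at the time); the open-vm-source package and the "open-vm" module-assistant target are assumptions about how it was packaged then.]

    # Userland tools
    sudo apt-get install open-vm-tools
    # Matching kernel modules built with module-assistant, as mentioned above
    # (package/target names are assumptions, not confirmed in the log)
    sudo apt-get install open-vm-source module-assistant
    sudo module-assistant prepare
    sudo module-assistant auto-install open-vm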
[05:43] Well, virtualbox has a non-free edition [05:43] That's part of the reason I mistrust it [05:43] Sam-I-Am: you can sell it to your management by calling the OSE "the demo version" [05:43] lol [05:43] thats not a valid argument [05:43] well, my first thing is trying to convert management from centos/rhel to ubuntu [05:43] Ubuntu has a non-free edition (support) so you cant mistrust it [05:44] its quite the uphill battle [05:44] oh_noes: I do mistrust canonical. [05:44] oh_noes: I would VASTLY prefer Ubuntu to be a Debian blend or subproject rather than a fork that syncs irregularly. [05:45] oh_noes: but providing support is quite different to having having two separate versions of a product. [05:45] It's not like Ubuntu has a RHEL and a CentOS version [05:46] agreed, but they only charge for the features business want. For example OSE has USB support etc, and the non-free has remote desktop. [05:46] yeah, the packages arent 3 years old :) [05:47] oh_noes: that's precisely my point. [05:47] oh_noes: it's the same business model as cedega has. [05:48] It's based on treating to wider community as second-class citizens. [05:49] welp, time for bed here [05:49] laters [05:50] I disagree. Its giving them wider community what they want for free, and charging business for anything additional they need. [05:50] Sure, if they start removing functionality used by the wider community, then they break this [05:50] but currently, what they are doing functionality wise I think is a great compromise [05:56] Apart from the fact that they're deliberately taking away features I want, but aren't prepared to pay for. [05:57] It means that if I want those features I have to add them into a fork of the product. It's a divisive business model. [05:57] If they took a "sell consulting" approach, then all the code could be open, and everybody would be working on the same codebase. [06:05] how can i tell if I'm using x86_64 or AMD64 architecture ? [06:05] what is an unstripped build ? [06:06] am i allowed to use the multiverse directory if I'm on 8.04 ? [06:06] quizme: they're the same architecture [06:06] I don't know what an unstripped build is, and yes, there's a multiverse repo for every Ubuntu version [06:07] p_quarles: I mean how can i tell if i'm 32 bit or 64 bit [06:07] quizme: the CPU or the kernel? [06:09] i'm not sure [06:09] "x86_64 or AMD64" is that cpu or kernel ? [06:10] quizme: again, x86_64 and AMD64 are the SAME THING; what I'm asking is whether you're trying to figure out if your hardware is 64-bit capable, or if the OS you're running is 64-bit [06:12] or (this might be easier) what's your real quetion? why do you need to know? [06:13] my real question is [06:13] how do i install ffmpeg [06:13] for 8.04 [06:14] quizme: sudo apt-get install ffmpeg [06:14] the dependencies and architecture questions are automatically resolved by the apt-get program [06:15] how about the codecs and libraries? [06:15] i have 8.04 on my server [06:16] https://wiki.edubuntu.org/ffmpeg <--- i found this [06:16] it looks like there is a bunch of other commands i need to do also [06:16] Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid <--- like what does that mean? [06:18] quizme: okay, I can see where your questions came from now, and let me just say that they are meaningless outside of that context; so start with the "real question" next time, okay? 
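[Editor's note: a small aside on the 32-bit/64-bit question above; the thread goes on to suggest uname -r, but these are the more direct standard checks and are not taken from the conversation itself.]

    # Kernel architecture: x86_64 means a 64-bit kernel, i686 a 32-bit one
    uname -m
    # Architecture the package system is installed for (amd64 vs i386)
    dpkg --print-architecture
    # Whether the CPU itself is 64-bit capable ("lm" = long mode)
    grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable"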
:D [06:19] now, to answer: follow those commands exactly and you should be good [06:19] to find out if you're running 64-bits, run in the terminal: uname -4 [06:19] oops, that should be uname -r [06:20] if it's 64 bits, it will contain the term "x86_64" in the output [06:21] as for "unstripped build", that just means a copy of the codec pack as Fluendo distributes it, rather than the modified way Ubuntu ships it [06:24] 2.6.21.7-2.fc8xen [06:24] that's my uname -r [06:25] who is Fluendo ? [06:25] so it's a Xen virtual machine? anyway, not 64 bits, so you can skip the section in question [06:26] Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid <----- i'm runing 8.04 though [06:26] i am running on AWS / EC2 [06:27] so is that a dangerous command to run ? [06:27] what command? [06:27] sudo apt-get install libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0 <--- this command [06:27] no [06:27] ok [06:27] but why does it say 8.10 ? [06:28] oh, looking again, it appears to say those packages are available through apt-get only in 8.10 [06:28] for older versions, you'll need to use the instructions below [06:29] can i upgrade my whole system to 8.10 ? [06:29] is that safe ? [06:30] from 8.04 to 8l.10 [06:33] safe is relative; if you're asking "can it break?" the answer is yes; if you're asking [06:33] "is it supposed to break?" the answer is no [06:34] "safety" in my view is having a backup plan, and not relying on unfamiliar (to you) software to make things flawless; the latter is almost always unrealistic [06:35] good advice [06:36] anyway, the majority of version upgrade experiences are pretty smooth, but there is a significant minority that runs into big bumps during the process [06:45] New bug: #352154 in openssh (main) "ssh-agent stops responding" [Undecided,New] https://launchpad.net/bugs/352154 [07:08] I want my ubuntu server to act as a gateway...wht I understand is I have to enable routing(net.ipv4.ip_forward=1)..Is there anything more I have to do? this server is connected to another router. [07:09] do I need to setup iptables and NAT? [07:10] rags: depends which kind sharing you want to configure [07:10] rags: options are transparent bridge and NAT, google those [07:12] simplexi1: thx..will check...I just want net access for the clinet machines behind the ubuntu server...I suppose that means a transparent bridge. [07:14] rags: nat if you dont want access to it from anywhere, or bridge if you want acces it from another place than server [07:16] will the forwarding work with just wht I have done...since nat is already present on the router? [07:23] if you say: apt-get install a b c d e f ....... to reverse that can you say: apt-get remove a b c d e f ......... and that will bring the state of your system to exactly where it was before ? [07:25] quizme: if a had a dependency h, h would not be removed after step 2 [07:26] but if h was the only a dependency for a it would be listed as a package that is no longer in use, and you can remove it with sudo apt-get autotemove [07:27] oh i c [07:27] well [07:28] if you will do sudo apt-get install pack1 pack2 pack3, it will give the list of all the packages that would be installed [07:28] you can save it [07:28] what if c was installed before step 1 ? [07:29] it would not be listed in the packages that would be installed [07:29] so you won't delete it after [07:29] do you mean uninstalled ? 
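[Editor's note: a hedged sketch of the gateway/NAT setup rags asks about above; eth0 as the internet-facing interface, eth1 as the LAN side and 192.168.0.0/24 as the LAN network are assumptions.]

    # Enable forwarding now; persist it by setting net.ipv4.ip_forward=1 in /etc/sysctl.conf
    sudo sysctl -w net.ipv4.ip_forward=1
    # Masquerade (NAT) LAN traffic out of the external interface
    sudo iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
    # Let forwarded traffic through in both directions
    sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT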
[07:30] look [07:30] basically i am wondering if i uninstall a b c d e f I don't want it to wreck anything else that may want it there [07:31] if you had pack1 installed, and you would run sudo apt-get install pack1 pack2 pack3, pack1 would not be listed as the package that would be really installed [07:31] so you will know that you should not remove it :) [07:31] the problem is [07:31] i already ran apt-get install [07:31] so i can't see that list [07:31] heh) [07:32] quizme: /var/log/dpkg*log [07:32] twb: i c.... i have to dig in there .... thanks [07:33] oh boy [07:33] i need a bubble bath [07:33] :) just look at the timestamp [07:33] quizme: had you used aptitude, there would be /var/log/aptitude, which is more readable [07:33] i c [07:33] hmm [07:34] ok [07:34] i'll use aptitude from now on [07:34] this is goign to be hell [07:36] quizme: there is /var/log/apt/term.log [07:36] quizme: you would see the output of apt-get you ran before [07:36] basically to get my system back to a state S0 at time t0, i should remove all packages installed after t0 if they weren't already in the system before t0. [07:37] then type in apt-get autoremove [07:37] it seems like that could be automated with a script [07:38] quizme: look at the /var/log/apt/term.log [07:40] That still is not guaranteed to get you back to what you had before. [07:40] In particular, removing (instead of purging) will not remove config files [07:41] And some buggy packages will leave stuff in /etc or /var even after you purge them [07:41] database packages sometimes do that to avoid data loss. [07:42] it would be cool if there was a program that could bring your software and library state to a certain time point by dragging a "scrubber control" like in a video player. [07:42] quizme: it is called time machine in mac os x ;) [07:43] twb: so there is no clean way to make a time machine with apt-get ? [07:44] It has been discussed, but does not exist yet. [07:44] In particular it will be easier once btrfs is in production, as it (like ZFS) supports snapshots. [07:46] In general, removing packages and purging them (aptitude purge ~c) should be sufficient [07:46] It's just not GUARANTEED to be identical [07:48] ok [07:49] so i'll just aptitude purge all the packages in installed today [07:51] then reinstall what i was supposed to [07:51] hopefully that doesn't break anything else [07:53] aptitude purge ibavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0 <--- this looks pretty safe doesn't it ? [07:57] quizme: sudo apt-get purge [08:16] I'm on Ubuntu 8.04.2 LTS and for some reason I can't get vim installed correctly. The package installs but vim complains about missing features (such as no syntax highlighting). Anyone know what's going on? [08:17] Counterspell did you customize ~/.vimrc? [08:17] yes [08:18] Counterspell rename it and give it a try without it. [08:18] why is the vim build screwed up? [08:18] ok [08:18] of course that will work [08:18] but i want those features [08:18] Counterspell no, I think you messed up your .vimrc [08:18] :) [08:18] syntax [08:19] no vimrc is ok [08:19] i just copied it from my other box [08:19] nothing wrong with it [08:19] hm [08:19] regular sudo apt-get vim? [08:19] someone think the build for server would be 'more stable' without syntax highlighting? [08:19] no [08:21] yes normal sudo apt-get install vim [08:21] Counterspell are you invoking with vi? maybe try with vim? 
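[Editor's note: a minimal sketch of the rollback approach discussed above, using the dpkg log and the purge commands quoted in the conversation; the package list is the one quizme installed, and the date grep assumes the default dpkg.log timestamp format.]

    # What dpkg recorded as installed today
    grep " install " /var/log/dpkg.log | grep "$(date +%Y-%m-%d)"
    # Purge the packages (removes config files too), then drop now-unused dependencies
    sudo apt-get purge libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0
    sudo apt-get autoremove
    # aptitude equivalent for sweeping out packages left in the "config-files" state
    sudo aptitude purge ~c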
[08:21] nope; let me see i just did apt-get update and now it looks like i can install vim-full [08:22] I don't have an 8.04 box. np with 8.10. [08:23] i think i'm all set now [08:23] thanks man [08:23] fyi; install vim-nox is the way to go [08:23] where are packages downloaded to again? i want to delete some downloaded packages [08:24] Counterspell: /var/cache/apt/archives [08:24] Counterspell: thanks [08:24] spaces in file names...wrote a simple script to inventory the permissions for files and dirs with full path. all works except file names with spaces. some help? http://pastebin.com/m11102c41 [08:25] pointer? [08:37] moin [08:39] <_ruben> friartuck: i assume something like this would work (not-tested) : find ~/.nx -name "*" -print0 | xargs -0 ls -Alhd [08:41] _ruben interesting, that may work better. Thx! I'm working with sed to figure out the first script. [08:42] does apt-get build-dep only install the dependencies of a package? [08:42] _ruben your's fixed the space problem anyways. [08:57] how do I install only the dependencies of a package? [09:43] hi i installed gallery2 but i cant get the relative paths right...i tryed almost everything... [09:43] http://gallery.menalto.com/node/77317 [09:43] i followed that [09:44] can anyone throw an eye? [09:44] galler2 path is /usr/share/gallery2 [09:44] wordpres in /var/www/wordpress [09:45] domainname.com is linked in /var/www/wordpress === asac_ is now known as asac === rdw200169 is now known as rdw200169`away [12:02] New bug: #299455 in mysql-dfsg-5.0 (main) "mysql init script fails if debian-start is not executable" [Low,Triaged] https://launchpad.net/bugs/299455 === mcasadevall is now known as NCommander [12:52] During installation, tasksel offers the 'virtualisation' option. When this option is checked, the installer does not put the main user in the libvirt group. The installer automatically put the main user in the 'admin' group, it is desirable to put the main user in libvirt so that the user can use virsh 'out of the box'. [12:56] New bug: #352321 in mysql-dfsg-5.0 (main) "mysql queries "lose" results" [Undecided,New] https://launchpad.net/bugs/352321 [12:58] Additionaly, the default network is broken at install. Its default setting makes it fail on startup. Better deactivate it (not autostarted) than having a broken default configuration. Another reason why the default network should not be autostarted : it is a NAT configuration which is not adapted to several use case. In doubt, better let the user choose than make a choice for him that he will need to de-configure. [12:58] how do I create a launch to a executable? [13:56] is this irc room logged somewhere? [13:57] !logs | rst-uanic [13:57] rst-uanic: Official channel logs can be found at http://irclogs.ubuntu.com/ - For LoCo channels, http://logs.ubuntu-eu.org/freenode/ === Zaraphrax is now known as Zaraphrax[Away] [13:57] jpds: thanks :) [14:07] ivoks, hi [14:12] question. how can I give user permissions to actually write to /var/www directory ? [14:12] whithout chown , cause that just screws things up [14:15] orudie: are you familiar with linux permissions? [14:15] do an ls -ld /var/www [14:16] and paste the output (should only be one line) here [14:17] drwxr-xr-x 6 root root 4096 Mar 30 01:52 /var/www [14:17] ok, typically, root does not own /var/www [14:17] in ubuntu/debian, www-data does [14:17] did you change it? [14:17] nope [14:17] do you have a webserver installed? [14:17] yeah [14:17] which one? 
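[Editor's note: a short aside on the spaces-in-filenames fix above; the -exec form is a standard find alternative to the -print0/xargs pipeline and is not something proposed in the log itself.]

    # NUL-separated pipeline, as suggested above; survives spaces in names
    find ~/.nx -print0 | xargs -0 ls -Alhd
    # Equivalent without xargs: let find run the command itself, batching arguments
    find ~/.nx -exec ls -Alhd {} +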
[14:17] installed it with tasksel apche2 [14:18] with 4 active vhosts [14:18] ok ... [14:18] and who/what needs to write to this directory? [14:20] oh [14:20] i need to upload files with ssh [14:20] i mean sftp [14:21] well typically that's not done directly in /var/www [14:22] you might create a directory like /var/www/user/ [14:22] and then chown that directory for your user [14:23] giovani, oh so i have /var/www/site1 , so i can do chown username /var/www/site1 ? wont this mess things up with apache's permissions ? [14:23] yes it would [14:23] orudie: you can have your user, and apache's group own the directory [14:24] giovani > you could put www-data in the group that owns the directory [14:24] in the group of the directory I meant [14:24] yann2: I just said that [14:25] ok I misunderstood :) thought you told him to put www-data as group [14:25] sorry [14:25] I did, I must have just misunderstood you :) [14:25] why create ANOTHER group? [14:25] :) I would create a group like "website1", and put user and www-data in it [14:26] what's the advantage of that, in this situation? [14:26] more flexibility if there are several people working on the website? [14:26] could give access to some people to one website but not the other one [14:26] non sense if he is the only user :) [14:26] ok ... he hasn't said anything about that [14:27] but yes, in that situation, that would be more flexible, it's just more complex if it's not required [14:27] I always found web permissions to be extraordinary complex and unstatisfying :( [14:27] yann2, can you help me create a group ? === Zaraphrax is now known as Zaraphrax[Away] [14:28] yann2, that will let me do what you are talking about [14:29] yann2: linux permissions being very lacking don't help -- which is why real ACLs are usually brought in :) === rdw200169`away is now known as rdw200169 [14:44] ivoks: We're looking at leaping to clamav 0.95 before release. There's a draft package in ubuntu-clamav PPA. Could you test it with amavisd-new? [14:45] i might... [14:46] i just can't tell when :) [14:46] * ivoks whishes cloning is allowed :) [14:46] I probably have about two days to decide. [14:48] ivoks: I don't know anyone else I'd trust to do it and I'm pretty tied up working on porting libclamav rdepends. [14:48] i'll test it [14:49] ivoks: Thanks. [14:49] * ScottK-palm gets back to $WORK. === rdw200169 is now known as rdw200169`afk === Zaraphrax[Away] is now known as Zaraphrax === Zaraphrax is now known as Zaraphrax[Away] [16:14] Greetings [16:14] I need some suggestions with TAR === hessml|away is now known as hessml|away|away [16:15] I have an old version of tar that doesn't support inline bzip and gzip, that also splits archives once they hit 2048 MB, and the folder I'm trying to archive is 2078 MB. [16:16] how can I tar and bzip simultaneously? [16:18] Fenix|work: run "tar --version" for me [16:18] I don't know what "old version" means exactly [16:18] not supported :) [16:18] hehe [16:18] that old [16:18] you're running ubuntu server? [16:18] I run several... but this one is not. [16:18] this is #ubuntu-server [16:19] it's an antiquated BSD derivative. [16:19] you'd need to read the manpage for the version you have [16:19] hi, i'm having a problem with an old version of winzip ;) [16:19] as for bziping ... you can simply tar it and pipe that to bzip [16:20] giovani, I do, and I have... but there are bright minds here and it appeared noone was doing anything 'pressing' so I thought to ask. [16:20] but beyond that ... 
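[Editor's note: a sketch of the shared-group arrangement suggested above for letting an sftp user write under /var/www while Apache keeps access; the group name site1 and the login name youruser are placeholders.]

    # Create the group and put both the login user and Apache's user in it
    sudo addgroup site1
    sudo adduser youruser site1
    sudo adduser www-data site1
    # Hand the vhost docroot to that group, make it group-writable,
    # and set the setgid bit on directories so new files inherit the group
    sudo chgrp -R site1 /var/www/site1
    sudo chmod -R g+w /var/www/site1
    sudo find /var/www/site1 -type d -exec chmod g+s {} +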
clearly you'd have to refer to documentation that came with your version of tar ... it's not ubuntu, and it's not supported here [16:20] tar to stdout, pipe to bzip [16:20] so the manpage makes no mention of file limit? [16:20] nope [16:20] have bzip create the file rather than tar [16:20] Deeps: sounds like the advice I just gave :) [16:20] indeed [16:21] hopefully you can advise me with my winzip problem next ;) [16:21] giovani, to give Deeps credit, you mentioned to tar it and pipe into bzip... he suggested just to bzip without the tar [16:21] use cpio [16:21] don't use tar if it's old [16:21] Deeps, I'd be glad to help you with your winzip problem. [16:21] move it to another machine and tar it :D [16:21] Fenix|work: umm, actually, i suggested the exact same thing that giovani did [16:22] I was just thinking about mounting it via NFS to one of my ubuntu boxes [16:22] heh [16:22] I missed the tar to stdio... just saw the 'have bzip create the file [16:23] bzip doesn't create archive files [16:23] Deeps: I believe you have to replace the winzip flux capacitor [16:23] which is why you tar it first [16:24] if you tar to stdout, it shouldn't be splitting anything as it's not creating any files, it's simply being piped to bzip to create instead [16:24] (which is what giovani suggested :)) [16:25] of course, that relies on tar supporting STDOUT redirection, which, considering it doesn't support printing its version, might be a stretch [16:25] poke fun that the poor soul who has to administer some old piece of crap... [16:25] however, it MIGHT evade the 2GB limit [16:25] depending on its cause [16:25] for all we know, the system's so old it doesn't support files > 2gb ;) [16:25] 2.6GB of source code should compile pretty small [16:26] compress [16:26] jeeze [16:26] what is wrong with my brain today [16:26] sounds usuall to me :) [16:26] all that dust you've been breathing in that's been stuck in that computer since the 1980s [16:26] * Deeps gets back on with his windows MCE install [16:26] although given that it's off-topic hour, anyone know a linux alternative that'll work with an xbox360? [16:26] "work with"? [16:27] the 360 has a MS-bodged upnp-av stack [16:27] so it'll only read networked media if it's coming from WMPv11 or a WinMCE (XP-MCE, Vista) [16:27] talk to the folks at LinuxMCE [16:27] Deeps, have you visited the xbox-linux.org site? [16:28] #linuxmce [16:28] good project [16:28] also http://smart-home-blog.com/archives/836 [16:28] Fenix|work: just did, thats for running linux on the xbox, not reading network media from an xbox360 (totally different machine) [16:28] giovani: ta [16:28] xbmc seems to have some support [16:29] yeah, all this stuff's for the xbox, not the xbox360 [16:29] nm, [16:29] Deeps: just talk with #xbmc and #linuxmce [16:30] they'll know more than us [16:30] aye [16:30] < Deeps> although given that it's off-topic hour || was the only reason i asked ; [16:30] * Fenix|work thinks most people knows more than him [16:30] ;) [16:30] most people can barely operate a computer [16:30] so, given that you know what "tar" is ... I figure you're already in the top 1% [16:31] I get paid to barely operate several servers... the advantages of knowing that little bit extra. [16:31] They get their retribution by giving me this crusty old BSD derivative called QNX. And not even the new version. [16:32] qnx? [16:32] ah lol [16:32] QNX rocks! [16:32] I'm having fun trying to port over GCC 3 [16:32] so I stand a chance at porting over some more up-to-date tools. 
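[Editor's note: a minimal sketch of the "tar to stdout, pipe to bzip" suggestion above; whether the old QNX tar accepts "-" for stdout is an assumption, but this is the standard form on systems whose tar lacks built-in compression.]

    # Stream the archive to stdout and let a separate bzip2 do the compression
    tar cf - /path/to/folder | bzip2 -9 > folder.tar.bz2
    # Unpacking works the same way in reverse
    bzip2 -dc folder.tar.bz2 | tar xf -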
[16:34] giovani, 6 yeah... 4, not so much from an administrative point of view [16:34] you should run the "QNX is cool!" application [16:35] http://upload.wikimedia.org/wikipedia/en/f/fd/Qnx_floppy.gif [16:35] right next to "Towers of Hanoi" [16:35] :) [16:35] hehe [16:35] if anyone's interested, the correct answer to my question was GeeXboX uShare ;) [16:35] Deeps, will keep that in mind [16:35] Deeps: yeah, I figure I have to use MS MCE [16:35] to get all the features I need [16:36] nobody else supports QAM decryption with CableCards [16:36] fun [16:37] because Linux = evil [16:37] clearly [16:38] linux = scarey... like 'the earth is flat' scarey. [16:38] most devs don't want to fall off the edge of the earth, so they stay home [16:38] exactly [16:39] and management doesn't want to use linux because they think they have to release their source code. [16:39] haha [16:40] Fenix|work: talk to them [16:40] take the lead [16:40] they'll tell me... what's tar? [16:40] tar is a program we use every day - on linux it works, here it doesn't [16:40] and i have to backport real tar, which takes couple of hours [16:41] that's why you have to pay me more [16:41] simple as that :) === hessml|away|away is now known as hessml|away [16:52] ivoks, and where is 'here'? [16:52] :) [16:53] Fenix|work: at your company [16:53] ? [16:53] you said management doesn't know what tar is and are affraid of linux [16:54] just let them know that with linux everything would be cheaper, and you'll get the green light [16:56] time to go... [17:12] Someone think that they can help me with a mail-sending batch script not running correctly when run as a cron job? [17:13] hey folks, I've got some servers that a pair of gigabit ethernet adapters each. I bonded the nics and just found that they are all negotiating a 10 mb connection instead of 1000. Can anyone give me a shove in the right direction to fixing that? My google-fu is failing me here... i must be searching for the wrong things [17:13] let me check my magic 8 ball...( just ask your question) [17:14] acicula? [17:14] psyferre: you use ethtool to try and negotiate at 1000mbps? [17:15] giovani: i'd been looking at mii-tool, at the -F options, but they only appear to support up to 100baseT [17:15] giovani: looking at ethtool now [17:16] ZipmaO: just ask your question, or describe the problem, if someone knows they'll give you an answer [17:17] giovani: looks like ethtool -s bond0 speed 1000 is all i need, correct? [17:17] psyferre: try it :) [17:18] giovani: :D sorry, i'm a *nix novice and am trying to solve a production server problem quickly... we didn't realize the problem until an hour ago and are frantically trying to resolve it :) [17:19] giovani: i'll try to find something "safe" to try it on [17:19] psyferre: the reason I say try it ... is because I haven't had the problem before -- I'm giving you my best advice [17:19] but I can't be sure of what will work [17:19] giovani: i understand, thank you very much for the advice :) [17:20] you can run ethtool bond0 [17:20] to find out some basic info [17:20] that's harmless [17:21] giovani: okay, thank you :) [17:21] I'm not sure that ethtool will be effective against the bond0 device, it may have to be run against the individual nics [17:21] that's a good point, greenfly [17:21] hmmph. 
"No data available" [17:21] another issue is that I thought that gigabit ports required autoneg [17:22] since it's interacting directly with the MII [17:22] so if you are getting 10mbit it's possible the switchport isn't set up properly [17:23] just confirming what greenfly said -- yes, autoneg is required for 1000Mbps (had to look it up) [17:23] greenfly: i guess that's possible, though most of the switch ports are setup exactly the same way [17:23] so I'd be looking at the switch ports first [17:23] and make sure they are set to gig and autoneg [17:23] however, it seems a number of PHYs support forcing 1000 [17:23] but it's non-standard [17:24] because otherwise you'll ultimately have to set your server's nics to autoneg in which case they'll possibly negotiate down to 10mbit again [17:25] greenfly, giovani: yes, the switch ports are set to autonegotiate and max capacity [17:25] maybe try hard-coding the switch ports themselves to gig? [17:25] they currently report 1000 mbps full duplex on those two ports [17:25] are they actually gig ports? [17:25] what indicated to you that you were neged at 10Mbps? [17:25] sommer: is https://wiki.ubuntu.com/JauntyServerGuide up-to-date wrt to the sections that need to be reviewed? [17:27] psyferre: if you /did/ want to hard-code an ethernet port to 1000 and turn off autoneg this is how you would do it (as root): [17:27] if i run mii-tool -v bond0 it reports the link speed at 10mb [17:27] psyferre: ethtool -s eth0 speed 1000 duplex full autoneg off [17:27] don't run it against the bond0 interface, but test eth0 [17:28] greenfly: okay [17:28] psyferre: note that sometimes when I've run ethtool it hasn't disrupted service--other times it has [17:28] also, this won't persist after a reboot so ideally you'll figure out some way for autoneg to work [17:29] how do I activate my ftp server on ubuntu server 8.10? is there a page I can visit? [17:29] mathiaz: yes, it is now [17:29] dustin_: there are a few guides around but the main way is to figure out what ftp service you want to run and use the package manager to install it [17:29] sommer: thank ya [17:29] greenfly: okay, i wonder... when i created the bond0 interface i used this line from a tutorial: options bonding mode=0 miimon=100 [17:29] ... maybe there's another option i should have used? [17:30] psyferre: no, that doesn't affect the speed of the interface, just how it's bonded and what timeout it uses to determine when to failover [17:30] greenfly: okay, thank you [17:30] but I wouldn't run miitool or ethtool tests against bond0 [17:31] greenfly: is there a better way that you would recommend to find at what speed the bond is operating? [17:31] greenfly: which ftp service is easiest to configure from command line? [17:31] psyferre: either ethtool against eth0 and eth1 (or whatever your two nics are) or actual speed test (ie using rsync or scp to transfer a file) [17:32] greenfly: they both report 1000baseT full duplex [17:32] balance-rr often confuses other machines you connect to [17:32] psyferre: then it sounds to me like your interfaces are actually at the correct speed [17:32] greenfly: according to ethtool anyway [17:33] well, ethtool should be reading directly from the chipset [17:33] Doesn't the bond interface use a pseudo intel e100 driver ? [17:33] greenfly: pureftpd or proftp? which has the least setup? 
[17:34] greenfly: I know how to use both with gui tools but not command line [17:34] dustin_: if either is packaged in main it should have a pretty straightforward setup [17:34] dustin_: use pure-ftpd [17:34] if you just want a simple one, try to find one that can use local unix accounts [17:34] genii: if it did, how would it supply more than 100Mbps from multiple bonded 100Mbps interfaces? [17:35] giovani: Yes, thats just what I was thinking about [17:35] but we know that it does ... [17:35] pure ftpd is controlled arguments in the command line, and you can use puredb to create virtual users, you can control quotas, bw limits, access by hours. [17:38] jmedina: where can I find a man online for pure-ftpd? this will reduce my questions in chat ;) [17:39] dustin_: first google hit for "pure-ftpd" [17:40] dustin_: you can read pure-ftpd(8) and for ubuntu pure-ftpd-wrapper (8) [17:40] doing that atm but I am getting a lot of roundy-rounds ;( === tuxlinux_ is now known as tuxlinux [17:43] giovani, greenfly: I am utterly failing at getting a transfer speed out of scp... could you give me a hint? I tried -v and got loads of debugging messages, but i don't see anything that indicates the speed of the transfer [17:43] psyferre: when it's transfering a fle it gives the speed on the right, afaik [17:43] yeah same here [17:43] otherwise you could use rsync with --progress [17:44] giovani: heh, i see nothing that isn't directly in front of my face, that is. *sigh* Sorry about that... 27.3MB/s [17:44] it was a 28 mb file... maybe i should try something larger? [17:44] psyferre: that's definitely not 10Mbps :) [17:44] yes, something larger would help [17:44] giovani: yup! :) at least i know that much :) [17:45] dd if=/dev/zero of=/testfile bs=1024k count=512 -- that should do it [17:48] keep in mind, scp has significant overheard [17:52] hmm... just transferred an ubuntu iso... transfer speed hovers around 20 mbps [17:53] you mean 20MBps? [17:53] you reported 27.3MBps just a minute ago [17:53] that's very different from 20Mbps [17:53] yes, sorry... lazy shift key [17:53] :) [17:54] that's still more than 100Mbit [17:55] that's true... i've never been good at thinking in megabit terms... so i must be good to go! [17:57] psyferre: divide by 10 for MB from Mb... it's not exact. but it'll get you in the ballpark. [17:58] psyferre: it worked in the modem days. with start-stop bits, it was 10 bits per byte. [17:58] psyferre: with overhead, that overstates it a little but it's still reasonable. [17:58] PhotoJim: thanks :) [17:59] greenfly, giovani, and everyone else who commented: Thank you very much for helping a novice figure out what the heck is going on. I really appreciate it. === mathiaz_ is now known as mathiaz [18:19] is there any difference in TIA-568B and TIA-568A wired cabling, besides the pin order? [18:20] they should perform equally good, right ? [18:24] Iceman_B^Ltop: Yup. Just use same order on both ends [18:24] hello [18:25] Iceman_B^Ltop: I generally use B [18:25] Somebody use a mail server with multiples domains?? [18:26] christian_: of course, it's a common setup === hessml|away is now known as hessml|away|away [18:28] giovani do you have a mail server with postfix??? [18:28] christian_: yes [18:28] and various domains?? [18:28] yes ... 
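[Editor's note: a consolidated sketch of the bonding speed checks used above; eth0/eth1 as the slave NICs and user@otherhost are placeholders, and as noted in the conversation a forced speed does not persist across reboots.]

    # Query negotiated speed/duplex on each slave NIC rather than the bond0 pseudo-device
    sudo ethtool eth0
    sudo ethtool eth1
    # Re-trigger autonegotiation, or force gigabit as a last resort (non-persistent)
    sudo ethtool -s eth0 autoneg on
    sudo ethtool -s eth0 speed 1000 duplex full autoneg off
    # Rough throughput check: make a 512 MB test file and copy it off, watching scp's MB/s figure
    dd if=/dev/zero of=/tmp/testfile bs=1024k count=512
    scp /tmp/testfile user@otherhost:/tmp/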
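[Editor's note: a hedged sketch of the flat-file virtual alias setup from the Postfix VIRTUAL_README linked above, for the two-domain question; domain names and addresses are placeholders, and the MySQL/LDAP variants mentioned in the conversation work differently.]

    # /etc/postfix/main.cf -- accept mail for the second domain and map its addresses
    virtual_alias_domains = domain2.com
    virtual_alias_maps = hash:/etc/postfix/virtual

    # /etc/postfix/virtual -- deliver to a separate local user, or forward to another address
    foo@domain2.com   foo2
    bar@domain2.com   bar@domain1.com

    # Rebuild the lookup table and reload Postfix
    sudo postmap /etc/postfix/virtual
    sudo postfix reload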
[18:28] I have a mail server [18:28] I use postfix with virtual domains in ldap and mysql [18:29] I do not understand how to use ldap and mysql [18:29] in my mail server [18:30] christian_: neither are required for virtual domains [18:30] but postfix provides great documentation on setting it up, if you'd like [18:31] yes i view this information, but i dont understand how to use my domain1, with my domain2 [18:31] I have squirrelmail [18:31] an d the users how to check your mails [18:32] christian_: with simple plain setup you map mails address to local users, and if you want foo@domain1.com and foo@domain2.com with different mailbox, you need to create to different users and user a map [18:33] if you want both domains go to the same mailbox, just add domain2 to mydestination [18:34] which is it the setup?? [18:34] for more info read Postfix Virtual Domain Hosting Howto: http://www.postfix.org/VIRTUAL_README.html [18:34] I read about the configuration of postfix [18:35] I use postfix+mysql for virtual hosting for different customers [18:35] genii: I'm looking at a factory sealed cable that says 568-A but apprantly is wired up as 568-B [18:35] but on both ends, so that shouldnt be ap roblem [18:36] Iceman_B^Ltop: yep, a non-issue [18:36] okay [18:36] 568-B is far more common [18:37] -A is considered obsolete [18:37] kirkland: does kvm/libvirt support snapshot? [18:37] christian_: for a simple setup without mysql or ldap this howto looks good: [18:37] http://www.akadia.com/services/postfix_separate_mailboxes.html [18:38] mathiaz: yes, much better in kvm-84 [18:38] kirkland: is this feature available from virsh? [18:39] how does kvm handles snapshots? [18:39] kirkland: here is my scenario: [18:39] jmedina, What for use mysql for clients [18:39] is ts neccesary? [18:39] mathiaz: i have not idea about virsh [18:39] you dont use mysql for clients, you only store mail accounts in database [18:40] mathiaz: let's talk to aliguori in #ubuntu-virt [18:40] I prefere mysql because you can use a web based frontend like postfixadmin [18:40] mathiaz: doh... he just checked out [18:40] mathiaz: here is fine [18:40] kirkland: I'd like to run my jaunty base vm all the time (named j-base) and when I need to create a test vm based on jaunty, I would run a command (create_vm.sh j-base t-dovecot) that will snapshot the j-base vm and create the t-dovecot vm [18:40] with postfix admin manage virtual domains, different admins, mail quotas, mail forwarding, aliases [18:40] kirkland: and then I would ssh into t-dovecot [18:41] kirkland: do all my testing, and when I'm done I would just delete_vm.sh t-dovecot [18:41] kirkland: for now I'm using lv to hold the j-base filesystem and lvm snapshot to handle the snapshoting [18:41] kirkland: however I can only create a snapshot if the j-base vm is *not* running [18:42] kirkland: for consistency [18:42] mathiaz: see -snapshot in http://manpages.ubuntu.com/manpages/jaunty/en/man1/qemu.1.html [18:42] kirkland: which means that my j-base vm doesn't run most of the time. [18:46] kirkland: thanks for the pointer. I'm gonna have to think about this a bit more. === hessml|away|away is now known as hessml|away [18:46] mathiaz: mee too .... [18:48] mathiaz: i think using that -snapshot option to kvm, you should be able to master off of your base vm, and snapshot your testing to an auxilliary file [18:49] kirkland: right. That seems like a good option. [18:49] kirkland: however how would handle a live vm running from the master file? 
[18:50] kirkland: could suspending the master vm work? [18:50] kirkland: take a snapshot of the root block device and boot from there? [18:51] kirkland: in my current setup I'm also doing that, except that the master vm is always off. [18:51] kirkland: and I need to boot once in a while to update the system correclty. [18:51] kirkland: to boot the master vm [18:51] kirkland: I would like to avoid that [18:51] mathiaz: hmm, there is a "saveback" command you can issue [18:52] mathiaz: Ctrl-a s [18:52] Save disk data back to file (if -snapshot) [18:53] kirkland: right - could the guest issue a saveback command? [18:53] kirkland: it seems that the guest is the one that knows when it's safe to be snapshotted [18:54] kirkland: you don't want to take a snapshot of the master vm in the middle of an apt-get upgrade [18:54] mathiaz: right [18:55] mathiaz: looks like you want this ctrl-a s command when you *know* you want to saveback [18:56] kirkland: right - something like a checkpoint command [19:07] giovani / genii: Im posting about my problem on the Ubuntu forums. My server keeps dropping SSH connections, and I found that internet connections lag out too at that point [19:07] Iceman_B^Ltop: what makes you think it's ubuntu-related? [19:07] you're probably suffering bad packet loss [19:09] giovani: I had Ubuntu 8,10 desktop on that same machine up till a week ago [19:09] same hardware, except the hdd [19:09] no problems at all [19:09] well ... things can change, cables can be bad, hardware can go bad [19:09] except that Ibex Desktop has a GUI which I dont use on a headless machine, and it ate all 256 megs of ram [19:09] ubuntu server and ubuntu desktop are almost identical at lower levels [19:10] but, alright [19:10] giovani: how high is the change of that coinciding with the switch to a different OS ? [19:10] it's not a different OS [19:10] I'd say it's almost nil [19:10] the ethernet driver will be the same, unless you were using an old kernel before, and have updated now [19:10] is it possible that it's related to the server kernel? yes ... but I'd think it's damn unlikely [19:11] I have not the slightest idea. I though ti would be my network at first [19:11] take a look here if you want http://ubuntuforums.org/showpost.php?p=6985576&postcount=55 [19:11] and the tread itself I posted a small update after that [19:12] I'd forget this application-specific diagnosis [19:12] do a long ping test [19:12] and establish that packet loss is the issue [19:13] Iceman_B^Ltop: could you pastebin the output from: ip -s link [19:13] i'll try [19:15] http://pastebin.ubuntu.com/141603/ [19:17] server here /o/ [19:18] connection dropped... [19:18] there it goes === hessml|away is now known as hessml|away|away [19:38] how can I list all jpeg files in the home folder without being in that directory? (tried ls -aR /home/*.jpg and it didnt work for me). A step further, how could I list only jpg's with an underscore in the filename e.g. *_*.jpg [19:38] sorry, I meant home folder and subdirectories [19:39] billyk: probably something like find /home/$user -name "*.jpg" [19:40] I should have said I'm trying to use this with mogrify [19:41] I can do mogrify *.jpg if i'm in that directory, but I have a bunch of subdirectories I want to resize images in in multiple home directories [19:42] find /home/$user -name "*.jpg" | while read file; do mogrify $file; done [19:43] Deeps: don't you think using -exec would be better? [19:43] awesome [19:45] noob question but what does $user do? [19:45] and $file [19:45] variables? 
[19:45] like a shell script? [19:46] a shell script is just what's interpreted by bash [19:46] everything you run in bash is a script [19:46] billyk good intro: http://tldp.org/LDP/abs/html/ [19:46] $user there is not a defined variable -- I think he just used it as a placeholder for you to fill in [19:46] $file is a variable, as is referenced b the while look [19:46] loop* [19:47] I think it's $USER and not $user [19:47] well, for the current user, sure [19:47] will that do all the user accts? [19:47] who knows if he wants that :) [19:47] billyk: no [19:47] you'd need to wrap it in a for loop [19:47] for all the dirs in /home [19:48] just all subdirectories in the home folder [19:48] no easier way to do that than a loop? [19:48] sure, just back out the find execution to /home [19:48] that'll apply to any directory in home [19:51] cool thanks! [19:52] gonna go read that bash guide now [19:52] giovani: could be, i like while loops ;) [19:52] so I can the *.jpg in quotes is a regex? [19:53] no, that's not regex [19:53] so could I put "*_*.jpg" [19:53] oh [19:53] thats still not regex, but yes [19:53] regex would be something like ".*?.jpg" [19:54] ".*_.*\.jpg" [19:54] but yeah, what you want to do will work === mathiaz_ is now known as mathiaz [20:11] dendrobates, :) [20:41] win 26 [20:41] lose 27, heh === simplexi1 is now known as simplexio [21:06] New bug: #351378 in dhcp3 (main) "dhclient fails for virtual interfaces (IP aliases)" [Undecided,New] https://launchpad.net/bugs/351378 [21:09] Probably if the master interface already has an IP from same dhcp server, likely [21:10] (since MAC would not change) [21:11] anyone here got a canonical partner sales contact for a partner? [21:11] ours is out of office [21:12] and the temporary counterpart has been unresponsive [21:26] bash syntax question - this obviously doesnt work, but it probably best explains what I'm trying to do- if (! mogrify -identify 1.jpg | grep 800x600) (newline) then mogrify -resize 800x600 1.jpg (newline) fi [21:27] mogrify -identify 1.jpg | grep 800x600 only outputs data if the image is the right size. I want the -resize command to only be run if it's not the right size [21:28] for some reason mogrify -resize still changes an image's hash even if it's already the right resolution (bad for rsync) [21:31] Hey anybody able to help me with a preseed question? (isolinux.cfg) [21:32] I'm currently working on creating an unattended preseeded ubuntu server install [21:32] billyk: try using || between the grep command and the second morgrify command [21:32] it means the 3rd command will only run if the grep fails [21:33] cool [21:33] However, first thing that comes up (in front of the installer menu) is Language selection. Question is How do I remove that language selection? Which command could I add to isolinux.cfg? [21:33] Here is my current isolinux.cfg: [21:33] include menu.cfg [21:33] default Brownpaper [21:33] prompt 0 [21:33] timeout 0 [21:33] gfxboot bootlogo [21:33] label Brownpaper [21:33] menu label ^Brownpaper customized installation [21:33] kernel /install/vmlinuz [21:34] append file=/cdrom/brownpaper.seed locale=en_US console-setup/layoutcode=us initrd=/install/initrd.gz quiet -- [21:34] jesperronn: FAR too much pasting -- use pastebin next time === hessml|away|away is now known as hessml|away [21:34] (sorry for the many lines) -- thanks for tip @giovani [21:36] billyk: that work out ok? [21:36] Here it is in pastebin: http://pastebin.com/d2e568e5 [21:37] My challenges: 1) surpass the Language selection menu. 
2) making menu item "brownpaper start automatically" if possible. [21:38] jesperronn: your question is pretty specific, and not common knowledge for someone to have -- so wait around [21:39] giovani: yeah. Thanks! :-) if the grep doesnt fail though, it outputs the result of that command to the terminal. will that be okay for a shell script? or do I need > /dev/null or something? [21:39] @giovani: thanks for your tip! I presume this is the best forum for the question even it's specific. Any links to documentation/api or examples is appreciated [21:40] billyk: || is not similar to | -- || = OR and | = pipe [21:40] so, nothing is being passed to the last command [21:40] it's just only being run if grep fails [21:40] if you wanted to run a command only if grep succeeded you'd use && [21:41] jesperronn: the wiki, google, and ubuntuforums.org probably have a good bit of info on the topic [21:42] billyk: and just if you're curious, the way that bash knows whether or not grep "succeeded", it's based solely on exit status -- it doesn't read grep's output or anything else === rdw200169`afk is now known as rdw200169 [21:43] ah [21:43] trying to digest all that :-) [21:43] billyk: yeah ... don't worry about digesting it all at once [21:43] I'm really far from a bash expert -- you just pick up a few things every time you try something new [21:43] mastering piping and output/input redirection are the most important bash skills [21:44] in my opinion [21:44] yeah, it's obviously really useful [21:44] you feel comfortable with those? [21:45] not yet [21:45] i.e. < and > and >> and | ? [21:45] haha [21:45] well, and 2> :) [21:46] ok, quick recap ... `programname < filename` takes everything in the file 'filename' and sends it to the input of 'programname' [21:46] mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! logo.png still shows grep's output [21:46] billyk: "shows" you mean it prints to the console? [21:46] is that a problem? [21:46] will it be if I have that line in a .sh? [21:47] it'll print to the console ... nothing bad [21:47] you can fix that though if you need [21:47] okay. when you execute a shell script from cron though, where would that output go? [21:48] email [21:48] most people would send console output to /dev/null (basically, discard it) instead of printing it when using cron, so that it doesn't get emailed back to the user [21:48] mogrify -identify logo.png | grep 668x476 > /dev/null || mogrify -adaptive-resize 668x476! logo.png [21:48] should do it [21:48] try it out [21:49] the other option (specifically with grep) is to run it with the -q option [21:49] it suppresses all output [21:50] mogrify -identify logo.png | grep -q 668x476 || mogrify -adaptive-resize 668x476! logo.png [21:50] okay [21:50] why doesnt it work with > /dev/null at the end? [21:50] but that'll only work with grep -- not all apps have options to not output anything -- so knowing about > /dev/null is important [21:50] yeah [21:51] billyk: why doesn't what work? [21:51] mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! 
logo.png > /dev/null doesnt suppress output [21:51] because > /dev/null is applying to the command to the left of it [21:51] which, in your case, is mogrify, not grep [21:51] so it needs to go after grep -- since it's grep that has the output you want to suppress === hessml|away is now known as hessml|away|away [21:52] ooh [21:52] I thought the output was just piped to the -resize command [21:52] billyk: nope, remember || is NOT a pipe [21:53] it's a special OR operator, despite looking similar to pipe :) [21:53] so, because it's not a pipe, grep's output is going directly to the console [21:53] (unless you redirect it with > /dev/null) [21:56] billyk: make sense? or still not clear? [21:57] giovani: no, I got it :-) [21:57] awesome :) [21:58] now I'm curious about the 2> though [21:58] ah, well, that's simple enough to cover [21:58] is that on http://tldp.org/LDP/abs/html/ ? [21:58] so, when we say "output" we mean STDOUT [21:59] and when we say "input" we mean STDIN [21:59] so, STDOUT is > [21:59] STDIN is < [21:59] there's one more ... STDERR -- which is 2> [21:59] which is supposed to only be used for error-related info, and not general output [21:59] ok, remember some of that from basic C programming [22:01] so STDOUT is what's output to the terminal, or what's passed in a pipe? or both? [22:01] STDOUT by default goes to the terminal, unless it's redirected with > or | [22:02] > being used to output to files, and | to pass it to the STDIN of the next application after the pipe [22:02] cool [22:02] 2> takes just the STDERR, and outputs it to a file [22:03] in many cron jobs, people want to either collect both info and error messages in one place, or discard them both, they do this with `program &> filename` [22:04] if you use 2> where does the stdout go? [22:04] whereever you instruct it to [22:04] i.e. `programname 2> myerror.log` [22:04] so I can do command -argument 2> error.log > output.txt ? [22:05] yep [22:05] or, let's say, for example, you wanted to pipe both STDERR and STDOUT to another program [22:05] you'd use redirection to accomplish that [22:06] `programname 2>&1 | secondprogram` [22:06] 2> clearly takes STDERR and then pushes it into STDOUT [22:06] and then pipe takes all STDOUT (which now includes STDERR) and passes it to STDIN of secondprogram [22:07] can the secondprogram differentiate the STDERR from the STDIN? [22:07] nope [22:07] the &1 may seem arbitrary, but, in reality, each of the three file descriptors (STDIN, STDOUT, and STDERR) have numbers, 0, 1, and 2 [22:08] so 1> is the same as > which is STDOUT [22:08] and 2> is STDERR [22:10] cool [22:12] hi giovani... [22:12] hlep me please [22:12] it might be pointless to do this, but how would you save stderr to a file and then pipe stdout to a command? [22:13] i cant do the email server with two domains [22:18] billyk: in that case, you'd use the 'tee' command [22:19] which both reads its STDIN, writes it to a file, and also sends it to STDOUT [22:19] so, `programname | tee outputfile | secondprogram` [22:20] would take the STDOUT from 'programname', write it to 'outputfile' and also send it to 'secondprogram' [22:21] now I'm off [22:21] later [22:25] giovani: Thanks so much! [22:32] damn I have a problem [22:33] MatBoy: what is it? [22:35] billyk: I love myself.... :/ === rdw200169 is now known as rdw200169`afk === rdw200169 is now known as rdw200169`away [23:15] dustin__: Your screen profiles rock btw. I've used screen for over 10 years, but never got around to actually making myself a proper profile. 
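[Editor's note: a sketch pulling together the find loop, the || exit-status trick and the redirection recap from the exchanges above; /home, the 800x600 size and "somecommand" are placeholders.]

    #!/bin/bash
    # Resize only the JPEGs that are not already 800x600; -print0 with read -d ''
    # keeps filenames with spaces intact
    find /home -name "*_*.jpg" -print0 | while IFS= read -r -d '' f; do
        mogrify -identify "$f" | grep -q 800x600 || mogrify -resize 800x600 "$f"
    done
    # Redirection recap: stdout and stderr to separate files, both to one file, or both into a pipe
    somecommand > out.log 2> err.log
    somecommand &> both.log
    somecommand 2>&1 | tee both.log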
:) [23:37] hmm, capturing from my laptop only reveals SSH packets... [23:37] Iceman_B|SSH: sitll problems netwok problems? [23:38] jmedina: yup, still [23:39] Iceman_B^Ltop: please paste output from "ip -s link" [23:39] sigh, if only jesperronn had stuck around another hour I could have answered his question [23:40] (the answer is to put a language code of your choice, e.g. "en", in /isolinux/lang on the CD) [23:40] baffle: thanks for mentioning those profiles. I had no idea they existed. I'm going to install them and play with them. [23:43] ok kinda embarrased but I didnt know I had a profile??? :S [23:44] or did baffle have the wron guy? [23:45] jmedina: http://pastebin.ubuntu.com/141730/ [23:46] the right Dustin is on as user Kirkland [23:46] Iceman_B^Ltop: looks fine, no errors, dropeed or overrun [23:47] PhotoJim: ? [23:47] jmedina: okay [23:47] dustin__: Maybe the wrong guy. :-) [23:47] baffle: where can I go to see that great profile I never made? [23:47] :) [23:48] brb I is gonna fix my name :D [23:48] dustin__: I assumed you were Dustin Kirkland. === dustin__ is now known as mds58 [23:48] ahhh so much better [23:49] kirkland: Baffle was commenting that he really likes your screen-profiles package. [23:50] mds58: But you should apt-get install screen-profiles then. :) [23:51] jmedina: well, I have no clue then. apart from installing Ubuntu desktop, and seeing wether or not the problems cease [23:51] if they dont, it might be hardware [23:52] I have a tcpdump output as well [23:52] Iceman_B^Ltop: have you tested in a livecd? [23:52] PhotoJim: oh, sweet [23:52] baffle: thanks! [23:53] jmedina: can't say I have [23:53] but the server is running headless, can I still use a live cd then ? [23:53] or do I really need a screen [23:55] and keyboard [23:56] Iceman_B^Ltop: screen and keyboard are still useful. if things go wrong, it is often useful to be able to do a console login from the machine itself. [23:56] kirkland: Tried looking into 256 color profiles? Nice color shading etc. [23:57] kirkland: As in http://www.frexx.de/xterm-256-notes/ [23:58] PhotoJim: just the 2 things I dont have
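[Editor's note: a small sketch for the dropped-connection troubleshooting above, following the earlier "do a long ping test" advice; eth0 and the 192.168.1.1 gateway address are assumptions.]

    # Capture on the server's NIC but leave out the SSH session doing the capturing
    sudo tcpdump -ni eth0 not port 22
    # Long ping toward the gateway; the summary line at the end reports packet loss
    ping -c 1000 192.168.1.1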