[00:00] <olcafo> xen's live migration sounds pretty awesome...
[00:01] <olcafo> kvm's live migration sounds equally awesome... now I'm going to have to set something up so I can try it!
[00:25] <arrrghhh> anyone use mt-daap (aka firefly)?  i installed the version from the repos, and that failed when it tried to add a file.  so i figured i would compile the newest from source, and i can't seem to compile it without the libid3tag dependency...
[00:25] <dustin> is there a place where we can send recommends for next puppy version?
[00:25] <arrrghhh> puppy?  what does this room have to do with puppy?
[00:26] <dustin> I think a search bar in the package manager would be stellar
[00:26] <dustin> ah heck I am in the wrong tab
[00:26] <dustin> sorry
[00:26] <arrrghhh> i was confused there for a second lol
[00:27] <dustin> using puppy to rescue my server and irc has like 10 tabs and everyone in each has had something to offer
[00:27] <dustin> I am so glad for irc
[00:29] <olcafo> does anyone here have certifications? If so, which ones?
[00:29] <arrrghhh> still not sure what that has to do with ubuntu-server... or even ubuntu.
[00:30] <olcafo> ubuntu has server certification I believe
[00:30] <arrrghhh> doubt it.  at least not an 'official' cert.
[00:31] <arrrghhh> nothing like RHEL or SLES.
[00:31] <mathiaz> kees: jdstrand: what's your opinion on bug 293258?
[00:32] <arrrghhh> anybody use mt-daap or firefly on their ubuntu-server?
[00:32] <olcafo> arrrghhh: what about http://www.ubuntu.com/training/certificationcourses
[00:33] <arrrghhh> olcafo, yes, but i don't think those are like the RHEL or SLES certs.
[00:35] <olcafo> arrrghhh: I suppose not, now that I'm looking at it. Still would be interesting to hear about.
[00:36] <arrrghhh> certainly.  but they won't have the same type of clout the other certs will (unless you KNOW the company wants ubuntu certs, which, i have never run into unfortunately.)
[00:52] <infinity> mathiaz: FILE privs are often considered inherently insecure in the first place, but it might be better for the mysql user to have "/nonexistent" as its home directory to at least prevent the dotfile attack vector.
[00:53] <infinity> kees, jdstrand: ^^
[02:08] <LumpToe> Strange issue:  I can access my server from a remote network and my server can access all devices locally but not any of the external ip addresses?  Is this a gateway setup issue?
[02:10] <goofey> LumpToe: maybe a DNS issue?
[02:10] <goofey> LumpToe: oh, wait, can't access IP addresses - sorry, ignore me
[02:11] <goofey> LumpToe: can you traceroute or mtr from the server to see where it fails?
[02:11] <LumpToe> Yeah dig manages to resolve the addresses but tracepath stops at the router
[02:12] <goofey> that does sound like a gateway issue
[02:13] <LumpToe> This is a brand new install and a DHCP issued address from the router
[02:13] <LumpToe> How can I see the gateways used on my ubuntu box
[02:14] <goofey> I was just wondering that
[02:15] <owh> LumpToe: route -n
[02:15] <LumpToe> 0.0.0.0
[02:15] <LumpToe> brb  nature calls
[02:16] <goofey> LumpToe: my server has 2 lines:
[02:16] <goofey> 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
[02:16] <goofey> 0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0
[02:51] <lwizardl> how can you tell which raid controllers are supported for doing installs?
[03:07] <LumpToe> back
[03:07] <LumpToe> goofey: My server has the same two lines
[03:16] <LumpToe> goofey: It was my router.  Strange.  I went through some of the route tables and wiped out anything with the same IP address.
[03:53] <Sam-I-Am> anyone here ever use the dhcp3 package w/ ldap patch?
[03:55] <twb> !anyone
[03:56] <Sam-I-Am> lol
[03:57] <Sam-I-Am> so... when i give it credentials for its account on ldap, it breaks with the error of "success" ... when i give it the wrong password, it seems to re-bind anonymously and works except it can't write to the ldap dn, so it fails.
[03:57] <Sam-I-Am> binding with ldapsearch with or without credentials works fine
[03:58] <twb> Sam-I-Am: does libnss talk to LDAP?
[03:58] <twb> That is, does your /etc/nsswitch use ldap?
[03:58] <Sam-I-Am> not on the ldap server
[03:58] <Sam-I-Am> i dont think the patch uses libnss
[03:59] <Sam-I-Am> seems to have all of its config in dhcpd.conf
[03:59] <twb> Hmm, OK.
[03:59] <Sam-I-Am> the debug mode doesnt provide any useful information... and the documentation is scarce.  pretty much had to reverse engineer the schema file to figure out what i should have in my ldap tree.
[04:00] <twb> I hates LDAP
[04:00] <Sam-I-Am> i love ldap...
[04:00] <Sam-I-Am> so, if i could figure out how to make the package even work... i'd consider writing useful documentation for it.
[04:00] <Sam-I-Am> might need to find its maintainer...
[04:00] <Sam-I-Am> the guy who wrote it is nowhere to be found
[04:01] <Sam-I-Am> its a universe package
[04:03] <aranyik> hi :)
[04:05] <aranyik> how can I troubleshoot my settings if I did everything as "How-To: Set up a LAN gateway with DHCP, Dynamic DNS and iptables on Debian Etch" said, and I still cannot ping the NIC that connects to the internet???
[04:14] <twb> aranyik: Etch is not Ubuntu
[04:15] <aranyik> ok
[04:15] <aranyik> but its debian
[04:16] <twb> aranyik: this is not a Debian support channel.
[04:16] <twb> aranyik: try #debian on irc.debian.org (OFTC).
[04:16] <aranyik> ok..
[04:16] <aranyik> then how can i make the same in ubuntu
[04:16] <aranyik> ?
[04:17] <aranyik> i know dhcp3-server is supported
[04:17] <aranyik> how about bind9?
[04:17] <aranyik> i think it is also
[04:18] <twb> aranyik: I don't know what you're trying to achieve.
[04:18] <twb> aranyik: for Ubuntu you probably want to read the Server Guide.
[04:19] <aranyik> i tried it at forst
[04:19] <aranyik> at first
[04:20] <aranyik> and it wasnt working
[04:20] <aranyik> then every thread in ubuntu will show similar ways
[04:20] <Sam-I-Am> twb: doing some digging... seems like intrepid grabbed a buggy package version from debian... might be fixed in lenny... which means it'll probably be in jaunty
[04:20] <aranyik> but its still not working
[04:21] <twb> !enter
[04:22] <aranyik> a router is a very easy thing to set up, but i never had so many problems until i tried on ubuntu
[04:22] <twb> Get:2 http://au.archive.ubuntu.com hardy-updates/main ubuntu-docs 8.06.1 (tar) [42.5MB]
[04:22] <twb> ...ugh, forty megabytes?
[04:23] <twb> What, did some jackass forget to "make clean"?
[04:23] <Sam-I-Am> lol
[04:23] <Sam-I-Am> or gzip
[04:25] <twb> Sam-I-Am: no, it's gzipped
[04:25] <twb> However I don't understand why Ubuntu 8.04's version is 8.06-1...
[04:25] <twb> They ought to call it 8.04.1-1 or something.
[04:25] <Sam-I-Am> yeh
[04:26] <Sam-I-Am> probably a typo
[04:26] <twb> Sam-I-Am: er, not likely
[04:26] <twb> Sam-I-Am: more likely that they version the docs based on when they are released, and 8.06 is from -updates
[04:27] <twb> Hmph.  The source directory of the tarball is ubuntu-docs-8.04.2~hardy
[04:27] <twb> I bet lintian doesn't like that
[04:28] <Sam-I-Am> lol
[04:33] <twb> apt-cache policy ubuntu-serverguide says:
[04:33] <twb>      8.06.1       0 500 http://mirror.internode.on.net hardy-updates/main Packages
[04:33] <twb>      8.04.2~hardy 0 500 http://mirror.internode.on.net hardy/main Packages
[04:45] <twb> OK, so what happens in that file, AFAICT, is that there are canonical English .xml files, and then .po files that contain each English paragraph and its translation (i.e. each paragraph occurs 1 + 2*(number of translations) times).  Then on top of *that*, I think the autogenerated translated .xml files are reproduced in the source?
[04:54] <twb> Check this shit out:
[04:54] <twb> fgrep -rl 'You can use the CVSROOT environment variable to store the CVS root' * | wc -l
[04:54] <twb> 83
[04:54] <Sam-I-Am> huh...
[04:54] <twb> That's right *eighty three* copies of the same text in the source tarball
[04:54] <Sam-I-Am> well aint that special
[04:54] <twb> I'm sure someone is just being lazy.  That can't be the only way to do it
[04:55] <twb> Maybe it's something horrible like Canonical builds its docs using its unpublished internal code, so the source package actually contains postprocessed files.
[05:09] <oh_noes> is there a package from repo i can install for VMWare Tool on Ubuntu Server?
[05:09] <oh_noes> Or do I have to manually install it
[05:13] <twb> oh_noes: do you mean so you can run ubuntu-server as a guest inside vmware?
[05:17] <twb> oh_noes: on Debian there is open-vm-tools, I can't see it in 8.04
[05:17] <twb> oh_noes: that is the "install VMware Tools" thing built properly in a .deb
[05:19] <oh_noes> it's not, atm it's all manual, and you have to install your kernel source
[05:19] <oh_noes> I wasnt sure if Ubuntu had a package for it, but thats ok
[05:19] <twb> It's in 8.10
[05:19] <twb> http://packages.ubuntu.com/open-vm-tools
[05:19] <twb> I guess you could backport it
[05:22] <Sam-I-Am> and the 8.10 package works flawlessly
[05:22] <Sam-I-Am> although vmware is a little weird on detecting if the guest is running vmware tools...
[05:23] <twb> I'd certainly be more inclined to trust backporting the intrepid package over using the shitty virtual CD full of scripts and sharballs that vmware-server itself mounts.
[05:33] <Sam-I-Am> yeah, also known as... binary blobs
[05:34] <Sam-I-Am> and m-a just makes it so easy to get modules in open-vm-tools
[05:37] <twb> Sam-I-Am: actually, no
[05:38] <twb> Sam-I-Am: the vmware tools .iso contains pre-compiled .ko files only for RHEL kernels
[05:38] <twb> Sam-I-Am: there's also the module source in there, which would be used on an ubuntu guest
[05:38] <Sam-I-Am> and some others... like sles i think
[05:38] <Sam-I-Am> true
[05:38] <Sam-I-Am> but their builds break on 'modern' systems
[05:38] <Sam-I-Am> thanks to module build dependencies
[05:39] <Sam-I-Am> i made a patch some time ago for it...
[05:40] <Sam-I-Am> other folks seem to combine vmware-tools and open-vm-tools
[05:40] <Sam-I-Am> almost seems like ubuntu is getting along better with vmware these days than redhate since they're pushing their own vm system
[05:42] <twb> Eh, vmware can FOAD as far as I'm concerned.
[05:42] <Sam-I-Am> i really like virtualbox
[05:42] <twb> qemu -curses alone makes qemu beat it, not to mention stuff like -tftp
[05:42] <Sam-I-Am> but trying to sell it to management is difficult
[05:42] <Sam-I-Am> "what, we dont have to pay for it?"
[05:43] <twb> Well, virtualbox has a non-free edition
[05:43] <twb> That's part of the reason I mistrust it
[05:43] <twb> Sam-I-Am: you can sell it to your management by calling the OSE "the demo version"
[05:43] <Sam-I-Am> lol
[05:43] <oh_noes> thats not a valid argument
[05:43] <Sam-I-Am> well, my first thing is trying to convert management from centos/rhel to ubuntu
[05:43] <oh_noes> Ubuntu has a non-free edition (support) so you cant mistrust it
[05:44] <Sam-I-Am> its quite the uphill battle
[05:44] <twb> oh_noes: I do mistrust canonical.
[05:44] <twb> oh_noes: I would VASTLY prefer Ubuntu to be a Debian blend or subproject rather than a fork that syncs irregularly.
[05:45] <twb> oh_noes: but providing support is quite different to having two separate versions of a product.
[05:45] <twb> It's not like Ubuntu has a RHEL and a CentOS version
[05:46] <oh_noes> agreed, but they only charge for the features business want.  For example OSE has USB support etc, and the non-free has remote desktop.
[05:46] <Sam-I-Am> yeah, the packages arent 3 years old :)
[05:47] <twb> oh_noes: that's precisely my point.
[05:47] <twb> oh_noes: it's the same business model as cedega has.
[05:48] <twb> It's based on treating the wider community as second-class citizens.
[05:49] <Sam-I-Am> welp, time for bed here
[05:49] <Sam-I-Am> laters
[05:50] <oh_noes> I disagree.  It's giving the wider community what they want for free, and charging business for anything additional they need.
[05:50] <oh_noes> Sure, if they start removing functionality used by the wider community, then they break this
[05:50] <oh_noes> but currently, what they are doing functionality wise I think is a great compromise
[05:56] <twb> Apart from the fact that they're deliberately taking away features I want, but aren't prepared to pay for.
[05:57] <twb> It means that if I want those features I have to add them into a fork of the product.  It's a divisive business model.
[05:57] <twb> If they took a "sell consulting" approach, then all the code could be open, and everybody would be working on the same codebase.
[06:05] <quizme> how can i tell if I'm using x86_64 or AMD64 architecture ?
[06:05] <quizme> what is an unstripped build ?
[06:06] <quizme> am i allowed to use the multiverse directory if I'm on 8.04 ?
[06:06] <p_quarles> quizme: they're the same architecture
[06:06] <p_quarles> I don't know what an unstripped build is, and yes, there's a multiverse repo for every Ubuntu version
[06:07] <quizme> p_quarles: I mean  how can i tell if i'm 32 bit or 64 bit
[06:07] <p_quarles> quizme: the CPU or the kernel?
[06:09] <quizme> i'm not sure
[06:09] <quizme> "x86_64 or AMD64" is that cpu or kernel ?
[06:10] <p_quarles> quizme: again, x86_64 and AMD64 are the SAME THING; what I'm asking is whether you're trying to figure out if your hardware is 64-bit capable, or if the OS you're running is 64-bit
[06:12] <p_quarles> or (this might be easier) what's your real question? why do you need to know?
[06:13] <quizme> my real question is
[06:13] <quizme> how do i install ffmpeg
[06:13] <quizme> for 8.04
[06:14] <p_quarles> quizme: sudo apt-get install ffmpeg
[06:14] <p_quarles> the dependencies and architecture questions are automatically resolved by the apt-get program
[06:15] <quizme> how about the codecs and libraries?
[06:15] <quizme> i have 8.04 on my server
[06:16] <quizme> https://wiki.edubuntu.org/ffmpeg  <--- i found this
[06:16] <quizme> it looks like there is a bunch of other commands i need to do also
[06:16] <quizme> Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid  <--- like what does that mean?
[06:18] <p_quarles> quizme: okay, I can see where your questions came from now, and let me just say that they are meaningless outside of that context; so start with the "real question" next time, okay? :D
[06:19] <p_quarles> now, to answer: follow those commands exactly and you should be good
[06:19] <p_quarles> to find out if you're running 64-bits, run in the terminal: uname -4
[06:19] <p_quarles> oops, that should be uname -r
[06:20] <p_quarles> if it's 64 bits, it will contain the term "x86_64" in the output
[06:21] <p_quarles> as for "unstripped build", that just means a copy of the codec pack as Fluendo distributes it, rather than the modified way Ubuntu ships it
[06:24] <quizme> 2.6.21.7-2.fc8xen
[06:24] <quizme> that's my uname -r
[06:25] <quizme> who is Fluendo ?
[06:25] <p_quarles> so it's a Xen virtual machine? anyway, not 64 bits, so you can skip the section in question
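A side note on the arch check above: kernel release strings vary by distro (quizme's `2.6.21.7-2.fc8xen` is a Fedora Xen kernel), so grepping `uname -r` for "x86_64" can miss. `uname -m` reports the machine hardware name directly; a minimal sketch:

```shell
# Report whether the running kernel is 32- or 64-bit from the machine
# hardware name rather than the release string.
arch=$(uname -m)
case "$arch" in
  x86_64|amd64) bits=64 ;;
  i?86)         bits=32 ;;
  *)            bits="unknown" ;;
esac
echo "machine: $arch (${bits}-bit)"
```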
[06:26] <quizme> Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid  <----- i'm running 8.04 though
[06:26] <quizme> i am running on AWS / EC2
[06:27] <quizme> so is that a dangerous command to run ?
[06:27] <p_quarles> what command?
[06:27] <quizme> sudo apt-get install libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0   <--- this command
[06:27] <p_quarles> no
[06:27] <quizme> ok
[06:27] <quizme> but why does it say 8.10 ?
[06:28] <p_quarles> oh, looking again, it appears to say those packages are available through apt-get only in 8.10
[06:28] <p_quarles> for older versions, you'll need to use the instructions below
[06:29] <quizme> can i upgrade my whole system to 8.10 ?
[06:29] <quizme> is that safe ?
[06:30] <quizme> from 8.04 to 8.10
[06:33] <p_quarles> safe is relative; if you're asking "can it break?" the answer is yes; if you're asking
[06:33] <p_quarles> "is it supposed to break?" the answer is no
[06:34] <p_quarles> "safety" in my view is having a backup plan, and not relying on unfamiliar (to you) software to make things flawless; the latter is almost always unrealistic
[06:35] <quizme> good advice
[06:36] <p_quarles> anyway, the majority of version upgrade experiences are pretty smooth, but there is a significant minority that runs into big bumps during the process
[07:08] <rags> I want my ubuntu server to act as a gateway...what I understand is I have to enable routing (net.ipv4.ip_forward=1)..Is there anything more I have to do? this server is connected to another router.
[07:09] <rags> do I need to setup iptables and NAT?
[07:10] <simplexi1> rags: depends on which kind of sharing you want to configure
[07:10] <simplexi1> rags: options are transparent bridge and NAT, google those
[07:12] <rags> simplexi1: thx..will check...I just want net access for the client machines behind the ubuntu server...I suppose that means a transparent bridge.
[07:14] <simplexi1> rags: nat if you dont want access to it from anywhere, or bridge if you want to access it from somewhere other than the server
[07:16] <rags> will the forwarding work with just what I have done...since nat is already present on the router?
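For reference, the usual NAT-gateway recipe behind this exchange looks like the sketch below (commands shown but not run; they need root, and "eth0" as the internet-facing NIC is an assumption):

```shell
# Enable forwarding and masquerade LAN traffic out the WAN interface.
# These change live state only; persist them via /etc/sysctl.conf and
# an iptables save/restore mechanism.
#
#   sysctl -w net.ipv4.ip_forward=1
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

As rags suspects, if the upstream router already does NAT, plain forwarding plus a route on the router back to the LAN subnet can be enough; the MASQUERADE rule is only needed when this box is the one translating addresses.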
[07:23] <quizme> if you say:  apt-get install a b c d e f ....... to reverse that can you say: apt-get remove a b c d e f ......... and that will bring the state of your system to exactly where it was before ?
[07:25] <rst-uanic> quizme: if a had a dependency h, h would not be removed after step 2
[07:26] <rst-uanic> but if h was the only dependency for a it would be listed as a package that is no longer in use, and you can remove it with sudo apt-get autoremove
[07:27] <quizme> oh i c
[07:27] <rst-uanic> well
[07:28] <rst-uanic> if you do sudo apt-get install pack1 pack2 pack3, it will give the list of all the packages that would be installed
[07:28] <rst-uanic> you can save it
[07:28] <quizme> what if c was installed before step 1 ?
[07:29] <rst-uanic> it would not be listed in the packages that would be installed
[07:29] <rst-uanic> so you won't delete it after
[07:29] <quizme> do you mean uninstalled ?
[07:30] <rst-uanic> look
[07:30] <quizme> basically i am wondering if i uninstall a b c d e f I don't want it to wreck anything else that may want it there
[07:31] <rst-uanic> if you had pack1 installed, and you would run sudo apt-get install pack1 pack2 pack3, pack1 would not be listed as the package that would be really installed
[07:31] <rst-uanic> so you will know that you should not remove it :)
[07:31] <quizme> the problem is
[07:31] <quizme> i already ran apt-get install
[07:31] <quizme> so i can't see  that list
[07:31] <rst-uanic> heh)
[07:32] <twb> quizme: /var/log/dpkg*log
[07:32] <quizme> twb: i c.... i have to dig in there .... thanks
[07:33] <quizme> oh boy
[07:33] <quizme> i need a bubble bath
[07:33] <rst-uanic> :) just look at the timestamp
[07:33] <twb> quizme: had you used aptitude, there would be /var/log/aptitude, which is more readable
[07:33] <quizme> i c
[07:33] <quizme> hmm
[07:34] <quizme> ok
[07:34] <quizme> i'll use aptitude from now on
[07:34] <quizme> this is going to be hell
[07:36] <rst-uanic> quizme: there is /var/log/apt/term.log
[07:36] <rst-uanic> quizme: you would see the output of apt-get you ran before
[07:36] <quizme> basically to get my system back to a state S0 at time t0, i should remove all packages installed after t0 if they weren't already in the system before t0.
[07:37] <quizme> then type in apt-get autoremove
[07:37] <quizme> it seems like that could be automated with a script
[07:38] <rst-uanic> quizme: look at the /var/log/apt/term.log
[07:40] <twb> That still is not guaranteed to get you back to what you had before.
[07:40] <twb> In particular, removing (instead of purging) will not remove config files
[07:41] <twb> And some buggy packages will leave stuff in /etc or /var even after you purge them
[07:41] <twb> database packages sometimes do that to avoid data loss.
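twb's pointer to the dpkg log can be scripted, as quizme guessed. A sketch that lists fresh installs from `/var/log/dpkg.log`-style lines; the sample lines and the exact field layout ("date time action package oldver newver", with "<none>" marking a fresh install) are assumptions here, so check your own log's format first:

```shell
# Sample dpkg.log lines (illustrative). On a real system, read
# /var/log/dpkg.log instead of this here-string.
log='2009-03-30 06:27:01 install libavcodec-unstripped-51 <none> 3:0.svn20080206-12ubuntu3
2009-03-30 06:27:05 upgrade ffmpeg 3:0.cvs20070307-5ubuntu7 3:0.svn20080206-12ubuntu3
2009-03-30 06:27:09 install libswscale-unstripped-0 <none> 3:0.svn20080206-12ubuntu3'

# Field 3 is the action, field 4 the package, field 5 the old version;
# "install" with old version "<none>" means it was not there before.
installed=$(printf '%s\n' "$log" | awk '$3 == "install" && $5 == "<none>" { print $4 }')
printf '%s\n' "$installed"
```

Feeding that list to `aptitude purge` approximates the rollback, with the caveats twb gives above: config files and some /var data can survive even a purge.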
[07:42] <quizme> it would be cool if there was a program that could bring your software and library state to a certain time point by dragging a "scrubber control" like in a video player.
[07:42] <rst-uanic> quizme: it is called time machine in mac os x ;)
[07:43] <quizme> twb: so there is no clean way to make a time machine with apt-get ?
[07:44] <twb> It has been discussed, but does not exist yet.
[07:44] <twb> In particular it will be easier once btrfs is in production, as it (like ZFS) supports snapshots.
[07:46] <twb> In general, removing packages and purging them (aptitude purge ~c) should be sufficient
[07:46] <twb> It's just not GUARANTEED to be identical
[07:48] <quizme> ok
[07:49] <quizme> so i'll just aptitude purge all the packages i installed today
[07:51] <quizme> then reinstall what i was supposed to
[07:51] <quizme> hopefully that doesn't break anything else
[07:53] <quizme> aptitude purge libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0   <--- this looks pretty safe doesn't it ?
[07:57] <rst-uanic> quizme: sudo apt-get purge
[08:16] <Counterspell> I'm on Ubuntu 8.04.2 LTS and for some reason I can't get vim installed correctly. The package installs but vim complains about missing features (such as no syntax highlighting). Anyone know what's going on?
[08:17] <friartuck> Counterspell did you customize ~/.vimrc?
[08:17] <Counterspell> yes
[08:18] <friartuck> Counterspell rename it and give it a try without it.
[08:18] <Counterspell> why is the vim build screwed up?
[08:18] <Counterspell> ok
[08:18] <Counterspell> of course that will work
[08:18] <Counterspell> but i want those features
[08:18] <friartuck> Counterspell no, I think you messed up your .vimrc
[08:18] <friartuck> :)
[08:18] <friartuck> syntax
[08:19] <Counterspell> no vimrc is ok
[08:19] <Counterspell> i just copied it from my other box
[08:19] <Counterspell> nothing wrong with it
[08:19] <friartuck> hm
[08:19] <friartuck> regular sudo apt-get install vim?
[08:19] <Counterspell> someone think the build for server would be 'more stable' without syntax highlighting?
[08:19] <friartuck> no
[08:21] <Counterspell> yes normal sudo apt-get install vim
[08:21] <friartuck> Counterspell are you invoking with vi? maybe try with vim?
[08:21] <Counterspell> nope; let me see i just did apt-get update and now it looks like i can install vim-full
[08:22] <friartuck> I don't have an 8.04 box. np with 8.10.
[08:23] <Counterspell> i think i'm all set now
[08:23] <Counterspell> thanks man
[08:23] <Counterspell> fyi; install vim-nox is the way to go
[08:23] <Counterspell> where are packages downloaded to again? i want to delete some downloaded packages
[08:24] <Counterspell> Counterspell: /var/cache/apt/archives
[08:24] <Counterspell> Counterspell: thanks
[08:24] <friartuck> spaces in file names...wrote a simple script to inventory the permissions for files and dirs with full path. all works except file names with spaces. some help? http://pastebin.com/m11102c41
[08:25] <friartuck> pointer?
[08:37] <kraut> moin
[08:39] <_ruben> friartuck: i assume something like this would work (not-tested) : find ~/.nx -name "*" -print0 | xargs -0 ls -Alhd
[08:41] <friartuck> _ruben interesting, that may work better. Thx! I'm working with sed to figure out the first script.
[08:42] <Counterspell> does apt-get build-dep only install the dependencies of a package?
[08:41] <friartuck> _ruben yours fixed the space problem anyways.
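_ruben's `-print0`/`-0` pairing is the standard fix for whitespace in filenames: find emits NUL-delimited names, and xargs splits on NUL instead of whitespace. A self-contained sketch on a temp path rather than ~/.nx:

```shell
# Create a file whose name contains spaces, then list it through the
# NUL-delimited pipeline; a plain `find | xargs` would split the name
# into three arguments and fail.
tmp=$(mktemp -d)
touch "$tmp/name with spaces.txt"

count=$(find "$tmp" -type f -print0 | xargs -0 ls -ld | wc -l)
echo "files listed: $count"

rm -rf "$tmp"
```

One nit on the original command: `-name "*"` is redundant, since find matches everything by default.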
[08:57] <Counterspell> how do I install only the dependencies of a package?
[09:43] <espacious> hi i installed gallery2 but i cant get the relative paths right...i tried almost everything...
[09:43] <espacious> http://gallery.menalto.com/node/77317
[09:43] <espacious> i followed that
[09:44] <espacious> can anyone throw an eye?
[09:44] <espacious> gallery2 path is /usr/share/gallery2
[09:44] <espacious> wordpress is in /var/www/wordpress
[09:45] <espacious> domainname.com is linked in /var/www/wordpress
[12:52] <jdbrowne> During installation, tasksel offers the 'virtualisation' option. When this option is checked, the installer does not put the main user in the libvirt group. The installer automatically puts the main user in the 'admin' group; it would be desirable to also put the main user in libvirt so that the user can use virsh 'out of the box'.
[12:58] <jdbrowne> Additionally, the default network is broken at install. Its default setting makes it fail on startup. Better to deactivate it (not autostarted) than to ship a broken default configuration. Another reason the default network should not be autostarted: it is a NAT configuration, which is not suited to several use cases. In doubt, better to let the user choose than to make a choice for him that he will need to de-configure.
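The check behind jdbrowne's first point can be sketched like this (the sample `id -nG`-style group list is illustrative, and the group is named "libvirtd" on some releases):

```shell
# Decide whether the user can already use virsh, from a
# space-separated group list as printed by `id -nG`.
groups_out='adm cdrom sudo dip plugdev lpadmin admin libvirt'

if printf '%s\n' "$groups_out" | tr ' ' '\n' | grep -qx libvirt; then
  msg='virsh works out of the box'
else
  msg='fix: sudo adduser $USER libvirt  (then log in again)'
fi
echo "$msg"
```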
[12:58] <Ethos> how do I create a launcher for an executable?
[13:56] <rst-uanic> is this irc room logged somewhere?
[13:57] <jpds> !logs | rst-uanic
[13:57] <rst-uanic> jpds: thanks :)
[14:07] <orudie> ivoks, hi
[14:12] <orudie> question. how can I give user permissions to actually write to /var/www directory ?
[14:12] <orudie> without chown, cause that just screws things up
[14:15] <giovani> orudie: are you familiar with linux permissions?
[14:15] <giovani> do an ls -ld /var/www
[14:16] <giovani> and paste the output (should only be one line) here
[14:17] <orudie> drwxr-xr-x 6 root root 4096 Mar 30 01:52 /var/www
[14:17] <giovani> ok, typically, root does not own /var/www
[14:17] <giovani> in ubuntu/debian, www-data does
[14:17] <giovani> did you change it?
[14:17] <orudie> nope
[14:17] <giovani> do you have a webserver installed?
[14:17] <orudie> yeah
[14:17] <giovani> which one?
[14:17] <orudie> installed it with tasksel apache2
[14:18] <orudie> with 4 active vhosts
[14:18] <giovani> ok ...
[14:18] <giovani> and who/what needs to write to this directory?
[14:20] <orudie> oh
[14:20] <orudie> i need to upload files with ssh
[14:20] <orudie> i mean sftp
[14:21] <giovani> well typically that's not done directly in /var/www
[14:22] <giovani> you might create a directory like /var/www/user/
[14:22] <giovani> and then chown that directory for your user
[14:23] <orudie> giovani, oh so i have /var/www/site1 , so i can do chown username /var/www/site1 ? wont this mess things up with apache's permissions ?
[14:23] <yann2> yes it would
[14:23] <giovani> orudie: you can have your user, and apache's group own the directory
[14:24] <yann2> giovani > you could put www-data in the group that owns the directory
[14:24] <yann2> in the group of the directory I meant
[14:24] <giovani> yann2: I just said that
[14:25] <yann2> ok I misunderstood :) thought you told him to put www-data as group
[14:25] <yann2> sorry
[14:25] <giovani> I did, I must have just misunderstood you :)
[14:25] <giovani> why create ANOTHER group?
[14:25] <yann2> :) I would create a group like "website1", and put user and www-data in it
[14:26] <giovani> what's the advantage of that, in this situation?
[14:26] <yann2> more flexibility if there are several people working on the website?
[14:26] <yann2> could give access to some people to one website but not the other one
[14:26] <yann2> nonsense if he is the only user :)
[14:26] <giovani> ok ... he hasn't said anything about that
[14:27] <giovani> but yes, in that situation, that would be more flexible, it's just more complex if it's not required
[14:27] <yann2> I always found web permissions to be extraordinarily complex and unsatisfying :(
[14:27] <orudie> yann2, can you help me create a group ?
[14:28] <orudie> yann2, that will let me do what you are talking about
[14:29] <giovani> yann2: linux permissions being very lacking don't help -- which is why real ACLs are usually brought in :)
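The scheme giovani and yann2 converge on is a per-site directory whose group both the user and www-data belong to, usually with the setgid bit so new files inherit the group. The group/user commands need root, so this sketch only demonstrates the directory mode on a temp path (names like "website1" are illustrative):

```shell
# Root-only part (shown, not run here):
#   groupadd website1
#   usermod -aG website1 youruser
#   usermod -aG website1 www-data
#   chgrp website1 /var/www/site1

# Mode demonstration: 2775 = setgid + rwxrwxr-x, so group members
# (and thus apache) can read/write, and new files keep the group.
site=$(mktemp -d)/site1
mkdir -p "$site"
chmod 2775 "$site"
perms=$(stat -c '%a' "$site")
echo "mode: $perms"
rm -rf "$(dirname "$site")"
```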
[14:44] <ScottK-palm> ivoks: We're looking at leaping to clamav 0.95 before release. There's a draft package in ubuntu-clamav PPA. Could you test it with amavisd-new?
[14:45] <ivoks> i might...
[14:46] <ivoks> i just can't tell when :)
[14:46]  * ivoks wishes cloning were allowed :)
[14:46] <ScottK-palm> I probably have about two days to decide.
[14:48] <ScottK-palm> ivoks: I don't know anyone else I'd trust to do it and I'm pretty tied up working on porting libclamav rdepends.
[14:48] <ivoks> i'll test it
[14:49] <ScottK-palm> ivoks: Thanks.
[14:49]  * ScottK-palm gets back to $WORK.
[16:14] <Fenix|work> Greetings
[16:14] <Fenix|work> I need some suggestions with TAR
[16:15] <Fenix|work> I have an old version of tar that doesn't support inline bzip and gzip, that also splits archives once they hit 2048 MB, and the folder I'm trying to archive is 2078 MB.
[16:16] <Fenix|work> how can I tar and bzip simultaneously?
[16:18] <giovani> Fenix|work: run "tar --version" for me
[16:18] <giovani> I don't know what "old version" means exactly
[16:18] <Fenix|work> not supported :)
[16:18] <Fenix|work> hehe
[16:18] <Fenix|work> that old
[16:18] <giovani> you're running ubuntu server?
[16:18] <Fenix|work> I run several... but this one is not.
[16:18] <giovani> this is #ubuntu-server
[16:19] <Fenix|work> it's an antiquated BSD derivative.
[16:19] <giovani> you'd need to read the manpage for the version you have
[16:19] <Deeps> hi, i'm having a problem with an old version of winzip ;)
[16:19] <giovani> as for bziping ... you can simply tar it and pipe that to bzip
[16:20] <Fenix|work> giovani, I do, and I have... but there are bright minds here and it appeared no one was doing anything 'pressing' so I thought to ask.
[16:20] <giovani> but beyond that ... clearly you'd have to refer to documentation that came with your version of tar ... it's not ubuntu, and it's not supported here
[16:20] <Deeps> tar to stdout, pipe to bzip
[16:20] <giovani> so the manpage makes no mention of file limit?
[16:20] <Fenix|work> nope
[16:20] <Deeps> have bzip create the file rather than tar
[16:20] <giovani> Deeps: sounds like the advice I just gave :)
[16:20] <Deeps> indeed
[16:21] <Deeps> hopefully you can advise me with my winzip problem next ;)
[16:21] <Fenix|work> giovani, to give Deeps credit, you mentioned to tar it and pipe into bzip... he suggested just to bzip without the tar
[16:21] <ivoks> use cpio
[16:21] <ivoks> don't use tar if it's old
[16:21] <Fenix|work> Deeps, I'd be glad to help you with your winzip problem.
[16:21] <ivoks> move it to another machine and tar it :D
[16:21] <Deeps> Fenix|work: umm, actually, i suggested the exact same thing that giovani did
[16:22] <Fenix|work> I was just thinking about mounting it via NFS to one of my ubuntu boxes
[16:22] <giovani> heh
[16:22] <Fenix|work> I missed the tar to stdout... just saw the 'have bzip create the file'
[16:23] <Deeps> bzip doesn't create archive files
[16:23] <giovani> Deeps: I believe you have to replace the winzip flux capacitor
[16:23] <Deeps> which is why you tar it first
[16:24] <Deeps> if you tar to stdout, it shouldn't be splitting anything as it's not creating any files, it's simply being piped to bzip to create instead
[16:24] <Deeps> (which is what giovani suggested :))
[16:25] <giovani> of course, that relies on tar supporting STDOUT redirection, which, considering it doesn't support printing its version, might be a stretch
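The pipe giovani and Deeps describe, spelled out as a sketch (assuming the old tar at least accepts `-f -` for stdout; `-C` and the paths here are illustrative, and the round-trip check uses a modern tar's `-j`):

```shell
# tar writes the archive to stdout and bzip2 creates the file; tar
# never touches the output file itself, so any 2 GB splitting logic
# in tar never triggers.
src=$(mktemp -d)
echo hello > "$src/a.txt"
archive=$(mktemp -u).tar.bz2

tar -cf - -C "$src" . | bzip2 > "$archive"

# Round-trip check:
listing=$(tar -tjf "$archive")
echo "$listing"

rm -rf "$src" "$archive"
```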
[16:25] <Fenix|work> poke fun that the poor soul who has to administer some old piece of crap...
[16:25] <giovani> however, it MIGHT evade the 2GB limit
[16:25] <giovani> depending on its cause
[16:25] <Deeps> for all we know, the system's so old it doesn't support files > 2gb ;)
[16:25] <Fenix|work> 2.6GB of source code should compile pretty small
[16:26] <Fenix|work> compress
[16:26] <Fenix|work> jeeze
[16:26] <Fenix|work> what is wrong with my brain today
[16:26] <ivoks> sounds usual to me :)
[16:26] <giovani> all that dust you've been breathing in that's been stuck in that computer since the 1980s
[16:26]  * Deeps gets back on with his windows MCE install
[16:26] <Deeps> although given that it's off-topic hour, anyone know a linux alternative that'll work with an xbox360?
[16:26] <giovani> "work with"?
[16:27] <Deeps> the 360 has a MS-bodged upnp-av stack
[16:27] <Deeps> so it'll only read networked media if it's coming from WMPv11 or a WinMCE (XP-MCE, Vista)
[16:27] <giovani> talk to the folks at LinuxMCE
[16:27] <Fenix|work> Deeps, have you visited the xbox-linux.org site?
[16:28] <giovani> #linuxmce
[16:28] <giovani> good project
[16:28] <giovani> also http://smart-home-blog.com/archives/836
[16:28] <Deeps> Fenix|work: just did, thats for running linux on the xbox, not reading network media from an xbox360 (totally different machine)
[16:28] <Deeps> giovani: ta
[16:28] <giovani> xbmc seems to have some support
[16:29] <Deeps> yeah, all this stuff's for the xbox, not the xbox360
[16:29] <Deeps> nm, </offtopic>
[16:29] <giovani> Deeps: just talk with #xbmc and #linuxmce
[16:30] <giovani> they'll know more than us
[16:30] <Deeps> aye
[16:30] <Deeps> < Deeps> although given that it's off-topic hour || was the only reason i asked ;
[16:30]  * Fenix|work thinks most people knows more than him
[16:30] <Deeps> ;)
[16:30] <giovani> most people can barely operate a computer
[16:30] <giovani> so, given that you know what "tar" is ... I figure you're already in the top 1%
[16:31] <Fenix|work> I get paid to barely operate several servers... the advantages of knowing that little bit extra.
[16:31] <Fenix|work> They get their retribution by giving me this crusty old BSD derivative called QNX.  And not even the new version.
[16:32] <ivoks> qnx?
[16:32] <ivoks> ah lol
[16:32] <giovani> QNX rocks!
[16:32] <Fenix|work> I'm having fun trying to port over GCC 3
[16:32] <Fenix|work> so I stand a chance at porting over some more up-to-date tools.
[16:34] <Fenix|work> giovani, 6 yeah... 4, not so much from an administrative point of view
[16:34] <giovani> you should run the "QNX is cool!" application
[16:35] <giovani> http://upload.wikimedia.org/wikipedia/en/f/fd/Qnx_floppy.gif
[16:35] <giovani> right next to "Towers of Hanoi"
[16:35] <giovani> :)
[16:35] <Fenix|work> hehe
[16:35] <Deeps> if anyone's interested, the correct answer to my question was GeeXboX uShare ;)
[16:35] <Fenix|work> Deeps, will keep that in mind
[16:35] <giovani> Deeps: yeah, I figure I have to use MS MCE
[16:35] <giovani> to get all the features I need
[16:36] <giovani> nobody else supports QAM decryption with CableCards
[16:36] <Deeps> fun
[16:37] <giovani> because Linux = evil
[16:37] <giovani> clearly
[16:38] <Fenix|work> linux = scary... like 'the earth is flat' scary.
[16:38] <Fenix|work> most devs don't want to fall off the edge of the earth, so they stay home
[16:38] <giovani> exactly
[16:39] <Fenix|work> and management doesn't want to use linux because they think they have to release their source code.
[16:39] <giovani> haha
[16:40] <ivoks> Fenix|work: talk to them
[16:40] <ivoks> take the lead
[16:40] <Fenix|work> they'll tell me... what's tar?
[16:40] <ivoks> tar is a program we use every day - on linux it works, here it doesn't
[16:40] <ivoks> and i have to backport real tar, which takes a couple of hours
[16:41] <ivoks> that's why you have to pay me more
[16:41] <ivoks> simple as that :)
[16:52] <Fenix|work> ivoks, and where is 'here'?
[16:52] <Fenix|work> :)
[16:53] <ivoks> Fenix|work: at your company
[16:53] <Fenix|work> ?
[16:53] <ivoks> you said management doesn't know what tar is and is afraid of linux
[16:54] <ivoks> just let them know that with linux everything would be cheaper, and you'll get the green light
[16:56] <ivoks> time to go...
[17:12] <ZipmaO> Someone think that they can help me with a mail-sending batch script not running correctly when run as a cron job?
[17:13] <psyferre> hey folks, I've got some servers that have a pair of gigabit ethernet adapters each.  I bonded the nics and just found that they are all negotiating a 10 mb connection instead of 1000.  Can anyone give me a shove in the right direction to fixing that?  My google-fu is failing me here... i must be searching for the wrong things
[17:13] <acicula> let me check my magic 8 ball...( just ask your question)
[17:14] <ZipmaO> acicula?
[17:14] <giovani> psyferre: you use ethtool to try and negotiate at 1000mbps?
[17:15] <psyferre> giovani: i'd been looking at mii-tool, at the -F options, but they only appear to support up to 100baseT
[17:15] <psyferre> giovani: looking at ethtool now
[17:16] <acicula> ZipmaO: just ask your question, or describe the problem, if someone knows they'll give you an answer
[17:17] <psyferre> giovani: looks like ethtool -s bond0 speed 1000 is all i need, correct?
[17:17] <giovani> psyferre: try it :)
[17:18] <psyferre> giovani: :D  sorry, i'm a *nix novice and am trying to solve a production server problem quickly... we didn't realize the problem until an hour ago and are frantically trying to resolve it :)
[17:19] <psyferre> giovani: i'll try to find something "safe" to try it on
[17:19] <giovani> psyferre: the reason I say try it ... is because I haven't had the problem before -- I'm giving you my best advice
[17:19] <giovani> but I can't be sure of what will work
[17:19] <psyferre> giovani: i understand, thank you very much for the advice :)
[17:20] <giovani> you can run ethtool bond0
[17:20] <giovani> to find out some basic info
[17:20] <giovani> that's harmless
[17:21] <psyferre> giovani: okay, thank you :)
[17:21] <greenfly> I'm not sure that ethtool will be effective against the bond0 device, it may have to be run against the individual nics
[17:21] <giovani> that's a good point, greenfly
[17:21] <psyferre> hmmph.  "No data available"
[17:21] <greenfly> another issue is that I thought that gigabit ports required autoneg
[17:22] <giovani> since it's interacting directly with the MII
[17:22] <greenfly> so if you are getting 10mbit it's possible the switchport isn't set up properly
[17:23] <giovani> just confirming what greenfly said -- yes, autoneg is required for 1000Mbps (had to look it up)
[17:23] <psyferre> greenfly: i guess that's possible, though most of the switch ports are setup exactly the same way
[17:23] <greenfly> so I'd be looking at the switch ports first
[17:23] <greenfly> and make sure they are set to gig and autoneg
[17:23] <giovani> however, it seems a number of PHYs support forcing 1000
[17:23] <giovani> but it's non-standard
[17:24] <greenfly> because otherwise you'll ultimately have to set your server's nics to autoneg in which case they'll possibly negotiate down to 10mbit again
[17:25] <psyferre> greenfly, giovani: yes, the switch ports are set to autonegotiate and max capacity
[17:25] <greenfly> maybe try hard-coding the switch ports themselves to gig?
[17:25] <psyferre> they currently report 1000 mbps full duplex on those two ports
[17:25] <greenfly> are they actually gig ports?
[17:25] <giovani> what indicated to you that you were negotiated at 10Mbps?
[17:25] <mathiaz> sommer: is https://wiki.ubuntu.com/JauntyServerGuide up-to-date wrt to the sections that need to be reviewed?
[17:27] <greenfly> psyferre: if you /did/ want to hard-code an ethernet port to 1000 and turn off autoneg this is how you would do it (as root):
[17:27] <psyferre> if i run mii-tool -v bond0 it reports the link speed at 10mb
[17:27] <greenfly> psyferre: ethtool -s eth0 speed 1000 duplex full autoneg off
[17:27] <greenfly> don't run it against the bond0 interface, but test eth0
[17:28] <psyferre> greenfly: okay
[17:28] <greenfly> psyferre: note that sometimes when I've run ethtool it hasn't disrupted service--other times it has
[17:28] <greenfly> also, this won't persist after a reboot so ideally you'll figure out some way for autoneg to work
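One way to make ethtool settings persist across reboots (a sketch only: eth0/eth1 as the bond slaves are assumptions, and forcing gigabit with autoneg off is non-standard, as noted above) is a pre-up hook in /etc/network/interfaces:

```
# /etc/network/interfaces fragment (hypothetical slave interface names)
auto eth0
iface eth0 inet manual
    pre-up ethtool -s eth0 speed 1000 duplex full autoneg off

auto eth1
iface eth1 inet manual
    pre-up ethtool -s eth1 speed 1000 duplex full autoneg off
```

Fixing the switch ports so autonegotiation works remains the cleaner solution, since 1000BASE-T formally requires autoneg.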
[17:29] <dustin_> how do I activate my ftp server on ubuntu server 8.10? is there a page I can visit?
[17:29] <sommer> mathiaz: yes, it is now
[17:29] <greenfly> dustin_: there are a few guides around but the main way is to figure out what ftp service you want to run and use the package manager to install it
[17:29] <mathiaz> sommer: thank ya
[17:29] <psyferre> greenfly: okay, i wonder... when i created the bond0 interface i used this line from a tutorial: options bonding mode=0 miimon=100
[17:29] <psyferre>  ... maybe there's another option i should have used?
[17:30] <greenfly> psyferre: no, that doesn't affect the speed of the interface, just how it's bonded and what timeout it uses to determine when to failover
[17:30] <psyferre> greenfly: okay, thank you
[17:30] <greenfly> but I wouldn't run miitool or ethtool tests against bond0
[17:31] <psyferre> greenfly: is there a better way that you would recommend to find at what speed the bond is operating?
[17:31] <dustin_> greenfly: which ftp service is easiest to configure from command line?
[17:31] <greenfly> psyferre: either ethtool against eth0 and eth1 (or whatever your two nics are) or actual speed test (ie using rsync or scp to transfer a file)
[17:32] <psyferre> greenfly: they both report 1000baseT full duplex
[17:32] <genii> balance-rr often confuses other machines you connect to
[17:32] <greenfly> psyferre: then it sounds to me like your interfaces are actually at the correct speed
[17:32] <psyferre> greenfly: according to ethtool anyway
[17:33] <giovani> well, ethtool should be reading directly from the chipset
[17:33] <genii> Doesn't the bond interface use a pseudo intel e100 driver ?
[17:33] <dustin_> greenfly: pureftpd or proftp? which has the least setup?
[17:34] <dustin_> greenfly: I know how to use both with gui tools but not command line
[17:34] <greenfly> dustin_: if either is packaged in main it should have a pretty straightforward setup
[17:34] <jmedina> dustin_: use pure-ftpd
[17:34] <greenfly> if you just want a simple one, try to find one that can use local unix accounts
[17:34] <giovani> genii: if it did, how would it supply more than 100Mbps from multiple bonded 100Mbps interfaces?
[17:35] <genii> giovani: Yes, thats just what I was thinking about
[17:35] <giovani> but we know that it does ...
[17:35] <jmedina> pure-ftpd is controlled by arguments on the command line, and you can use puredb to create virtual users; you can control quotas, bw limits, access by hours.
[17:38] <dustin_> jmedina: where can I find a man online for pure-ftpd? this will reduce my questions in chat ;)
[17:39] <giovani> dustin_: first google hit for "pure-ftpd"
[17:40] <jmedina> dustin_: you can read pure-ftpd(8) and for ubuntu pure-ftpd-wrapper (8)
[17:40] <dustin_> doing that atm but I am getting a lot of roundy-rounds ;(
[17:43] <psyferre> giovani, greenfly: I am utterly failing at getting a transfer speed out of scp... could you give me a hint?  I tried -v and got loads of debugging messages, but i don't see anything that indicates the speed of the transfer
[17:43] <giovani> psyferre: when it's transferring a file it gives the speed on the right, afaik
[17:43] <greenfly> yeah same here
[17:43] <greenfly> otherwise you could use rsync with --progress
[17:44] <psyferre> giovani: heh, i see nothing that isn't directly in front of my face, that is.  *sigh*  Sorry about that...  27.3MB/s
[17:44] <psyferre> it was a 28 mb file... maybe i should try something larger?
[17:44] <giovani> psyferre: that's definitely not 10Mbps :)
[17:44] <giovani> yes, something larger would help
[17:44] <psyferre> giovani: yup! :)  at least i know that much :)
[17:45] <giovani> dd if=/dev/zero of=/testfile bs=1024k count=512  --  that should do it
[17:48] <giovani> keep in mind, scp has significant overhead
[17:52] <psyferre> hmm... just transferred an ubuntu iso... transfer speed hovers around 20 mbps
[17:53] <giovani> you mean 20MBps?
[17:53] <giovani> you reported 27.3MBps just a minute ago
[17:53] <giovani> that's very different from 20Mbps
[17:53] <psyferre> yes, sorry... lazy shift key
[17:53] <psyferre> :)
[17:54] <greenfly> that's still more than 100Mbit
[17:55] <psyferre> that's true... i've never been good at thinking in megabit terms... so i must be good to go!
[17:57] <PhotoJim> psyferre: divide by 10 for MB from Mb... it's not exact.  but it'll get you in the ballpark.
[17:58] <PhotoJim> psyferre: it worked in the modem days.  with start-stop bits, it was 10 bits per byte.
[17:58] <PhotoJim> psyferre: with overhead, that overstates it a little but it's still reasonable.
[17:58] <psyferre> PhotoJim: thanks :)
[17:59] <psyferre> greenfly, giovani, and everyone else who commented: Thank you very much for helping a novice figure out what the heck is going on.  I really appreciate it.
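The unit arithmetic above can be checked directly (27.3 MB/s is the figure reported earlier in the channel):

```shell
# MB/s (megabytes) -> Mbit/s (megabits): multiply by 8.
awk 'BEGIN { printf "%.1f MB/s = %.1f Mbit/s\n", 27.3, 27.3 * 8 }'

# PhotoJim's modem-era shortcut (x10, to account for framing overhead)
# overstates slightly but lands in the same ballpark:
awk 'BEGIN { printf "rough: %.0f Mbit/s\n", 27.3 * 10 }'
```

Either way the result is far above both 10 Mbit and 100 Mbit, consistent with the links really running at gigabit.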
[18:19] <Iceman_B^Ltop> is there any difference in TIA-568B and TIA-568A wired cabling, besides the pin order?
[18:20] <Iceman_B^Ltop> they should perform equally good, right ?
[18:24] <genii> Iceman_B^Ltop: Yup. Just use same order on both ends
[18:24] <christian_> hello
[18:25] <genii> Iceman_B^Ltop: I generally use B
[18:25] <christian_> Does somebody use a mail server with multiple domains??
[18:26] <giovani> christian_: of course, it's a common setup
[18:28] <christian_> giovani do you have a mail server with postfix???
[18:28] <giovani> christian_: yes
[18:28] <christian_> and various domains??
[18:28] <giovani> yes ...
[18:28] <christian_> I have a mail server
[18:28] <jmedina> I use postfix with virtual domains in ldap and mysql
[18:29] <christian_> I do not understand how to use ldap and mysql
[18:29] <christian_> in my mail server
[18:30] <giovani> christian_: neither are required for virtual domains
[18:30] <giovani> but postfix provides great documentation on setting it up, if you'd like
[18:31] <christian_> yes i viewed this information, but i don't understand how to use my domain1 with my domain2
[18:31] <christian_> I have squirrelmail
[18:31] <christian_> and the users, how do they check their mail?
[18:32] <jmedina> christian_: with a simple plain setup you map mail addresses to local users, and if you want foo@domain1.com and foo@domain2.com with different mailboxes, you need to create two different users and use a map
[18:33] <jmedina> if you want both domains go to the same mailbox, just add domain2 to mydestination
[18:34] <christian_> what is the setup??
[18:34] <jmedina> for more info read Postfix Virtual Domain Hosting Howto: http://www.postfix.org/VIRTUAL_README.html
[18:34] <christian_> I read about the configuration of postfix
[18:35] <jmedina> I use postfix+mysql for virtual hosting for different customers
[18:35] <Iceman_B^Ltop> genii: I'm looking at a factory sealed cable that says 568-A but apparently is wired up as 568-B
[18:35] <Iceman_B^Ltop> but on both ends, so that shouldn't be a problem
[18:36] <giovani> Iceman_B^Ltop: yep, a non-issue
[18:36] <Iceman_B^Ltop> okay
[18:36] <giovani> 568-B is far more common
[18:37] <giovani> -A is considered obsolete
[18:37] <mathiaz> kirkland: does kvm/libvirt support snapshot?
[18:37] <jmedina> christian_: for a simple setup without mysql or ldap this howto looks good:
[18:37] <jmedina> http://www.akadia.com/services/postfix_separate_mailboxes.html
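A minimal sketch of the two setups jmedina describes (the domain and user names here are hypothetical placeholders, not from the channel; the VIRTUAL_README linked above covers the variations):

```
# /etc/postfix/main.cf -- both domains delivered to the SAME local mailboxes:
mydestination = $myhostname, localhost, domain1.example, domain2.example

# ...or, for DIFFERENT mailboxes per domain, a virtual alias map instead:
virtual_alias_domains = domain1.example, domain2.example
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual
foo@domain1.example   foo1
foo@domain2.example   foo2
```

After editing the map, run `postmap /etc/postfix/virtual` and reload postfix.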
[18:38] <kirkland> mathiaz: yes, much better in kvm-84
[18:38] <mathiaz> kirkland: is this feature available from virsh?
[18:39] <jmedina> how does kvm handle snapshots?
[18:39] <mathiaz> kirkland: here is my scenario:
[18:39] <christian_> jmedina, why use mysql for clients
[18:39] <christian_> is it necessary?
[18:39] <kirkland> mathiaz: i have no idea about virsh
[18:39] <jmedina> you dont use mysql for clients, you only store mail accounts in database
[18:40] <kirkland> mathiaz: let's talk to aliguori in #ubuntu-virt
[18:40] <jmedina> I prefer mysql because you can use a web based frontend like postfixadmin
[18:40] <kirkland> mathiaz: doh... he just checked out
[18:40] <kirkland> mathiaz: here is fine
[18:40] <mathiaz> kirkland: I'd like to run my jaunty base vm all the time (named j-base) and when I need to create a test vm based on jaunty, I would run a command (create_vm.sh j-base t-dovecot) that will snapshot the j-base vm and create the t-dovecot vm
[18:40] <jmedina> with postfixadmin you manage virtual domains, different admins, mail quotas, mail forwarding, aliases
[18:40] <mathiaz> kirkland: and then I would ssh into t-dovecot
[18:41] <mathiaz> kirkland: do all my testing, and when I'm done I would just delete_vm.sh t-dovecot
[18:41] <mathiaz> kirkland: for now I'm using lv to hold the j-base filesystem and lvm snapshot to handle the snapshoting
[18:41] <mathiaz> kirkland: however I can only create a snapshot if the j-base vm is *not* running
[18:42] <mathiaz> kirkland: for consistency
[18:42] <kirkland> mathiaz: see -snapshot in http://manpages.ubuntu.com/manpages/jaunty/en/man1/qemu.1.html
[18:42] <mathiaz> kirkland: which means that my j-base vm doesn't run most of the time.
[18:46] <mathiaz> kirkland: thanks for the pointer. I'm gonna have to think about this a bit more.
[18:46] <kirkland> mathiaz: mee too ....
[18:48] <kirkland> mathiaz: i think using that -snapshot option to kvm, you should be able to master off of your base vm, and snapshot your testing to an auxiliary file
[18:49] <mathiaz> kirkland: right. That seems like a good option.
[18:49] <mathiaz> kirkland: however how would it handle a live vm running from the master file?
[18:50] <mathiaz> kirkland: could suspending the master vm work?
[18:50] <mathiaz> kirkland: take a snapshot of the root block device and boot from there?
[18:51] <mathiaz> kirkland: in my current setup I'm also doing that, except that the master vm is always off.
[18:51] <mathiaz> kirkland: and I need to boot once in a while to update the system correctly.
[18:51] <mathiaz> kirkland: to boot the master vm
[18:51] <mathiaz> kirkland: I would like to avoid that
[18:51] <kirkland> mathiaz: hmm, there is a "saveback" command you can issue
[18:52] <kirkland> mathiaz:         Ctrl-a s
[18:52] <kirkland>             Save disk data back to file (if -snapshot)
[18:53] <mathiaz> kirkland: right - could the guest issue a saveback command?
[18:53] <mathiaz> kirkland: it seems that the guest is the one that knows when it's safe to be snapshotted
[18:54] <mathiaz> kirkland: you don't want to take a snapshot of the master vm in the middle of an apt-get upgrade
[18:54] <kirkland> mathiaz: right
[18:55] <kirkland> mathiaz: looks like you want this ctrl-a s command when you *know* you want to saveback
[18:56] <mathiaz> kirkland: right - something like a checkpoint command
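The workflow being discussed, roughly sketched (image name and memory size are placeholders; per the qemu(1) page linked above, -snapshot diverts guest writes to temporary files instead of the disk image):

```
# Boot the master image copy-on-write: guest writes go to a temporary file,
# and j-base's disk image is left untouched.
kvm -hda j-base.img -snapshot -m 512

# From the qemu monitor, 'commit' (or Ctrl-a s on a -nographic serial
# console) writes the accumulated changes back into j-base.img --
# the "saveback"/checkpoint step mathiaz wants.
```

This keeps the master runnable while tests diverge from it, though the commit still has to happen at a moment the guest's filesystem is quiescent.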
[19:07] <Iceman_B^Ltop> giovani / genii: Im posting about my problem on the Ubuntu forums. My server keeps dropping SSH connections, and I found that internet connections lag out too at that point
[19:07] <giovani> Iceman_B^Ltop: what makes you think it's ubuntu-related?
[19:07] <giovani> you're probably suffering bad packet loss
[19:09] <Iceman_B^Ltop> giovani: I had Ubuntu 8.10 desktop on that same machine up till a week ago
[19:09] <Iceman_B^Ltop> same hardware, except the hdd
[19:09] <Iceman_B^Ltop> no problems at all
[19:09] <giovani> well ... things can change, cables can be bad, hardware can go bad
[19:09] <Iceman_B^Ltop> except that Ibex Desktop has a GUI which I dont use on a headless machine, and it ate all 256 megs of ram
[19:09] <giovani> ubuntu server and ubuntu desktop are almost identical at lower levels
[19:10] <giovani> but, alright
[19:10] <Iceman_B^Ltop> giovani: how high is the chance of that coinciding with the switch to a different OS ?
[19:10] <giovani> it's not a different OS
[19:10] <giovani> I'd say it's almost nil
[19:10] <giovani> the ethernet driver will be the same, unless you were using an old kernel before, and have updated now
[19:10] <giovani> is it possible that it's related to the server kernel? yes ... but I'd think it's damn unlikely
[19:11] <Iceman_B^Ltop> I have not the slightest idea. I thought it would be my network at first
[19:11] <Iceman_B^Ltop> take a look here if you want http://ubuntuforums.org/showpost.php?p=6985576&postcount=55
[19:11] <Iceman_B^Ltop> and in the thread itself I posted a small update after that
[19:12] <giovani> I'd forget this application-specific diagnosis
[19:12] <giovani> do a long ping test
[19:12] <giovani> and establish that packet loss is the issue
[19:13] <jmedina> Iceman_B^Ltop: could you pastebin the output from: ip -s link
[19:13] <Iceman_B^Ltop> i'll try
[19:15] <Iceman_B^Ltop> http://pastebin.ubuntu.com/141603/
[19:17] <Iceman_B|SSH> server here /o/
[19:18] <Iceman_B^Ltop> connection dropped...
[19:18] <Iceman_B^Ltop> there it goes
[19:38] <billyk> how can I list all jpeg files in the home folder without being in that directory?  (tried ls -aR /home/*.jpg and it didnt work for me).  A step further, how could I list only jpg's with an underscore in the filename e.g. *_*.jpg
[19:38] <billyk> sorry, I meant home folder and subdirectories
[19:39] <sommer> billyk: probably something like find /home/$user -name "*.jpg"
[19:40] <billyk> I should have said I'm trying to use this with mogrify
[19:41] <billyk> I can do mogrify *.jpg if i'm in that directory, but I have a bunch of subdirectories I want to resize images in in multiple home directories
[19:42] <Deeps> find /home/$user -name "*.jpg" | while read file; do mogrify $file; done
[19:43] <giovani> Deeps: don't you think using -exec would be better?
[19:43] <billyk> awesome
[19:45] <billyk> noob question but what does $user do?
[19:45] <billyk> and $file
[19:45] <billyk> variables?
[19:45] <billyk> like a shell script?
[19:46] <giovani> a shell script is just what's interpreted by bash
[19:46] <giovani> everything you run in bash is a script
[19:46] <friartuck> billyk good intro: http://tldp.org/LDP/abs/html/
[19:46] <giovani> $user there is not a defined variable -- I think he just used it as a placeholder for you to fill in
[19:46] <giovani> $file is a variable, and is referenced by the while loop
[19:47] <friartuck> I think it's $USER and not $user
[19:47] <giovani> well, for the current user, sure
[19:47] <billyk> will that do all the user accts?
[19:47] <giovani> who knows if he wants that :)
[19:47] <giovani> billyk: no
[19:47] <giovani> you'd need to wrap it in a for loop
[19:47] <giovani> for all the dirs in /home
[19:48] <billyk> just all subdirectories in the home folder
[19:48] <billyk> no easier way to do that than a loop?
[19:48] <giovani> sure, just back out the find execution to /home
[19:48] <giovani> that'll apply to any directory in home
[19:51] <billyk> cool thanks!
[19:52] <billyk> gonna go read that bash guide now
[19:52] <Deeps> giovani: could be, i like while loops ;)
[19:52] <billyk> so the *.jpg in quotes is a regex?
[19:53] <giovani> no, that's not regex
[19:53] <billyk> so could I put "*_*.jpg"
[19:53] <billyk> oh
[19:53] <Deeps> thats still not regex, but yes
[19:53] <giovani> regex would be something like ".*?.jpg"
[19:54] <Deeps> ".*_.*\.jpg"
[19:54] <giovani> but yeah, what you want to do will work
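giovani's -exec suggestion and the glob-not-regex point can be sketched together; /tmp/photos stands in for /home, and `ls` stands in for mogrify here:

```shell
# Sample tree standing in for /home.
mkdir -p /tmp/photos/sub
touch /tmp/photos/plain.jpg /tmp/photos/sub/with_underscore.jpg /tmp/photos/notes.txt

# -exec runs the command on the matches directly (no while-read loop needed);
# '{} +' batches many filenames per invocation, and it's safe for names
# containing spaces, which the unquoted $file in the while loop is not.
find /tmp/photos -name "*.jpg" -exec ls {} +

# find's -name patterns are shell globs, not regexes: '*_*.jpg' matches
# jpgs whose name contains an underscore.
find /tmp/photos -name "*_*.jpg"
```

With `mogrify -resize 800x600` in place of `ls`, the first command resizes every jpg under the whole tree.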
[20:11] <MagicFab> dendrobates, :)
[20:41] <antdedyet> win 26
[20:41] <antdedyet> lose 27, heh
[21:09] <genii> Probably if the master interface already has an IP from same dhcp server, likely
[21:10] <genii> (since MAC would not change)
[21:11] <antdedyet> anyone here got a canonical partner sales contact for a partner?
[21:11] <antdedyet> ours is out of office
[21:12] <antdedyet> and the temporary counterpart has been unresponsive
[21:26] <billyk> bash syntax question - this obviously doesnt work, but it probably best explains what I'm trying to do- if (! mogrify -identify 1.jpg | grep 800x600) (newline) then mogrify -resize 800x600 1.jpg (newline) fi
[21:27] <billyk> mogrify -identify 1.jpg | grep 800x600 only outputs data if the image is the right size.  I want the -resize command to only be run if it's not the right size
[21:28] <billyk> for some reason mogrify -resize still changes an image's hash even if it's already the right resolution (bad for rsync)
[21:31] <jesperronn> Hey anybody able to help me with a preseed question? (isolinux.cfg)
[21:32] <jesperronn> I'm currently working on creating an unattended preseeded ubuntu server install
[21:32] <giovani> billyk: try using || between the grep command and the second mogrify command
[21:32] <giovani> it means the 3rd command will only run if the grep fails
[21:33] <billyk> cool
[21:33] <jesperronn> However, first thing that comes up (in front of the installer menu) is Language selection. Question is How do I remove that language selection? Which command could I add to isolinux.cfg?
[21:33] <jesperronn> Here is my current isolinux.cfg:
[21:33] <jesperronn> include menu.cfg
[21:33] <jesperronn> default Brownpaper
[21:33] <jesperronn> prompt 0
[21:33] <jesperronn> timeout 0
[21:33] <jesperronn> gfxboot bootlogo
[21:33] <jesperronn> label Brownpaper
[21:33] <jesperronn>   menu label ^Brownpaper customized installation
[21:33] <jesperronn>   kernel /install/vmlinuz
[21:34] <jesperronn>   append file=/cdrom/brownpaper.seed locale=en_US console-setup/layoutcode=us initrd=/install/initrd.gz quiet --
[21:34] <giovani> jesperronn: FAR too much pasting -- use pastebin next time
[21:34] <jesperronn> (sorry for the many lines) -- thanks for tip @giovani
[21:36] <giovani> billyk: that work out ok?
[21:36] <jesperronn> Here it is in pastebin: http://pastebin.com/d2e568e5
[21:37] <jesperronn> My challenges:  1) bypass the Language selection menu. 2) make menu item "Brownpaper" start automatically, if possible.
[21:38] <giovani> jesperronn: your question is pretty specific, and not common knowledge for someone to have -- so wait around
[21:39] <billyk> giovani: yeah.  Thanks! :-)  if the grep doesnt fail though, it outputs the result of that command to the terminal.  will that be okay for a shell script?  or do I need > /dev/null or something?
[21:39] <jesperronn> @giovani: thanks for your tip! I presume this is the best forum for the question even it's specific. Any links to documentation/api or examples is appreciated
[21:40] <giovani> billyk: || is not similar to | -- || = OR  and | = pipe
[21:40] <giovani> so, nothing is being passed to the last command
[21:40] <giovani> it's just only being run if grep fails
[21:40] <giovani> if you wanted to run a command only if grep succeeded you'd use &&
[21:41] <giovani> jesperronn: the wiki, google, and ubuntuforums.org probably have a good bit of info on the topic
[21:42] <giovani> billyk: and just if you're curious, the way that bash knows whether or not grep "succeeded", it's based solely on exit status -- it doesn't read grep's output or anything else
[21:43] <billyk> ah
[21:43] <billyk> trying to digest all that :-)
[21:43] <giovani> billyk: yeah ... don't worry about digesting it all at once
[21:43] <giovani> I'm really far from a bash expert -- you just pick up a few things every time you try something new
[21:43] <giovani> mastering piping and output/input redirection are the most important bash skills
[21:44] <giovani> in my opinion
[21:44] <billyk> yeah, it's obviously really useful
[21:44] <giovani> you feel comfortable with those?
[21:45] <billyk> not yet
[21:45] <giovani> i.e. < and > and >> and | ?
[21:45] <billyk> haha
[21:45] <giovani> well, and 2> :)
[21:46] <giovani> ok, quick recap ... `programname < filename` takes everything in the file 'filename' and sends it to the input of 'programname'
[21:46] <billyk> mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! logo.png still shows grep's output
[21:46] <giovani> billyk: "shows" you mean it prints to the console?
[21:46] <giovani> is that a problem?
[21:46] <billyk> will it be if I have that line in a .sh?
[21:47] <giovani> it'll print to the console ... nothing bad
[21:47] <giovani> you can fix that though if you need
[21:47] <billyk> okay.  when you execute a shell script from cron though, where would that output go?
[21:48] <Deeps> email
[21:48] <giovani> most people would send console output to /dev/null (basically, discard it) instead of printing it when using cron, so that it doesn't get emailed back to the user
[21:48] <giovani> mogrify -identify logo.png | grep 668x476 > /dev/null || mogrify -adaptive-resize 668x476! logo.png
[21:48] <giovani> should do it
[21:48] <giovani> try it out
[21:49] <giovani> the other option (specifically with grep) is to run it with the -q option
[21:49] <giovani> it suppresses all output
[21:50] <giovani> mogrify -identify logo.png | grep -q 668x476 || mogrify -adaptive-resize 668x476! logo.png
[21:50] <billyk> okay
[21:50] <billyk> why doesnt it work with > /dev/null at the end?
[21:50] <giovani> but that'll only work with grep -- not all apps have options to not output anything -- so knowing about > /dev/null is important
[21:50] <billyk> yeah
[21:51] <giovani> billyk: why doesn't what work?
[21:51] <billyk> mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! logo.png > /dev/null doesnt suppress output
[21:51] <giovani> because > /dev/null is applying to the command to the left of it
[21:51] <giovani> which, in your case, is mogrify, not grep
[21:51] <giovani> so it needs to go after grep -- since it's grep that has the output you want to suppress
[21:52] <billyk> ooh
[21:52] <billyk> I thought the output was just piped to the -resize command
[21:52] <giovani> billyk: nope, remember || is NOT a pipe
[21:53] <giovani> it's a special OR operator, despite looking similar to pipe :)
[21:53] <giovani> so, because it's not a pipe, grep's output is going directly to the console
[21:53] <giovani> (unless you redirect it with > /dev/null)
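A minimal runnable sketch of the `||` behaviour (echo stands in for mogrify; only grep's exit status decides whether the right-hand side runs):

```shell
# grep -q prints nothing; it only sets the exit status (0 = match, 1 = no match).
echo "800x600" | grep -q 800x600 || echo "would resize"
# match -> grep exits 0 -> right-hand side is skipped, nothing prints

echo "640x480" | grep -q 800x600 || echo "would resize"
# no match -> grep exits 1 -> right-hand side runs and prints "would resize"
```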
[21:56] <giovani> billyk: make sense? or still not clear?
[21:57] <billyk> giovani: no, I got it :-)
[21:57] <giovani> awesome :)
[21:58] <billyk> now I'm curious about the 2> though
[21:58] <giovani> ah, well, that's simple enough to cover
[21:58] <billyk> is that on http://tldp.org/LDP/abs/html/ ?
[21:58] <giovani> so, when we say "output" we mean STDOUT
[21:59] <giovani> and when we say "input" we mean STDIN
[21:59] <giovani> so, STDOUT is >
[21:59] <giovani> STDIN is <
[21:59] <giovani> there's one more ... STDERR -- which is 2>
[21:59] <giovani> which is supposed to only be used for error-related info, and not general output
[21:59] <billyk> ok, remember some of that from basic C programming
[22:01] <billyk> so STDOUT is what's output to the terminal, or what's passed in a pipe? or both?
[22:01] <giovani> STDOUT by default goes to the terminal, unless it's redirected with > or |
[22:02] <giovani> > being used to output to files, and | to pass it to the STDIN of the next application after the pipe
[22:02] <billyk> cool
[22:02] <giovani> 2> takes just the STDERR, and outputs it to a file
[22:03] <giovani> in many cron jobs, people want to either collect both info and error messages in one place, or discard them both, they do this with `program &> filename`
[22:04] <billyk> if you use 2> where does the stdout go?
[22:04] <giovani> wherever you instruct it to
[22:04] <giovani> i.e. `programname 2> myerror.log`
[22:04] <billyk> so I can do command -argument  2> error.log > output.txt ?
[22:05] <giovani> yep
[22:05] <giovani> or, let's say, for example, you wanted to pipe both STDERR and STDOUT to another program
[22:05] <giovani> you'd use redirection to accomplish that
[22:06] <giovani> `programname 2>&1 | secondprogram`
[22:06] <giovani> 2>&1 takes STDERR and then pushes it into STDOUT
[22:06] <giovani> and then pipe takes all STDOUT (which now includes STDERR) and passes it to STDIN of secondprogram
[22:07] <billyk> can the secondprogram differentiate the STDERR from the STDIN?
[22:07] <giovani> nope
[22:07] <giovani> the &1 may seem arbitrary, but, in reality, each of the three file descriptors (STDIN, STDOUT, and STDERR) have numbers, 0, 1, and 2
[22:08] <giovani> so 1> is the same as > which is STDOUT
[22:08] <giovani> and 2> is STDERR
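The descriptor numbers can be exercised with a tiny function that writes to both streams (the function name is made up for the demo):

```shell
# emit writes one line to stdout (fd 1) and one to stderr (fd 2).
emit() { echo "out-line"; echo "err-line" >&2; }

emit > /tmp/fd.out 2> /tmp/fd.err    # split: fd 1 to one file, fd 2 to another
emit > /tmp/fd.both 2>&1             # 2>&1 points stderr at wherever stdout goes
emit 2> /tmp/fd.err2 | cat           # a plain pipe carries only stdout
```

Order matters for the merged form: `> file 2>&1` captures both streams in the file, but `2>&1 > file` does not, because the duplication takes effect at the point where it appears.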
[22:10] <billyk> cool
[22:12] <christian_> hi giovani...
[22:12] <christian_> help me please
[22:12] <billyk> it might be pointless to do this, but how would you save stderr to a file and then pipe stdout to a command?
[22:13] <christian_> i cant do the email server with two domains
[22:18] <giovani> billyk: in that case, you'd use the 'tee' command
[22:19] <giovani> which both reads its STDIN, writes it to a file, and also sends it to STDOUT
[22:19] <giovani> so, `programname | tee outputfile | secondprogram`
[22:20] <giovani> would take the STDOUT from 'programname', write it to 'outputfile' and also send it to 'secondprogram'
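tee in a pipeline, in runnable form (the printf input is just a stand-in; for billyk's exact case of saving only stderr to a file while stdout flows on, `programname 2> err.log | secondprogram` also works):

```shell
# tee copies its stdin to a file AND passes it along on its own stdout,
# so the next program in the pipeline still receives everything.
printf 'a\nb\nc\n' | tee /tmp/copy.txt | wc -l
```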
[22:21] <giovani> now I'm off
[22:21] <giovani> later
[22:25] <billyk> giovani: Thanks so much!
[22:32] <MatBoy> damn I have a problem
[22:33] <billyk> MatBoy: what is it?
[22:35] <MatBoy> billyk: I love myself.... :/
[23:15] <baffle> dustin__: Your screen profiles rock btw. I've used screen for over 10 years, but never got around to actually making myself a proper profile. :)
[23:37] <Iceman_B|SSH> hmm, capturing from my laptop only reveals SSH packets...
[23:37] <jmedina> Iceman_B|SSH: still having network problems?
[23:38] <Iceman_B^Ltop> jmedina: yup, still
[23:39] <jmedina> Iceman_B^Ltop: please paste output from "ip -s link"
[23:39] <cjwatson> sigh, if only jesperronn had stuck around another hour I could have answered his question
[23:40] <cjwatson> (the answer is to put a language code of your choice, e.g. "en", in /isolinux/lang on the CD)
[23:40] <PhotoJim> baffle: thanks for mentioning those profiles.  I had no idea they existed.  I'm going to install them and play with them.
[23:43] <dustin__> ok kinda embarrassed but I didn't know I had a profile??? :S
[23:44] <dustin__> or did baffle have the wrong guy?
[23:45] <Iceman_B^Ltop> jmedina: http://pastebin.ubuntu.com/141730/
[23:46] <PhotoJim> the right Dustin is on as user Kirkland
[23:46] <jmedina> Iceman_B^Ltop: looks fine, no errors, dropped or overruns
[23:47] <kirkland> PhotoJim: ?
[23:47] <Iceman_B^Ltop> jmedina: okay
[23:47] <baffle> dustin__: Maybe the wrong guy. :-)
[23:47] <dustin__> baffle: where can I go to see that great profile I never made?
[23:47] <dustin__> :)
[23:48] <dustin__> brb I is gonna fix my name :D
[23:48] <baffle> dustin__: I assumed you were Dustin Kirkland.
[23:48] <mds58> ahhh so much better
[23:49] <PhotoJim> kirkland: Baffle was commenting that he really likes your screen-profiles package.
[23:50] <baffle> mds58: But you should apt-get install screen-profiles then. :)
[23:51] <Iceman_B^Ltop> jmedina: well, I have no clue then, apart from installing Ubuntu desktop and seeing whether or not the problems cease
[23:51] <Iceman_B^Ltop> if they don't, it might be hardware
[23:52] <Iceman_B^Ltop> I have a tcpdump output as well
[23:52] <jmedina> Iceman_B^Ltop: have you tested in a livecd?
[23:52] <kirkland> PhotoJim: oh, sweet
[23:52] <kirkland> baffle: thanks!
[23:53] <Iceman_B^Ltop> jmedina: can't say I have
[23:53] <Iceman_B^Ltop> but the server is running headless, can I still use a live cd then ?
[23:53] <Iceman_B^Ltop> or do I really need a screen
[23:55] <Iceman_B^Ltop> and keyboard
[23:56] <PhotoJim> Iceman_B^Ltop: screen and keyboard are still useful.  if things go wrong, it is often useful to be able to do a console login from the machine itself.
[23:56] <baffle> kirkland: Tried looking into 256 color profiles? Nice color shading etc.
[23:57] <baffle> kirkland: As in http://www.frexx.de/xterm-256-notes/
[23:58] <Iceman_B^Ltop> PhotoJim: just the 2 things I don't have