[00:07] <swharper> when i follow the instructions outlined in 4.1 of the server guide i get an error saying no root partition is defined
[00:12] <swharper> http://dl.dropbox.com/u/3136063/ubuntu1.jpg
[00:12] <swharper> this is the current partition scheme
[00:13] <swharper> the sandisk is the installer usb stick
[00:28] <SpamapS> swharper: after you split the physical drives up, you need to create md's
[00:29] <SpamapS> swharper: I don't see RAID instructions here https://help.ubuntu.com/10.04/serverguide/C/index.html
[00:29] <swharper> yeah im in the process of doing that now…  but for some reason im only seeing the 5gb partitions, except in 2 instances
[00:29] <swharper> https://help.ubuntu.com/11.10/serverguide/C/serverguide.pdf
[00:29] <swharper> section 4.1 advanced install
[00:30] <SpamapS> Heh wow the numbering on that is weird
[00:30] <swharper> on the partition table?
[00:30] <SpamapS> swharper: no in the PDF
[00:31] <swharper> oh
[00:31] <SpamapS> 4.1 is not really the section number
[00:31] <SpamapS> Anyway
[00:31] <SpamapS> Ok so you need to do the 'Configure Software RAID' step
[00:33] <atruno-> does zoneedit cost money to host a website with your own domain ?
[00:35] <swharper> oddly this is what i get when i try to configure the sw raid
[00:35] <swharper> http://dl.dropbox.com/u/3136063/ubuntu2.jpg
[00:35] <swharper> for some reason only 2 of the drives show the large partitions
[00:39] <SpamapS> swharper: did you possibly not mark it as 'use as RAID' when you created the partitions?
[00:39] <swharper> they're all marked as raid...
[00:39] <swharper> in the first screenshot
[00:40] <SpamapS> swharper: weird
[00:41] <SpamapS> swharper: possibly a bug w/ > 1TB partitions
[00:42] <SpamapS> swharper: I notice the drive model is different on sdf ..
[00:42] <SpamapS> swharper: maybe the WDC's have different geometry that isn't playing nice?
[00:43] <SpamapS> swharper: I have to leave, but my suggestion would be to try 999GB
[00:44] <philipballew> Is there any way to set up ssh when the network I am on wont allow for any ports to be opened?
[00:45] <SpamapS> philipballew: ssh out to a box and forward back..   ssh -R 2222:127.0.0.1:22 my-remote-box
[00:46]  * SpamapS disappears
[00:50] <swharper> for some reason the bootable flag won't stay on either
[00:50] <swharper> once i go into the raid config page
[00:51] <qman__> these days the bootable flag is largely irrelevant
[00:54] <qman__> I had a lot of trouble with the 11.10 installer trying to use disks that had been in a fakeraid
[00:54] <qman__> I ended up having to zero the whole disks before dmraid would behave and stop screwing things up
[00:59] <swharper> thats what im gonna try to do now
[01:00] <swharper> i want to 0 everything and start over
[01:00] <swharper> but these disks have been partitioned a bazillion times
[01:01] <swharper> can this be done from the installer?
[01:02] <qman__> technically yes but I'd suggest booting something else
[01:02] <qman__> like recovery mode and drop to shell, or systemrescuecd
[01:02] <swharper> hm
[01:03] <swharper> all ive got here is my install stick
[01:03] <qman__> well, back the installer up to before partitioning
[01:03] <qman__> then press alt+F2
[01:03] <qman__> and pressing enter should give you a shell
[01:04] <swharper> ok
[01:04] <swharper> done
[01:04] <swharper> fsck not found...
[01:05] <qman__> fsck doesn't have anything to do with zeroing disks
[01:05] <qman__> but that shell might be a busybox, I don't remember
[01:05] <qman__> which is pretty much useless
[01:06] <swharper> it is
[01:07] <qman__> yeah, you'll have to boot recovery mode
[01:07] <qman__> reboot, and when it gives you options, choose repair a broken system
[01:07] <qman__> then when it gives you more options, choose to drop to a shell in the live environment
[01:08] <swharper> k
[01:08] <qman__> the versions in lucid and earlier are annoying but work
[01:08] <qman__> the newer ones may have been fixed, I don't know
[01:10] <swharper> hmm one of the options is "assemble raid array"
[01:10] <swharper> from rescue mode
[01:10] <qman__> don't do that
[01:10] <swharper> "device to use as root file system"
[01:11] <qman__> you don't want to load any of the stuff on your disks, you just want the installer environment
[01:11] <swharper> so probably sda1? which should be the usb disk
[01:12] <swharper> that or do not use a root file system
[01:12] <qman__> worth trying I guess
[01:12] <qman__> do not use
[01:12] <qman__> that's what you're after
[01:12] <swharper> k
[01:12] <swharper> ok dropped into the shell
[01:12] <swharper> its busybox though :P
[01:12] <qman__> no, it should be dash
[01:12] <qman__> and '/bin/bash' should work
[01:13] <swharper> hm
[01:13] <qman__> well, it did in older versions
[01:13] <qman__> I honestly have not used the current versions, most of my systems are lucid
[01:13] <qman__> in any case, it's not needed
[01:13] <qman__> if dd is there you're good
[01:13] <swharper>  '/bin/bash not found'
[01:13] <qman__> dd if=/dev/zero of=/dev/sd? bs=2M
[01:13] <qman__> where sd? is the disk you want to zero
[01:14] <swharper> right
[01:14] <swharper> cool
[01:14] <swharper> thanks
[01:14] <twb> qman__: in examples recommend /dev/sdz9 to avoid users idiotically copy-and-pasting what you write
[01:14] <twb> *to avoid explosions when
[01:15] <qman__> valid point, just me thinking in shell
[01:15] <twb> btw, boot flag is important to /usr/lib/syslinux/mbr.bin
[01:15] <swharper> damn, no diskutil
[01:16] <qman__> pretty sure that command doesn't expand as-is though
[01:16] <swharper> how can i get a list of drives
[01:16] <qman__> fdisk -l
[01:16] <swharper> thx
[01:16] <twb> Also if his problem is that he lands in initramfs after mdadm finds /dev/md2p1 instead of /dev/md0 and /dev/md1, the problem is that lucid defaults to 0.9 mdadm on-disk format; manually create the array using 1.x format and it'll be fine
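twb's fix above can be sketched as follows; a hedged example only, assuming md0, RAID1, and the 1.2 metadata format, with sdz1/sdy1 as placeholder partitions (do not paste literally):

```shell
# create the array with 1.x on-disk metadata instead of lucid's 0.9 default
# (sdz1/sdy1 are placeholders for your real RAID partitions)
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sdz1 /dev/sdy1
```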
[01:16] <twb> fdisk -l is worse than /proc/partitions IMO
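The kernel's own view that twb prefers can be read directly; a minimal sketch:

```shell
# /proc/partitions lists every block device the kernel currently knows about
# (columns: major minor #blocks name), independent of partition-table parsing
cat /proc/partitions
```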
[01:18] <twb> qman__: btw also re "zeroing disks", you probably only need to zero the first and last few blocks -- although doing the latter in dd is a bit fiddly.
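The head-and-tail approach twb describes, sketched against a scratch image file so it is safe to run (for a real disk you would target /dev/sdz and get its sector count from blockdev --getsz; the 1 MiB window size is an assumption):

```shell
img=$(mktemp)                                            # stand-in for the disk
dd if=/dev/urandom of="$img" bs=1M count=4 2>/dev/null   # 4 MiB of junk "metadata"

blocks=$(( $(stat -c %s "$img") / 512 ))                 # total 512-byte sectors
# zero the first and last 1 MiB (2048 sectors each) -- seek= is the fiddly part
dd if=/dev/zero of="$img" bs=512 count=2048 conv=notrunc 2>/dev/null
dd if=/dev/zero of="$img" bs=512 seek=$((blocks - 2048)) count=2048 conv=notrunc 2>/dev/null

head -c 1048576 "$img" | tr -d '\0' | wc -c              # 0: first MiB is all zeroes
tail -c 1048576 "$img" | tr -d '\0' | wc -c              # 0: last MiB is all zeroes
rm -f "$img"
```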
[01:18] <swharper> itll take awhile to zero 1.5tb, yes?
[01:19] <twb> swharper: yes, like an hour
[01:19] <swharper> k
[01:19] <swharper> fml
[01:19] <swharper> :)
[01:19] <swharper> 7 hrs
[01:19] <qman__> twb, while that usually works, dmraid is particularly thick and you have to find the magic places where it defines the fake raid
[01:19] <qman__> and in that case it's simpler just to zero it
[01:19] <twb> qman__: by dmraid do you mean the mdadm parts of d-i partman?
[01:19] <qman__> possibly
[01:20] <twb> qman__: or more like mdadm --assemble --scan
[01:20] <qman__> whichever is used to detect fakeraids and assemble them
[01:20] <qman__> used to be dmraid -ay
[01:20] <twb> Well fakeraid (as distinct from mdadm) can FOAD
[01:20] <qman__> agree
[01:20] <twb> Like CCISS or whatever,  I hate that stuff, don't use it
[01:21] <twb> If your RAID card didn't cost hundreds of dollars and include a BBU or a flash-type thingo, don't use it
[01:21] <qman__> but someone decided that when I say "No" to "RAID arrays found, do you want to assemble these arrays" I actually meant yes
[01:21] <twb> qman__: lame
[01:21] <twb> qman__: if it were me I'd go buy a 4port pcie sata card and ignore the southbridge ports :P
[01:21] <qman__> because they show up in the partitioner
[01:21] <twb> You probably need to blacklist the cciss driver or something
[01:21] <qman__> which wouldn't be a problem, except that it completely breaks the partitioner
[01:22] <qman__> it doesn't even use the chipset
[01:22] <qman__> it reads the data and assembles it in software
[01:22] <qman__> I've recovered data from nvraids and such this way with non-fakeraid controllers
[01:23] <qman__> I know for a fact it can do this with nvraid and AMD raid, not sure if intel's stuff works or not but it tries
[01:24] <patdk-lap> qman__, heh? dmraid offers a wipe raid id from drive option
[01:25] <patdk-lap> cciss driver is a real raid
[01:25] <patdk-lap> I forget what the fakeraid one is called
[01:27] <qman__> didn't know it could do that
[01:27] <qman__> but finding that out otherwise would have required reading the (rather large) manual for a piece of software I don't intend to use
[01:28] <qman__> I start all the zeroings and walk away, it's done like 20 minutes later
[01:30] <patdk-lap> 20min?
[01:30] <patdk-lap> what are you using? 80gig drives?
[01:31] <twb> patdk-lap: it's not real if I need a special bloody driver to create /dev/cciss0 instead of /dev/sda
[01:31] <patdk-lap> looks like -E
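patdk-lap's pointer sketched out; hedged heavily: this is the dmraid option as I understand it, sdz is a placeholder, and erasing metadata is destructive:

```shell
dmraid -r              # list drives and the fakeraid format discovered on each
dmraid -r -E /dev/sdz  # erase the fakeraid metadata block from that drive
```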
[01:32] <qman__> the last time, they were 250s
[01:32] <twb> (Well, OK technically it probably is h/w raid, just the raid card doesn't bother to emulate a SATA bus.)
[01:32] <qman__> 20 minutes may be slightly exaggerated, but it wasn't enough time to cause an issue
[01:32] <patdk-lap> twb, that is true, but what hardware raid does bother to?
[01:33] <twb> patdk-lap: you're not making me feel better :-/
[01:34] <lifeless> twb: sata attached raid cabinets do :)
[01:34] <twb> Haha, SANs exporting themselves as a big iSCSI or something ;P
[01:35] <patdk-lap> hmm, interesting
[01:35] <patdk-lap> my old adaptec ones show up as sda
[01:35] <twb> patdk-lap: yeah that's what I expect
[01:35] <patdk-lap> but that is just cause the adaptec driver displays it to linux as a scsi device
[01:35] <twb> Oh.
[01:35] <patdk-lap> ya, used to all my hp's just doing cciss
[01:36] <patdk-lap> but then, used to freebsd, and freebsd never does anything consistent :)
[01:36] <patdk-lap> every driver names it after itself :)
[01:36] <patdk-lap> that really annoyed me changing nics
[01:36] <qman__> the hardware controller has to be really good for me to bother using it, because mdadm is so featureful and compatible
[01:36] <twb> qman__: hear hear
[01:37] <twb> It shits me that $sales can't convince $customer to go mdadm
[01:37] <patdk-lap> qman, for hardware raid, it's all about the bbwc
[01:37] <twb> Because the world has taught $customer that only hw raid is any good
[01:37] <patdk-lap> if you don't get bbwc, mdadm is the way to go
[01:37] <twb> +1
[01:37] <patdk-lap> a person I deal with, had some servers colo
[01:37] <patdk-lap> and they reinstalled them with dmraid
[01:38] <patdk-lap> webserver couldn't even handle the traffic of a single user
[01:38] <patdk-lap> reinstall with mdadm, they were fine
[01:39] <twb> haha
[01:41] <qman__> yeah, the write cache is the only good reason to use hardware, and since most linuxes and such have much better disk caching in general than windows, it's not even that painful to not have
[01:41] <patdk-lap> depends what you write
[01:41] <patdk-lap> lots of fsync calls, you need that write cache
[01:41] <qman__> yeah
[01:41] <patdk-lap> but if not, it doesn't matter
[01:42] <patdk-lap> but these days, if you do lots of fsyncs you're normally looking at ssd instead :)
[01:43] <qman__> that was one of the things I first noticed the very first time I used linux
[01:43] <qman__> that the disk was not thrashing itself to death, and the system freezing waiting on disk operations
[01:43] <qman__> system not freezing*
[01:44] <twb> if you do a lot of fsyncs you need your code fixed
[01:44] <patdk-lap> a database?
[01:44] <patdk-lap> you always fsync each transaction
[01:45] <patdk-lap> be it to a temp commit log, or the table itself
[01:45] <twb> >hand waving<
[01:46] <twb> This is not my area of expertise; I just heard a lot of yelling because people were Doing It Wrong and when they use a modern fs their shit explodes
[02:33] <iggi__> anyone know of a place to get support for ZeroC ICE in IRC?
[04:00] <swharper> from what im reading it'll take a little over a day to zero a 1.5tb drive :P
[04:00] <swharper> x7
[04:00] <swharper> blah
[04:20] <dork> anyone know of any issues w/ oneiric server failing boot after initrd saying "Mount: too many levels of symbolic links"
[04:28] <twb> swharper: are you doing it now, with dd?
[04:28] <twb> swharper: send it a USR1 and it'll give you a progress report
[04:28] <twb> Hmm, caveat: busybox dd might not...
[04:43] <dork> any ideas? getting "Mount: too many levels of symbolic links" when in recovery kernel, regarding /run
[04:43] <dork> this box hasn't been touched since the last wave of problems 2 weeks ago
[04:44] <dork> no promising search results
[04:50] <qman__> fsck?
[04:50] <dork> clean
[04:50] <qman__> if it's not mountable, not many other options
[04:50] <dork> actually it's about /run
[04:50] <dork> too many levels of symbolic links
[04:50] <dork> some loop going on here
[04:51] <qman__> does it work if you load a different kernel maybe?
[04:51] <qman__> or perhaps boot live and chroot, then deal with it?
[04:52] <dork> yeah i just dont know how to deal with it
[04:52] <dork> i can chroot in
[04:52] <dork> basically /run is symlinking to /var/run
[04:52] <dork> and when i cd to either i get that error
[04:53] <qman__> might have managed to have infinitely self-referencing symlinks
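qman__'s guess is easy to reproduce in a scratch directory (nothing here touches the real /run; all paths are temporary):

```shell
tmp=$(mktemp -d)
ln -s "$tmp/var_run" "$tmp/run"     # run -> var_run ...
ln -s "$tmp/run" "$tmp/var_run"     # ... and var_run -> run: a loop
# following either link now fails with ELOOP, the same error dork is seeing:
cat "$tmp/run/pidfile" 2>&1 | grep -o 'Too many levels of symbolic links'
ls -l "$tmp"                        # ls -l shows where each link points, which is how you untangle it
rm -rf "$tmp"
```

The fix is to remove one of the links and replace it with a real directory, so the chain terminates.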
[04:53] <twb> dork: /var/run moved to /run because stupid new init systems can't manage to even mount /var without having dbus running first
[04:54] <twb> dork: depending on your release and if you have upgraded from an older one, maybe your box didn't handle the transition well
[04:54] <qman__> I don't have any newer-than-lucid systems to look at
[04:54] <dork> the upgrade was horrible
[04:54] <dork> i had a whole bunch of problems, spent 16 hours at the dc
[04:54] <dork> now i'm here again
[04:54] <dork> after the box hasn't been touched
[04:55] <dork> though this problem didn't exist until days after i managed to iron out the upgrade related problems
[04:55] <dork> any suggestion on the best approach for this
[04:56] <qman__> unfortunately I don't have any experience with this and I don't have time to get out my laptop (only system running oneiric) and cross-check things with you
[04:57] <swharper> can i send a USR1 in mid process?
[04:57] <qman__> but that's where I'd start, reference a working system and see what those directories look like
[04:57] <dork> http://uksysadmin.wordpress.com/2011/10/14/upgrade-to-ubuntu-11-10-problem-waiting-for-network-configuration-then-black-screen-solution/
[04:57] <twb> swharper: sure
[04:57] <twb> swharper: but if the client doesn't like it, it'll bomb
[04:57] <swharper> ack
[04:57] <swharper> eff it
[04:57] <twb> Shrug
[04:57] <qman__> I use kill -SIGUSR1 to get dd status fairly often
[04:57] <swharper> im doing one of the drives using diskutility on my mac
[04:58] <twb> If I were you I'd just blat the start and end of disk
[04:58] <twb> qman__: yes but does that work with busybox dd
[04:58] <swharper> and its progress is showing 1 day, 13 hrs
[04:58] <qman__> that I don't know
[04:58] <qman__> that seems way too long, even for 1.5TB disks
[04:58] <swharper> its doing 3 passes
[04:58] <swharper> over usb2
[04:59] <qman__> yeah, that's a bad plan
[04:59] <qman__> last one I did was 250GB disks on SATA2
[04:59] <qman__> and you only need 1 pass
[04:59] <swharper> ah
[04:59] <swharper> i guess i could cancel this one
[04:59] <qman__> nothing is going to mistakenly read old data after a single zero pass
[04:59] <twb> swharper: don't do USB2
[04:59] <qman__> and even data recovery tools are going to be hard pressed to get much after it
[04:59] <swharper> disk utility gave me 5 options i believe....
[04:59] <twb> At least get esata
[05:00] <qman__> in short
[05:00] <twb> If you're trying to wipe the drive, use an angle grinder not dd
[05:00] <swharper> well the server running dd is esata
[05:00] <qman__> if you're worried about data passed a single zero pass, you should just destroy the disks physically
[05:00] <twb> Right
[05:00] <qman__> past*
[05:00] <swharper> ok
[05:00] <swharper> there was another option that did 25 passes
[05:01] <swharper> glad i didnt choose that :)
[05:01] <qman__> options for the paranoid, that haven't been relevant since disks were < 1GB
[05:01] <swharper> ok i stopped it
[05:01] <swharper> on the mac
[05:01] <swharper> ill redo with 1 pass
[05:01] <qman__> no need, it's probably good
[05:02] <qman__> I just know that the first 200MB is not enough to clear out an intel fakeraid
[05:03] <qman__> and I only suggested that solution under the impression that it wouldn't take more than a couple hours for all of the disks, if run simultaneously
[05:05] <swharper> hm
[05:05] <swharper> well dd has been running for almost 3
[05:05] <qman__> send a siguser1 and find out how many blocks it's done
[05:06] <qman__> even if it stops it, you can start again at that point with switches, or just leave it be and move on
[05:07] <swharper> while that proc is running in the same shell?
[05:07] <qman__> no, need a second shell
[05:07] <qman__> well
[05:07] <swharper> im in recovery mode...
[05:07] <qman__> unless you ctrl z, bg 1, kill -siguser1 pid
[05:07] <qman__> and that's provided all those are there, which they may not be in a busybox
[05:09] <swharper> id have to start it over then, yes?
[05:10] <qman__> dd supports arguments which tell it where to start and finish
[05:10] <swharper> alright i stopped it
[05:10] <qman__> if it does die, you'll see the spot where it stopped
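The resume qman__ describes relies on dd's seek= (output offset in records); a safe sketch against a temp file rather than a real /dev/sdz, with made-up record counts:

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=4k count=10 2>/dev/null            # "interrupted" run: 10 records written
# resume where it stopped: seek= skips past the 10 records already done
dd if=/dev/zero of="$img" bs=4k seek=10 count=5 conv=notrunc 2>/dev/null
stat -c %s "$img"                                               # 61440 = 15 records x 4096 bytes
rm -f "$img"
```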
[05:11] <swharper> disk utility is giving me 12 hours on the usb2 cartridge
[05:12] <qman__> USB is really slow
[05:12] <swharper> yeah
[05:12] <qman__> USB 2.0 runs at 480mbps data rate, but actual throughput is much less
[05:12] <swharper> now that that proc is stopped can i see how far dd got?
[05:13] <qman__> ctrl z?
[05:13] <swharper> yeah
[05:13] <qman__> that just pauses
[05:13] <swharper> hm
[05:13] <swharper> says stopped
[05:13] <qman__> you must bg 1 or fg 1 to resume
[05:13] <qman__> in background or foreground respectively
[05:13] <qman__> in background allows you to run other commands, such as kill -siguser1
[05:13] <cwillu_at_work> just bg or fg will work
[05:13] <swharper> bg 1 says no such job
[05:14] <cwillu_at_work> just do bg, if there's a job that can be backgrounded, it'll be backgrounded
[05:14] <swharper> bg worked
[05:14] <qman__> do you know the pid of the dd process?
[05:14] <swharper> now kill - siguser1 pid?
[05:14] <qman__> without that space, yes
[05:15] <qman__> kill -siguser1 pid
[05:15] <swharper> right, ok
[05:15] <qman__> it should cause it to display the position
[05:16] <swharper> bad signal name 'siguser1'
[05:16] <cwillu_at_work> "killall -USR1 dd" or "kill -USR1 <pid>"
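cwillu_at_work's syntax, demonstrated non-destructively (GNU dd prints a status line to stderr on USR1; busybox dd may simply die instead, as twb warned earlier):

```shell
# a long-running, harmless dd to poke at; count is deliberately huge
dd if=/dev/zero of=/dev/null bs=1M count=2000000 2>progress.log &
pid=$!
sleep 1                              # let dd install its signal handler
kill -USR1 "$pid"                    # GNU dd reports records in/out and bytes copied
sleep 1
kill "$pid" 2>/dev/null || true
wait "$pid" 2>/dev/null || true
grep 'records in' progress.log       # the progress report it wrote
rm -f progress.log
```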
[05:17]  * cwillu_at_work hands qman__ a proofreader
[05:17] <qman__> the implementation must be different from systemrescuecd
[05:17] <qman__> because the things that I said are true in systemrescuecd
[05:17] <cwillu_at_work> then systemrescuecd made gratuitous changes to how things work
[05:18] <swharper> 510082
[05:18] <cwillu_at_work> (or more likely, has kill as a shell builtin, that doesn't match bash's or the standard kill binary)
[05:18] <qman__> it should say blocks in, blocks out, bytes transferred
[05:19] <cwillu_at_work> swharper, what was the dd line you ran originally?
[05:19] <swharper> now 511328+0 records out
[05:20] <cwillu_at_work> 511328 * 512byte blocks, assuming the default wasn't changed on the command line
[05:20] <qman__> I told him to use bs=2M, for performance reasons
[05:20] <cwillu_at_work> okay, so 511328 * 2M
[05:21] <cwillu_at_work> it's written a terabyte
[05:21] <swharper> dd if=/dev/zero of=/dev/sdh bm=2M
[05:21] <swharper> was the original
[05:21] <cwillu_at_work> swharper, how big is the drive?
[05:22] <swharper> 1.5tb
[05:22] <cwillu_at_work> okay, it's 2/3's done
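cwillu_at_work's arithmetic, spelled out (assuming the bs=2M run and a 1.5 TB drive taken as 1.5e12 bytes):

```shell
records=511328
bs=$((2 * 1024 * 1024))            # bs=2M means 2 MiB per record
bytes=$((records * bs))
echo "$bytes bytes written"        # 1072332537856, i.e. ~1.07 TB
echo "$((bytes * 100 / 1500000000000))% of a 1.5 TB drive"   # 71%, about 2/3 done
```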
[05:22] <swharper> ok
[05:22] <swharper> cool
[05:22] <cwillu_at_work> swharper, also, you meant bs=2M, right?
[05:22] <cwillu_at_work> (bm isn't a thing)
[05:22] <swharper> yes, sorry
[05:23] <cwillu_at_work> I usually use bs=1M, just so that the numbers are directly meaningful :p
[05:23] <swharper> can i have dd running on all these drives simultaneously?
[05:23] <cwillu_at_work> yep
[05:23] <cwillu_at_work> but if they're all connected via usb, you're not gonna get done any faster
[05:23] <swharper> no…they're not in the server
[05:23] <qman__> it will if they're over SATA though
[05:23] <swharper> they're all sata
[05:23] <swharper> i yanked one out and put it in a cradle
[05:24] <swharper> connected to my laptop
[05:24] <qman__> because with SATA, it's one disk per channel, unless you've got a multiplexer
[05:24] <cwillu_at_work> okay;  just rerun it as "dd if=/dev/zero of=/dev/sdwhichever bs=1M &" for each one
[05:24] <cwillu_at_work> and then "killall -USR1 dd" will spit out the numbers for each one
[05:25] <swharper> how do i make it a background process from the get go
[05:25] <cwillu_at_work> swharper, run what I told you :p
[05:25] <qman__> the ampersand on the end
[05:25] <swharper> ah, right
[05:25] <swharper> thanks
[05:27] <qman__> anyway, difference noted
[05:27] <qman__> I usually do this kind of thing from systemrescuecd because it's convenient
[05:32] <swharper> ok, got em all hummin now
[05:32] <swharper> awesome
[09:14] <Daviey> hey adam_g o/
[09:18] <Modris> hi, i installed ubuntu server on "hyper v 2008 r2" with a legacy NIC, but my ping response time swings dramatically, from 1 to 2000ms, all the time
[09:18] <Modris> ubuntu desktop on the same hyper-v responds nicely, 1ms
[09:20] <Modris> i'm not a linux expert. i just need to configure firebird on linux. i took ubuntu desktop and all is good; now i want to do it on server edition and ... see my previous post for my problem.
[09:23] <Daviey> Modris: Is hyperv host overcomitted?
[09:23] <koolhead17> hi all
[09:26] <Modris> excuse my bad english... what is overcommitted? this is a standard hyper-v
[09:29] <Modris> i tried http://www.panterlo.com/2010/10/10/ubuntu-10-10-and-hyper-v-r2/ but without success... now i go back to the standard installation
[09:31] <lynxman> morning o/
[09:32] <koolhead17> hola lynxman :D
[09:33] <lynxman> koolhead17: hey :)
[09:38] <Daviey> Modris: sorry, i mean - is hyperv host server doing too much?
[09:38] <Daviey> Do you have too many virtual machines, and not enough resources to go around?
[09:44] <Modris> Daviey: No, hyper-v is idle; the ubuntu-desktop guest also works and its ping is <1ms all the time, and one xp_test_pc and ping to it is <1ms too
[09:45] <Daviey> Modris: we don't really test against hyper-v, so we kinda lack the experience and potential issues.  This means we can't be a great deal of help.  Sorry :/
[09:47] <Modris> ok, thanks... then you suggest google? or search, or maybe the hyper-v irc, but i think they don't specialize in ubuntu :-)
[09:55] <Daviey> Modris: I'd be surprised if other distro / OS's didn't see the same behaviour TBH
[09:57] <Modris> Daviey: i think the problem is with the nic drivers... if the ping response time with ubuntu-desktop was also bad, then i would just accept it and not search for a solution, but with the desktop edition all is right and that's why i keep looking for solutions
[09:57] <Daviey> Modris: Hmm, can you not provide a different virtual nic?
[09:58] <Daviey> (kvm you can do this, so assume you can with hyper-v?)
[09:59] <Modris> in hyper-v there are only two nics - 1) network adapter 2) legacy network adapter. i tried both but by default ubuntu-server can see only the legacy network adapter
[10:01] <Modris> maybe i need to add the network adapter too and then install the drivers by hand... but drivers in linux are not clear enough for me
[10:02] <Modris> http://www.panterlo.com/2010/10/10/ubuntu-10-10-and-hyper-v-r2/ talks about synthetic adapters, but ... i tried it step-by-step without success
[10:02] <Modris> what virtualization platform do you use to test ubuntu guests?
[10:21] <rbasak> Is the desktop kernel usable on server? Might that work?
[10:27] <Modris> rbasak: maybe, how do i know that? for desktop i use 11.04, but for server 11.10, maybe that is the point of the solution?
[10:35] <Modris> what changed between ubuntu 10.04 and 11.10 that could affect /etc/initramfs-tools/modules? http://blog.allanglesit.com/2010/05/ubuntu-and-hyper-v-the-paths-to-enlightenment/ gives that as the solution for the network problems, but it doesn't work for 11.10
[10:35] <Modris> i didn't try it on 10.04, but i will figure out whether it works with it or not.
[10:57] <RoyK> http://i.imgur.com/AIMWw.jpg
[11:53] <AdvoWork> on my one system i access files by http://IP:8080/dir/dir etc  but on my other one :8080 doesnt work, i edited /etc/apache2/ports.conf and changed the listen to 8080 but now it says a file I know is correct is not actually there, any suggestions please? tried editing /etc/apache2/sites-available/default and changed virtual hosts to <VirtualHost *:8080> and added NameVirtualHosts *:8080 and restarted apache but then get: [warn] NameVirtualHost *:80 has no VirtualHosts
[11:54] <AdvoWork> actually, the file now works since doing that last bit, but i still get the warn message
[11:57] <koolhead17> https://help.ubuntu.com/10.04/serverguide/C/httpd.html
[12:49] <adac> Hi guys. I was wondering about LTS server versions... I have a lot of updates shown now... but for months none has been marked as "critical" anymore. Does ubuntu not support this, or are packages just not flagged as "critical" at ubuntu at all?
[12:50] <soren> What do you mean "marked as \"critical\""?
[12:52] <adac> soren, I guess critical should mean packages that have a security issue fixed
[12:53] <soren> But where is it marked as critical? Where do you see this mark?
[12:55] <adac> soren, it normally should be set somewhere when you do an apt-get update
[12:55] <adac> then for example when you login via ssh it shows you
[12:55] <adac> if i remember correctly
[12:55] <adac> (since it has been a long time since there was any critical one)
[12:56] <adac> at least it is like that in debian
[12:59] <adac> 23 packages can be updated.
[12:59] <adac> 19 updates are security updates.
[12:59] <adac> soren, ^^
[12:59] <adac> when you login via ssh
[13:02] <adac> soren, http://superuser.com/questions/199869/check-number-of-pending-security-updates-in-ubuntu
[13:05] <soren> Sure. "Security updates".
[13:06] <soren> adac: If there are known security issues, they get fixed. There are (awesome) people assigned to take care of just that.
[13:06] <adac> soren, is "security" and "critical" upgrade a difference?
[13:06] <adac> soren, yes but my point is that i get notified if there are security upgrades
[13:06] <soren> adac: If you don't see any security updates, you either don't have any packages installed that have required updates, you're offline, or you're running an unsupported version of Ubuntu.
[13:07] <soren> adac: Of course it makes a difference.
[13:07] <soren> adac: You asked specifically about a type of updates marked as "critical".
[13:07] <adac> yes i thought this might be the same
[13:07] <adac> ok i see so i have to check also for security upgrades
[13:08] <soren> adac: It *is* the same, but AFAIK, nothing on ubuntu server calls them "critical" updates rather than "security" updates.
[13:09] <adac> soren, in the nagios plugin they are called critical
[13:09] <adac> so therefore i might have confused this
[13:09] <soren> adac: Hence my question, "What do you mean \"marked as \\"critical\\"\""?
[13:09] <adac> "official nagios plugin"
[13:09] <adac> soren, yeah i got it thank you!
[13:09] <soren> We have an offical nagios plugin?
[13:09] <soren> Wow.
[13:09] <soren> I didn't realise.
[13:10] <adac> soren, someone of you packaged it. so it's the original packaged ubuntu nagios plugin
[13:10] <adac> there are a lot of plugins that are not in this package
[13:10] <adac> i would rather call those unofficial therefore
[13:14] <adac> soren, so what command does show me the "security" updates pending?
[13:40] <spiekey> Hello!
[13:41] <spiekey> i have a QLogic FC Hostbus adapter and a SAN attached to it.
[13:41] <spiekey> i would like to use multipath and i think therefore i need device-mapper
[13:41] <spiekey> but my devices wont get listed in /dev/mapper/    :-(
[13:41] <spiekey> any idea why? Do i have to use dmsetup?
[13:44] <soren> adac: Might I ask why it's important? Do you not want general updates, but only security updates?
[13:45] <adac> soren, exactly. my nagios installation should only warn me if there are security upgrades available. I dont care about the other ones (since this would then result in nearly daily notifications)
[13:46] <soren> adac: Then why don't you disable them?
[13:47] <zul> good morning
[13:49] <adac> soren, I first need to know with which command line command i can show if there are security updates. then i can answer your last question
[13:50] <soren> adac: You're holding my answer hostage until you get an answer to a question that doesn't apply at all once you disable the non-security updates? Seriously?
[13:52] <adac> soren, well I need to check first if there are any security updates at the moment. then i can tell you if the "original" nagios plugin simply fails to detect them.
[13:52] <adac> then i can answer whether your idea would solve my problem when using the original nagios plugin (the snippet shipped in apt-check)
[13:53] <soren> adac: If you don't need non-security updates, just disable them, and then any update "apt-get upgrade" suggests will be security updates. Simple.
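soren's suggestion amounts to keeping only the -security pocket in /etc/apt/sources.list; a hedged sketch for a lucid box (mirror URLs and component list are assumptions, adjust to your release):

```
deb http://archive.ubuntu.com/ubuntu lucid main restricted
deb http://security.ubuntu.com/ubuntu lucid-security main restricted
# deb http://archive.ubuntu.com/ubuntu lucid-updates main restricted   # commented out: non-security updates
```

After an apt-get update, anything apt-get upgrade then offers comes from the security pocket.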
[13:53] <soren> I don't know what the nagios plugin does to filter out non-security updates.
[13:54] <adac> soren, you eman like disable them in the sources.list?
[13:55] <soren> yes
[13:58] <adac> soren, i think that would still not show me the security upgrades, since apt-cron already downloaded all packages to upgrade
[13:58] <adac> can i clean this somehow?
[13:58] <adac> apt-get clean all
[13:59] <adac> yes that did the trick
[13:59] <adac> seems that there are 64 security upgrades
[13:59] <adac> 46 sorry
[14:01] <adac> soren, oh lol! now also nagios shows me 46 critical upgrades
[14:01] <adac> how is that possible
[14:07] <adac> soren,  with only the security repo i have 64, and with the others additionally enabled i have 54. when i have only security upgrades enabled nagios does complain. when I have all enabled (all repos) then nagios doesn't. Maybe because it fetches the security packages from another repository?
[14:07] <soren> As I said: I don't know what the nagios plugin does to count those updates.
[14:09] <adac> soren, yeah i see
[14:10] <adac> soren, still i would like to know if there is a command that shows me available security upgrades
[14:11] <soren> Your nagios plugin.
[14:17] <adac> soren, lol
[14:18] <adac> soren, no seriously, on ssh login to another private server the different types of updates are also shown. what command is used for that?
[14:32] <soren> adac: "shown"? You mean in the info you see at login?
[14:33] <adac> soren, right
[14:33] <soren> adac: landscape-sysinfo
[14:34] <adac> soren, tank you!
[14:34] <adac> thank
[14:34] <jpds> soren getting tanked, bad idea.
[14:35] <zul> Daviey: ping
[14:35] <soren> jpds: :)
[14:35] <Daviey> zul: hola
[14:35] <zul> Daviey: so im thinking about nova/swift/glance/keystone SRU
[14:36] <Daviey> eeek
[14:36] <zul> so there isnt a tarball for 2011.3.1 so i was thinking of doing a snapshot
[14:36] <zul> so something like 2011.3.1~gitXXXX
[14:36] <zul> what do you think?
[14:37] <zul> and document the shit out of everything
[14:37] <Daviey> adac / soren : Or, sudo /etc/update-motd.d/90-updates-available ?
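The MOTD script Daviey mentions wraps update-notifier's apt-check, which prints "TOTAL;SECURITY" to stderr; a sketch parsing that format (the "23;19" sample reuses the counts quoted earlier in the channel, not a live system -- on a real box you would capture /usr/lib/update-notifier/apt-check 2>&1 instead):

```shell
counts="23;19"                 # sample of apt-check's "total;security" output
total=${counts%;*}             # strip everything from the ';' on
security=${counts#*;}          # strip everything up to the ';'
echo "$total packages can be updated."
echo "$security updates are security updates."
```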
[14:37] <adac> jpds, hehe. what does soren do when he is tanked?
[14:38] <Daviey> zul: Is it worth finding out WHY there isn't a snapshot?
[14:38] <zul> Daviey: because they dont do stable releases
[14:38] <soren> Daviey: Oh, it's a separate script now?
[14:38] <adac> Daviey, don't have this binary
[14:38] <zul> ttx: ^^^
[14:38]  * soren is living in the past
[14:38] <Daviey> but otherwise, yeah - that version string seems safe.
[14:38] <adac> or script
[14:38] <zul> soren: derpa derpa
[14:39] <soren> zul: derka derka derka
[14:39] <Daviey> soren: you aren't still wearing flares, and sporting a mullet are you?
[14:39] <zul> i still laugh when i see that
[14:40] <ttx> zul: the only 2011.3.1 being considered is Keystone so far
[14:40] <ttx> doesn't mean you can't release 2011.3+chuck
[14:41] <ttx> instead of 2011.3.1~chuck
[14:41] <soren> Daviey: Not that far in the past, no.
[14:41] <ttx> (nova code still shows 2011.3 as version, not 2011.3.1)
[14:41] <zul> how about 2011.3.1+git20111117
[14:42] <soren> No. Not +.
[14:42] <ttx> zul: because there is no such thing as nova 2011.3.1
[14:42] <ttx> and there might never be
[14:42] <zul> how about 2011.3.1~git<git hash>
[14:42] <ttx> 2011.3+git20111117
[14:42] <soren> What ttx says.
[14:42] <Daviey> 2011.3+gitFOO sounds better i guess
[14:43] <zul> k
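The version-string reasoning above is plain dpkg ordering: '+git' sorts after the released 2011.3, while '~git' sorts before whatever it is appended to, so it only fits if a real 2011.3.1 is coming. A sketch to check it locally (the version strings are the examples from this exchange):

```shell
#!/bin/sh
# dpkg is the authority, when present:
if command -v dpkg >/dev/null 2>&1; then
    dpkg --compare-versions '2011.3+git20111117' gt '2011.3' \
        && echo "2011.3+git20111117 supersedes 2011.3"
fi
# GNU sort -V agrees for the '+' case:
printf '2011.3+git20111117\n2011.3\n' | sort -V
```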
[14:49] <Daviey> zul: fancy rebasing https://launchpad.net/ubuntu/+source/nova/2011.3-0ubuntu6.1, and re-uploading for SRU?
[14:50] <zul> Daviey: thats the plan
[14:50] <jamespage> roaksoax: $insert_cobbler_system_definitions in /etc/cobbler/dnsmasq.template does not appear to be working in oneiric
[14:50] <jamespage> do I need to poke something to make it setup static entries for systems I have configured?
[14:51] <roaksoax> jamespage: what are you configuring on each of the systems?
[14:52] <jamespage> roaksoax: I'm configuring an IP address for each of the interfaces registered for a given system
[14:54] <roaksoax> jamespage: /var/lib/cobbler/cobbler_hosts
[14:54] <roaksoax> jamespage: so they appear there ^^
[15:30] <stgraber> hallyn: so, looking at what we need to do to get rid of lxcguest this cycle :) on top of getting Daniel's patch in the kernel (shutdown/reboot) and mountall modified to be LXC aware, we also need to do something about lxc-is-container and the consoles spawned by upstart
[15:31] <stgraber> I can't remember us discussing these two other things at UDS
[15:31] <stgraber> I think it might be worth renaming lxc-is-container to some kind of universal is-container command that'd return lxc / libvirt-lxc / openvz / ... depending on what's in use
[15:32] <stgraber> and move that to some core packages
[15:32] <stgraber> (or merge into a similar command, not sure if we already have something like that for VMs)
[15:34] <stgraber> for consoles, I think it'd be interesting to make upstart a bit more clever so it doesn't spawn gettys on non working devices and spawns a getty on /dev/console if it's a container
[15:41] <hallyn> stgraber, yes the console thing occurred to me on the flight out from orlando
[15:41] <hallyn> for lxc-is-container, i think in the session i said that would be the one thing left in lxcguest
[15:42] <hallyn> but i suppose we can add something to either upstart or coreutils instead
[15:42] <hallyn> the console thing gets interesting (in a bad way :) when you try to fire up lxc with smoser's cirros, btw
[15:42] <stgraber> I'd really like to see lxcguest go away completely, otherwise people will expect lxc-is-container to be around and will fail when they don't use our template :)
[15:43] <hallyn> and so yes, if we could find an intelligent way to fire up getty on consoles which happen to be up, that'd be neat
[15:44] <hallyn> stgraber, hm, i see, our work items in the blueprint are insufficient for that
[15:44] <hallyn> and how did you end up owning the mountall one?  (not that i mind :)
[15:44] <stgraber> because mountall is a foundations team thing :)
[15:45] <hallyn> yay
[15:45] <stgraber> so it should be easier for me to nag jhunt_ about it :)
[15:49] <hallyn> noted those into the blueprint so i don't forget again
[15:49] <stgraber> thanks
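A universal is-container check along the lines stgraber describes can key off PID 1's environment, since lxc-start puts container=lxc there; a hedged sketch (the variable name is lxc's convention, other runtimes may spell it differently):

```shell
#!/bin/sh
# Hedged sketch: report the container type from a NUL-separated environ,
# e.g. /proc/1/environ (readable by root, or by init's owner).
container_type() {
    tr '\0' '\n' | sed -n 's/^container=//p'
}
# Real use: container_type < /proc/1/environ   (prints "lxc" in an LXC guest)
```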
[15:51] <EMKO> i made a new user with root. how do i add this user so it can use sudo? i tried to add it to the group 'admin' but that doesn't exist
[15:57] <roaksoax> zul: just want to run it with you so I'm not installing files that shouldn't be
[15:57] <roaksoax> zul: http://paste.ubuntu.com/741305/
[15:57] <zul> roaksoax: what is this?
[15:57] <roaksoax> zul: cobbler
[15:58] <roaksoax> zul: lp #891527 led me to find other missing files
[15:58] <zul> roaksoax: looks good?
[15:58] <zul> er...looks good
[15:58] <roaksoax> zul: alright ;)
[15:59] <EMKO> so when i use sudo do i use root's password or the account im logged in with?
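A hedged note and sketch for the two questions above: sudo prompts for your own account's password, not root's; and on 11.10 the admin group is called 'sudo' (older releases used 'admin'). 'newuser' is a placeholder:

```shell
#!/bin/sh
# Hedged sketch: add an existing user to whichever admin group the
# release actually ships. Run as root.
grant_sudo() {
    user="$1"
    if getent group sudo >/dev/null 2>&1; then
        adduser "$user" sudo
    elif getent group admin >/dev/null 2>&1; then
        adduser "$user" admin
    else
        echo "no sudo/admin group found" >&2
        return 1
    fi
}
# grant_sudo newuser    # the user must log out and back in to pick it up
```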
[16:35] <hallyn> zul, are you able to run the libvirt qa regression test on precise and have it pass?
[16:35] <zul> hallyn: i havent tried recently
[16:36] <hallyn> I've tried on a host and in a vm.  Using the oneiric version of libvirt on precise!  still get a heap of failures
[16:36] <hallyn> zul, do you have something you can try on?
[16:36] <hallyn> i'd like confirmation that i'm not going nuts,
[16:36] <zul> hallyn: i can later this afternoon
[16:36] <hallyn> but i'm thinking it's some other change now
[16:36] <hallyn> ok, thanks
[16:36] <hallyn> i'll sit on the libvirt merge in the meantime
[16:38] <zul> is it a big one?
[16:40] <hallyn> well it's to 0.9.7-2...
[16:40] <hallyn> not particularly big,
[16:40] <hallyn> and it doesn't introduce NEW failures over any other libvirt in my tests.  but if it did, they might be getting masked by all my inexplicable failures
[16:41] <hallyn> you think i should just push?
[16:41] <zul> hehe
[16:41] <hallyn> I was using it on my laptop with no probs
[16:41] <zul> ok ill take a stab at it this afternoon
[16:41] <hallyn> ok
[16:41] <hallyn> i wonder if the change about default admin users may have affected it
[16:44] <hallyn> hm, lemme try the proposed precise package on the oneiric vm
[16:45] <hallyn> you know i've never used synergy before.  i'm loving it.  of course it's completely insecure, but that's the thrill isn't it :)
[16:52] <hallyn> zul, if you also want to run against the merged pkg, it's in ppa:serge-hallyn/virt and source is at http://people.canonical.com/~serge/libvirt_0.9.7-2ubuntu1-package.tar.gz
[16:53] <zul> k
[17:04] <Zanzacar> I have been trying to use screen for multiple tty sessions and I am kind of getting lost in it. does anyone have any other recommendations?
[17:04] <JanC> Zanzacar: byobu makes using screen easier
[17:05] <robbiew> huats: ping
[17:05] <Zanzacar> JanC: I will look into that
[17:06] <huats> hey robbiew !
[17:06] <huats> how are you ?
[17:06] <robbiew> good :)
[17:13] <micahg> zul: is there a reason cobbler doesn't use distro-info and hard codes release names?
[17:14] <zul> micahg: no there isnt, it was a redhat project
[17:15] <micahg> ok, well, FYI, distro-info will probably be something that's SRUd, so things taking advantage of it can have an up to date release list
[17:50] <hallyn> Daviey, are you around today?
[17:54] <zul> Daviey: lemme know when you are around as well..
[17:57] <negronjl> SpamapS: ping
[17:58] <SpamapS> negronjl: pong, sup?
[17:58] <negronjl> SpamapS: Re: https://bugs.launchpad.net/bugs/720302  Can you tell me what would be a use case scenario for this ?
[18:02] <SpamapS> negronjl: ceph is one example..
[18:03] <SpamapS> negronjl: there are some actions that can only happen once per cluster.
[18:03] <SpamapS> negronjl: so a leader is needed to only have those actions happen on the leader.
[18:03] <negronjl> SpamapS: ...enough said ... I totally get it now
[18:03] <SpamapS> negronjl: you can fake it now with relation-list and sorting..
[18:03] <SpamapS> negronjl: but with leader election, we'd provide a way to detect that you're the leader *before* any peer relationships were established.
[18:04] <negronjl> SpamapS: perfect ... that would be awesome to have.
[18:04] <SpamapS> negronjl: and perhaps more important, hooks for when leader changes.
[18:13] <hallyn> ahs3, I've sent an email inquiring about the xml file copyrights.  Meanwhile http://people.canonical.com/~serge/netcf-0.1.9-package-v2.tar.gz should address the other concerns and is lintian-approved :)
[18:14] <ahs3> hallyn: groovy.  i'll try to take a look later today (/me is arguing with virtio right this minute...)
[18:27] <malac0da> can anyone gimme a hand with setting up apache?
[18:28] <SpamapS> malac0da: can you maybe be more specific what you want to do with apache?
[18:28] <malac0da> im having 2 issues
[18:28] <adam_g> zul: hi
[18:29] <malac0da> i have it set to just an index of files (not the directory i want, which is problem 1, but i can live with that), but it wont let me download the files, it says it's forbidden
[18:29] <zul> adam_g: hola
[18:30] <SpamapS> malac0da: probably because the files are not accessible by the 'www-data' user which apache runs as.
[18:30] <malac0da> so the solution being?
[18:31] <adam_g> zul: hey, regarding bug #891445 is there any reason why the sysv init script from squid was converted to upstart job for squid3, rather than the init script from squid3?
[18:31] <zul> adam_g: not really
[18:31] <malac0da> the folder it is set to access is the default /var/www but i wanted to move it to /home/user/www
[18:31] <malac0da> but for some reason it wont
[18:33] <zul> adam_g: patches accepted
[18:33] <malac0da> I added DocumentRoot... and <Directory ... and still goes to default location
[18:34] <adam_g> zul: cool. thanks. i might touch up the squid3 upstart some more if you dont mind
[18:34] <zul> adam_g: i dont
[18:51] <malac0da> Sooo...I guess I will just go somewhere else then?
[18:51] <SpamapS> malac0da: we're busy people, please be patient. :)
[18:52] <SpamapS> malac0da: after changing configs, did you reload the server configs? (sudo service apache2 reload) ?
[18:52] <malac0da> yeah
[18:52] <genii-around> malac0da: For webserver specific help there is also##httpd
[18:53] <genii-around> Er #httpd   rather
[18:53] <raubvogel> Is nscd being started by upstart or what? 11.04
[18:53] <malac0da> should the DocumentRoot be in apache2.conf?
[18:54] <raubvogel> Or insserv
[18:54] <raubvogel> malac0da: https://help.ubuntu.com/community/ApacheMySQLPHP https://help.ubuntu.com/10.04/serverguide/C/httpd.html
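For the DocumentRoot question above: on Ubuntu the packaged Apache keeps per-site config under /etc/apache2/sites-available/ rather than in apache2.conf itself. A hedged sketch for serving /home/user/www (Apache 2.2-era syntax; the path is the one from this discussion):

```apache
<VirtualHost *:80>
    DocumentRoot /home/user/www
    <Directory /home/user/www>
        Options Indexes FollowSymLinks
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Enable it with `sudo a2ensite <name>` and `sudo service apache2 reload`; and note /home/user itself must be world-executable for www-data to traverse into the subdirectory.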
[18:54] <filo1234> hi
[18:54] <SpamapS> raubvogel: insserv is not run in Ubuntu
[18:55] <SpamapS> raubvogel: most likely nscd is started by upstart's sysv compat mode calling its start script in /etc/init.d
[18:56] <raubvogel> SpamapS: and it seems to be calling it in the wrong order if I want to have kerberos, ldap, and autofs (for NFS)
[18:57] <SpamapS> raubvogel: nscd is just an enhancement is it not?
[18:57] <SpamapS> raubvogel: so what you probably want is for it to start very very late
[18:58] <raubvogel> SpamapS: That is exactly what I had in mind.
[18:58] <raubvogel> I would want to do, say update-rc.d nscd defaults 10 10
[18:58] <raubvogel> but it seems that is not the proper way
[18:58] <raubvogel> i.e I am supposed use upstart
[18:59] <SpamapS> raubvogel: looks like the init.d script just has it starting at the normal "20" position.
[18:59] <SpamapS> raubvogel: 10 10 ? no you'd want like, 90
[19:00] <raubvogel> SpamapS: I threw that out there because when I check rcN.d, everything is either 01, 02, or 03
[19:00] <raubvogel> Nobody in the later positions
[19:00] <SpamapS> raubvogel: are you running insserv then?
[19:00] <SpamapS> raubvogel: its not supposed to run, but it will cause exactly that to happen
[19:01] <SpamapS> lrwxrwxrwx 1 root root  14 Nov 17 11:00 S20nscd -> ../init.d/nscd
[19:01] <raubvogel> SpamapS: I hope not. But then again I am not the only one who can root there
[19:01] <SpamapS> Thats what it normally looks like.
[19:01] <SpamapS> raubvogel: what version of Ubuntu ?
[19:01] <raubvogel> 11.04
[19:01] <SpamapS> raubvogel: yeah something's wrong if you have 01 02 03 ...
[19:03] <raubvogel> I agree; i checked my own desktop and it has sane values like yours
[19:04] <adam_g> zul:  in case you were going to do it, please hold on that squid3 merge request
[19:04] <SpamapS> raubvogel: do you have /etc/init.d/.legacy-bootordering ?
[19:04] <SpamapS> raubvogel: thats the file that controls whether or not insserv is used
[19:04] <zul> adam_g: ack
[19:04] <SpamapS> raubvogel: to be fair, insserv *can* sort of work w/ upstart. Its just that it usually doesn't ;)
[19:05] <raubvogel> Yep. That file is there
[19:05] <raubvogel> What is the best way to deal with upstart then?
[19:05] <EMKO> if php-fpm runs as user www-data what does that mean
[19:05] <SpamapS> raubvogel: in your case, I think you probably just need to manually make it 90 so it starts late
[19:06] <matrix3000> im getting excited for 12.04 :)
[19:06] <SpamapS> EMKO: need more context.. ??
[19:06] <EMKO> what can user www-data do? im just so confused with this user permissions stuff
[19:07] <EMKO> coming from windows to linux :(
[19:08] <raubvogel> SpamapS: update-rc.d  is a valid tool?
[19:09] <SpamapS> EMKO: www-data just isolates the web server from all the other users on the system. So you can grant read access to the web server for files you want the webserver to be able to show, for instance.
[19:09] <SpamapS> raubvogel: definitely!
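The "make it 90" change above in update-rc.d terms, hedged (this applies to a sysv-managed service on a non-insserv box; the sequence numbers become the rc2.d link names that fix the ordering):

```shell
#!/bin/sh
# Hedged sketch (run as root):
#   update-rc.d -f nscd remove          # drop the existing S20 links
#   update-rc.d nscd defaults 90 10     # start late (S90...), stop early (K10...)
# The sequence numbers map straight onto the symlink names:
link_name() { printf '%s%02d%s\n' "$1" "$2" "$3"; }
link_name S 90 nscd   # -> S90nscd
```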
[19:09] <raubvogel> EMKO: It is wise to separate services and have non-root users running said services
[19:09] <EMKO> so when i upload site files like say index.php  i have to make this file belong to www-data ?
[19:10] <raubvogel> EMKO: it depends on what the file does. As a hard and fast rule yes.
[19:10] <SpamapS> EMKO: no, you just have to make it readable by www-data .. so you can make its group www-data and do 'chmod g+r index.php'
[19:11] <raubvogel> If the file/directory does not need to be changed by web, read-only will do
[19:11] <SpamapS> EMKO: sites I've run in the past had all the code owned by a 'publisher' user, group owned by 'www-data', that way the publisher was the only one who could write to the code
[19:12] <EMKO> oh
[19:18] <malac0da> i got it to move to the folder where its looking but it still doesnt let me download anything
[19:20] <SpamapS> malac0da: make sure that www-data can access /home/foo/whatever
[19:21] <SpamapS> malac0da: note that /home/user may not be world-accessible, even if /home/user/www is
[19:21] <SpamapS> malac0da: ls -ld /home/user .. is it world executable ?
[19:22] <malac0da> it shows the list of files in the directory and will display the index.html but wont let me download from there to my computer
[19:22] <malac0da> are you looking for the drwxr-xr-x ?
[19:23] <SpamapS> malac0da: ahh
[19:23] <SpamapS> malac0da: the files themselves, are they readable by the webserver?
[19:23] <SpamapS> malac0da: note that there may be some logs in /var/log/apache2 that will help
[19:23] <malac0da> yes i just cant copy them from where they are to here
[19:23] <SpamapS> malac0da: are you *sure* they are readable by the webserver? being able to list the dir doesn't count as being readable
[19:24] <malac0da> well it opened the html file and displayed it if i added index.html
[19:25] <EMKO> so i should make the www folder where i keep my site under home username? and add this user to www-data and when i make files give group +r so www-data user can read these files?
[19:27] <SpamapS> malac0da: it opened index.html , but I assume you want to download some other file.
[19:27] <malac0da> yeha
[19:28] <SpamapS> malac0da: those other files might not be readable by www-data.. please make *sure* they are. Can you ls -l the dir and paste bin it? (hint: apt-get install pastebinit ; ls -l | pastebinit) ;)
[19:32] <malac0da> so inside the www i should do the ls -l correct
[19:32] <malac0da> cuz it only gave me -rw------- 1 root root ...
[19:33] <SpamapS> heh
[19:33] <malac0da> cant find pastebinit either
[19:34] <malac0da> probably dont have the repository
[19:34] <SpamapS> malac0da: its in universe, been around forever
[19:34] <Zanzacar> Hi I would like to track my child's web usage, I was thinking about using wireshark but does anyone have any recommendations?
[19:34] <SpamapS> malac0da: anyway, -rw------- root root means *not* accessible by www-data
[19:35] <SpamapS> Zanzacar: dansguardian
[19:35] <EMKO> so when i do chmod g+r this gives all the groups that his user is in the ability to read the file?
[19:35] <malac0da> ah so how can I change that
[19:35] <patdk-wk> wouldn't enforcing the usage of a proxy be better?
[19:35] <SpamapS> EMKO: yes
[19:35] <SpamapS> EMKO: wait, no
[19:35] <patdk-wk> I personally don't believe in tracking, think it's more of a pain than it's worth, if you can't trust them on the internet, you shouldn't let them on
[19:35] <SpamapS> EMKO: chmod g+r gives the group that owns the file, access to read the file
[19:36] <patdk-wk> same goes for life for that matter
[19:36] <patdk-wk> if they can't access it at home, they will access it elsewhere
[19:37] <SpamapS> Its a touchy issue and there are people on all sides of the argument
[19:37] <EMKO> how do i make a file thats owned by a group then?
[19:37] <SpamapS> having an 8 year old.. I just make sure I'm around when he is browsing the web at home.
[19:38] <SpamapS> EMKO: chgrp groupname filename
[19:38] <malac0da> so thats how I solve my problem?
[19:40] <JanC> I agree about the "being around" stance, plus make sure they understand they can ask questions about whatever weird things they might find...  ;)
[19:40] <Daviey> hallyn: here
[19:40] <Daviey> zul: here
[19:40] <SpamapS> malac0da: chown -R user.www-data /home/user/www
[19:41] <SpamapS> malac0da: followed by
[19:41] <SpamapS> malac0da: chgrp -R g+r /home/user/www
[19:41] <SpamapS> malac0da: that should do it
[19:41] <zul> Daviey: essex has a couple of bugs but at least it runs now :(
[19:41] <patdk-wk> guess I'm the evil one?
[19:41] <patdk-wk> my son browses the web for hours on end using his ipad
[19:41] <Daviey> zul: That could be worse :)
[19:42] <patdk-wk> he is 4years old
[19:42] <patdk-wk> but normally sticks to youtube
[19:42] <zul> Daviey: api-paste.ini was out of date as well
[19:42] <malac0da> second one i got an invalid group
[19:43] <Daviey> zul: ah, we had that last cycle. :/
[19:43] <Daviey> zul: We probably need a test case to check that
[19:43] <zul> Daviey: well now its pulling the one from the source rather than in debian
[19:44] <Daviey> zul: uh?
[19:44] <EMKO> SpamapS: so i have to do this every time i put a file on the server?
[19:45] <zul> Daviey: there was a debian/api-paste.ini nova ships its own etc/nova/api-paste.ini which is better tracked so we are going to ship that one instead
[19:46] <hallyn> Daviey, were you going to retry the syslog-ng upload, or was there a complication?  (since rmadison shows the 3.3.1 source pkg :( )
[19:46] <SpamapS> EMKO: pretty much. :)
[19:46] <SpamapS> EMKO: note that you can always add www-data to the group that you want to give file ownership to.
[19:47] <malac0da> SpamapS: the second command didnt work...says invalid group: 'g+r'
[19:48] <SpamapS> malac0da: oops, I meant chmod, not chgrp
[19:49] <EMKO> so what did i do by adding the user im creating the files with to the www-data group?
[19:49] <malac0da> alright that worked
[19:49] <malac0da> now anything i put there should be able to be downloaded
[19:52] <matrix3000> any documentation i should read to cache ldap authentication information on the local machine so that if the ldap server becomes offline that the user can still login to their machine
[19:52] <SpamapS> EMKO: nothing really
[19:53] <SpamapS> EMKO: you want the other way around, you want www-data to be added to the user's group.
[19:53] <SpamapS> EMKO: you want to enable www-data to access your files.
[19:53] <EMKO> ohh
[19:53] <SpamapS> malac0da: not if you put root owned files there again
[19:53] <Daviey> hallyn: ah, yes - you repacked it right?
[19:54] <hallyn> Daviey, well I let bzr bd do it for me
[19:54] <EMKO> ok things make a lot more sense, thanks SpamapS
[19:55] <hallyn> http://people.canonical.com/~serge/syslog-ng-merge-3.3.1.tar.gz
[19:55] <malac0da> when I move stuff into there or upload via ftp how can i avoid that?
[19:56] <malac0da> or is it just easier to remember those two commands every time i add something?
[19:58] <SpamapS> EMKO: note that you have to stop/start apache to get it to pick up any new group ownership.
[19:58] <SpamapS> malac0da: you should *not* be transferring files as root. ;)
[19:59] <SpamapS> malac0da: I'd suggest that you follow EMKO's lead.. create a user that will own the web content, and add the www-data user to that group.
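That publisher-user scheme comes down to a few mode bits; a sketch that runs in a scratch directory without root (on the real box you'd also chown -R publisher:www-data the tree, and the path would be /home/user/www):

```shell
#!/bin/sh
# Hedged sketch of the ownership layout described above.
dir=$(mktemp -d)
echo '<?php phpinfo();' > "$dir/index.php"
chmod 640 "$dir/index.php"   # owner rw, group (www-data on a server) read
chmod 751 "$dir"             # others need x on the dir to traverse into it
ls -l "$dir/index.php" | cut -c1-10   # -> -rw-r-----
```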
[19:59] <geolr> Hi all, I intend to use my old eee 4g netbook as a home server for a while, and I am not very experienced. Now, I want to serve files from an external USB harddisk. From way back when I started using linux I recall editing fstab. Is that a good idea in order to make the usb-disk always use the very same mountpoint? Thx a lot!
[19:59] <malac0da> I cant even login as root at this point so i dont know how i am doing it
[20:00] <malac0da> i am logging into it via the same user the the folder is under
[20:02] <malac0da> nvm i can still access root was just doing it wrong
[20:02] <malac0da> but im not logged in as root when transferring it
[20:04] <malac0da> or is it because the user i was using has admin rights?
[20:07] <EMKO> im using nginx so i would just make nginx run as user www-data and i should be fine
[20:10] <SpamapS> EMKO: it runs as www-data by default in the packaged version
[20:10] <EMKO> mine ran as nginx
[20:10] <EMKO> php-fpm ran as www-data
[20:12] <EMKO> or i can just add nginx to the group and it should work with both right?
[20:12] <Daviey> roaksoax: How does your new fence agent work?
[20:12] <roaksoax> Daviey: same as all fence-agents :)
[20:12] <SpamapS> EMKO: well with fpm .. you don't need nginx to be able to access the files
[20:12] <roaksoax> Daviey: bzr branch lp:~andreserl/+junk/random
[20:12] <SpamapS> EMKO: since its talking to php-fpm over a socket
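A hedged nginx sketch of that split: nginx serves the static files (hence the read access discussed above) and hands .php requests to php-fpm over its socket. The socket path is an assumption; the packaged default in this era may be 127.0.0.1:9000 instead:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;   # or 127.0.0.1:9000
}
```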
[20:12] <Daviey> roaksoax: what out of band power method is it using?
[20:13] <ErtanERBEK> Hi Everyone
[20:13] <EMKO> oh
[20:13] <roaksoax> Daviey: the fence-agent is for a sentry switch CDU
[20:13] <ErtanERBEK> can I use CPU limit with Ubuntu Virtualization ?
[20:13] <roaksoax> Daviey: fence_cdu -a <IP or host> -n <id on power device> -l <user> -p <pass> -o <action: on|off|status>
[20:13] <Daviey> roaksoax: ahhh
[20:13] <Daviey> thanks
[20:14] <roaksoax> Daviey: so basically, a template has to be added into cobbler to do that, and that's it
[20:14] <roaksoax> Daviey: I'm gonna add that later on as I wanna submit the fence-agent to upstream
[20:15] <malac0da> well i give up for now
[20:15] <malac0da> thanks for the help
[20:15] <RoyK> ErtanERBEK: you can only limit the number of cores available to the guest
[20:16] <ErtanERBEK> RoyK, I know. But I need Core based MHZ limit
[20:16] <ErtanERBEK> can I use
[20:17] <ErtanERBEK> sometimes a virtual guest has a software problem and uses full CPU speed
[20:17] <loxs> is there some (easy) way to have MTA (postfix) users in a text file or sqlite database? I don't really like the idea to manage mysql only for 2 users
[20:19] <raubvogel> loxs: I have setup many postfix thingies without ever touching mysql
[20:20] <raubvogel> kerberos and ldap yes but no mysql
[20:20] <RoyK> ErtanERBEK: then go get an IBM POWER7 machine - those support partial cpu allocation (down to 1/10 cpu iirc)
[20:20] <raubvogel> loxs: virtual domain or system users?
[20:21] <RoyK> ErtanERBEK: but you probably won't be able to use anything like intel or amd to do that
[20:21] <loxs> raubvogel, well, I don't really like having the same password/username for both things
[20:21] <loxs> (security reasons)
[20:21] <raubvogel> loxs: then do virtual host approach and be done with it
[20:21] <ErtanERBEK> RoyK, I am use ome server both AMD or Intell
[20:21] <raubvogel> postfix+dovecot should do the trick together
[20:21] <ErtanERBEK> sorry
[20:22] <raubvogel> at least that is what we do ;)
[20:22] <RoyK> ErtanERBEK: then you won't be able to allocate anything less  than 1 CPU
[20:22] <ErtanERBEK> I use OEM servers, both AMD and Intel
[20:22] <loxs> raubvogel, but most (all?) guides I see involve mysql/postgres for doing virtual users
[20:22] <loxs> raubvogel, and yes, I installed dovecot-postfix (the package) but now can't see where to go for this thing
[20:23] <ErtanERBEK> RoyK, yes, you are true
[20:23] <raubvogel> loxs: did you tell postfix to use dovecot for auth?
[20:23] <raubvogel> (sasl, tls that thingie)
[20:23] <ErtanERBEK> RoyK, I can only separate Core based
[20:23] <RoyK> ErtanERBEK: that's a processor issue (or lack of functionality) - IIRC IBM is the only one supporting that
[20:23] <loxs> raubvogel, not yet, but I have a guide that gives like 10 commands to do that.
[20:23] <ErtanERBEK> I know that
[20:24] <ErtanERBEK> but if I use brand based Server then I can use VmWare or Windows HyperV
[20:25] <ErtanERBEK> yes I know Ubuntu Server is a powerful system and has many unique features
[20:26] <ErtanERBEK> but I think the virtualization system needs some development
[20:26] <raubvogel> loxs: this is kinda how they play together: http://wiki.dovecot.org/HowTo/PostfixAndDovecotSASL
[20:26] <geolr> Hi all, where to start: How to mount a usbdisk such that I can use it to serve files with samba? Thx!
[20:26] <ErtanERBEK> RoyK, Thank you for your help..
[20:26] <loxs> raubvogel, thanks, I'll go read that
[20:26] <luciano_> geolr: you can mount it and then add it to smb.conf
[20:26] <RoyK> ErtanERBEK: you can't do sub-cpu virtualisation in software without a rather bad penalty - essensially, you don't want that...
[20:26] <luciano_> as a normal share
[20:27] <raubvogel> loxs: this is the first thing I read when I setup postfix and virtual hosting : http://www.howtoforge.com/linux_postfix_virtual_hosting
[20:28] <raubvogel> It is a bit dated but should give you an idea of how to setup virtual hosting/domains
[20:28] <loxs> thanks
[20:29] <raubvogel> If you are going to have a file holding passwords, a la /etc/shadow, dovecot can read it
[20:29] <raubvogel> postfix could not care less; it only wants to know where to put mail, which domains it should answer for, and who are the valid users
[20:29] <ErtanERBEK> RoyK, maybe the Ubuntu Server team could add the VirtualBox system to Ubuntu Server
[20:29] <raubvogel> For authentication, you let dovecot do it for you
[20:30] <ErtanERBEK> then we can use CPU based limit
[20:30] <raubvogel> since it can authenticate against local file, pam, ldap, kerberos, and even dilled pickles
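A hedged dovecot sketch of that flat-file approach for a couple of virtual users (dovecot 2.0-style syntax; the file path, password scheme, uid/gid, and mail location are all assumptions, and 'alice' is a placeholder):

```
# /etc/dovecot/users — passwd-file format, one virtual user per line:
#   alice:{PLAIN}secret:5000:5000::/var/vmail/alice::

passdb {
  driver = passwd-file
  args = /etc/dovecot/users
}
userdb {
  driver = passwd-file
  args = /etc/dovecot/users
}
```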
[20:30] <RoyK> ErtanERBEK: you can use virtualbox on ubuntu
[20:30] <ErtanERBEK> But I can't with server system
[20:31] <RoyK> why not?
[20:31] <ErtanERBEK> do you know , how to install ubuntu server gNome GUI
[20:31] <RoyK> no need for that - all you need are the x libs
[20:31] <ErtanERBEK> realy
[20:31] <RoyK> really...
[20:31] <ErtanERBEK> this is good news
[20:31] <RoyK> it's like that for all X software
[20:32] <RoyK> and you can batch-start virtualbox VMs with VBoxManage
[20:32] <loxs> raubvogel, thank you. That really helped me.
[20:32] <ErtanERBEK> but I can't use Virtual Machine Manager with Virtualbox, right ?
[20:32] <RoyK> but then - I didn't know you could do cpu limitations with vbox...
[20:32] <RoyK> ErtanERBEK: yes, you can
[20:32] <RoyK> ErtanERBEK: just use remote X
[20:32] <raubvogel> loxs: cool. If you have questions, ask them away
[20:33] <RoyK> ErtanERBEK: is your client linux or windows?
[20:33] <ErtanERBEK> Linux
[20:33] <ErtanERBEK> I already use Ubuntu 11024 with Unity
[20:33] <RoyK> then just ssh into the machine and try to launch xeyes or something
[20:33] <RoyK> it should forward X11 over ssh automatically
[20:34] <ErtanERBEK> so, what we need
[20:34] <ErtanERBEK> Install Ubuntu Server
[20:34] <ErtanERBEK> Install X
[20:34] <RoyK> ErtanERBEK: ssh into the box, start xeyes, if it's not installed, install x11apps (IIRC)
[20:34] <RoyK> not X
[20:34] <RoyK> just the x libs
[20:34] <ErtanERBEK> Install openssh
[20:34] <RoyK> not X itself
[20:34] <ErtanERBEK> only libs right
[20:34] <RoyK> X itself is an X server made for showing stuff on a local display
[20:35] <RoyK> ErtanERBEK: just do what I just said - ssh into the box, start xeyes
[20:35] <RoyK> if it's not installed, install what it told you to install
[20:35] <ErtanERBEK> ok, I understand it
[20:35] <RoyK> then log out and in again and try once more
[20:36] <ErtanERBEK> you mean remote login right
[20:36] <ErtanERBEK> I understand you
[20:36] <ErtanERBEK> this is a really good thing
[20:36] <RoyK> I mean 'ssh user@ip-or-hostname-of-your-server
[20:37] <adam_g> zul: do we care if squid3's default out-of-the-box config doesn't match the functionality of the original squid(2) pkg?
[20:37] <adam_g> roaksoax: ^
[20:37] <matrix3000> ssh -x user@hostname ?
[20:37] <matrix3000> doesn't that get you x
[20:38] <zul> adam_g: i guess we can test it properly next week when we have the orchestra stuff up
[20:39] <adam_g> zul: well, orchestra ships with its own squid configuration (which needs to be updated for squid3, btw).. so that wont tell us much
[20:39] <roaksoax> adam_g: not really, afaik it should be compatible already, isn't it?
[20:39] <ErtanERBEK> matrix3000, and RoyK I understand thank you Dear Friend
[20:39] <Daviey> adam_g: do you know what config changes need to happen for squid -> squid3 transition?
[20:39] <adam_g> the main difference that ive noticed so far between the two, is squid3 doesn't populate /var/spool/squid with its directory structure out-of-the-box
[20:39] <roaksoax> adam_g: and squid3's one thing, while orchestra's squid config is other thing
[20:39] <Daviey> mvo might also like to know for squid-deb-proxy.
[20:39] <RoyK> matrix3000: from the manual      -x      Disables X11 forwarding.
[20:40] <adam_g> Daviey: started working on some bugs on squid3, gonna check orchestra after thats done
[20:40] <RoyK> matrix3000: -X will forcibly enable it - normally it should be enabled by default
[20:40] <geolr> luciano_: you think the mount will be the same upon next reboot?
[20:40] <zul> bbl
[20:40] <ErtanERBEK> -X      Enables X11 forwarding.  This can also be specified on a per-host basis in a configuration file.
[20:40] <luciano_> geolr: no.. only if you define in fstab
[20:40] <ErtanERBEK> upper case
[20:40] <luciano_> define it *
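A hedged /etc/fstab sketch for that (get the real UUID from `sudo blkid`; the ext4 type and mountpoint are assumptions, the UUID is a placeholder, and nofail keeps boot from hanging when the USB disk is unplugged):

```
UUID=0a1b2c3d-...   /srv/usbdisk   ext4   defaults,nofail   0   2
```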
[20:40] <adam_g> roaksoax: no, orchestra now depends on squid3 and its squid conf is not compatible, and orchestra's postinst needs fixing (was gonna get to these later today)
[20:41] <matrix3000> sorry meant X
[20:41] <matrix3000> wasn't paying attention to caps
[20:41] <RoyK> ErtanERBEK: in /etc/ssh/ssh_config or in $HOME/.ssh/config, set ForwardX11 yes
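RoyK's recipe end to end, sketched ('server' and 'user' are placeholders; x11-apps is the usual package name for xeyes):

```
# client side, one-off:
ssh -X user@server
# or permanently, in ~/.ssh/config:
#   Host server
#       ForwardX11 yes
# then on the server:
sudo apt-get install x11-apps
xeyes                # should draw on your local display
```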
[20:41] <KHendrik> ok this might be the single most stupid question of the day but what could cause a server installation to take 10 hours and make it in general react very slow? (acer h341 intel atom D410 raid 10 (4x1TB) 2 GB RAM)
[20:41] <ErtanERBEK> RoyK, I konow It
[20:42] <RoyK> KHendrik: faulty hardware? :)
[20:42] <RoyK> KHendrik: btw, I'd strongly suggest using dedicated data drives instead of mixing root and data - REALLY!
[20:42] <RoyK> KHendrik: if you only have room for 4 drives, use an USB pen for the root
[20:43] <KHendrik> why?
[20:43] <roaksoax> adam_g: ok, let me know how it goes
[20:43] <RoyK> KHendrik: it REALLY helps whenever the shit hits the fan, or the day you want to extend the raid set to have it on whole drives and not partitions, believe me on this...
[20:43] <KHendrik> if the root pendrive fails everything would fail, that's kind of what i don't want
[20:44] <RoyK> KHendrik: also, with 4 drives, you can choose RAID-6 if you're paranoid, so that _any_ two drives can die, and not just one on each side of the mirror
[20:44] <RoyK> or you can use raid-5 if you want the extra space...
[20:45] <RoyK> KHendrik: with the raid on whole drives, you can replace each drive with a bigger one, one by one, and once all are replaced, poing, you have more room to resize2fs the filesystem on md0
[20:46] <RoyK> KHendrik: to do that with partitions, you'll have to struggle through a few of Dante's hells unless you're a partition table expert :P
[20:47] <sixstring> How do I change runlevels on an /etc/init.d/ script? I'm used to chkconfig, but it's not on my system. (Ubuntu oneiric) Googly isn't understanding me today.
[20:48] <KHendrik> i only have 2 partitions per hdd: 2GB of swap, the rest is ext4. i map all the 2GB partitions to a raid 10 and the rest as a separate raid 10 with ext4
[20:48] <RoyK> yes
[20:49] <RoyK> meaning the md device resides on partitions
[20:49] <RoyK> which is exactly what I would recommend against
[20:50] <sixstring> Freaky. I can "apt-get install chkconfig", but it's way different from Centos5. I think I can grok the docs now, anyway.
[20:50] <RoyK> KHendrik: get a couple of usb plugs for the root, you probably won't read much from them anyway, and the only writes that go there are logs and perhaps swap (if you've designed the system badly or are abusing it for things it shouldn't do)
[20:51] <sixstring> Or...maybe using chkconfig on Ubuntu doesn't make any sense. Because "chkconfig -l jenkins" is returning "0:off  1:off  2:off  3:off  4:off  5:off  6:off". How the heck is it starting automatically, then? I am totally confused.
[20:53] <RoyK> sixstring: I guess chkconfig may not be compatible with upstart
[20:53] <RoyK> nope - doesn't seem so
[20:54] <RoyK> meaning: if the service is started by a SysV script, chkconfig will tell you; if it's started by upstart, it won't
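A quick way to tell which system owns a service, and to toggle it without chkconfig (the service name `jenkins` is taken from sixstring's example; whether it ships as an upstart job or a SysV script depends on the package):

```shell
# Upstart jobs live in /etc/init/ and show up in initctl:
initctl list | grep jenkins        # no output -> not managed by upstart

# SysV scripts live in /etc/init.d/ and are enabled via rc?.d symlinks:
ls -l /etc/rc2.d/ | grep jenkins   # an S* symlink means "started in runlevel 2"

# On Ubuntu the symlinks are managed with update-rc.d, not chkconfig:
sudo update-rc.d jenkins defaults  # create the start/stop links
sudo update-rc.d jenkins disable   # turn the S* links into K* links
```

This also explains sixstring's confusing `0:off ... 6:off` output: a service started by upstart has no rc?.d symlinks at all for chkconfig to find.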
[20:54] <sixstring> RoyK: I'm just now finding hints about that on Google. Thanks.
[20:54] <sixstring> http://slashzeroconf.wordpress.com/2008/02/16/chkconfig-for-ubuntu-sysv-rc-conf/ seems to be pointing the right direction.
[20:56] <sixstring> RoyK: If you know off the top of your head, you'll probably save me a quarter hour on Google: Which tool can I use on ubuntu to set runlevels?
[20:59] <RoyK> sixstring: generally, you don't change runlevel, but if you need to, frankly, I don't remember how...
[20:59] <sixstring> OK, thanks, RoyK.
[20:59] <guntbert> sixstring: runlevels have almost no meaning on ubuntu
[20:59]  * sixstring tosses a virtual donut to RoyK.
[20:59] <guntbert> !runlevel
[20:59] <guntbert> sixstring: use telinit
[21:00]  * sixstring scratches his head.
[21:00] <RoyK> default runlevel is set in /etc/init/rc-sysinit.conf
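Concretely, checking and changing the runlevel on an upstart-based Ubuntu (the `DEFAULT_RUNLEVEL` variable name is what stock 11.10 uses in that file, but verify on your own install):

```shell
runlevel           # prints e.g. "N 2" -- previous and current runlevel
sudo telinit 3     # switch runlevels (2 through 5 are identical on a stock Ubuntu)
grep RUNLEVEL /etc/init/rc-sysinit.conf   # where the default is defined
```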
[21:01] <sixstring> I dunno. This is going to take me a bit to understand. I'll dig into /etc/init/rc* to see if that makes sense. The only drawback to linux is that various flavors do things differently.
[21:01] <sixstring> But I guess multiple flavors is a good thing, right?
[21:01]  * sixstring tosses guntbert a virtual donut of a different flavor.
[21:02] <RoyK> sixstring: solaris 9 and solaris 10 are quite different in this aspect as well :P
[21:03] <RoyK> and good old sysv scripts don't take dependencies into account, which is somewhat non-optimal
[21:03] <sixstring> Thanks for the education, fellas. I'll have to punt for today. Maybe I'll figure it out tomorrow. :)
[21:14] <Skaag> is there a key I can press during boot time to load into single user mode?
[21:15] <guntbert> Skaag: from the grub menu, yes
[21:15] <Skaag> I just see the ubuntu logo, no grub menu
[21:15] <Skaag> how do I invoke the grub menu?
[21:16] <Skaag> if I press ESC, the logo vanishes and I see the normal boot text (services being started, etc)
[21:16] <guntbert> Skaag: press <shift> during boot until the menu appears
[21:16] <Skaag> awesome. thanks :)
[21:17] <guntbert> Skaag: you're welcome :-)
[21:21] <Skaag> I have another problem with a certain type of older SuperMicro servers I have, where the ubuntu prompt is showing up garbled
[21:22] <Skaag> the detected resolution is 648x483 or so
[21:22] <Skaag> those numbers seem suspicious to me. I would imagine it should be 640x480, but it's not. Is it possible Ubuntu is trying to set a strange video mode that the remote KVM chip can not deal with?
[21:23] <Skaag> And if that is the case, how do I tell ubuntu not to mess with the graphics card?
[21:25] <args[0]> how can I check the temperature of my CPU?
[21:25] <RoyK> args[0]: lm-sensors?
[21:25] <args[0]> thanks RoyK I'll look into that
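For the record, the package RoyK means is `lm-sensors` (the binary it installs is called `sensors`). The usual first-run sequence is:

```shell
sudo apt-get install lm-sensors   # package name has a dash
sudo sensors-detect               # probe for sensor chips; answer the prompts
sensors                           # print CPU/board temperatures, fans, voltages
```

On some hardware `sensors-detect` needs to load kernel modules before `sensors` reports anything; it offers to add them to `/etc/modules` at the end of the probe.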
[21:26] <hallyn> zul, how did the qa regression testing go?
[21:27] <zul> hallyn: haven't gotten to it yet, battling essex; i'll look at it tonight
[21:27] <hallyn> ok, thx
[21:35] <ErtanERBEK> RoyK,
[21:35] <ErtanERBEK> I am trying now
[21:35] <ErtanERBEK> the system is downloading 260 MB of libs :D
[21:35] <ErtanERBEK> with Virtualbox-4.1 ( Oracle Version )
[21:36] <RoyK> ErtanERBEK: you'll need some libs, yes
[21:37] <ErtanERBEK> that's not important, as long as it doesn't change my system's stability
[21:39] <RoyK> ErtanERBEK: it won't affect stability
[22:01] <loxs> hmm, in what package is the   dovecotpw command?
[22:16] <ErtanERBEK> RoyK, thank you for the good information
[22:16] <ErtanERBEK> I installed VirtualBox
[22:16] <ErtanERBEK> and now
[22:16] <ErtanERBEK> it's working properly
[22:16] <ErtanERBEK> I can use the CPU MHz limit
[22:16] <ErtanERBEK> thank you
[22:39] <adam_g> zul: i just pushed some more changes to the branch linked on Bug #891445
[22:39] <adam_g> zul: should be good to upload if you want to review
[22:57] <jdstrand> zul: feedparser review completed
[22:57] <jdstrand> zul: it looks good, but you'll want to enable the test suite in the build. see my comment in the bug
[22:57] <jdstrand> zul: bug #879520
[23:08] <zul> jdstrand: cool i will do so
[23:08] <zul> adam_g: ill take a look
[23:28] <jdstrand> zul: squid3 promoted, squid demoted, seeds adjusted and bugs marked Fix Released
[23:29] <jdstrand> zul: fyi, squid3 has a bunch of these in the build log:
[23:29] <jdstrand> g++: warning: switch '-fhuge-objects' is no longer supported
[23:29] <jdstrand> seems innocuous, but I thought you might want to know