[00:09] <lil_cain> is there any way to see what packages depend on a given package?
[00:11] <ajmitch> apt-cache rdepends package
[00:13] <lil_cain> ah, cool, thanks
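A quick sketch of the command ajmitch suggests; the package name `openssl` here is only an example:

```shell
# List packages that depend on a given package (reverse dependencies)
apt-cache rdepends openssl

# --installed narrows the listing to packages actually on the system
apt-cache rdepends --installed openssl
```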
[01:25] <FFF666> does anyone read what I write?
[01:27] <FFF666> Can I ask something about UEC?
[01:38] <FFF666> hi, I have installed and configured UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall.
[01:38] <FFF666> I can run an image and log in via ssh, but only from the PC where I have the cloud controller.
[01:38] <FFF666> The problem is that I can't ping or ssh into the image from another PC on the LAN.
[02:08] <ScottK> !weekend | FFF666
[02:14] <FFF666> uu, ok I'll ask tomorrow
[04:44] <amstan> hey guys, have you ever experienced a system freeze during high-speed file transfers?
[04:45] <amstan> on either gigabit ethernet or local drive-to-drive copies
[06:35] <SpamapS> amstan: I have, but only with unreliable hardware. ;)
[08:27] <netritious> Hi, I extended a volume group spanning two soft raid devices (/dev/md0 and /dev/md1).  I want to reduce the volume group by removing /dev/md1 and tried using 'vgreduce raid /dev/md1' which returns 'Physical volume "/dev/md1" is still in use.' I'm stuck and can't seem to figure it out. I'm using Hardy Server i386. http://pastebin.com/hkQ4D68A
[08:38] <ivoks> netritious: if you want to reduce the size, you have to unmount the partition
[08:38] <ivoks> reducing the size of a filesystem isn't as simple as extending it
[08:40] <netritious> ivoks: Thank you. I knew I was missing something. Will give that a try.
[08:50] <soren> ivoks: He's wanting to remove a pv from a vg, not reduce the size of a filesystem, I think.
[08:50] <soren> netritious: You probably want to look at pvmove. It lets you move the used PE's from one PV to another.
[08:51] <soren> netritious: So, in your case, you'd run "pvmove /dev/md1".
[08:51] <soren> netritious: When that's done, you should be able to run your vgreduce command.
[08:56] <netritious> soren, ivoks: My system still is not cooperating with me. http://pastebin.com/iHjqYPhB Where to go from here? Maybe detach the drives physically from the system and then try again?
[08:58] <ivoks> back :)
[08:58] <netritious> wb ivoks :)
[08:58] <ivoks> soren: well, he should reduce fs before reducing lv/vg, right? :)
[08:59] <ivoks> netritious: you can't unmount partitions you are using :)
[08:59] <ivoks> pvmove is good suggestion
[09:00] <ivoks> netritious: let's start from the beginning
[09:00] <ivoks> netritious: you have volume group that uses /dev/md0 and /dev/md1, right?
[09:00] <netritious> ivoks: yes, and ty for your patience :)
[09:01] <ivoks> phone
[09:01] <netritious> np..I'm here all night
[09:02] <soren> ivoks: Well, maybe. Depends on whether they'd all fit on the single md device.
[09:03] <netritious> Would more detail help?
[09:04] <soren> netritious: Yikes, not pvremove. pvmove.
[09:05] <netritious> soren: ok, I think I have some explaining to do. :)
[09:07] <netritious> I have five drives; 1x160GB IDE that is LG 'system'. Then there are 4x1TB drives. sda and sdb make up /dev/md0 which I added a few days ago. sdc and sdd make up /dev/md1 which I added about an hour ago.
[09:08] <ivoks> ok
[09:08] <netritious> I initially created LG raid on /dev/md0 and extended it to /dev/md1 (just an hour or so ago)
[09:09] <netritious> I performed 'umount -a' which has unmounted LG 'raid' which is on /dev/md0 and /dev/md1
[09:09] <ivoks> and now you want to take /dev/md1 out of LG?
[09:10] <netritious> out of LG 'raid' yes
[09:10] <ivoks> then what soren said
[09:10] <ivoks> pvmove
[09:10] <ivoks> pvmove -v /dev/md1
[09:11] <netritious> k..will try now, and again thanks for your patience...I am really new to mdadm and lvm.
[09:12] <ivoks> vgreduce raid /dev/md1
[09:13] <ivoks> pvremove /dev/md1
[09:13] <ivoks> that should be it
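Putting soren's and ivoks' steps together, the whole evacuation sequence looks roughly like this; the VG name `raid` and PV `/dev/md1` are taken from this case, the commands need root, and pvmove only succeeds if the remaining PVs have enough free extents:

```shell
# Move all allocated physical extents off /dev/md1 onto the other PVs in the VG
pvmove -v /dev/md1

# Remove the now-empty PV from the volume group 'raid'
vgreduce raid /dev/md1

# Wipe the LVM label so the device is no longer a PV at all
pvremove /dev/md1
```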
[09:13] <soren> What's an LG?
[09:13] <ivoks> logical group, probably :)
[09:14] <soren> And what's that really?
[09:14] <soren> An LV or a VG?
[09:14] <ivoks> or south korean manufacturer :)
[09:14] <ivoks> VG
[09:14] <netritious> soren: you're right..I should have used VG
[09:14] <ivoks> i'm not sure you'll be able to move everything
[09:14] <ivoks>   PV Name               /dev/md1
[09:15] <ivoks>   Allocatable           yes (but full)
[09:15] <soren> I'm very confused by all of this:
[09:16] <soren> netritious: You have 1 160 GB drive and 4 1 TB drives, and then you speak of sda, sdb, sdc, and sdd. I'm not sure how to map the drive sizes to the device names.
[09:16] <soren> ...especially since there are 5 drives, but only 4 device names mentioned.
[09:16] <ivoks> soren: he has two volume groups
[09:16] <ivoks> soren: 160GB is system, 1TB are /dev/md0 and /dev/md1 in raid VG
[09:16] <soren> Also, if an "LG" really is a VG, it doesn't really make sense that a "umount -a" unmounts an LG that is called raid.
[09:17] <ivoks> soren: http://pastebin.com/hkQ4D68A
[09:18]  * soren doesn't get it at all :)
[09:18] <ivoks> 'pvdisplay' is relevant part :)
[09:19] <soren> Ah, there's an sde as well.
[09:19] <soren> ...which is the 160 GB drive.
[09:19] <ivoks> soren: in system VG
[09:19] <netritious> soren: correct
[09:19] <ivoks> so, ignore it :)
[09:19] <netritious> exactly :)
[09:19] <soren> Oh, and "raid" is the name of the volume group?
[09:19] <netritious> Apologies for the confusion
[09:19] <netritious> soren: yes
[09:19] <soren> You don't unmount volume groups.
[09:20] <soren> So this makes no sense: 08:09 < netritious> I performed 'umount -a' which has unmounted LG 'raid' which is on /dev/md0 and /dev/md1
[09:20] <ivoks> of course, it's not mounted at all
[09:20] <soren> ivoks: Precisely.
[09:20] <ivoks> soren: it probably unmounted LVs on raid VG
[09:21] <soren> ivoks: You don't mount LV's either :)
[09:21] <ivoks> :p
[09:21] <soren> You mount filesystems on LV's.... and the VG doesn't care either way.
[09:21] <soren> Anyways...
[09:21] <netritious> Ah, more confusion from me again..I unmounted /mnt/storage, which is where /dev/mapper/raid-storage is mounted
[09:21] <soren> netritious: I stand by my instructions, nevertheless. "pvmove /dev/md1" (add a -v for more info, if you please).. then do "vgreduce" like you did.
[09:21] <ivoks> there we go :)
[09:22] <soren> ...and then pvremove if you're feeling bored.
[09:26] <netritious> soren: http://pastebin.com/jGgFy9ce
[09:29] <netritious> soren: 'system' is a volume group and 'raid' is a volume group, and nothing is mounted on a logical volume in volume group raid. So I'm still missing something.
[09:30] <netritious> ivoks: You had just come and gone... http://pastebin.com/jGgFy9ce
[09:31] <Jeeves_> Anyone here with clue where to report broken jigdo images on the releases mirrors?
[09:31] <soren> Jeeves_: #ubuntu-release, probably.
[09:32] <soren> netritious: Could you try "pvdisplay -m /dev/md1"?
[09:33] <ivoks> i guess he has a filesystem of 2TB on /dev/mapper/raid-storage
[09:33] <ivoks> netritious: when you added /dev/md1, did you extend logical volume and filesystem to 2TB?
[09:33] <Jeeves_> soren: Thanks
[09:33] <netritious> soren: http://pastebin.com/1eHhQnNv
[09:34] <ivoks> backups?
[09:35] <soren> ivoks: Do you still have the first pastebin thing open? It's expired so I can't go and see.
[09:35] <ivoks> soren: nope :/
[09:35] <ivoks> oh, wait...
[09:36] <ivoks> soren: http://pastebin.com/eYpAUkSz
[09:36] <soren> ivoks: Great, thanks.
[09:36] <soren> Oh, that just had pvdisplay :(
[09:37] <soren> netritious: Can't help you. You've spent all the space on md1 on an lv called "backups".
[09:37] <netritious> I am trying to look in my bash_history to find all the commands I've run and in what order
[09:38] <ivoks> if you don't need backup, delete that LV :)
[09:38] <soren> That's not really important, is it?
[09:38] <netritious> no, backups has nothing in/on it...was what I intended to remove from the volume group 'raid'
[09:38] <ivoks> netritious: you can't remove a disk which is filled with data without losing that data
[09:39] <netritious> There is nothing in 'backups'
[09:39] <ivoks> netritious: then delete that LV
[09:39] <soren> netritious: Oh, hang on.
[09:39] <soren> netritious: You just want to remove that lv? Nothing else?
[09:39] <soren> netritious: You don't actually want to remove md1 from the vg?
[09:39] <netritious> yes, just that LV, not 'storage'
[09:40] <netritious> wait, let me be more specific..
[09:40] <soren> Yes, please :)
[09:40] <ivoks> but that's not what you said before :)
[09:40] <Wilco_> Hello everyone
[09:41] <netritious> ivoks: Apologies for the confusion..still a n00b with lvm...
[09:41] <Wilco_> Guys we are going to deploy around 100 Ubuntu Server in virtualization - which ubuntu server would you go for? 9.10 or 8.04.4 LTS?
[09:41] <ivoks> it's easy: partition -> PV -> VG -> LV -> FS
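ivoks' chain can be sketched end to end; the device names, VG/LV names, and sizes here are only illustrative:

```shell
pvcreate /dev/md0                    # partition/array -> PV
vgcreate raid /dev/md0               # PV -> VG named 'raid'
lvcreate -L 500G -n storage raid     # VG -> LV
mkfs.ext3 /dev/raid/storage          # LV -> FS
mount /dev/raid/storage /mnt/storage # and finally mount the filesystem
```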
[09:41] <Wilco_> i need to understand how stable is 9.10 vs 8.04.4
[09:42] <soren> netritious: It's much easier if you explain what you want to achieve rather than what you think you need to do to achieve it.
[09:42] <soren> Wilco_: Why?
[09:42] <ivoks> Wilco_: real question is what do you need servers for
[09:42] <ivoks> coffee
[09:42] <soren> Wilco_: There's a new LTS coming out on Thursday. You may want to wait for that.
[09:42] <Wilco_> we are building some kind of service which mostly run tomcats
[09:42] <soren> Please accept my condolences.
[09:43] <Wilco_> so im going to have like 100 VMs running tomcats
[09:43] <Wilco_> the question which ubuntu to use
[09:43] <ivoks> 10.04 :)
[09:43] <Wilco_> i need something super stable
[09:43] <ivoks> oh, 8.04 then
[09:43] <netritious> soren: k, I want to remove /dev/md1 from volume group 'raid' and create another volume group just for /dev/md1
[09:43] <ivoks> :)
[09:43] <ivoks> but it has old tomcat
[09:43] <soren> netritious: Why?
[09:43] <Wilco_> we are going to use the tomcat from tomcat web site
[09:43] <ivoks> 10.04 should be stable
[09:43] <Wilco_> and the java vm from sun
[09:44] <ivoks> then use 8.04
[09:44] <Wilco_> ok
[09:44] <soren> ivoks: Hm? why?
[09:45] <netritious> soren: I don't want to span the logical volume across /dev/md0 and /dev/md1
[09:45] <ivoks> soren: cause it's 'super stable' :)
[09:45] <Wilco_> i noticed that 9.10 installs ext4 by default
[09:45] <Wilco_> is ext4 stable enough?
[09:45] <ivoks> you can change that to any FS you want
[09:45] <Wilco_> these will be production servers so i really need something that won't die on me
[09:45] <Wilco_> yeah
[09:46] <Wilco_> i was just surprised when i saw it default to ext4
[09:46] <soren> netritious: Why?
[09:46] <ivoks> the issue i have with 9.10 on servers is upstart
[09:46] <netritious> soren: Why not?
[09:46] <Wilco_> explain ?
[09:46] <Wilco_> i use VMware to run the ubuntus and they load up pretty fast
[09:46] <netritious> soren: is that a bad idea?
[09:46] <Wilco_> even vmware-tools works fine
[09:46] <soren> netritious: Maybe. Hence the "why"?
[09:47] <Wilco_> soren: would it be wise to use the latest LTS
[09:47] <Wilco_> coming on thu ?
[09:47] <soren> Wilco_: Usually, yes.
[09:47] <Wilco_> hmm
[09:47] <Wilco_> dilemmas dilemmas
[09:47] <soren> Wilco_: You can install it now and start testing.
[09:47] <soren> Wilco_: No need to wait.
[09:47] <netritious> soren: It just makes sense to me to only have to deal with one volume group per raid device.
[09:48] <Wilco_> soren - our machines are very simple. a tomcat, java jdk, apache, some php and thats it
[09:48] <soren> netritious: Ok. Well, if that's what you want, that's fine. I just don't see the point.
[09:48] <ivoks> Wilco_: on 9.10, nfs is started before network, for example
[09:48] <Wilco_> ivoks: thats lame - dont you get a timeout here?
[09:48] <ivoks> it's not the problem for services
[09:48] <soren> 9.10 had a lot of problems like that. They should all have been addressed in 10.04.
[09:49] <ivoks> problem is when your /home is NFS :)
[09:49] <Wilco_> hehe
[09:49] <Wilco_> and in 8.04 its ok ?
[09:49] <ivoks> then boot fails
[09:49] <ivoks> 8.04 and 10.04 are ok
[09:49] <Wilco_> great.
[09:49] <Wilco_> ok - another issue that im puzzled with is MYSQL Enterprise 5.1 - (not the one that comes off the ubuntu) - do yo uknow if its runs in 64bit on ubuntu 8.04?
[09:49] <ivoks> i would suggest 10.04 then
[09:50] <ivoks> just make sure your app works with php 5.3
[09:50] <Wilco_> they have a weird matrix
[09:50] <Wilco_> of supported OS
[09:50] <Wilco_> but they are vague on ubuntu
[09:50] <ivoks> don't know
[09:50] <ivoks> i never understood people that buy mysql... :/
[09:50] <netritious> soren: To be completely honest I probably don't need LVM at all...I'm really not doing anything with it unless I span the two RAID1 devices, correct?
[09:50] <soren> netritious: The awesomeness of lvm is not just that you can dynamically resize LV's, but also that you can have a VG that spans multiple PV's. Unless you're expecting to move one of the raid sets to another machine, I just don't see the point in splitting them into two VG's.
[09:51] <ivoks> have one VG or don't have VG at all
[09:51] <ivoks> (in this case)
[09:51] <netritious> soren: I'm trying to account for that exactly...in the event this old hardware gives out on me.
[09:51] <Wilco_> ivoks: we need to for support purposes
[09:51] <soren> netritious: Then you would take one of the raid sets and put in one box and the other raid set in another?
[09:51] <netritious> the drives are new, but the motherboard/ram/cpus are 8+ years old
[09:52] <ivoks> Wilco_: still... :)
[09:52] <netritious> soren: can't until I get more hardware
[09:52] <soren> netritious: I'm asking..
[09:53] <soren> netritious: If your server exploded, would you move one of the raid sets to one machine and the other raid set to another?
[09:53] <soren> netritious: Or would you move them both to the same box?
[09:53] <netritious> soren: very possibly
[09:53] <netritious> the first
[09:53] <soren> netritious: Ok. Then just split them. You're just losing a lot of flexibility that way, but ok.
[09:53] <netritious> soren: I just want the option
[09:55] <netritious> soren: splitting them makes it less flexible in the event I have to move to two different machines? Just trying to make sure I understand.
[09:56] <netritious> soren: or are you saying I have less flexibility with LVM in general by not spanning across the raid devices?
[09:56] <soren> netritious: No, splitting them means that you now have to worry about where your data is kept.
[09:56] <soren> netritious: One of the great things about lvm is that you just have a great big blob of storage and don't need to worry about where the data is.
[09:57] <soren> You just say "hey, I need an LV that's 400 GB for my databases" and it gives it to you.
[09:57] <soren> It may span multiple PV's or not. You don't need to know.
[09:57] <soren> You just know that you get a blob of storage you can use.
[09:57] <soren> If you want to extend it later on, you just do.
[09:58] <soren> ...and that may expand it into another pv. You don't need to know.
[09:58] <soren> If you split them, you always need to consider whether you're good on space on this VG or if you'd rather put it on the other VG or whatnot.
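As a sketch of what "you just do" means in practice, assuming an ext3 filesystem on the LV (the resize step differs per filesystem, and the LV path is from this case):

```shell
# Grow the LV by 100G; LVM picks the extents, possibly from another PV
lvextend -L +100G /dev/raid/storage

# Then grow the filesystem to fill the LV (ext3 supports online growth)
resize2fs /dev/raid/storage
```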
[09:59] <netritious> soren: that makes sense
[10:00] <netritious> soren: that was kind of my point about using LVM in the first place...it's probably not needed for what I think I want out of it, but on the other hand, if I ever need to move each volume group to a different server and then span across more physical or RAID devices, it would be optimal to split it now..I'm guessing at that of course.
[10:01] <netritious> 'storage' and 'backup' have distinct purposes, but atm 'storage' is acting as both, which is why I wanted to add more drives in the first place
[10:06] <Wilco_> guys any of you use landscape to manage patch management on multiple ubuntu servers
[10:06] <Wilco_> we are considering getting it so we have a baseline of patches across all servers
[10:07] <RoyK> Wilco_: we have it on test, considering it
[10:07] <RoyK> it seems to have its limitations, but looks good nevertheless
[10:07] <Wilco_> i only need it to patch the servers :)
[10:07] <Wilco_> have you tried it ?
[10:08] <Jeeves_> We've tried it
[10:08] <RoyK> Wilco_: for just patching servers, landscape might be overkill :)
[10:08] <Jeeves_> Couldn't really find an advantage over our existing Nagios, Munin, and 'apt-get upgrade' strategy.
[10:09] <Wilco_> Jeeves_:  apt-get upgrade is an issue
[10:09] <Jeeves_> Wilco_: How?
[10:09] <RoyK> there's stuff like cron-apt
[10:09] <Wilco_> because in different times you may not have the same patch level across servers
[10:09] <Wilco_> i must have all my servers using the same updates
[10:09] <lil_cain> just update them all at the same time.
[10:09] <stgraber> unattended-upgrade configured to update everything + a nagios check to make sure there isn't any package "blocked" by dpkg usually works perfectly fine
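stgraber's baseline can be approximated with something like this; the package name is real, and the `APT::Periodic` keys are the standard ones, though the exact defaults vary per release:

```shell
apt-get install unattended-upgrades

# Run the package-list refresh and the unattended upgrader daily
cat > /etc/apt/apt.conf.d/10periodic <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```

Which origins are auto-upgraded (e.g. security only) is controlled in /etc/apt/apt.conf.d/50unattended-upgrades.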
[10:09] <Jeeves_> Wilco_: Then you should always apt-get upgrade at the same time
[10:10] <RoyK> Wilco_: landscape is convenient for that, yes, especially with a large number of servers
[10:10] <Wilco_> but on production critical servers you sometimes cant
[10:10] <Wilco_> i gonna have like 200 servers in a year
[10:10] <Wilco_> i need ORDER :)
[10:10]  * Jeeves_ gives Wilco_ cfengine and autoapt.pl
[10:10] <Wilco_> Jeeves_: what are these?
[10:11] <Wilco_> scriptz?
[10:11] <RoyK> Wilco_: landscape costs $150 per server (though less for non-commercial and less with many servers). with 200 servers, I'd say it might be worth it
[10:11] <Wilco_> yeah they're giving me a superb price
[10:11]  * stgraber is managing something between 600 and 1k Ubuntu servers (mostly Ubuntu containers with vserver/openvz + some vmware/kvm VMs + some physical hosts)
[10:11] <Jeeves_> Wilco_: cfengine is a scripting language which allows you to configure servers out of svn
[10:11] <Wilco_> price is not the issue
[10:11] <lil_cain> I'd just use puppet, or cluster-ssh.
[10:11] <Jeeves_> autoapt.pl installs software when you want it, and configures it unattended
[10:11] <RoyK> Wilco_: then go for it - less hassle than using cfengine and the likes
[10:11] <Wilco_> ok
[10:11] <Wilco_> ic
[10:11] <Wilco_> thats great support here guys :)
[10:12] <Wilco_> stgraber:  how do you patch your 1000 servers?
[10:12] <Wilco_> does anyone here have some script that hardens an ubuntu server?
[10:12] <Wilco_> or at least checks that its hardened?
[10:12] <Jeeves_> Nope
[10:12] <RoyK> 'harden'?
[10:12] <Wilco_> security
[10:12] <Wilco_> scans the os
[10:13] <Wilco_> turns off whatever needs turning off
[10:13] <stgraber> Wilco_: bcfg2 (similar to puppet) which makes sure package lists + files match what we have in our repo every hour + unattended-upgrade at a fixed time every night for the upgrades + munin/nagios to make sure everything is consistent (including package versions)
[10:13] <RoyK> it should be pretty decent out of the box
[10:13] <Jeeves_> Wilco_: It doesn't start anything that you didn't install :)
[10:13] <RoyK> Wilco_: so long as you don't install unneeded stuff, it should be quite hardened by default
[10:13] <ivoks> Wilco_: and you buy support, pih... :)
[10:13] <ewook> Wilco_: there's tons of ways to do that.
[10:13] <Wilco_> i just install openSSH server
[10:13] <RoyK> ufw enable might help too
[10:14] <RoyK> Wilco_: and fail2ban
[10:14] <Wilco_> ic
[10:14] <ewook> Wilco_: proper iptables setup, rkhunter and fail2ban and patching is a good start.
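A minimal sketch of the baseline ewook and RoyK describe; all of these need root, and the allowed services are just an example:

```shell
apt-get install fail2ban rkhunter     # brute-force banning + rootkit checks
ufw allow ssh                         # open only what you actually need...
ufw enable                            # ...then turn the firewall on
rkhunter --update && rkhunter --check # refresh data files, run a scan
```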
[10:14] <Wilco_> ok
[10:14] <ewook> and I see some already said that :P
[10:14] <Jeeves_> Did rkhunter stop whining about ssh already?
[10:14] <Wilco_> i wont be using iptables - gonna use vShield Zones
[10:14] <ewook> Jeeves_: no :)
[10:15] <Wilco_> i'll check rkhunter and fail2ban
[10:15] <Jeeves_> ewook: Then it still sucks :)
[10:15] <ewook> Jeeves_: or, depends what the whining is about :P
[10:15] <Wilco_> stgraber: very intresting
[10:15] <ewook> Jeeves_: you mean the blacklisted sums?
[10:15] <Jeeves_> ewook: No. The root-login stuff.
[10:15] <Wilco_> stgraber: what happens when a new kernel is released - do you update it too? don't you have to restart the server in this scenario?
[10:16] <ewook> Jeeves_: I've never seen that one on a ubuntu.. or so I think.
[10:16] <Wilco_> stgraber: you said "our repo" - you have your own repository ? and if so why ?
[10:16] <ewook> Wilco_: yes you do. but that is planned maintenance to take down a server :)
[10:16] <lil_cain> Jeeves_: What root login stuff?
[10:16] <Jeeves_> lil_cain: 'You are allowing a root login! That is very dangerous!'
[10:16] <lil_cain> Wilco_: If you're managing >3/4 machines that run anything non-standard, it's worth having your own repo.
[10:17] <ewook> Jeeves_: there's no root enabled in my configs
[10:17] <Wilco_> ok
[10:17] <Wilco_> i dont.
[10:17] <ewook> Jeeves_: so you might wanna check your setup :P
[10:17] <Wilco_> i just use ubuntu main
[10:17] <stgraber> Wilco_: as I said, most of these are containers, containers use the host kernel and don't have their own. For physical and VMs, we upgrade the kernel and we have hooks in /etc/kernel.d/ to send e-mail in cases where there's a custom module that need manual building + maintenance window. For others, it really depends on the support contract we have with the customer regarding security, in some cases we reboot, in some cases we wait for anot
[10:17] <Wilco_> very simple
[10:17] <lil_cain> Jeeves_: Ah. I always thought that was a stupid default on the part of Ubuntu.
[10:17] <Wilco_> stgraber: nice
[10:18] <Jeeves_> ewook: We allow root-logins by default. Because it makes stuff much easier for us.
[10:18] <ewook> Wilco_: having your own repo is the same thing as having a WSUS for windows-machines.
[10:18] <stgraber> Wilco_: we use the official archive + a ppa + an internal repository for non-distributable packages. "repo" in my previous sentence was referring to a configuration repository describing all "profiles" (package lists, configuration files lists, ...) that's in our case a bzr branch.
[10:18] <Wilco_> am i right that the only time you need to reboot the server is when the kernel/drivers update?
[10:18] <Jeeves_> Root still doesn't have a password, only pubkeys
[10:18] <ewook> Jeeves_: well then, it's not rkhunter's fault :P
[10:18] <lil_cain> Wilco_: when libc6 does as well.
[10:18] <Wilco_> ok
[10:18] <Jeeves_> ewook: It is, when I tell it not to whine about it and it still does :)
[10:19] <ewook> Jeeves_: oh. gotcha. :P
[10:20] <ewook> Jeeves_: I'd be happy if there was a slipstreamed and 'secured' way for it to rehash itself after an apt-get upgrade
[10:20] <Wilco_> guys maybe its not relevant - a friend of mine asked me to help after he complained that his server was hacked. i was able to confirm that his server was hacked (root) - i could see the entire history of what the hacker did, e.g. in bash_history
[10:20] <Wilco_> however we noticed that the server comes up with promiscuous mode on
[10:20] <Wilco_> at boot time
[10:20] <Wilco_> any ideas how to turn that off :) ?
[10:20] <Jeeves_> Disconnect the server, and use the console?
[10:21] <JanC> Wilco_: wipe & reinstall the whole thing
[10:21] <Wilco_> yeah thats what i told him
[10:21] <Wilco_> but it was interesting
[10:21] <ewook> I love the word promiscuous, it's such a dirty word :P
[10:21] <Wilco_> to understand
[10:21] <Wilco_> the moment redhat starts it says entering promiscuous mode on eth0
[10:22] <Wilco_> i even tried forcing the network configuration to turn off promiscuous mode - no worky
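For the promiscuous-mode question, checking and clearing the flag looks like this (eth0 as in Wilco_'s case); note that if something re-enables it at boot, a sniffer is likely still installed, which is why wipe-and-reinstall is the real fix:

```shell
# Check whether the interface is in promiscuous mode
ip link show eth0 | grep PROMISC

# Clear the flag (a rootkit's sniffer can simply set it again)
ip link set eth0 promisc off
```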
[10:22] <Jeeves_> Wilco_: So how many passwords does the hacker have by now?
[10:22] <Wilco_> millions :)
[10:22] <Jeeves_> And how many other boxes are rooted by now?
[10:22] <Wilco_> i guess all of them
[10:22] <Wilco_> i told him to wipe it out
[10:22] <Wilco_> he is really fucked
[10:23] <Jeeves_> Wilco_: It's a good thing that he's not a customer of ours. His box would be offline by now
[10:23] <Wilco_> heehe
[10:23] <Jeeves_> or is he!?
[10:23] <Jeeves_> Is he in .nl?! :)
[10:23] <Wilco_> thats what happens when people put a server on the internet without a firewall
[10:23] <Jeeves_> Wilco_: Bullshit
[10:23] <Wilco_> his machine was open for port 22
[10:23] <Wilco_> they scanned it
[10:23] <Jeeves_> My server has been online without a firewall for years
[10:24] <Wilco_> with SSH open?
[10:24] <Jeeves_> yes
[10:24] <Wilco_> they scanned
[10:24] <Wilco_> brute forced attack it
[10:24] <Wilco_> found root password
[10:24] <JanC> that's when inexperienced people put a server on the internet without firewall  ;)
[10:24] <Wilco_> boom
[10:24] <Wilco_> server hacked
[10:24] <RoyK> Wilco_: use fail2ban
[10:24] <Jeeves_> JanC: Indeed :)
[10:24] <RoyK> and secure passwords
[10:24] <Jeeves_> Wilco_: Use sane passwords
[10:24] <Wilco_> yeah
[10:24] <Wilco_> hehehe
[10:24] <Wilco_> sure
[10:24] <Jeeves_> 'koekje123'
[10:24] <Jeeves_> 'toor'
[10:24] <Wilco_> you know how many admins use
[10:24] <Wilco_> 123456!
[10:24] <Wilco_> :)
[10:24] <Jeeves_> 'thisisapassword'
[10:24] <ewook> Jeeves_: I'd call you brave, if you didn't patch at least the services running on it..
[10:24] <Wilco_> or q2w3e4
[10:24] <Wilco_> :)
[10:25] <RoyK> 'secret'
[10:25] <JanC> I don't use passwords to login
[10:25] <Jeeves_> ewook: Obviously, I patch my servers
[10:25] <Wilco_> ok
[10:25] <RoyK> Wilco_: and don't allow root login over ssh
[10:25] <JanC> don't allow them
[10:25] <Wilco_> hehehe
[10:25] <Wilco_> RoyK: agree
[10:25] <Jeeves_> And you shouldn't call me brave, but stupid :)
[10:25] <Wilco_> when it comes to security im a real bitch
[10:25] <RoyK> Wilco_: but do use fail2ban
[10:25] <Wilco_> i'll check fail2ban
[10:25] <ewook> Jeeves_: I didn't wanna say out loud myself ;)
[10:25] <Wilco_> ok does anyone here know of a wrapper for tomcat?
[10:26] <Wilco_> a good one that loads it up as a service properly
[10:26] <Jeeves_> ewook: :)
[10:26] <Jeeves_> Wilco_: You mean 'apt-get install tomcat' ?
[10:26] <Wilco_> no
[10:26] <Jeeves_> Ow, that does start tomcat as a service :)
[10:26] <Wilco_> i use tomcat off apache tomcat org
[10:26] <Wilco_> i use their latest and greatest
[10:26] <Wilco_> i need a wrapper for it to load up as a service
[10:27] <Jeeves_> s/and greatest//
[10:27] <Wilco_> i.e service tomcat stop/start
[10:27] <Jeeves_> I think it's included
[10:27] <Wilco_> nah
[10:27] <Jeeves_> Otherwise, just use the wrapper out of the ubuntu package
[10:27] <Wilco_> tomcat has a start.sh script
[10:27] <Wilco_> but its lame
[10:27] <Wilco_> hmm
[10:27] <Wilco_> thats an idea!
[10:28] <ivoks> Wilco_: 'super stable' is the opposite of 'latest and greatest'
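A bare-bones init wrapper for a tarball Tomcat, of the sort Wilco_ is asking for, might look like this; the CATALINA_HOME path and the 'tomcat' user are assumptions, and the script shipped in Ubuntu's tomcat6 package is more robust:

```shell
#!/bin/sh
# /etc/init.d/tomcat -- minimal wrapper around Tomcat's own scripts
CATALINA_HOME=/opt/apache-tomcat   # adjust to your install path
TOMCAT_USER=tomcat                 # unprivileged user Tomcat runs as

case "$1" in
  start)   su -s /bin/sh -c "$CATALINA_HOME/bin/startup.sh"  "$TOMCAT_USER" ;;
  stop)    su -s /bin/sh -c "$CATALINA_HOME/bin/shutdown.sh" "$TOMCAT_USER" ;;
  restart) "$0" stop; sleep 5; "$0" start ;;
  *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
```

Installed with `update-rc.d tomcat defaults`, this gives the `service tomcat start/stop` behaviour asked about above.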
[10:28] <L-----D> hi
[10:28] <L-----D> I'm thinking of making a custom version of ubuntu server as a NAS, I wonder if anyone has done work like this, I don't wanna duplicate work
[10:28] <Wilco_> ivoks:  well tomcat wise i dont care
[10:29] <Wilco_> usually tomcat is highly stable
[10:29] <Wilco_> super stable from my point of view = kernel + os does not panic on you
[10:29] <ivoks> ok
[10:29] <Jeeves_> L-----D: openfiler?
[10:30] <L-----D> Jeeves_, I know openfiler and freenas
[10:31] <L-----D> Jeeves_, I'm trying something like nas+media server
[10:32] <Jeeves_> L-----D: Doesn't mythtv do that?
[10:33] <L-----D> currently I have samba and mediatomb running well on 9.10
[10:34] <L-----D> i'm thinking of integrating them onto a disk, some sort of custom disk
[10:35] <L-----D> and with a web-ui
[10:38] <Wilco_> guys thanks a lot for your help and assistance
[10:41] <Jeeves_> np!
[11:01] <netritious> soren, ivoks: I was able to accomplish what I was after. Part of the issue was /dev/raid/backups spanned from /dev/md0 on to /dev/md1, and that /dev/raid/backups existed at all. I was able to remove the logical volume 'backups' and remove /dev/md1 from the 'raid' volume group. Thanks again for your patience, help and advice.
[11:01] <netritious> Of course part of the problem is that I'm still a n00b. :)
[11:14] <RoyK> !webmin
[11:14] <RoyK> !ebox
[11:16] <Slidey> where can i get uec-publish-tarball from ?
[12:27] <Wilco_> are you guys serious?
[12:27] <Wilco_> webmin not supported?
[12:29] <_ruben> yup
[12:31] <_ruben> any recommendations for decent disk benchmarking tools, tried iometer and bonnie++ so far
[12:32] <ivoks> there's a blueprint for maverick; webmin review
[12:35] <acalvo> hi
[12:35] <acalvo> is anyone using Zend Optimizer 3.3.9 under x86_64 servers? Apache2 seems to crash with this extension
[12:39] <Wilco_> guys
[12:39] <Wilco_> i don't get it - i just installed the latest webmin 1.5.20 - works like a charm
[12:39] <Wilco_> they dont say any word that is not supported
[12:39] <Wilco_> i use webmin mainly for its monitoring options
[12:39] <Wilco_> its very lightweight
[12:42] <Wilco_> guys
[12:42] <Wilco_> well the ubuntu support forum says webmin wins hands down over ebox
[12:43] <Wilco_> :(
[12:45] <ivoks> 'they' are who?
[12:46] <ivoks> 13:39 < Wilco_> they dont say any word that is not supported
[12:46] <ivoks> who are 'they'?
[12:47] <ivoks> problem with webmin is that it deletes any changes the sysadmin makes
[12:47] <ivoks> so you can manage stuff only with webmin, you can not make manual changes through the CLI
[12:47] <ivoks> webmin is for those that think they are linux sysadmins
[12:48] <ivoks> at least, that was webmin's behaviour last time i checked it
[12:53] <Wilco_> hmm
[12:53] <Wilco_> ok
[12:53] <Wilco_> well it worked so far for me in my time on redhat/centos
[12:53] <Wilco_> guys - how can i force someone to change their password after they log in
[12:54] <Wilco_> i want to create a user, and once the user logs in they are required to change their password
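Wilco_'s requirement is what `chage -d 0` does: it marks the password as expired, so the next login forces a change. The username `newuser` is a placeholder:

```shell
useradd -m newuser    # create the account with a home directory
passwd newuser        # set the initial password
chage -d 0 newuser    # expire it: user must pick a new password at first login
```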
[13:04] <cemc> I have a question ;) if there's no root password, and somehow it happens that ubuntu won't boot, because say it can't mount a partition, and it craps out, will it ask me for the root pwd, like say centos does? it's not that common but it could happen. I never had that on ubuntu
[13:34] <Italian_Plumber1> cemc it will probably prompt you for a password, you hit enter, and you're done
[13:35] <Italian_Plumber1> ^^ speculation
[13:35] <Italian_Plumber1> that was speculation
[13:46] <zul> mdeslaur: ping
[13:47] <mdeslaur> zul: what's up?
[13:47] <zul> mdeslaur: do you know anything about opie?
[13:48] <zul> mdeslaur: bug 569292
[13:50] <mdeslaur> zul: The seed he was given is "co5482" and the seed he types is "co54", so of course it gives an error message
[13:50] <mdeslaur> let me answer in the bug
[13:51] <zul> mdeslaur: k thanks
[13:51] <mdeslaur> oh hold on
[13:52] <FFF666> hi, I have installed and configured UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall.
[13:52] <FFF666> I can run an image and log in via ssh, but only from the PC where I have the cloud controller.
[13:52] <FFF666> The problem is that I can't ping or ssh into the image from another PC on the LAN.
[13:52] <FFF666> Is there any kind of firewall or something that I have to configure to allow connections?
[13:52] <mdeslaur> zul: hmm...for some reason his second opiepasswd didn't generate a correct response
[13:56] <ivoks> oh, jolly good
[13:56] <ivoks> bug 374589
[13:56] <zul> eh?
[13:57] <ivoks> samba gets started before vmware, obviously
[13:57] <zul> oh jaunty :)
[13:57] <zul> pppthpp ;)
[13:57] <ivoks> jaunty?
[13:58] <zul> ivoks: oh its karmic...yeah it needs to be converted to upstart
[13:58] <zul> which was done in lucid
[13:58] <ivoks> :)
[14:00] <zul> ivoks: so that bug should probably marked as fixed released
[14:00] <ivoks> hm, yes... i guess
[14:01] <ivoks> but if we always fix things in the newer version, can we really claim that version-1 is supported?
[14:03] <zul> well not really, but thats what SRU are for
[14:03] <FFF666> hi, I have installed and configured UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall. I can run an image and log into it via ssh, but only from the pc where I have the cloud controller.
[14:03] <FFF666> The problem is that I can't ping or ssh into the image from a pc on the LAN. Is there any kind of firewall or something that I have to configure to allow connections?
[14:06] <zul> ivoks: we are going to have a session about this at UDS
[14:06] <zul> i think
[14:06] <ivoks> great
[14:08] <screen-x> FFF666: complete guess, but do you have routing enabled on the clc?
[14:09] <FFF666> routing enabled? I don't know
[14:09] <FFF666> where can I configure that?
[14:34] <zul> ttx: can you have a look at #569248
[14:34] <ttx> bug 569248
[14:37] <ttx> zul: I'm not sure I get it
[14:38] <zul> ttx: neither do I...ill wait for hggdh and get him to add more info
[14:40] <ttx> zul: I triaged it
[14:42] <ivoks> hell of a day
[14:43] <mathiaz> ivoks: o/
[14:43] <ivoks> mathiaz: hi
[14:43] <ttx> ivoks, mathiaz: o/
[14:43] <ivoks> \o/
[14:44] <mathiaz> ttx: anything release critical?
[14:44] <zul> hey mathiaz
[14:44] <kirkland> ttx: dude, this raid issue is a pretty serious regression
[14:45] <ivoks> i don't understand how it works in VM, but not on real hardware
[14:49] <cemc> and I can't reproduce it
[14:52] <hggdh> ttx, zul: I expected (perhaps wrongly) that once I 'euca_conf --deregister-nodes <node>' it would not be shown anymore
[14:52] <zul> hggdh: ah...
[14:52] <zul> hggdh: i deferred to ttx since Im not that informed on eucalyptus
[14:53] <ttx> kirkland: move to -devel and explain ?
[14:54] <ttx> hggdh: right, I just wasn't sure I got it right. can't reproduce from where I am this week
[14:55] <cemc> kirkland: did you reinstall (fresh install) or upgrade?
[14:56]  * ttx reads
[14:57] <hggdh> ttx: I just got back to it, but I have a feeling this may have been caused by a mis-configuration on topo4, on the rig
[14:57] <hggdh> looking at it right now (and will discuss with mathiaz)
[14:57] <hggdh> kirkland: lp:~hggdh2/+junk/uec-qa
[14:57] <ttx> hggdh: ideally we would reproduce on a sensible configuration and be able to forward the bug to eucaguys
[14:58] <hggdh> ttx: yes, I agree. any of the topologies in the rig would be good enough, except for this mis-conf right now
[14:59] <ttx> hggdh: but that's definitely possible... given that we don't deregister that often.
[15:02] <ttx> kirkland: looks like most people can't reproduce your issue though
[15:04] <ttx> kirkland: and AFAICR the iso tests for beta2 and RC were OK on RAID, if you except jdstrand's issue with raid juggling
[15:04] <ttx> (unfortunately we can't access the old milestones test results anymore)
[15:07] <kirkland> ttx: okay
[15:08] <kirkland> ttx: cemc did reproduce it
[15:08] <kirkland> ttx: looks like it's dependent on your drive geometry
[15:08] <kirkland> cemc: reinstall
[15:10] <bogeyd6> On 9.10 doing the VM install, What is "Basic Ubuntu Server"
[15:10] <ttx> kirkland: oh, I missed that comment, then
[15:11] <ttx> cemc: can't find the comment where you reproduced Dustin's issue ?
[15:12] <kirkland> ttx: cemc pinged me privately here
[15:12] <ttx> ah
[15:12] <kirkland> ttx: i've asked cemc to add a note to the bug/mailing list
[15:19] <ttx> kirkland: bring the discussion to #ubuntu-release when you have sufficient view on the issue... Make everyone comment on the bug otherwise it will be impossible to follow
[15:19] <kirkland> ttx: agreed, thanks.
[15:20] <jiboumans> morning folks
[15:20] <cemc> kirkland: i DIDN'T reproduce it
[15:21] <kirkland> cemc: oh
[15:21] <kirkland> cemc: okay, sorry
[15:21] <ttx> cemc: hah, so at that point two commenters on the bug that can't reproduce the issue, and only Dustin having it
[15:21] <ttx> and KVM not reproducing it
[15:21] <kirkland> ttx: right, weird
[15:21] <cemc> ttx: seems so
[15:21] <ttx> kirkland: so I won't stop the presses right now, with your permission :)
[15:22] <kirkland> ttx: yeah, please don't
[15:22] <ttx> jiboumans: good afternoon from cloudy England
[15:22] <kirkland> jiboumans: howdy
[15:22] <Jeeves_> ashclouds?
[15:22] <RoyK> is Xen 4 in 10.04?
[15:22] <ivoks> ttx: frenchman in england?
[15:22] <zul> RoyK: no
[15:22] <jiboumans> ttx: good morning from a sunny and fantastic SFO
[15:23] <zul> hey jiboumans
[15:23] <ivoks> i expect snow today :)
[15:23] <mathiaz> kirkland: ttx: should I try to reproduce the bug on some hardware?
[15:23] <RoyK> bummer
[15:23] <mathiaz> kirkland: ttx I may be able to get some hardware from the certificate lab
[15:23] <RoyK> from the Xen 4 release notes "Better performance and scalability: 128 vcpus per guest, 1 TB of RAM per host, 128 physical CPUs per host (as a default, can be compile-time increased to lots more)."
[15:23] <mathiaz> kirkland: ttx: or the office
[15:23] <kirkland> mathiaz: yeah, please!
[15:23] <ttx> mathiaz: that would be great
[15:23] <ttx> mathiaz: at least to build some confidence
[15:23] <RoyK> not that I need 128 cores and 1TB RAM for a guest, but looks cool
[15:23] <kirkland> mathiaz: ttx: i was just going to ask if there was a qa/test resource who could try this in the lab
[15:23] <ttx> mathiaz: you should have a candidate up soon
[15:24] <ivoks> take care guys
[15:24] <ttx> ivoks: at Canonical offices for release sprint
[15:24] <ivoks> see you tomorrow
[15:24] <jcastro> dendrobates: hey, wanna do a UDS plenary on cassandra or something?
[15:24] <ttx> mathiaz: ~1hour
[15:25] <kirkland> ttx: i'm going to try and test in kvm with virtual disks identical in size to my real ones
[15:26] <Daviey> kirkland: you know you can use sdisk to extract the partition details, and push it back into sdisk to create them?
[15:26] <kirkland> Daviey: hmm, no i've never used sdisk
[15:32] <ttx> Daviey, kirkland: you mean sfdisk
[15:32] <kirkland> ttx: never used that either
[15:33]  * ttx remembers using that in his old Gentoo days
[15:35] <Daviey> ttx: yes, sfdisk - sorry
[15:36] <cemc> kirkland: you've reinstalled several times and every time the same result? what did you answer when asked 'boot with degraded raid' ?
[15:36] <kirkland> cemc: "yes, boot degraded"
[15:36] <kirkland> cemc: i reinstalled many times, same result
[15:36] <kirkland> cemc: i zero'd the disk between some tries, even
[15:37] <cemc> mhm
[15:37] <Daviey> kirkland: put a qcow on loopback, sfdisk -d /dev/sda > layout.out -> sfdisk /dev/loop0 < layout.out
[15:37] <cemc> kirkland: no swap, no separate boot partition? nothing 'fancy' ?
[15:37] <Daviey> actually, you could pipe it straight in
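Daviey's recipe, piped straight through, amounts to the sketch below. The function name and device paths are placeholders; note that a qcow2 image can't be attached with plain loopback, so `qemu-nbd` is the usual way to expose it as a block device (an assumption worth checking against your setup).

```shell
# Sketch of Daviey's partition-table clone.
# `sfdisk -d` dumps a reusable textual partition description;
# feeding it back into sfdisk recreates the same layout elsewhere.
clone_partition_table() {
    src=$1 dst=$2
    sfdisk -d "$src" | sfdisk "$dst"
}
# e.g. clone_partition_table /dev/sda /dev/nbd0
# (attach a qcow2 image first with: qemu-nbd -c /dev/nbd0 layout.qcow2)
```

This reproduces kirkland's real 500GB geometry inside a VM without copying any data, which is exactly what the RAID-bug hunt needs.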
[15:40] <kirkland> cemc: correct, correct, correct;  500GB disks
[15:40] <kirkland> cemc: how big were your disks?
[15:40] <cemc> 80GB ...
[15:40] <cemc> and they were sync'ing after reboot, which I find a bit odd
[15:40] <kirkland> cemc: ttx: I just started an install in kvm with 500GB backing disks; so far it looks like mine on real hardware (with this extra garbage raid partition)
[15:41] <kirkland> cemc: that's standard
[15:41] <kirkland> Daviey: thanks
[15:41] <cemc> kirkland: do you need 1TB for that? :)
[15:42] <kirkland> cemc: not with qcow2 sparse files :-)
[15:42] <kirkland> cemc: i will, if the raid sync's fully
[15:42] <kirkland> cemc: but i'll kill the vm before that happens (i hope)
[15:42] <cemc> right
[15:42] <cemc> :)
[15:42] <kirkland> cemc: for fun, try:
[15:43] <cemc> kirkland: I thought I'll try it too in a KVM with 2x500GB like you
[15:43] <kirkland> cemc: sure
[15:43] <kirkland> cemc: actually, try it with 250GB
[15:44] <kirkland> cemc: let's try to get the size down and see if there's a breakpoint somewhere
[15:44] <cemc> can you paste me the kvm command what you use?
[15:44] <kirkland> cemc: sure
[15:45] <kirkland> kvm-img create -f qcow2 500G a.img
[15:45] <kirkland> kvm-img create -f qcow2 500G b.img
[15:45] <kirkland> kvm -m 512 -hda a.img -hdb b.img -cdrom ../iso/lucid-server-amd64.iso
[15:48] <cemc> installing
[15:51] <hggdh> mathiaz: when you can, we need to chat about the rig
[15:51] <mathiaz> hggdh: sure - I'
[15:51] <mathiaz> hggdh: sure - I'll ping you in a couple of minutes
[15:52] <cemc> kirkland: is it just me, or is the formatting reaaaly slow?
[16:06] <RoyK> hi all. according to http://hub.opensolaris.org/bin/view/Community+Group+xen/guest-compatibility, ubuntu has no paravirt kernel support for xen - is this still the case for 10.04?
[16:08] <kirkland> cemc: yes, formatting takes a long time
[16:09] <cemc> kirkland: it's ridiculous ;) it's going for like 20 mins
[16:09] <Italian_Plumber1> my vmware server serial numbers don't work anymore :(
[16:09] <kirkland> cemc: yeah; well, you could speed it up some by using virtio for disks
[16:10] <cemc> kirkland: that's what I'm doing right now
[16:10] <kirkland> cemc: you used virtio?
[16:11] <cemc> kirkland: trying... I stopped the install, recreated the disks, and ran kvm with virtio
[16:11] <kirkland> cemc: good call
[16:11] <kirkland> cemc: me too
[16:11] <cemc> kirkland: but for some reason the vnc is now reeealy slow ;)
[16:11] <cbm> Hi all! Khow knows awk?
[16:12] <lil_cain> #awk probably knows it better than anyone here :-P
[16:13] <cemc> kirkland: painfully slow in fact, it just freezes at times. yours isn't?
[16:13] <kirkland>  | awk '{print $3}' | sed 's/^.*$/who/'
[16:13] <kirkland> cemc: hmm, it's a bit slow, but my disk io is *pegged*
[16:14] <cemc> kirkland: It's formatting again
[16:14] <kirkland> cemc: okay, on the previous screen, did you see any extra "unusable" partitions?
[16:15] <cemc> kirkland: yes, I think I did. I'm not 100% sure but I think I had those on my 80GB disks too
[16:16] <kirkland> cemc: http://people.canonical.com/~kirkland/Screenshot-QEMU.png
[16:17] <cemc> kirkland: yep, I saw that too
[16:17] <cbm> i asks my question in #awk :) but do not get answer
[16:17] <cemc> kirkland: it's still slow :/
[16:17] <kirkland> cemc: well, it's writing the equivalent of 2-disks data to your 1 real disk
[16:18] <cemc> kirkland: a.img is 460MB, b.img is 1.4GB, slowly increasing
[16:19] <cbm> what types of discs you are using?
[16:19] <cemc> kirkland: iostat is showing like 3-4MB/s on the host, that's not impressive ;)
[16:20] <cemc> cbm: it's one 640MB WD Black SATA2
[16:20] <RoyK> usb1.1 kicking in? :)
[16:20] <cemc> 640GB even
[16:21] <cbm> not raid?
[16:21] <cemc> cbm: no. we're trying to install Lucid on RAID but in a KVM
[16:24] <kirkland> cemc: okay, give it a bit longer
[16:24] <cemc> kirkland: it's sync'ing too, with like 16KB/s :)
[16:26] <cbm> what host system in kvm you are using?
[16:27] <cbm> guest system (Lucid) the one?
[16:36] <kirkland> cemc: i got past the formatting, installing now
[16:46] <cemc> kirkland: still formatting, still at 33%
[16:55] <TeTeT> kirkland: I'm installing two new hosts via virt-manager, but performance is abysmal. it takes forever to install base system. Any ideas how to accelerate it again?
[16:55] <TeTeT> kvm version is 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9
[16:55] <kirkland> TeTeT: ps -ef | grep kvm | pastebinit
[16:55] <kirkland> TeTeT: lsmod | grep kvm
[16:56] <TeTeT> kirkland: http://pastebin.com/eMe8SDg5
[16:57] <TeTeT> $ lsmod | grep kvm
[16:57] <TeTeT> kvm_intel              39352  6
[16:57] <TeTeT> kvm                   245405  1 kvm_intel
[16:58] <jbernard> kirkland: I've started working on integrating my rackspace plugin into byobu
[16:58] <kirkland> TeTeT: anything in dmesg look ominous?
[16:58] <jbernard> kirkland: let me know if you have any suggestions
[16:58] <kirkland> jbernard: cool, thanks
[16:58] <kirkland> jbernard: will do, cheers
[16:59] <kirkland> TeTeT: your pastebin looks okay
[16:59] <Jeeves_> TeTeT: Are your disks busy?
[17:00] <cemc> I've tried to install lucid before in kvm, and indeed it was _very_ slow to the point where I aborted and installed karmic instead then upgrade
[17:00] <TeTeT> Jeeves_: not very, here's the iostat
[17:00] <Jeeves_> I've got several Lucid vm's which perform fine under kvm.
[17:00] <TeTeT> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
[17:00] <TeTeT> sda              11.64       114.46       463.33    1229041    4974928
[17:00] <TeTeT> Jeeves_: the ones I have running are fine, but just installing fresh ones
[17:01] <Jeeves_> TeTeT: iostat -kx ?
[17:01] <cemc> after installing it's all good, but the install is painfully slow
[17:01] <kirkland> TeTeT: what's "slow" about it?  screen refreshes?  responsiveness on the keyboard?  calculations?
[17:01] <Jeeves_> That also shows percentages
[17:01] <cemc> kirkland: btw, it's still formatting at 33%...
[17:01] <Jeeves_> I've seen that before. But also on normal hardware.
[17:02] <TeTeT> kirkland: it's at installing base system for 20 minutes - I've been installing server in 15 minutes before
[17:02] <kirkland> cemc: hmm
[17:02] <TeTeT> Jeeves_: http://pastebin.com/NtbDEM3m
[17:02] <cemc> kirkland: it just finished this second, lol
[17:03] <kirkland> TeTeT: i agree that doesn't sound right; I don't have any more ideas, though, without more details
[17:03] <cemc> what I noticed in general, that it's not doing anything when installing, it just sitting there
[17:04] <kirkland> TeTeT: what kind of backing disk?
[17:04] <TeTeT> kirkland: ok, it's a SSD from Toshiba, reasonably fast normally
[17:04] <kirkland> TeTeT: i mean, what are you giving to the vm?
[17:04] <TeTeT> kirkland: 1 GB disk space, 512 MB RAM, 1 CPU
[17:04] <cemc> TeTeT: did you try installing Karmic guest? it's fast as lightning, right? compared to Lucid
[17:05] <TeTeT> cemc: have not tried karmic, can try it next. But I installed Lucid server and desktop before and it was fast
[17:12] <cemc> installing base system for like 10 minutes and it's at 38%. _slow_
[17:13] <cemc> kirkland: did you manage to install it?
[17:13] <kirkland> cemc: nearly done
[17:14] <RoAkSoAx> kirkland, howdy!!
[17:14] <cemc> after this I will try to compare Lucid vs. Karmic server install on KVM
[17:17] <kirkland> RoAkSoAx: howdy
[17:17] <RoAkSoAx> kirkland, how's it going?
[17:17] <kirkland> RoAkSoAx: crazy :-)
[17:18] <RoAkSoAx> kirkland, haha tell me I just moved to a new apartment. Anyways... im free to start working fully on testdrive.. shall we set meeting times or something?
[17:19] <mathiaz> hggdh: re bug 569248
[17:20] <mathiaz> hggdh: are you using the latest version of the preseeds?
[17:20] <mathiaz> hggdh: lp:~mathiaz/+junk/uec-testing-preseeds/
[17:21] <TeTeT> cemc: karmic does not seem to be faster for me, waiting at 30% for some time as well
[17:24] <kirkland> cemc: my test worked fine, with 100GB disks
[17:24] <kirkland> cemc: i'm very interested in your results with 250GB
[17:24] <cemc> kirkland: 55% right now, I'll let you know
[17:24] <kirkland> RoAkSoAx: nice
[17:24] <kirkland> RoAkSoAx: congrads on the new digs
[17:25] <kirkland> RoAkSoAx: i think i left some feedback for you on your last merge proposal
[17:25] <kirkland> RoAkSoAx: did you rework it accordingly?
[17:26] <RoAkSoAx> kirkland, the codename one?? I already recoded everything to obtain the codename i just need to merge it to the modularization
[17:27] <RoAkSoAx> kirkland, this one: https://code.launchpad.net/~andreserl/testdrive/codename/+merge/22763
[17:29] <kirkland> RoAkSoAx: yeah, that one ... did you rework per the feedback I gave in the review?
[17:30] <RoAkSoAx> kirkland, yep, I did some improvements too
[17:31] <kirkland> RoAkSoAx: can you re-propose for merging?
[17:31] <RoAkSoAx> kirkland, sure. I'll merge it then
[17:32] <hggdh> mathiaz: I had run a 'bzr pull' just before
[17:32] <hggdh> mathiaz: but I am not sure *where* bzr was pointing to... will check
[17:33] <hggdh> mathiaz: pointing to your local branch at tamarind...
[17:33] <kirkland> RoAkSoAx: ideally, i'd like to get the modularization pieces merged by UDS
[17:33] <kirkland> RoAkSoAx: let's try to get one UDS session on the Testdrive Frontend
[17:34] <kirkland> RoAkSoAx: i'd like to see you "drive" that session ;-)
[17:35] <RoAkSoAx> kirkland, I think I can make it on time. I'm gonna merge the codename on the current branch first, then merge it to the modularization.
[17:35] <RoAkSoAx> kirkland, That's not a problem, I can lead the session. The blueprint has not been accepted yet right?
[17:41] <hggdh> mathiaz: both are at the same revno on bzr, so yes, I was running the latest
[17:58] <rgreening> kirkland: ping
[17:59] <rgreening> I have a kvm I just update on a host running lucid. THe vm was updated to lucid, but fails at cant locate init. Any suggestions on recovery?
[18:07] <RoyK> rgreening: can it mount its root fs?
[18:10] <rgreening> RoyK: let me see what it says... I get dropped to initramfs prompt
[18:11] <rgreening> mount: mounting /dev/disk/by-uuid/[the uuid] on /root failed: Invalid argument
[18:12] <rgreening> then some additional mount fails for dev, sys, and proc. and Target filesystem doesn't have /sbin/init RoyK
[18:12] <cemc> rgreening: sounds like bug #569900
[18:13] <cemc> at least it's similar to that...
[18:14] <rgreening> its not raided
[18:14] <cemc> ah
[18:14] <rgreening> its a vm
[18:14] <rgreening> I have other vms on same system, upgraded in the same way, and they all booted.
[18:14] <rgreening> any recovery suggestions ?
[18:14] <cemc> rgreening: is anything special with this one? any special setup?
[18:15] <rgreening> nope. used the vmbuilder to make it and then updated it
[18:15] <rgreening> other than installing cacti
[18:18] <cemc> kirkland: it installed. it does not boot with virtio (FATAL: no bootable device), but if I do -hda, -hdb, it boots, and gives no error
[18:25] <hggdh> kirkland: topo4 done, all results on lp:~hggdh2/+junk/uec-qa/
[18:28] <kirkland> rgreening: interesting; that is the exact failure i see in my raid bug
[18:29] <kirkland> rgreening: wow, i hope i can reproduce this
[18:29] <rgreening> kirkland: The host OS has hardware raid, but the vm has an ext3 system. shouldn't matter, correct?
[18:29] <SJr> Why can connections still be made to smtp with these iptable rules: http://pastebin.com/nMbnMEgU
[18:29] <cemc> kirkland: maybe it's not related to the raid...
[18:30] <kirkland> cemc: what's your command line with virtio?
[18:30] <kirkland> rgreening: what's the backing disk for the vm?
[18:30] <kirkland> rgreening: qcow2 image?  lvm?  real device?
[18:30] <cemc> kirkland: kvm -m 512 -drive file=a.img,if=virtio -drive file=b.img,if=virtio -vnc 172.16.21.1:1 -cpu qemu64
[18:30] <rgreening> qcow2
[18:30] <kirkland> cemc: you're missing an option
[18:30] <FFF666> hi, I want to know which network configuration has UEC by default
[18:31] <cemc> kirkland: which one?
[18:31] <kirkland> cemc: index=0,boot=on
[18:31] <kirkland> cemc: -drive file=%s,if=virtio,index=0,boot=on %s"
[18:31] <kirkland> cemc: you'll need one drive to be index0, the other index=1
[18:32] <cemc> kirkland: mhm, I guess you don't need that when booting from cdrom, but after that you do need it. now it booted ok with virtio too, thanks
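Pulling kirkland's correction together, the working invocation cemc ended up with looks something like this (a sketch; image names and the ISO path are placeholders from earlier in the log):

```shell
# Corrected kvm command with virtio disks: index=0,boot=on marks the
# first drive bootable once the installer no longer boots from cdrom;
# the second drive gets index=1.
cmd="kvm -m 512 \
 -drive file=a.img,if=virtio,index=0,boot=on \
 -drive file=b.img,if=virtio,index=1 \
 -cdrom lucid-server-amd64.iso"
echo "$cmd"
```

As cemc observed, the flags don't matter while the cdrom is the boot device, but without `boot=on` the BIOS finds no bootable virtio disk afterwards (`FATAL: no bootable device`).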
[18:32] <kirkland> FFF666: VNET_MODE="MANAGED-NOVLAN"
[18:32] <cemc> kirkland: so it was ok with 250gb disks too
[18:32] <kirkland> cemc: right; it boots from cdrom
[18:32] <kirkland> cemc: interesting;  i want to investigate rgreening's problem; this sounds like the same core problem
[18:33] <kirkland> rgreening: okay, you're in the initramfs?
[18:33] <cemc> kirkland: yeah, that's what I thought when I saw it...
[18:33] <rgreening> kirkland: yep
[18:33] <kirkland> rgreening: what's the exact size and partitioning of your vm's disk?
[18:33] <kirkland> rgreening: ls -alF /dev/disk/by-uuid
[18:35] <FFF666> kirkland: I can't ping/ssh an image from a pc in the LAN, but I can do it from the pc where I have the cloud controller
[18:36] <FFF666> does the problem in the configuration?
[18:36] <FFF666> sorry,    is the problem in the configuration?
[18:36] <rgreening> kirkland: here's the screenie of the output: http://imagebin.ca/view/lQIX5_j.html
[18:37] <rgreening> And here's the files sizes on disk: -rw-r--r-- 1 rgreening rgreening 1797062656 2010-04-26 12:37 disk0.qcow2
[18:37] <rgreening> -rw-r--r-- 1 rgreening rgreening  710279168 2010-04-26 12:37 disk1.qcow2
[18:38] <rgreening> with partitions: root: 8000, swap 4000, /var 20000
[18:39] <kirkland> hggdh: i'm building a package that should fix your walrus Java heap OOM errors
[18:39] <kirkland> hggdh: would you be able to test that today-ish?
[18:40] <FFF666> kirkland: I can't ping/ssh an image from a pc in the LAN, but I can do it from the pc where I have the cloud controller. is the problem in the configuration?
[18:41] <kirkland> cjwatson: around?  rgreening is hitting the same error I saw on Lucid w/ RAID on real hardware;  he's not using RAID, and he's in a VM
[18:41] <kirkland> cjwatson: i could use someone with initramfs skills to help debug this and determine if it's release-critical
[18:41] <hggdh> kirkland: yes, no prob. I will have to go back to topo2, the one that seems more prone to this failure
[18:41] <kirkland> cjwatson: i asked keybuk for help last night, but he said it wasn't his focus
[18:42] <hggdh> hum. Lemme check on topo4 logs
[18:42] <rgreening> ty kirkland and cjwatson
[18:42] <kirkland> rgreening: thank you ...  i think your setup shows this problem even more straight-forwardly than mine
[18:42] <kirkland> rgreening: so 1797062656 is size on disk ... is this a sparse image?
[18:43] <rgreening> ok. Strange that I upgraded 5 other VM's similarly, and they worked. So, 1/6 times it failed :)
[18:43] <rgreening> kirkland: ya
[18:43] <kirkland> hggdh: okay, it's uploaded to ubuntu-virt-ppa; awaiting build
[18:43] <cjwatson> kirkland: not using RAID => different bug, presumptively
[18:43] <kirkland> cjwatson: well, unless RAID is not the actual issue at all
[18:43] <cjwatson> this one looks like it's that the device doesn't show up in time, maybe?
[18:43] <cjwatson> try rootdelay=120 or some such
[18:43] <hggdh> kirkland: k. Just keep in mind it may take a few hours to reach the OOM point
[18:43] <cjwatson> not sure
[18:44] <cjwatson> kirkland: I would recommend not pre-emptively conflating bugs with similar symptoms
[18:44] <cjwatson> conflate them once you know they're the same, by all means
[18:44] <cjwatson> but until then, assume they're separate
[18:44] <cjwatson> anyway, finishing up here I'm afraid
[18:44] <kirkland> cjwatson: okay, will avoid conflating
[18:44] <cjwatson> rgreening: if you can write out a recipe by which I can reproduce this in a fresh VM, I'll look at it tonight or tomorrow
[18:44] <kirkland> cjwatson: who else on your team can I bother about this?
[18:45] <cjwatson> you can try Keybuk; everyone else will be finishing for the day I think
[18:46] <cemc> rgreening: how did you upgrade the VMs from Karmic exactly?
[18:46] <rgreening> cjwatson: that's the thing, I have 1 out of 6 busted VM's. The others have similar setup and only this one failed update from karmic to lucid.. strange. I don't believe I could easily duplicate it. though I can try.
[18:46] <rgreening> cemc: do-release-upgrade -d
[18:46] <cemc> rgreening: they are all server installs?
[18:46] <rgreening> after installing update-manager-core
[18:46] <rgreening> yes
[18:46] <cemc> rgreening: 32/64bit ?
[18:46] <cjwatson> there's stuff on the wiki about getting more debug info out of the initramfs
[18:46] <rgreening> well, server installs made from vmbuilder
[18:47] <rgreening> cjwatson: got a url handy? I am interested in recovery as well... as my vm is dead :)
[18:47] <cjwatson> https://wiki.ubuntu.com/DebuggingKernelBoot
[18:47] <cjwatson> https://wiki.ubuntu.com/DebuggingKernelBoot#Initramfs rather
[18:48] <rgreening> ty cjwatson. I presume this works in a vm as well...
[18:48] <cjwatson> yes
[18:48] <rgreening> ok. will try that
[18:48] <cjwatson> though getting to the grub menu may be tricky depending on your vm impl
[18:49] <cjwatson> might need a rescue image to turn off the hidden timeout stuff
[18:49] <rgreening> using virt-manager
[18:49] <rgreening> and qemu+kvm
[18:50] <FFF666> I can't ping/ssh an image from a pc in the LAN, but I can do it from the pc where I have the cloud controller. is the problem in the network configuration? (the VNET_MODE="MANAGED-NOVLAN")
[18:50] <cjwatson> rgreening: oh, also, check dmesg
[18:50] <cjwatson> EINVAL is the generic "something went wrong" error code
[18:52] <kirkland> rgreening: did you try to the rootdelay=120 kernel option?
[18:52] <cjwatson> actually that probably won't make any difference
[18:52] <cjwatson> the device is there, it's just unmountable
[18:52] <cjwatson> EINVAL source had an invalid superblock.  [... or other stuff that doesn't apply here]
[18:52] <rgreening> cjwatson: I'm launching the recover kern image now to see the debug/printk output..
[18:53] <cjwatson> you could also try mounting it from that.  to me, if this doesn't have any lvm or raid or other funny stuff involved, EINVAL seems to imply that the filesystem is toast
[18:53] <cjwatson> but that's just an initial hypothesis
[18:53] <rgreening> ouch
[18:55] <rgreening> cjwatson: anyway to attempt recovery of the fs? working in a vm is a bit trickier than Im used to. in a real server, I'd use a recovery media. in a vm, not sure how to proceed
[18:55] <FFF666> Hello everyone. I can't ping/ssh an image from a pc in the LAN, but I can do it from the pc where I have the cloud controller. is the problem in the network configuration? (the VNET_MODE="MANAGED-NOVLAN")
[19:08] <kirkland> FFF666: are they on the same subnet?
[19:08] <FFF666> yes
[19:08] <FFF666> 192.168.2.x
[19:08] <kirkland> FFF666: what's ifconfig's output on your CC
[19:10] <FFF666> I don't have the pc turned on
[19:10] <FFF666> but has the 192.168.2.100 ip
[19:11] <FFF666> I can ping it from the CC
[19:11] <kirkland> FFF666: i'm sorry, i don't know;  check #eucalyptus
[19:12] <FFF666> ahh, thanks
[19:20] <xperia> hello to all. i have some strange problem with my ubuntu web server. for some reason everything runs heavily slow. top shows a cpu usage of at most 12%, however. what could be the problem? all php scripts run with a cpu usage of only 2%.
[19:30] <ChmEarl> running MYoungs xen4+xendom kernel, do I need to compile xorg-x11-server-Xorg to get X11 working?
[19:30] <ChmEarl> wrong ch
[19:56] <hggdh> kirkland: test the OOM with RC + your packages, or daily+PPA?
[20:02] <kirkland> hggdh: daily + PPA, please
[20:04] <hggdh> kirkland: ack
[20:05] <fbc-mx> HI all!!
[20:18] <rgreening> kirkland: looks like my issue may be both a corrupted fs and an issue regenerating the initramfs. I've mounted the qcow2 file and am performing fsck repairs on it now. I'll let you know if this solves it or if the initramfs is still an issue.
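The usual way to get at a qcow2 image for fsck is `qemu-nbd`, as in the sketch below; it's an assumption that rgreening did something equivalent, and the nbd device and partition number are placeholders.

```shell
# Attach a qcow2 image as a block device, fsck its first partition,
# then detach. Run only while the VM is shut down.
repair_qcow2() {
    img=$1
    modprobe nbd max_part=8        # load nbd driver with partition scanning
    qemu-nbd -c /dev/nbd0 "$img"   # -c: connect the image to /dev/nbd0
    fsck -f /dev/nbd0p1            # repair the root filesystem
    qemu-nbd -d /dev/nbd0          # -d: disconnect when done
}
# e.g. repair_qcow2 /opt/vm/odin-kvm/disk0.qcow2
```

This is the VM analogue of booting a physical server from rescue media, which is the gap rgreening mentions above.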
[21:00] <kirkland> rgreening: okay, thanks
[21:00] <kirkland> rgreening: so the rootdelay didn't have any effect?
[21:01] <rgreening> kirkland: np. still doing fsck on volumes :)
[21:01] <kirkland> rgreening: gotcha
[21:01] <rgreening> kirkland: never tried that yet. Saw some errors on fs
[21:01] <kirkland> rgreening: k
[21:01] <rgreening> kirkland: on another note, virsh -c qemu:///system shutdown doesnt seem to work
[21:01] <rgreening> and I have acpi enabled and acpid installed and running on guest
[21:02]  * rgreening puzzled
[21:02] <kirkland> rgreening: i think there's an open bug on that
[21:02] <kirkland> rgreening: is this a regression?
[21:02] <rgreening> yeah, I'm pretty sure I was able to shutdown before under Karmic
[21:03] <rgreening> though my brain is spaghetti at the moment.. trying to do way too much
[21:04] <rgreening> kirkland: host doesn't need acpid does it? just the guest? or ...
[21:04] <kirkland> rgreening: should just be the guest
[21:04] <rgreening> yeah, thats what I thought.
[21:04] <rgreening> bizarre
[21:04] <kirkland> rgreening: ack; today has been the busiest day ... 10 conversations at once, it seems
[21:05] <rgreening> hah
[21:05] <rgreening> Im sure we'll chat lots at UDS :P
[21:06] <rgreening> kirkland: rgreening@natasha:~$ acpi_listen
[21:06] <rgreening> button/power PWRF 00000080 0000000e
[21:06] <rgreening> my guest receives the event, just doesnt act on it
[21:07] <kirkland> rgreening: hrm ... pm-utils bug, perhaps?
[21:08] <rgreening> kirkland: I don't have that package
[21:08] <rgreening> hmm...
[21:09] <cemc> kirkland: any progress? had a laptop hdd issue here
[21:09] <kirkland> rgreening: sorry, that might be unrelated
[21:09] <kirkland> cemc: hmm, not much;  what do you have ?
[21:09] <rgreening> kirkland: I originally build vm with vmbuilder under karmic
[21:10] <cemc> kirkland: nothing either... I have a karmic vm, maybe I'll try to upgrade it, let's see what happens
[21:10] <kirkland> cemc: back it up first ;-)
[21:10] <cemc> ofcourse ;)
[21:22] <rgreening> kirkland: fsck fixed the guest. I believe my issue is caused by shutdown not working via acpi, which means the guests were being destroyed and not shutdown.
[21:23] <kirkland> rgreening: i'm trying to test that now
[21:23] <rgreening> ty.
[21:24] <jetole> does anyone know how to get rid of the bash suggestions like when I typo suod instead of sudo and it suggests what I want?
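jetole's question went unanswered in the log; on a stock Ubuntu install those "suod → sudo" hints come from bash's `command_not_found_handle`, installed by the command-not-found package (an assumption based on the default setup, not confirmed in this conversation):

```shell
# Disable the "did you mean" hook for the current shell only
# (unset succeeds even if the function isn't defined):
unset -f command_not_found_handle
# Removing it system-wide instead (not run here):
#   sudo apt-get remove command-not-found
```

After the `unset`, a typo simply reports "command not found" with no suggestion.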
[21:24] <kirkland> rgreening: server image, correct?
[21:24] <kirkland> rgreening: ie, not desktop
[21:25] <rgreening> kirkland: I used this.. sudo vmbuilder kvm ubuntu -c mrtg.cfg -d /opt/vm/odin-kvm
[21:25] <rgreening> kirkland: which should default to a server setup
[21:25] <kirkland> rgreening: right
[21:26] <kirkland> rgreening: sorry, you've told me that :-)
[21:26] <rgreening> np
[21:26] <rgreening> :)
[21:26] <kirkland> *(too_man_conversations)
[21:26] <rgreening> you have a spaghetti brain just like me now
[21:26] <kirkland> rgreening: heh :-)
[21:32] <cemc> did the upgrade karmic->lucid in the vm, no problems
[21:36] <Gol023> good evening all
[22:00] <bogeyd6> I setup exim4 (or thought I did) and I can connect to my port 25 locally but not from a remote bawx. How can I enable exim to let remote users connect?
[22:01] <bogeyd6> There is no firewall installed*
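bogeyd6's symptom matches Ubuntu's default exim4 configuration, which listens only on loopback via `dc_local_interfaces` (hedged: the real file is `/etc/exim4/update-exim4.conf.conf`; the sketch below demonstrates the edit on a scratch copy):

```shell
# Demonstrate the edit on a temporary copy of update-exim4.conf.conf.
# On the real box, follow with: sudo update-exim4.conf && sudo service exim4 restart
conf=$(mktemp)
echo "dc_local_interfaces='127.0.0.1 ; ::1'" > "$conf"
# An empty value makes exim listen on all interfaces, so remote
# hosts can reach port 25:
sed -i "s/^dc_local_interfaces=.*/dc_local_interfaces=''/" "$conf"
cat "$conf"
```

Alternatively, `sudo dpkg-reconfigure exim4-config` asks for the listen addresses interactively.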
[22:11] <kirkland> cemc: good to hear, thanks
[22:12] <smoser> kirkland, you rang ?
[22:12] <joaopinto> hi
[22:12] <joaopinto> is there a bug report about missing dependencies libapache2-mod-wsgi ?
[22:13] <joaopinto> for...
[22:14] <joaopinto> oh, there is, but is not fixed
[22:14] <mean67> any one want to assist me in setting up server
[22:15] <bogeyd6> !anyone | mean67
[22:15] <mean67> I am trying to set it up using 2 10k SATA drives that I have and am having some trouble
[22:16] <kirkland> smoser: sorry, forgot you were traveling
[22:16] <smoser> no problem.
[22:17] <mean67>  I would like to know the best way to have it set up
[22:17] <mean67> and am at the point where it is asking me partition info
[22:18] <bogeyd6> Do you plan on using a RAID card or Softraid?
[22:20] <bogeyd6> !raid
[22:34] <mean67> I am not using RAID
[22:35] <persia> mean67: Does selecting the default options not just work for you?
[22:36] <mean67> well I thought that putting the swap on one and the boot on the other would be the best way to go
[22:37] <persia> Heh, then select the manual config.
[22:38] <persia> If you mostly start/stop new (uncached) binaries, it may help.  If you mostly serve data stored somewhere, it would be better to worry about separating the data from the swap, as the binaries are likely cached.
[22:48] <mean67> so not sure if I should pick guided or manual
[22:53] <kirkland> zul: ping
[22:53] <kirkland> zul: you've reproduced https://bugs.launchpad.net/bugs/569890 ?
[22:55] <kirkland> rgreening: i can confirm your shutdown issue
[23:00] <hggdh> kirkland: first round done, 400 instances. CLC did not (yet) blow on OOM; Walrus shows less memory usage
[23:00] <kirkland> hggdh: cool, and usually it would by that point?
[23:00] <kirkland> hggdh: and this is with the default mem allocation?
[23:01] <hggdh> kirkland: yes. CLC shows now ~800M of res, Walrus shows 630M. Up to now, both CLC and Walrus would show about the same memory usage
[23:02] <hggdh> around 750M
[23:02] <kirkland> hggdh: good stuff
[23:02] <kirkland> hggdh: would you note the ppa version you tested, and confirmed in your orig bug report?
[23:02] <hggdh> kirkland: the only surprise was a unilateral reboot of one of the NCs, near the end of the run
[23:02] <kirkland> hggdh: hmm, spontaneous reboot?
[23:02] <hggdh> kirkland: will do. I am also starting another round now
[23:03] <kirkland> hggdh: is mathiaz or anyone else dorking around on the lab kit?
[23:03] <hggdh> kirkland: seems to have been spontaneous, yes. I cannot see any indication in the logs about it
[23:03] <hggdh> except the reboot messages, of course
[23:03] <hggdh> kirkland: to my knowledge I am home alone
[23:04] <kirkland> hggdh: roger
[23:06] <hggdh> kirkland: pushing the results now
[23:07] <hggdh> I would expect to find some OOMs in this run...
[23:11] <kirkland> hggdh: alrighty
[23:11] <kirkland> hggdh: standing by
[23:13] <chrismsnz> Hey guys
[23:13] <chrismsnz> by any chance is anybody running ubuntu on SuperMicro Twin2 servers?
[23:13] <hggdh> kirkland: pushed
[23:14] <kirkland> hggdh: url?
[23:14] <chrismsnz> We've been running into problems (disk crashes, hard locks, reboots) and the oem mentioned since all 4 nodes were failing it was probably a software/driver/kernel problem
[23:18] <hggdh> kirkland: lp:/~hggdh2/%2Bjunk/uec-qa/
[23:18] <kirkland> hggdh: cheers, thanks
[23:23] <mean67> is there a GUI for UEC
[23:23] <Gol023> hi folks
[23:24] <Gol023> i've just installed ebox, how do i get to the interface?
[23:24] <Gol023> i'm new to this!
[23:27] <electro_> With regards to the Ubuntu Enterprise Cloud, I am looking to set up 1 front end with 4 Node Controllers.  I have the front end on a publicly addressable IP, and the node controllers on the same public network.  I bridged the public interface, but want to run VMs on a private network.  Is there anyone with experience with this I can talk to?
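[Editor's note: one hedged sketch of the bridged-interface part of electro_'s setup — a bridge over the physical NIC in /etc/network/interfaces so KVM/UEC guests can attach to it. The interface names and addresses below are invented for illustration; the public-vs-private VM networking itself is governed by the Eucalyptus networking mode, which this fragment does not configure.]

```
# Hypothetical /etc/network/interfaces on a node controller.
# br0 bridges the physical NIC (eth0) so guest VMs can attach to it;
# all addresses shown are examples only.
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    address 192.168.10.21
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```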
[23:31] <RoAkSoAx> kirkland, I've just proposed my changes for merging into testdrive
[23:33] <kirkland> RoAkSoAx: cool, i'll check
[23:33] <RoAkSoAx> kirkland, :)
[23:34] <kirkland> rgreening: https://bugs.edge.launchpad.net/ubuntu/+source/acpi/+bug/570450
[23:35] <kirkland> rgreening: debdiff fix for that bug for you in there
[23:46] <persia> kirkland: release team is reviewing each upload in the queue (using queuediff).  You may as well upload and get feedback that way if you're definitely SRUing it anyway.
[23:46] <kirkland> persia: thanks
[23:47] <persia> Note that this is only ever appropriate during final freeze: 50 weeks a year, subscription is the correct model.
[23:47] <persia> (or maybe 48, anyway, doesn't matter)
[23:49] <kirkland> persia: ack, thanks ;-)