[00:09] is there any way to see what packages depend on a given package? [00:11] apt-cache rdepends package [00:13] ah, cool, thanks [01:25] someone reads what I white? [01:27] Can I ask something about UEC? [01:38] hi, I have installed and configuring the UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall. [01:38] I can run an image and login it via ssh, but only from the pc that I've the cloud controller. [01:38] The problem is that I can't ping or login ssh the image from a pc of the LAN. [02:04] hi, I have installed and configuring the UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall. I can run an image and login it via ssh, but only from the pc that I've the cloud controller.The problem is that I can't ping or login ssh the image from a pc of the LAN. [02:08] !weekend | FFF666 [02:08] FFF666: It's a weekend. Often on weekends the paid developers and a lot of the community may not be around to answer your question. Please be patient, wait longer than you normally would or try again during the working week. [02:14] uu, ok I'll ask tomorrow === erichammond1 is now known as erichammond [04:44] hey guys, have you ever experienced a system freeze on high file transfer speeds [04:45] on either gigabit ethernet or local hard to hard drive [06:35] amstan: I have, but only with unreliable hardware. ;) [08:27] Hi, I extended a volume group spanning two soft raid devices (/dev/md0 and /dev/md1). I want to reduce the volume group by removing /dev/md1 and tried using 'vgreduce raid /dev/md1' which returns 'Physical volume "/dev/md1" is still in use.' I'm stuck and can't seem to figure it out. I'm using Hardy Server i386. http://pastebin.com/hkQ4D68A [08:38] netritious: if you want to reduce the size, you have to unmount the partition [08:38] reducing the size of filesystem isn't as simple as extending [08:40] ivoks: Thank you. I knew I was missing something. Will give that a try. [08:50] ivoks: He's wanting to remove a pv from a vg, not reduce the size of a filesystem, I think. [08:50] netritious: You probably want to look at pvmove. It lets you move the used PE's from one PV to another. [08:51] netritious: So, in your case, you'd run "pvmove /dev/md1". [08:51] netritious: When that's done, you should be able to run your vgreduce command. [08:56] soren, ivoks: My system still is not cooperating with me. http://pastebin.com/iHjqYPhB Where to go from here? Maybe detach the drives physically from the system and then try again? [08:58] back :) [08:58] wb ivoks :) [08:58] soren: well, he should reduce fs before reducing lv/vg, right? :) [08:59] netritious: you can't unmount partitions you are using :) [08:59] pvmove is good suggestion [09:00] netritious: let's start from begining [09:00] netritious: you have volume group that uses /dev/md0 and /dev/md1, right? [09:00] ivoks: yes, and ty for your patience :) [09:01] phone [09:01] np..I'm here all night [09:02] ivoks: Well, maybe. Depends on whether they'd all fit on the single md device. [09:03] Would more detail help? [09:04] netritious: Yikes, not pvremove. pvmove. [09:05] soren: ok, I think I have some explaining to do. :) [09:07] I have five drives; 1x160GB IDE that is LG 'system'. Then there are 4x1TB drives. sda and sdb make up /dev/md0 which I added a few days ago. sdc and sdd make up /dev/md1 which I added about an hour ago. 
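For reference, the rdepends query answered at the top of this log looks like this in practice (the package name is just an illustrative choice):

apt-cache rdepends openssh-server              # every package that declares a dependency on openssh-server
apt-cache --installed rdepends openssh-server  # restrict the listing to packages currently installed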
[09:08] ok [09:08] I initially created LG raid on /dev/md0 and extended it to /dev/md1 (just an hour or so ago) [09:09] I performed 'umount -a' which has unmounted LG 'raid' which is on /dev/md0 and /dev/md1 [09:09] and now you want to take /dev/md1 out of LF? [09:09] LG [09:10] out of LG 'raid' yes [09:10] then what soren said [09:10] pvmove [09:10] pvmove -v /dev/md1 [09:11] k..will try now, and again thanks for your patience...I am really new to mdadm and lvm. [09:12] vgreduce raid /dev/md1 [09:13] pvremove /dev/md1 [09:13] that should be it [09:13] What's an LG? [09:13] logical group, probably :) [09:14] And what's that really? [09:14] An LV or a VG? [09:14] or south korean manufacturer :) [09:14] VG [09:14] soren: you're right..I should have used VG [09:14] i'm not sure you'll be able to move everything [09:14] # [09:14] PV Name /dev/md1 [09:15] # [09:15] Allocatable yes (but full) [09:15] I'm very confused by all of this: [09:16] netritious: You have 1 160 GB drive and 4 1 TB drives, and then you speak of sda, sdb, sdc, and sdd. I'm not sure how to map the drive sizes to the device names. [09:16] ...especially since there are 5 drives, but only 4 device names mentioned. [09:16] soren: he has two volume groups [09:16] soren: 160GB is system, 1TB are /dev/md0 and /dev/md1 in raid VG [09:16] Also, if an "LG" really is a VG, it doesn't really make sane that a "umount -a" unmounts an LG that is called raid. [09:17] soren: http://pastebin.com/hkQ4D68A [09:18] * soren doesn't get it at all :) [09:18] 'pvdisplay' is relevant part :) [09:19] Ah, there's and sde as well. [09:19] ...which is the 160 GB drive. [09:19] soren: in system VG [09:19] soren: correct [09:19] so, ignore it :) [09:19] exactly :) [09:19] Oh, and "raid" is the name of the volume group? [09:19] Apologies for the confusion [09:19] soren: yes [09:19] You don't unmount volume groups. [09:20] So this makes no sense: 08:09 < netritious> I performed 'umount -a' which has unmounted LG 'raid' which is on /dev/md0 and /dev/md1 [09:20] of course, it's not mounted at all [09:20] ivoks: Precisely. [09:20] soren: it probably unmounted LVs on raid VG [09:21] ivoks: You don't mount LV's either :) [09:21] :p [09:21] You mount filesystems on LV's.... and the VG doesn't care either way. [09:21] Anyways... [09:21] Ah, more confusion from me again..I unmounted /mnt/storage which is mounted on /dev/mapper/raid-storage [09:21] netritious: I stand by my instructions, nevertheless. "pvmove /dev/md1" (add a -v for more info, if you please).. then do "vgreduce" like you did. [09:21] there we go :) [09:22] ...and then pvremove if you're feeling bored. [09:26] soren: http://pastebin.com/jGgFy9ce [09:29] soren: 'system' is a volume group and 'raid' is a volume group, and nothing is mounted on a logical volume in volume group raid. So I'm still missing something. [09:30] ivoks: You had just come and gone... http://pastebin.com/jGgFy9ce [09:31] Anyone here with clue where to report broken jigdo images on the releases mirrors? [09:31] Jeeves_: #ubuntu-release, probably. [09:32] netritious: Could you try "pvdisplay -m /dev/md1"? [09:33] i guess he has a filesystem of 2TB on /dev/mapper/raid-storage [09:33] netritious: when you added /dev/md1, did you extend logical volume and filesystem to 2TB? [09:33] soren: Thanks [09:33] soren: http://pastebin.com/1eHhQnNv [09:34] backups? [09:35] ivoks: Do you still have the first pastebin thing open? It's expired so I can't go and see. [09:35] soren: nope :/ [09:35] oh, wait... 
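A minimal sketch of the sequence soren and ivoks describe, using the names from this discussion (VG 'raid', PV /dev/md1); it assumes the remaining PVs in the group have enough free extents to take whatever pvmove evacuates:

sudo pvmove -v /dev/md1       # move any allocated extents off /dev/md1 onto the other PVs in the VG
sudo vgreduce raid /dev/md1   # drop the now-empty PV from the volume group
sudo pvremove /dev/md1        # wipe the LVM label so the device can be reused elsewhere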
[09:36] soren: http://pastebin.com/eYpAUkSz [09:36] ivoks: Great, thanks. [09:36] Oh, that just had pvdisplay :( [09:37] netritious: Can't help you. You've spent all the space on md1 on an lv called "backups". [09:37] I am trying to look in my bash_history to find all the commands I've ran and in what order [09:38] if you don't need backup, delete that LV :) [09:38] That's not really important, is it? [09:38] no, backups has nothing in/on it...was what I intended to remove from the volume group 'raid' [09:38] netritious: you can't remove disk which is filled with data, without losing that data [09:39] There is nothing in 'backups' [09:39] netritious: then delete that LV [09:39] netritious: Oh, hang on. [09:39] netritious: You just want to remove that lv? Nothing else? [09:39] netritious: You don't actually want to remove md1 from the vg? [09:39] yes, just that LV, not 'storage' [09:40] wait, let me be more specific.. [09:40] Yes, please :) [09:40] but that's not what you said before :) [09:40] Hello everyone [09:41] ivoks: Apologies for the confusion..still a n00b with lvm... [09:41] Guys we are going to deploy around 100 Ubuntu Server in virtualization - which ubuntu server would you go for? 9.10 or 8.04.4 LTS? [09:41] it's easy partition -> PV -> VG -> LV -> FS [09:41] i need to understand how stable is 9.10 vs 8.04.4 [09:42] netritious: It's much easier if you explain what you want to achieve rather than what you think you need to do to achieve it. [09:42] Wilco_: Why? [09:42] Wilco_: real question is what do you need server for [09:42] servers [09:42] coffe [09:42] Wilco_: There's a new LTS coming out on Thursday. You may want to wait for that. [09:42] we are building some kind of service which mostly run tomcats [09:42] Please accept my condolences. [09:43] so im going to have like 100 VMs running tomcats [09:43] the question which ubuntu to use [09:43] 10.04 :) [09:43] i need something super stable [09:43] oh, 8.04 then [09:43] soren: k, I want to remove /dev/md1 from logical group 'raid' and create another volume group just for /dev/md1 [09:43] :) [09:43] but it has old tomcat [09:43] netritious: Why? [09:43] we are going to use the tomcat from tomcat web site [09:43] 10.04 should be stable [09:43] and the java vm from sun [09:44] then use 8.04 [09:44] ok [09:44] ivoks: Hm? why? [09:45] soren: I don't want to span the logical volume across /dev/md0 and /dev/md1 [09:45] soren: cause it's 'super stable' :) [09:45] i noticed that 9.10 when installed installs EXT 4 [09:45] is ext 4 stable enough ? [09:45] you can change that to any FS you want [09:45] these will be production servers so i really need something that wont dye on me [09:45] dye=die [09:45] yeah [09:46] i was just suprised when i saw it default to ext4 [09:46] netritious: Why? [09:46] the issues i have with 9.10 on servers is upstart [09:46] soren: Why not? [09:46] explain ? [09:46] i use VMware to run the ubuntus and they load up pretty fast [09:46] soren: is that a bad idea? [09:46] even vmware-tools works fine [09:46] netritious: Maybe. Hence the "why"? [09:47] soren: would it be wise to use the latest LTS [09:47] coming on thu ? [09:47] Wilco_: Usually, yes. [09:47] hmm [09:47] dilemmas dilemmas [09:47] Wilco_: You can install it now and start testing. [09:47] Wilco_: No need to wait. [09:47] soren: It just makes sense to me to only have to deal with one volume group per raid device. [09:48] soren - our machines are very simple. a tomcat, java jdk, apache, some php and thats it [09:48] netritious: Ok. 
Well, if that's wha tyou want, that's fine. I just don't see the point. [09:48] Wilco_: on 9.10, nfs is started before network, for example [09:48] ivoks: thats lame - dont you get a timeout here? [09:48] it's not the problem for services [09:48] 9.10 had a lot of problems like that. They should all have been addressed in 10.04. [09:49] problem is when your /home is NFS :) [09:49] hehe [09:49] and in 8.04 its ok ? [09:49] then boot fails [09:49] 8.04 and 10.04 are ok [09:49] great. [09:49] ok - another issue that im puzzled with is MYSQL Enterprise 5.1 - (not the one that comes off the ubuntu) - do yo uknow if its runs in 64bit on ubuntu 8.04? [09:49] i would suggest 10.04 then [09:50] just make sure your app works with php 5.3 [09:50] they have a weird matrix [09:50] of supported OS [09:50] but they are vague on ubuntu [09:50] don't know [09:50] i never understood people that but mysql... :/ [09:50] buy [09:50] soren: To be completely honest I probably don't need LVM at all...I'm really not doing anything with it unless I span the two RADI1 devices, correct? [09:50] netritious: The awesomeness of lvm is not just that you can dynamically resize LV's, but also that you can have a VG that spans multiple LV's. Unless you're expecting to move one of the raid sets to another machine, I just don't see the point in splitting them into two VG's. [09:51] have one VG or don't have VG at all [09:51] (in this case) [09:51] soren: I'm trying to account for that exactly...in the event this old hardware gives out on me. [09:51] ivoks: we need to for support purposes [09:51] netritious: Then you would take one of the raid sets and put in one box and the other raid set in another? [09:51] the drives are new, but the motherboard/ram/cpus are 8+ years old [09:52] Wilco_: still... :) [09:52] soren: can't until I get more hardware [09:52] netritious: I'm asking.. [09:53] netritious: If your server exploded, would you move one of the raid sets to one machine and the other raid set to another? [09:53] netritious: Or would you move them both to the same box? [09:53] soren: very possibly [09:53] the first [09:53] netritious: Ok. Then just split them. You're just losing a lot of flexibility that way, but ok. [09:53] soren: I just want the option [09:55] soren: splitting them makes it less flexible in the event I have to move to two different machines? Just trying to make sure I understand. [09:56] soren: or are you saying I have less flexibility with LVM in general by not spanning across the raid devices? [09:56] netritious: No, splitting them means that you now have to worry about where your data is kept. [09:56] netritious: One of the great things about lvm is that you just have a great big blob of storage and don't need to worry about where the data is. [09:57] You just say "hey, I need an LV that's 400 [09:57] GB for my databases" and it gives it to you. [09:57] It may span multiple PV's or not. You don't need to know. [09:57] You just know that you get a blob of storage you can use. [09:57] If you want to extend it later on, you just do. [09:58] ...and that may expand it into another pv. You don't need to know. [09:58] If you split them, you always need to consider whether you're good on space on this VG or if you'd rather put it on the other VG or whatnot. 
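To illustrate soren's point with a hedged example (the LV name and sizes are made up, 'raid' is the VG name from this discussion, and resize2fs assumes an ext3/ext4 filesystem):

sudo lvcreate -L 400G -n databases raid       # "give me 400 GB", wherever the extents happen to live
sudo lvextend -L +100G /dev/raid/databases    # grow it later; the new extents may come from another PV
sudo resize2fs /dev/raid/databases            # grow the filesystem to fill the resized LV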
[09:59] soren: that makes sense [10:00] soren: that was kind of my point about using LVM in the first place...It's probably not needed with what I think I want out of it, but on the other hand if I need to move each volume group to two different servers and then span across more physical or RAID devices it would be optimal to split it now..I'm guessing at that of course. [10:01] but 'sotrage' and 'backup' have distinct purposes, but atm 'storage' is acting as both, which is why I wanted to add more drives in the first place [10:06] guys any of you use landscape to manage patch management on multiple ubuntu servers [10:06] we are considering getting it so we have a baseline of patches across all servers [10:07] Wilco_: we have it on test, considering it [10:07] it seems to have its limitations, but looks good nevertheless [10:07] i only need it to patch the servers :) [10:07] have you tried it ? [10:08] We've tried it [10:08] Wilco_: for just patching servers, landscape might be overkill :) [10:08] Couldn't really find an advantage over our existing Nagios, Munin, and 'apt-get upgrade' strategy. [10:09] Jeeves_: apt-get upgrade is an issue [10:09] Wilco_: How? [10:09] there's stuff like cron-apt [10:09] because in different times you may not have the same patch level across servers [10:09] i must have all my servers using the same updates [10:09] just update them all at the same time. [10:09] unattend-upgrade configured to update everything + a nagios check to make sure there isn't any package "blocked" by dpkg is usually working perfectly fine [10:09] Wilco_: Than you should always apt-get upgrade at the same time [10:10] Wilco_: landscape is convenient for that, yes, especially with a large number of servers [10:10] but on production critical servers you sometimes cant [10:10] i gonna have like 200 servers in a year [10:10] i need ORDER :) [10:10] * Jeeves_ gives Wilco_ cfengine and autoapt.pl [10:10] Jeeves_: what are these? [10:11] scriptz? [10:11] Wilco_: landscape costs $150 per server (though less for non-commercial and less with many servers). with 200 servers, I'd say it might be worth it [10:11] yeah they giviing me superb price [10:11] * stgraber is managing something between 600 and 1k Ubuntu servers (mostly Ubuntu containers with vserver/openvz + some vmware/kvm VMs + some physical hosts) [10:11] Wilco_: cfengine is a scriptinglanguage which allows you to configure servers out of svn [10:11] price is not the issue [10:11] I'd just use puppet, or cluster-ssh. [10:11] autoapt.pl install's software when you want it, and configures it unattended [10:11] Wilco_: then go for it - less hassle than using cfengine and the likes [10:11] ok [10:11] ic [10:11] thats great support here guys :) [10:12] stgraber: how do you patch your 1000 servers? [10:12] does anyone here have some script that hardens an ubuntu server? [10:12] or at least checks that its hardened? [10:12] Nope [10:12] 'harden'? 
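A rough sketch of the unattended-upgrade approach mentioned above, using the stock Ubuntu file locations; the origin list in 50unattended-upgrades decides what actually gets applied:

sudo apt-get install unattended-upgrades
# /etc/apt/apt.conf.d/20auto-upgrades -- run the daily apt update and the unattended upgrade
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
# /etc/apt/apt.conf.d/50unattended-upgrades then controls which origins (e.g. security only) get upgraded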
[10:12] security [10:12] scans the os [10:13] turns of what ever requires turns off [10:13] Wilco_: bcfg2 (similar to puppet) which make sure package lists + files match what we have in our repo every hour + unattended-upgrade at fixed time every night for the upgrades + munin/nagios to make sure everything is consistent (including package version) [10:13] it should be pretty decent out of the box [10:13] Wilco_: It doesn't start anything that you didn't install :) [10:13] Wilco_: so long as you don't install unneeded stuff, it should be quite hardened by default [10:13] Wilco_: and you buy support, pih... :) [10:13] Wilco_: there's tons of ways to do that. [10:13] i just install openSSH server [10:13] ufw enable might help too [10:14] Wilco_: and fail2ban [10:14] ic [10:14] Wilco_: propper iptables setup, rkhunter and fail2ban and patching is a good start. [10:14] ok [10:14] and I see some already said that :P [10:14] Did rkhunter stop whining about ssh allready? [10:14] i wont be using iptables - gonna use vShield Zones [10:14] Jeeves_: no :) [10:15] i'll check rkhunter and fail2ban [10:15] ewook: Than it still sucks :) [10:15] Jeeves_: or, depends what the wining is about :P [10:15] stgraber: very intresting [10:15] Jeeves_: you meen the blacklisted sums? [10:15] mean [10:15] ewook: No. The root-login stuff. [10:15] stgraber: what happens when a new kernel releases - do you update it too ? dont you restart the server in this scenario ? [10:16] Jeeves_: I've never seen that one on a ubuntu.. or so I think. [10:16] stgraber: you said "our repo" - you have your own repository ? and if so why ? [10:16] Wilco_: yes you do. but that is planned maintenence to take down a server :) [10:16] Jeeves_: What root login stuff? [10:16] lil_cain: 'You are allowing a root login! That is very dangerous!' [10:16] Wilco_: If you're managing >3/4 machines that run anything non-standard, it's worth having your own repo. [10:17] Jeeves_: there's no root enabled in my configs [10:17] ok [10:17] i dont. [10:17] Jeeves_: so you might wanna check your setup :P [10:17] i just use ubuntu main [10:17] Wilco_: as I said, most of these are containers, containers use the host kernel and don't have their own. For physical and VMs, we upgrade the kernel and we have hooks in /etc/kernel.d/ to send e-mail in cases where there's a custom module that need manual building + maintenance window. For others, it really depends on the support contract we have with the customer regarding security, in some cases we reboot, in some cases we wait for anot [10:17] very simple [10:17] Jeeves_: Ah. I always thought that was a stupid default on the part of Ubuntu. [10:17] stgraber: nice [10:18] ewook: We allow root-logins by default. Because it makes stuff much easier for us. [10:18] Wilco_: having your own repo is the same thinging as having a WSUS for windows-machines. [10:18] Wilco_: we use the official archive + a ppa + an internal repository for non-distributable packages. "repo" in my previous sentence was refering to a configuration repository describing all "profiles" (package lists, configuration files lists, ...) that's in our case a bzr branch. [10:18] am i right that the only time you need to reboot the server is when the kernel/drivers update? [10:18] Root still doesn't have a password, only pubkeys [10:18] Jeeves_: well then, it's not rkhunter's fault :P [10:18] Wilco_: when libc6 does as well. [10:18] ok [10:18] ewook: It is, when I tell it not to whine about it and it still does :) [10:19] Jeeves_: oh. gotcha. 
:P [10:20] Jeeves_: I'd be happy if there's a slipstreamed and 'secured' way that it rehashes itself after an apt-upgrade [10:20] guys maybe its not relevant - a friend of mine asked me to help after he complained that his server was hacked. i was able to confirm that his server was hacked (root) - i could see the entire history the hacker did i.e. in bash_history [10:20] however we noticed that the server loads up with promiscuous mode on [10:20] at boot time [10:20] any ideas how to turn that off :) ? [10:20] Disconnect the server, and use the console? [10:21] Wilco_: wipe & reinstall the whole thing [10:21] yeah thats what i told him [10:21] but it was interesting [10:21] I love the word promiscuous, it's such a durty word :P [10:21] to understand [10:21] the moment redhat starts it says entering promiscuous mode on eth0 [10:22] i even tried forcing the network configuration to turn off promiscuous - no worky [10:22] Wilco_: So how many passwords does the hacker have by now? [10:22] millions :) [10:22] And how many other boxes are rooted by now? [10:22] i guess all of them [10:22] i told him to wipe it out [10:22] he is really fucked [10:23] Wilco_: It's a good thing that he's not a customer of ours. His box would be offline by now [10:23] heehe [10:23] or is he!? [10:23] Is he in .nl?! :) [10:23] that's what happens when people put a server on the internet without a firewall [10:23] Wilco_: Bullshit [10:23] his machine was open for port 22 [10:23] they scanned it [10:23] My server has been online without a firewall for years [10:24] with SSH open? [10:24] yes [10:24] they scanned [10:24] brute force attacked it [10:24] found root password [10:24] that's when inexperienced people put a server on the internet without firewall ;) [10:24] boom [10:24] server hacked [10:24] Wilco_: use fail2ban [10:24] JanC: Indeed :) [10:24] and secure passwords [10:24] Wilco_: Use sane passwords [10:24] yeah [10:24] hehehe [10:24] sure [10:24] 'koekje123' [10:24] 'toor' [10:24] you know how many admins use [10:24] 123456! [10:24] :) [10:24] 'thisisapassword' [10:24] Jeeves_: I'd call you brave, if you didn't patch at least the services running on it.. [10:24] os q2w3e4 [10:24] :) [10:25] 'secret' [10:25] I don't use passwords to login [10:25] ewook: Obviously, I patch my servers [10:25] ok [10:25] Wilco_: and don't allow root login over ssh [10:25] don't allow them [10:25] hehehe [10:25] RoyK: agree [10:25] And you shouldn't call me brave, but stupid :) [10:25] when it comes to security im a real bitch [10:25] Wilco_: but do use fail2ban [10:25] i'll check fail2ban [10:25] Jeeves_: I didn't wanna say it out loud myself ;) [10:25] ok does anyone here know of a wrapper for tomcat? [10:26] a good one that loads it up as a service properly [10:26] ewook: :) [10:26] Wilco_: You mean 'apt-get install tomcat' ? [10:26] no [10:26] Ow, that does start tomcat as a service :) [10:26] i use tomcat off apache tomcat org [10:26] i use their latest and greatest [10:26] i need a wrapper for it to load up as a service [10:27] s/and greatest// [10:27] i.e. service tomcat stop/start [10:27] I think it's included [10:27] nah [10:27] Otherwise, just use the wrapper out of the ubuntu package [10:27] tomcat has a start.sh script [10:27] but its lame [10:27] hmm [10:27] thats an idea!
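The ssh advice in the exchange above comes down to a few lines; a hedged sketch (the settings go in the stock /etc/ssh/sshd_config, and fail2ban's default jail already watches sshd authentication failures):

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no     # key-based logins only
# apply the change and add fail2ban on top
sudo service ssh restart
sudo apt-get install fail2ban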
[10:28] Wilco_: 'super stable' is the opposite of 'latest and greatest' [10:28] hi [10:28] I'm thinking of making a custom version of ubuntu server as a NAS, I wonder if anyone has done some work like this, I don't wanna duplicate work [10:28] ivoks: well tomcat wise i dont care [10:29] usually tomcat is highly stable [10:29] super stable from my point = kernel + os does not panic on you [10:29] ok [10:29] L-----D: openfiler? [10:30] Jeeves_, I know openfiler and freenas [10:31] Jeeves_, I'm trying something like nas+media server [10:32] L-----D: Doesn't mythtv do that? [10:33] currently I have samba and mediatomb running well on 9.10 [10:34] i'm thinking of integrating them into a disk, some sort of custom disk [10:35] and with a web-ui [10:38] guys thanks a lot for your help and assistance [10:41] np! [11:01] soren, ivoks: I was able to accomplish what I was after. Part of the issue was /dev/raid/backups spanned from /dev/md0 on to /dev/md1, and that /dev/raid/backups existed at all. I was able to remove the logical volume 'backups' and remove /dev/md1 from the 'raid' volume group. Thanks again for your patience, help and advice. [11:01] Of course part of the problem is I'm still a n00b. :) [11:14] !webmin [11:14] webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead. [11:14] !ebox [11:14] ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox [11:16] where can i get uec-publish-tarball from ? === royP is now known as royp === royp is now known as roypa [12:27] are you guys serious ? [12:27] webmin not supported? [12:29] <_ruben> yup [12:31] <_ruben> any recommendations for decent disk benchmarking tools, tried iometer and bonnie++ so far [12:32] there's a blueprint for maverick; webmin review [12:35] hi [12:35] is anyone using Zend Optimizer 3.3.9 under x86_64 servers? Apache2 seems to crash with this extension [12:39] guys [12:39] i dont get it - i just installed webmin latest 1.5.20 - works like a charm [12:39] they dont say any word that is not supported [12:39] i use webmin mainly for its monitoring options [12:39] its very light weight [12:42] guys [12:42] well the ubuntu support forum says webmin wins hands down over ebox [12:43] :( [12:45] 'they' are who? [12:46] 13:39 < Wilco_> they dont say any word that is not supported [12:46] who are 'they'? [12:47] problem with webmin is that it deletes any changes the sysadmin does [12:47] so you can manage stuff only with webmin, you can not make personal changes through the CLI [12:47] webmin is for those that think they are linux sysadmins [12:48] at least, that was webmin's behaviour last time i checked it === swift_ is now known as swift [12:53] hmm [12:53] ok [12:53] well it worked so far for me in my time on redhat/centos [12:53] guys - how can i force someone to change password after they login [12:54] i want to create a user and once the user logs in it is required to change his password [13:04] I have a question ;) if there's no root password, and somehow it happens that ubuntu won't boot, because say it can't mount a partition, and it craps out, will it ask me for the root pwd, like say centos does? it's not that common but it could happen.
I never had that on ubuntu [13:29] New bug: #564561 in dbconfig-common (main) "package phpmyadmin 4:3.3.1-1 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 2 zurück" [Undecided,New] https://launchpad.net/bugs/564561 [13:34] cemc it will probably prompt you for a password, you hit enter, and you're done [13:35] ^^ speculation [13:35] Italian_Plumber1: Error: "^" is not a valid command. [13:35] that was speculation [13:46] mdeslaur: ping [13:47] zul: what's up? [13:47] mdeslaur: do you know anything about opie? [13:48] mdeslaur: bug 569292 [13:48] Launchpad bug 569292 in opie "opiepasswd does not always generate valid seeds" [Undecided,New] https://launchpad.net/bugs/569292 [13:50] zul: The seed he was given is "co5482" and the seed he types is "co54", so of course it gives an error message [13:50] let me answer in the bug [13:51] mdeslaur: k thanks [13:51] oh hold on [13:52] hi, I have installed and configuring the UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall. [13:52] I can run an image and login it via ssh, but only from the pc that I've the cloud controller. [13:52] The problem is that I can't ping or login ssh the image from a pc of the LAN. [13:52] Is there any kind of firewall or something that I have to configure to allow connections? [13:52] zul: hmm...for some reason his second opiepasswd didn't generate a correct response === swift_ is now known as swift [13:56] oh, jolly good [13:56] bug 374589 [13:56] Launchpad bug 374589 in samba "Samba fails to start at boot when not connected to network (and specific interfaces are configured)" [Wishlist,Triaged] https://launchpad.net/bugs/374589 [13:56] eh? [13:57] samba gets started before vmware, obviously [13:57] oh jaunty :) [13:57] pppthpp ;) [13:57] jaunty? [13:58] ivoks: oh its karmic...yeah it needs to be converted to upstart [13:58] which was done in lucid [13:58] :) [14:00] ivoks: so that bug should probably marked as fixed released [14:00] hm, yes... i guess [14:01] but if we always fix in newer version, can we really claim that the version-1 is supported? [14:03] well not really, but thats what SRU are for [14:03] hi, I have installed and configuring the UEC following this guide https://help.ubuntu.com/community/UEC/CDInstall. I can run an image and login it via ssh, but only from the pc that I've the cloud controller. [14:03] The problem is that I can't ping or login ssh the image from a pc of the LAN. Is there any kind of firewall or something that I have to configure to allow connections? [14:06] ivoks: we are going to have a session about this at UDS [14:06] i think [14:06] great [14:08] FFF666: complete guess, but do you have routing enabled on the clc? [14:09] rounting enabled? I dont know [14:09] where can I configre that? [14:34] ttx: can you have a look at #569248 [14:34] bug 569248 [14:34] Launchpad bug 569248 in eucalyptus "deregistered node is still reported on the CLC" [Undecided,New] https://launchpad.net/bugs/569248 [14:37] zul: I'm not sure I get it [14:38] ttx: neither do I...ill wait for hggdh and get him to add more info [14:40] zul: I triaged it [14:42] hell of a day [14:43] ivoks: o/ [14:43] mathiaz: hi [14:43] ivoks, mathiaz: o/ [14:43] \o/ [14:44] ttx: anything release critical? 
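For context on the node (de)registration being discussed in bug 569248, these are roughly the commands involved (host names here are placeholders; registration is run on the cluster controller, and the CLC's view can be checked afterwards with admin credentials sourced):

sudo euca_conf --register-nodes "node01.example.com node02.example.com"
sudo euca_conf --deregister-nodes "node01.example.com"
euca-describe-availability-zones verbose      # what the cloud controller currently reports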
[14:44] hey mathiaz [14:44] ttx: dude, this raid issue is a pretty serious regression [14:45] i don't understand how it works in VM, but not on real hardware [14:49] and I can't reproduce it [14:52] ttx, zul: I expected (perhaps wrongly) that once I 'euca_conf --deregister-nodes ' it would not be shown anymore [14:52] hggdh: ah... [14:52] hggdh: i deferred to ttx since Im not that informed on eucalyptus [14:53] kirkland: move to -devel and explain ? [14:54] hggdh: right, I just wasn't sure I got it right. can't reproduce from where I am this week [14:55] kirkland: did you reinstall (frsh install) or upgrade? [14:56] * ttx reads [14:57] ttx: I just got back to it, but I have a feeling this may been caused by a mis-configuration on topo4, on the rig [14:57] looking at it right now (and will discuss with mathiaz) [14:57] kirkland: lp:~hggdh2/+junk/uec-qa [14:57] hggdh: ideally we would reproduce on a sensible configuration and be able to forward the bug to eucaguys [14:58] ttx: yes, I agree. any of the topologies in the righ would be good enough, except for this mis-conf right now [14:59] hggdh: but that's definitely possible... given that we don't deregister that often. [15:02] kirkland: looks like most people can't reproduce your issue though [15:04] kirkland: and AFAICR the iso tests for beta2 and RC were OK on RAID, if you except jdstrand's issue with raid juggling [15:04] (unfortunately we can't access the old milestones test results anymore) [15:07] ttx: okay [15:08] ttx: cemc did reproduce it [15:08] ttx: looks like it's dependent on your drive geometry [15:08] cemc: reinstall [15:10] On 9.10 doing the VM install, What is "Basic Ubuntu Server" [15:10] kirkland: oh, I missed that comment, then [15:11] cemc: can't find the comment where you reproduced Dustin's issue ? [15:12] ttx: cemc pinged me privately here [15:12] ah [15:12] ttx: i've asked cemc to add a note to the bug/mailing list === dendro-afk is now known as dendrobates [15:19] kirkland: bring the discussion to #ubuntu-release when you have sufficient view on the issue... Make everyone comment on the bug otherwise it will be impossible to follow [15:19] ttx: agreed, thanks. [15:20] morning folks [15:20] kirkland: i DIDN'T reproduce it [15:21] cemc: oh [15:21] cemc: okay, sorry [15:21] cemc: hah, so at that point two commenters on the bug that can't reproduce the issue, and only Dustin having it [15:21] and KVM not reproducing it [15:21] ttx: right, weird [15:21] ttx: seems so [15:21] kirkland: so I won't stop the presses right now, with your permission :) [15:22] ttx: yeah, please don't [15:22] jiboumans: good afternoon from cloudy England [15:22] jiboumans: howdy [15:22] ashclouds? [15:22] is Xen 4 in 10.04? [15:22] ttx: frenchman in england? [15:22] RoyK: no [15:22] ttx: good morning from a sunny and fantastic SFO [15:23] hey jiboumans [15:23] i expect snow today :) [15:23] kirkland: ttx: should I try to reproduce the bug on some hardware? [15:23] bummer [15:23] kirkland: ttx I may be able to get some hardware from the certificate lab [15:23] from the Xen 4 release notes "Better performance and scalability: 128 vcpus per guest, 1 TB of RAM per host, 128 physical CPUs per host (as a default, can be compile-time increased to lots more)." [15:23] kirkland: ttx: or the office [15:23] mathiaz: yeah, please! 
[15:23] mathiaz: that would be great [15:23] mathiaz: at least to build some confidence [15:23] not that I need 128 cores and 1TB RAM for a guest, but looks cool [15:23] mathiaz: ttx: i was just going to ask if there was a qa/test resource who could try this in the lab [15:23] mathiaz: you should have a candidate up soon [15:24] take care guys [15:24] ivoks: at Canonical offices for release sprint [15:24] see you tomorrow [15:24] dendrobates: hey, wanna do a UDS plenary on cassandra or something? [15:24] mathiaz: ~1hour [15:25] ttx: i'm going to try and test in kvm with virtual disks identical in size to my real ones [15:26] kirkland: you know you can use sdisk to extract the partition details, and push it back into sdisk to create them? [15:26] Daviey: hmm, no i've never used sdisk [15:32] Daviey, kirkland: you mean sfdisk [15:32] ttx: never used that either [15:33] * ttx remembers using that in his old Gentoo days [15:35] ttx: yes, sfdisk - sorry [15:36] kirkland: you've reinstalled several times and every time the same result? what did you answer when asked 'boot with degraded raid' ? [15:36] cemc: "yes, boot degraded" [15:36] cemc: i reinstalled many times, same result [15:36] cemc: i zero'd the disk between some tries, even [15:37] mhm [15:37] kirkland: put a qcow on loopback, sfdisk -d /dev/sda > layout.out -> sfdisk /dev/loop0 < layout.out [15:37] kirkland: no swap, no separate boot partition? nothing 'fancy' ? [15:37] actually, you could pipe it straight in [15:40] cemc: correct, correct, correct; 500GB disks [15:40] cemc: how big were your disks? [15:40] 80GB ... [15:40] and they were sync'ing after reboot, which I find a bit odd [15:40] cemc: ttx: I just started an install in kvm with 500GB backing disks; so far it looks like mine on real hardware (with this extra garbage raid partition) [15:41] cemc: that's standard [15:41] Daviey: thanks [15:41] kirkland: do you need 1TB for that? :) [15:42] cemc: not with qcow2 sparse files :-) [15:42] cemc: i will, if the raid sync's fully [15:42] cemc: but i'll kill the vm before that happens (i hope) [15:42] right [15:42] :) [15:42] cemc: for fun, try: [15:43] kirkland: I thought I'll try it too in a KVM with 2x500GB like you [15:43] cemc: sure [15:43] cemc: actually, try it with 250GB [15:44] cemc: let's try to get the size down and see if ther'es a breakpoint somewhere [15:44] can you paste me the kvm command what you use? [15:44] cemc: sure [15:45] kvm-img create -f qcow2 500G a.img [15:45] kvm-img create -f qcow2 500G b.img [15:45] kvm -m 512 -hda a.img -hdb b.img -cdrom ../iso/lucid-server-amd64.iso [15:48] installing [15:51] mathiaz: when you can, we need to chat about the rig [15:51] hggdh: sure - I' [15:51] hggdh: sure - I'll ping you in a couple of minutes [15:52] kirkland: is it just me, or is the formatting reaaaly slow? [16:06] hi all. according to http://hub.opensolaris.org/bin/view/Community+Group+xen/guest-compatibility, ubuntu has no paravirt kernel support for xen - is this still the case for 10.04? [16:08] cemc: yes, formatting takes a long time [16:09] kirkland: it's ridiculous ;) it's going for like 20 mins [16:09] my vmware server serial numbers don't work anymore :( [16:09] cemc: yeah; well, you could speed it up some by using virtio for disks [16:10] kirkland: that's what I'm doing right now [16:10] cemc: you used virtio? [16:11] kirkland: trying... 
I stopped the install, recreated the disks, and ran kvm with virtio [16:11] cemc: good call [16:11] cemc: me too [16:11] kirkland: but for some reason the vnc is now reeealy slow ;) [16:11] Hi all! Khow knows awk? [16:12] #awk probably knows it better than anyone here :-P [16:13] kirkland: painfully slow in fact, it just freezes at times. yours isn't? [16:13] | awk '{print $3}' | sed /^.*$/who/ [16:13] cemc: hmm, it's a bit slow, but my disk io is *pegged* [16:14] kirkland: It's formatting again [16:14] cemc: okay, on the previous screen, did you see any extra "unusable" partitions? [16:15] kirkland: yes, I think I did. I'm not 100% sure but I think I had those on my 80GB disks too [16:16] cemc: http://people.canonical.com/~kirkland/Screenshot-QEMU.png [16:17] kirkland: yep, I saw that too [16:17] i asks my question in #awk :) but do not get answer [16:17] kirkland: it's still slow :/ [16:17] cemc: well, it's writing the equivalent of 2-disks data to your 1 real disk [16:18] kirkland: a.img is 460MB, b.img is 1.4GB, slowly increasing [16:19] what types of discs you are using? [16:19] kirkland: iostat is showing like 3-4MB/s on the host, that's not impressive ;) [16:20] cbm: it's one 640MB WD Black SATA2 [16:20] usb1.1 kicking in? :) [16:20] 640GB even [16:21] not raid? [16:21] cbm: no. we're trying to install Lucid on RAID but in a KVM === bluethundr is now known as Guest42623 [16:24] cemc: okay, give it a bit longer [16:24] kirkland: it's sync'ing too, with like 16KB/s :) [16:26] what host system in kvm you are using? [16:27] guest system (Lucid) the one? [16:36] cemc: i got past the formatting, installing now [16:46] kirkland: still formatting, still at 33% [16:55] kirkland: I'm installing two new hosts via virt-manager, but performance is abysmal. it takes forever to install base system. Any ideas how to accelerate it again? [16:55] kvm version is 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9 [16:55] TeTeT: ps -ef | grep kvm | pastebinit [16:55] TeTeT: lsmod | grep kvm [16:56] kirkland: http://pastebin.com/eMe8SDg5 [16:57] $ lsmod | grep kvm [16:57] kvm_intel 39352 6 [16:57] kvm 245405 1 kvm_intel [16:58] kirkland: I've started working on integrating my rackspace plugin into byobu [16:58] TeTeT: anything in dmesg look ominous? [16:58] kirkland: let me know if you have any suggestions [16:58] jbernard: cool, thanks [16:58] jbernard: will do, cheers [16:59] TeTeT: your pastebin looks okay [16:59] TeTeT: Are your disks busy? === dendrobates is now known as dendro-afk [17:00] I've tried to install lucid before in kvm, and indeed it was _very_ slow to the point where I aborted and installed karmic instead then upgrade === |eagles05138785| is now known as eagles0513875 [17:00] Jeeves_: not very, here's the iostat [17:00] I've got several Lucid vm's which perform fine under kvm. [17:00] Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn [17:00] sda 11.64 114.46 463.33 1229041 4974928 [17:00] Jeeves_: the ones I have running are fine, but just installing fresh ones [17:01] TeTeT: iostat -kx ? [17:01] after installing it's all good, but the install is painfully slow [17:01] TeTeT: what's "slow" about it? screen refreshes? responsiveness on the keyboard? calculations? [17:01] That also shows percentages [17:01] kirkland: btw, it's still formatting at 33%... === dendro-afk is now known as dendrobates [17:01] I've seen that before. But also on normal hardware. 
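Daviey's sfdisk trick above, spelled out as a hedged sketch (device names are examples; the dump carries the partition boundaries only, not the data on them):

sudo sfdisk -d /dev/sda > layout.out                 # dump the partition table as text
sudo sfdisk /dev/loop0 < layout.out                  # replay it onto another device, e.g. a loop-attached image
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/loop0     # or pipe it straight through, as suggested above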
[17:02] kirkland: it's at installing base system for 20 minutes - I've been insalling server in 15 minutes before [17:02] cemc: hmm [17:02] Jeeves_: http://pastebin.com/NtbDEM3m [17:02] kirkland: it just finished this second, lol [17:03] TeTeT: i agree that doesn't sound right; I don't have any more ideas, though, without more details [17:03] what I noticed in general, that it's not doing anything when installing, it just sitting there [17:04] TeTeT: what kind of backing disk? [17:04] kirkland: ok, it's a SSD from Toshiba, reasonably fast normally [17:04] TeTeT: i mean, what are you giving to the vm? [17:04] kirkland: 1 GB disk space, 512 MB RAM, 1 CPU [17:04] TeTeT: did you try installing Karmic guest? it's fast as lightning, right? compared to Lucid [17:05] cemc: have not tried karmic, can try it next. But I installed Lucid server and desktop before and it was fast [17:12] installing base system for like 10 minutes and it's at 38%. _slow_ [17:13] kirkland: did you manage to install it? [17:13] cemc: nearly done [17:14] kirkland, howdy!! [17:14] after this I will try to compare Lucid vs. Karmic server install on KVM [17:17] RoAkSoAx: howdy [17:17] kirkland, how's it going? [17:17] RoAkSoAx: crazy :-) [17:18] kirkland, haha tell me I just moved to a new apartment. Anyways... im free to start working fully on testdrive.. shall we set meeting times or something? [17:19] hggdh: re bug 569248 [17:19] Launchpad bug 569248 in eucalyptus "deregistered node is still reported on the CLC" [Low,Incomplete] https://launchpad.net/bugs/569248 [17:20] hggdh: are you using the latest version of the preseeds? [17:20] hggdh: lp:~mathiaz/+junk/uec-testing-preseeds/ [17:21] cemc: karmic does not seem to be faster for me, waiting at 30% for some time as well [17:24] cemc: my test worked fine, with 100GB disks [17:24] cemc: i'm very interested in your results with 250GB [17:24] kirkland: 55% right now, I'll let you know [17:24] RoAkSoAx: nice [17:24] RoAkSoAx: congrads on the new digs [17:25] RoAkSoAx: i think i left some feedback for you on your last merge proposal [17:25] RoAkSoAx: did you rework it accordingly? [17:26] kirkland, the codename one?? I already recoded everything to obtain the codename i just need to merge it to the modularization [17:27] kirkland, this one: https://code.launchpad.net/~andreserl/testdrive/codename/+merge/22763 [17:29] RoAkSoAx: yeah, that one ... did you rework per the feedback I gave in the review? [17:30] kirkland, yep, I did some improvements too [17:31] RoAkSoAx: can you re-propose for merging? [17:31] kirkland, sure. I'll merge it then [17:32] mathiaz: I had run a 'bzr pull' just before [17:32] mathiaz: but I am not sure *where* bzr was pointing to... will check [17:33] mathiaz: pointing to your local branch at tamarind... [17:33] RoAkSoAx: ideally, i'd like to get the modularization pieces merged by UDS [17:33] RoAkSoAx: let's try to get one UDS session on the Testdrive Frontend [17:34] RoAkSoAx: i'd like to see you "drive" that session ;-) [17:35] kirkland, I think I can make it on time. I'm gonna merge the codename on the current branch first, then merge it to the modularization. [17:35] kirkland, That's not a problem, I can lead the session. The blueprint has not been accepted yet right? [17:41] mathiaz: both are at the same revno on bzr, so yes, I was running the latest === dendrobates is now known as dendro-afk [17:58] kirkland: ping [17:59] I have a kvm I just update on a host running lucid. THe vm was updated to lucid, but fails at cant locate init. 
Any suggestions on recovery? [18:07] rgreening: can it mount its root fs? [18:10] RoyK: let me see what it says... I get dropped to initramfs prompt [18:11] mount: mounting /dev/disk/by-uuid/[the uuid] on /root failed: Invalid argument [18:12] then some additional mount fails for dev, sys, and proc. and Target filesystem doesn't have /sbin/init RoyK [18:12] rgreening: sounds like bug #569900 [18:12] Launchpad bug 569900 in mdadm "mount: mounting /dev/md0 on /root/ failed: Invalid argument" [High,New] https://launchpad.net/bugs/569900 [18:13] at least it's similar to that... [18:14] its not raided [18:14] ah [18:14] its a vm [18:14] I have other vms on same system, upgraded in the same way, and they all booted. [18:14] any recovery suggestions ? [18:14] rgreening: is anything special with this one? any special setup? [18:15] nope. used the vmbuilder to make it and then updated it [18:15] other than installing cacti [18:18] kirkland: it installed. it does not boot with virtio (FATAL: no bootable device), but if I do -hda, -hdb, it boots, and gives no error === dendro-afk is now known as dendrobates [18:25] kirkland: topo4 done, all results on lp:~hggdh2/%2Bjunk/uec-qa/ [18:28] rgreening: interesting; that is the exact failure i see in my raid bug [18:29] rgreening: wow, i hope i can reproduce this [18:29] kirkland: The host OS has hardware raid, but the vm has an ext3 system. shouldn't matter, correct? [18:29] Why can connections still be made to smtp with these iptable rules: http://pastebin.com/nMbnMEgU [18:29] kirkland: maybe it's not related to the raid... [18:30] cemc: what's your command line with virtio? [18:30] rgreening: what's the backing disk for the vm? [18:30] rgreening: qcow2 image? lvm? real device? [18:30] kirkland: kvm -m 512 -drive file=a.img,if=virtio -drive file=b.img,if=virtio -vnc 172.16.21.1:1 -cpu qemu64 [18:30] qcow2 [18:30] cemc: you're missing an option [18:30] hi, I want to know which network configuration has UEC by default [18:31] kirkland: which one? [18:31] cemc: index=0,boot=on [18:31] cemc: -drive file=%s,if=virtio,index=0,boot=on %s" [18:31] cemc: you'll need one drive to be index0, the other index=1 [18:32] kirkland: mhm, I guess you don't need that when booting from cdrom, but after that you do need it. now it booted ok with virtio too, thanks [18:32] FFF666: VNET_MODE="MANAGED-NOVLAN" [18:32] kirkland: so it was ok with 250gb disks too [18:32] cemc: right; it boots from cdrom [18:32] cemc: interesting; i want to investigate rgreening's problem; this sounds like the same core problem [18:33] rgreening: okay, you're in the initramfs? [18:33] kirkland: yeah, that's what I thought when I saw it... [18:33] kirkland: yep [18:33] rgreening: what's the exact size and partitioning of your vm's disk? [18:33] rgreening: ls -alF /dev/disk/by-uuid [18:35] kirkland: Ican't ping/ssh an image from a pc in the LAN, but Ican do it from the pc which I've the cloud controller [18:36] does the problem in the configuration? [18:36] sorry, is the problem in the configuration? [18:36] kirkland: here's the screenie of the output: http://imagebin.ca/view/lQIX5_j.html [18:37] And here's the files sizes on disk: -rw-r--r-- 1 rgreening rgreening 1797062656 2010-04-26 12:37 disk0.qcow2 [18:37] -rw-r--r-- 1 rgreening rgreening 710279168 2010-04-26 12:37 disk1.qcow2 [18:38] with partitions: root: 8000, swap 4000, /var 20000 [18:39] hggdh: i'm building a package that should fix your walrus Java heap OOM errors [18:39] hggdh: would you be able to test that today-ish? 
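Combining cemc's earlier command with the index/boot options kirkland points out, the working invocation presumably looks roughly like this (image names from the earlier kvm-img commands; boot=on is what lets the KVM of that era boot from a virtio disk, and the -cdrom line is only needed while installing):

kvm -m 512 \
    -drive file=a.img,if=virtio,index=0,boot=on \
    -drive file=b.img,if=virtio,index=1 \
    -cdrom lucid-server-amd64.iso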
[18:40] kirkland: Ican't ping/ssh an image from a pc in the LAN, but Ican do it from the pc which I've the cloud controller. is the problem in the configuration? [18:41] cjwatson: around? rgreening is hitting the same error I saw on Lucid w/ RAID on real hardware; he's not using RAID, and he's in a VM [18:41] cjwatson: i could use someone with initramfs skills to help debug this and determine if it's release-critical [18:41] kirkland: yes, no prob. I will have to go back to topo2, the one that seems more prone to this failure [18:41] cjwatson: i asked keybuk for help last night, but he said it wasn't his focus [18:42] hum. Lemme check on topo4 logs [18:42] ty kirkland and cjwatson [18:42] rgreening: thank you ... i think your setup shows this problem even more straight-forwardly than mine [18:42] rgreening: so 1797062656 is size on disk ... is this a sparse image? [18:43] ok. Strange that I upgraded 5 other VM's similarly, and they worked. So, 1/6 times it failed :) [18:43] kirkland: ya [18:43] hggdh: okay, it's uploaded to ubuntu-virt-ppa; awaiting build [18:43] kirkland: not using RAID => different bug, presumptively [18:43] cjwatson: well, unless RAID is not the actual issue at all [18:43] this one looks like it's that the device doesn't show up in time, maybe? [18:43] try rootdelay=120 or some such [18:43] kirkland: k. Just keep in mind it may take a few hours to reach the OOM point [18:43] not sure [18:44] kirkland: I would recommend not pre-emptively conflating bugs with similar symptoms [18:44] conflate them once you know they're the same, by all means [18:44] but until then, assume they're separate [18:44] anyway, finishing up here I'm afraid [18:44] cjwatson: okay, will avoid conflating [18:44] rgreening: if you can write out a recipe by which I can reproduce this in a fresh VM, I'll look at it tonight or tomorrow [18:44] cjwatson: who else on your team can I bother about this? [18:45] you can try Keybuk; everyone else will be finishing for the day I think [18:46] rgreening: how did you upgrade the VMs from Karmic exactly? [18:46] cjwatson: that's the thing, I have 1 out of 6 busted VM's. The others have similar setup and only this one failed update from karmic to lucid.. strange. I don't believe I could easily duplicate it. though I can try. [18:46] cemc: do-release-upgrade -d [18:46] rgreening: they are all server installs? [18:46] after installing update-manager-core [18:46] yes [18:46] rgreening: 32/64bit ? [18:46] there's stuff on the wiki about getting more debug info out of the initramfs [18:46] well, server installs made from vmbuilder [18:47] cjwatson: got a url handy? I am interested in recovery as well... as my vm is dead :) [18:47] https://wiki.ubuntu.com/DebuggingKernelBoot [18:47] https://wiki.ubuntu.com/DebuggingKernelBoot#Initramfs rather [18:48] ty cjwatson. I presume this works in a vm as well... [18:48] yes [18:48] ok. will try that [18:48] though getting to the grub menu may be tricky depending on your vm impl [18:49] might need a rescue image to turn off the hidden timeout stuff [18:49] using virt-manager [18:49] and qemu+kvm [18:50] Ican't ping/ssh an image from a pc in the LAN, but Ican do it from the pc which I've the cloud controller. is the problem in the network configuration? (the VNET_MODE="MANAGED-NOVLAN") [18:50] rgreening: oh, also, check dmesg [18:50] EINVAL is the generic "something went wrong" error code [18:52] rgreening: did you try to the rootdelay=120 kernel option? 
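For anyone wanting to try the rootdelay suggestion, a sketch for the grub2 shipped in lucid: append the option to the 'linux' line at the boot menu (press 'e') for a one-off test, or make it persistent with something like:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=120"
sudo update-grub                              # regenerate /boot/grub/grub.cfg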
[18:52] actually that probably won't make any difference [18:52] the device is there, it's just unmountable [18:52] EINVAL source had an invalid superblock. [... or other stuff that doesn't apply here] [18:52] cjwatson: I'm launching the recover kern image now to see the debug/printk output.. [18:53] you could also try mounting it from that. to me, if this doesn't have any lvm or raid or other funny stuff involved, EINVAL seems to imply that the filesystem is toast [18:53] but that's just an initial hypothesis [18:53] ouch [18:55] cjwatson: anyway to attempt recovery of the fs? working in a vm is a bit trickier than Im used to. in a real server, I'd use a recovery media. in a vm, not sure how to proceed [18:55] Hello everyone. I can't ping/ssh an image from a pc in the LAN, but Ican do it from the pc which I've the cloud controller. is the problem in the network configuration? (the VNET_MODE="MANAGED-NOVLAN") [19:07] New bug: #463703 in clamav (main) "AV Installation failed on 9.1 inside VMware" [Undecided,Incomplete] https://launchpad.net/bugs/463703 [19:07] Hello everyone. I can't ping/ssh an image from a pc in the LAN, but Ican do it from the pc which I've the cloud controller. is the problem in the network configuration? (the VNET_MODE="MANAGED-NOVLAN") [19:08] FFF666: are they on the same subnet? [19:08] yes [19:08] 192.168.2.x [19:08] FFF666: what's ifconfig's output on your CC [19:10] I dont have the pc turn on [19:10] but has the 192.168.2.100 ip [19:11] I can ping it from the CC [19:11] FFF666: i'm sorry, i don't know; check #eucalyptus [19:12] ahh, thanks === luis__lopez is now known as luis_lopez [19:20] hello to all. i have some strange problem with my ubuntu web server. for some reason everything runs heavy slow. top show however a cpu usage of maximal 12 %. what could be the problem. all php scripts run with a cpu usage of 2% only. [19:30] running MYoungs xen4+xendom kernel, do I need to compile xorg-x11-server-Xorg to get X11 working? [19:30] wrong ch [19:56] kirkland: test the OOM with RC + your packages, or daily+PPA? [20:02] hggdh: daily + PPA, please [20:04] kirkland: ack [20:05] HI all!! [20:18] kirkland: looks like my issue may be both a corrupted fs and issue regenerating the initramfs. I've mounted the qcow2 file and performing fsck repairs to it now. I'll let you know if this solves or if the initramfs is still an issue. [21:00] rgreening: okay, thanks [21:00] rgreening: so the rootdelay didn't have any effect? [21:01] kirkland: np. still doing fsck on volumes :) [21:01] rgreening: gotcha [21:01] kirkland: never tried that yet. Saw some errors on fs [21:01] rgreening: k [21:01] kirkland: on another note, virsh -c qemu:///system shutdown doesnt seem to work [21:01] and I have acpi enabled and acpid installed and running on guest [21:02] * rgreening puzzled [21:02] rgreening: i think there's an open bug on that [21:02] rgreening: is this a regression? [21:02] yeah, I'm pretty sure I was able to shutdown before under Karmic [21:03] though my brain is spaghetti at the moment.. trying to do way too much [21:04] kirkland: host doesn't need acpid does it? just the guest? or ... [21:04] rgreening: should just be the guest [21:04] yeah, thats what I thought. [21:04] bizarre [21:04] rgreening: ack; today has been the busiest day ... 
10 conversations at once, it seems [21:05] hah [21:05] Im sure we'll chat lots at UDS :P [21:06] kirkland: rgreening@natasha:~$ acpi_listen [21:06] button/power PWRF 00000080 0000000e [21:06] my guest receives the event, just doesnt act on it [21:07] rgreening: hrm ... pm-utils bug, perhaps? [21:08] kirkland: I don't have that package [21:08] hmm... [21:09] kirkland: any progress? had a laptop hdd issue here [21:09] rgreening: sorry, that might be unrelated [21:09] cemc: hmm, not much; what do you have ? [21:09] kirkland: I originally build vm with vmbuilder under karmic [21:10] kirkland: nothing either... I have a karmic vm, maybe I'll try to upgrade it, let's see what happens [21:10] cemc: back it up first ;-) [21:10] ofcourse ;) [21:22] kirkland: fsck fixed the guest. I believe my issue is caused by shutdown not working via acpi, which means the guests were being destroyed and not shutdown. [21:23] rgreening: i'm trying to test that now [21:23] ty. [21:24] does anyone know how to get rid of the bash suggestions like when I typo suod instead of sudo and it suggests what I want? [21:24] rgreening: server image, correct? [21:24] rgreening: ie, not desktop [21:25] kirkland: I used this.. sudo vmbuilder kvm ubuntu -c mrtg.cfg -d /opt/vm/odin-kvm [21:25] kirkland: which should default to a server setup [21:25] rgreening: right [21:26] rgreening: sorry, you've told me that :-) [21:26] np [21:26] :) [21:26] *(too_man_conversations) [21:26] you have a spaghetti brain just like me now [21:26] rgreening: heh :-) [21:32] did the upgrade karmic->lucid in the vm, no problems [21:36] good evening all === dendrobates is now known as dendro-afk [22:00] I setup exim4 (or thought I did) and I can connect to my port 25 locally but not from a remote bawx. How can I enable exim to let remote users connect? [22:01] There is no firewall installed* === dendro-afk is now known as dendrobates [22:11] cemc: good to hear, thanks [22:12] kirkland, you rang ? [22:12] hi [22:12] is there a bug report about missing dependencies libapache2-mod-wsgi ? [22:13] for... [22:14] oh, there is, but is not fixed [22:14] any one want to assist me in setting up server [22:15] !anyone | mean67 [22:15] mean67: A large amount of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? [22:15] I am trying to set it up using 2 10k SATA drives that I have and am having some trouble [22:16] smoser: sorry, forgot you were traveling [22:16] no problem. [22:17] I would like to know the best way to have it set up [22:17] and am at the point where it is asking me partition info [22:18] Do you plan on using a RAID card or Softraid? [22:20] !raid [22:20] Tips and tricks for RAID and LVM can be found on https://help.ubuntu.com/community/Installation/SoftwareRAID and http://www.tldp.org/HOWTO/LVM-HOWTO - For software RAID, see https://help.ubuntu.com/community/FakeRaidHowto [22:34] I am not using RAID [22:35] mean67: Does selecting the default options not just work for you? [22:36] well I thought that putting the swap on my and the boot on the other would be the best way to go [22:37] Heh, then select the manual configu. [22:38] If you mostly start/stop new (uncached) binaries, it may help. If you mostly serve data stored somewhere, it would be better to worry about separating the data from the swap, as the binaries are likely cached. 
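On the unanswered exim4 question above (port 25 reachable locally but not remotely): the stock Debian/Ubuntu exim4 configuration only listens on localhost, so assuming no firewall is in the way, a likely fix is to widen the listening interfaces:

# /etc/exim4/update-exim4.conf.conf
dc_local_interfaces=''                        # empty means listen on all interfaces; the default is '127.0.0.1 ; ::1'
sudo update-exim4.conf && sudo service exim4 restart
# or interactively: sudo dpkg-reconfigure exim4-config and pick the 'internet site' option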
[22:48] so not sure if I should pick guided or manual [22:53] zul: ping [22:53] zul: you've reproduced https://bugs.launchpad.net/bugs/569890 ? [22:53] Launchpad bug 569890 in kvm "kvm hangs on Lucid Server RC" [Undecided,Confirmed] [22:55] rgreening: i can confirm your shutdown issue [23:00] kirkland: first round done, 400 instances. CLC did not (yet) blow on OOM; Walrus shows less memory usage [23:00] hggdh: cool, and usually it would by that point? [23:00] hggdh: and this is with the default mem allocation? [23:01] kirkland: yes. CLC shows now ~800M of res, Wlarus shows 630M. Up to now, both CLC and Walrus would show about the same memory usage [23:02] around 750M [23:02] hggdh: good stuff [23:02] hggdh: would you note the ppa version you tested, and confirmed in your orig bug report? [23:02] kirkland: only surprise was an unilateral reboot of one of the NCs, near the end of the run [23:02] hggdh: hmm, spontaneous reboot? [23:02] kirkland: will do. I am also starting another round now [23:03] hggdh: is mathiaz or anyone else dorking around on the lab kit? [23:03] kirkland: seem to have been spontaneous, yes. I cannot see any indications in the logs about it [23:03] except the reboot messages, of course [23:03] kirkland: to my knowledge I am home alone [23:04] hggdh: roger [23:06] kirkland: pushing now the results [23:07] I would expect to find some OOMs in this run... [23:11] hggdh: alrighty [23:11] hggdh: standing by [23:13] Hey guys [23:13] by any chance is anybody running ubuntu on SuperMicro Twin2 servers? [23:13] kirkland: pushed [23:14] hggdh: url? [23:14] We've been running into problems (disk crashes, hard locks, reboots) and the oem mentioned since all 4 nodes were failing it was probably a software/driver/kernel problem [23:18] kirkland: lp:/~hggdh2/%2Bjunk/uec-qa/ [23:18] hggdh: cheers, thanks [23:23] is there a GUI for UEC [23:23] hi folks [23:24] i've just installed ebox, how do i get to the interface? [23:24] i'm new to this! [23:27] With regards to the Ubuntu Enterprise Cloud, I am looking to setup 1 front end with 4 Node Controllers. I have the front end on a public addressable ip, and the node controllers on the same public. I bridged the public interface, but want to run VMs on a private network. Is there anyone that has experience with this I can talk to ? [23:31] kirkland, I've just proposed my changes for merging into testdrive [23:33] RoAkSoAx: cool, i'll check [23:33] kirkland, :) === KB1JWQ is now known as TheOracle === TheOracle is now known as KB1JWQ [23:34] rgreening: https://bugs.edge.launchpad.net/ubuntu/+source/acpi/+bug/570450 [23:34] Launchpad bug 570450 in acpi "regression in /etc/acpi/powerbtn.sh causes it to fail entirely on Ubuntu servers" [High,In progress] [23:35] rgreening: debdiff fix for that bug for you in there [23:46] kirkland: release team is reviewing each upload in the queue (using queuediff). You may as well upload and get feedback that way if you're definitely SRUing it anyway. [23:46] persia: thanks [23:47] Note that this is only every appropriate during final freeze: 50 weeks a year, subscription is the correct model. [23:47] (or maybe 48, anyway, doesn't matter) [23:49] persia: ack, thanks ;-)
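On the unanswered "is there a GUI for UEC" question: the cloud controller ships a web front-end (credential download, image store, basic admin) that on a stock install is reachable at the address below; if memory of that release serves, the first login is admin/admin and it immediately prompts for a new password:

https://<cloud-controller-ip>:8443/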