[00:06] has anyone installed and used postfix? [00:08] me [00:08] why? This isn't the #postfix room === erichammond1 is now known as erichammond [01:15] jetole: it often is the postfix room, though. :-( [01:25] * jetole shrugs [01:25] anyone know where I can find the docs for a program called in-target that exists only in the install cd? [01:31] you can ignore my in-target question. It's a shell script === alex88_ is now known as alex88 === luis__lopez is now known as luis_lopez [04:39] hey anyone ever experience an issue with the apache2 binding to www-data even when specifying another User in the conf? (and completely ignoring ENVVAR (statically defining it in the config) [04:49] xorl: which config file are you trying to override it in? [04:49] I imagine the right place is /etc/default/apache2 [05:17] twb: in the httpd.conf itself [05:17] I think it has to do with the prefork mpm [05:18] but I am not sure where the forking is calling the user that's not defined anywhere anymore [05:19] twb: so I changed User/Group in the httpd.conf, and in envvars [05:23] even though it's not required anymore [05:23] and sorry I meant apache2.conf* [06:01] does anyone know where the machines domain name is stored? [06:04] nevermind, it's stored in /etc/hosts [06:11] If there *is* a domain, yes. :-) [06:12] If you preseed domain= in your d-i, you get a hosts entry without an FQDN [06:17] hmmm [06:18] FSVO d-i = sid's d-i as at last week [06:18] YMMV, etc. === FireCrotch is now known as a === a is now known as FireCrotch === th0mz_ is now known as th0mz [07:47] Hey everyone, I want to setup a fileserver for a small office with 4 people working there with ubuntu server OS. [07:47] Would a mini atx box with an intel atom chipset be enough for this? [07:47] intel atom processor I mean [07:48] with 2 sata disks in raid [07:48] SmokeyD: It depends on usage. Unless you're doing a huge amount of file transfer, that should be more than adequate. [07:49] no, we are just four self employed people doing standard office stuff. No video work or stuff like that. === Jeeves__ is now known as Jeeves_ [08:03] A MIPS system is adequate for a NAS [08:03] twb: Not one that runs Ubuntu :) [08:04] persia: could install it n a loongson [08:04] (last I saw the mips port was still at toolchain-level, and I'm not sure it's gotten much attention over the past year or so) [08:04] oh, right [09:43] Is it possible to get KVM to use USB passthrough and, crucially, boot from the USB-passthrough hard disk? [09:44] cjwatson: you can cheat in at least one way: [09:44] i.e. I'm currently using sudo kvm -m 512 -usb -usbdevice host:VENDOR:DEVICE where VENDOR:DEVICE identifies my external USB-connected hard disk, and would like to get the virtual machine to boot from that hard disk; by default it doesn't seem to be among KVM's boot devices [09:44] qemu -hda /dev/uba [09:44] twb: Yes, but then kvm doesn't see it as USB, which can be important. [09:45] hence "cheat" [09:45] as it happens that might be OK in this narrow case [09:46] good enough for finishing an install test, at least :-) Thanks! [09:47] I wouldn't mind knowing if it's possible with -usbdevice anyway, but no longer a blocker === jpds_ is now known as jpds [10:15] I'm afraid I've never done USB, it's too "desktopy" [10:15] There' #qemu and probably a #kvm, but IME they're full of crackheads [10:16] You might get lucky [10:41] I have no idea if the standard bios supports booting from USB. [10:46] ttx: any tests that should be done on UEC Lucid at the moment? 
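On the USB-boot question above, a minimal sketch of the two approaches that came up. The device node and the VENDOR:DEVICE pair are placeholders for your own external disk, and kvm/qemu option syntax has shifted between releases, so treat this as a starting point rather than a recipe:

    # the "cheat": hand the raw block device to kvm as an emulated IDE disk,
    # which the guest BIOS will boot from, at the cost of the guest no longer
    # seeing it as USB
    sudo kvm -m 512 -hda /dev/sdX -boot c

    # true passthrough: the guest sees a USB disk, but the default BIOS may not
    # offer it as a boot device, so it has to be chain-loaded by something else
    # (e.g. the gPXE or unetbootin bootstrap suggested below)
    sudo kvm -m 512 -usb -usbdevice host:VENDOR:DEVICE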
I have fired up my one front end/one node controller cloud and have some time at hand [10:47] TeTeT: nothing specific [10:48] Daviey: good point [10:49] You might be able to use -kernel gpxe or unetbootin to bootstrap a USB device that way [11:07] Anyone experienced this with a virsh create domain.xml (error: internal error cannot parse QEMU version number in '') === ogra_ is now known as ogra === samferry is now known as Guest66070 [12:21] kirkland, smoser: ping me when you have 5 minutes [12:44] ttx, here now. [12:46] smoser: will ping you back in ~10min [12:46] on a call right now [12:46] k [12:59] smoser: pm? [13:00] yeah [13:02] where is the no-ip folder at ubuntu-serveR? [13:07] jaypur: folder? Can you rephrase your question? [13:08] Pici, sorry just woke up [13:08] i wanna see the files of no-ip... and see if i can edit it.... where can i find it? [13:10] jaypur: see dpkg -L no-ip (or noip2, depending on what version of Ubuntu you're using) [13:11] just a sec [13:15] Pici, lol it says it's not installed both of them, but i installed [13:15] lol [13:22] ttx: 3.4.6 still has the debconf use of unitilized error [13:22] bug #541869 [13:22] bug 541869 [13:23] beh [13:23] that makes four of us! [13:23] hrm [13:23] oops [13:23] Could someone help me figure out why MySQL won't start? http://ryanak.ca/~ryan/mysql-install-log [13:24] zul: yes, dupe of bug 522514 [13:24] soren: ping, looks like uvirtbot is MIA. [13:24] zul: that one was never really fixed [13:25] zul: the proposed patch just doesn't do it [13:26] maybe open up a bug in debian? [13:26] ryanakca: maybe "[ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)" ? [13:26] zul: Aye, and what should I do about that? [13:27] ryanakca: fix your mysql tables, check google [13:28] zul: OK. Should this be an issue on a fresh install? [13:28] ryanakca: it shouldnt but i never seen that before on a fresh install [13:34] zul: Everything I'm finding requires me to connect to MySQL... which I can't do if it won't start [13:35] ryanakca: you can try re-installing mysql [13:36] zul: I've tried purging and reinstalling [13:36] That doesn't work. [13:36] Purge [13:36] You have no data at all that can be lost? [13:37] Jeeves_: None whatsoever. It's a fresh install [13:37] ok [13:37] apt-get remove --purge mysql(or-so) [13:37] rm -rf /var/lib/mysql [13:38] rm -rf /var/run/mysqld [13:38] rm -rf /var/log/mysql* [13:38] That remove everything there is to be for mysql [13:38] Than apt-get install mysql again [13:39] Pici, it's noip2 [13:39] i wrote it wrong sorry [13:40] Jeeves_: I can get rid of /etc/mysql* too? [13:41] ryanakca: Yes, that one too. Although that should be empty after a purge [13:41] Jeeves_: Hmmm... decided to leave it [13:44] Jeeves_: Same thing, http://ryanak.ca/~ryan/mysql2 [13:45] zul: i'll dupe it [13:46] ryanakca: Did you change the location of the mysql data in the config? [13:46] Jeeves_: I haven't touched any config files === jbernard_ is now known as jbernard [13:47] what does "grep datadir /etc/mysql/my.cnf" say? 
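Collecting Jeeves_'s purge-and-reinstall sequence from the MySQL exchange above into one block. The package names are guesses for a lucid-era install (adjust to whatever dpkg -l shows), and the rm -rf lines destroy every database on the machine, so only do this when, as here, there is no data to lose:

    sudo apt-get remove --purge mysql-server mysql-server-5.1 mysql-common
    sudo rm -rf /var/lib/mysql /var/run/mysqld /var/log/mysql*
    sudo apt-get install mysql-server
    # afterwards, sanity-check where the data directory points and that mysqld is listening
    grep datadir /etc/mysql/my.cnf
    sudo netstat -tlnp | grep mysqld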
[13:49] grep: /etc/mysql/my.cnf: No such file or directory [13:49] lemme reinstall mysql-common [13:51] :) [13:56] Oh, bravo: tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 7546/mysqld [13:56] Many thanks :) [13:56] :) [14:01] ttx: ping when you get a second can you have a look at the patch at https://bugs.edge.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/506985 [14:02] ttx: i would really like to get that fixed for beta2 if possible [14:13] zul: reading the comment, it doesn't appear to be sufficient ? Maybe ask soren about this ? [14:13] k [14:13] soren: ping [14:15] kirkland: http://lifehacker.com/5496679/testdrive-virtualizes-brand+new-ubuntu-builds-for-easy-testing [14:15] kirkland: Ars and Lifehacker in one week! [14:21] jcastro: nice :-) [14:21] jcastro: i haven't seen the ars article yet [14:24] http://arstechnica.com/open-source/news/2010/03/ubuntu-prerelease-testing-made-easy-with-testdrive.ars === dendrobates is now known as dendro-afk [14:38] Do you set the hostname of a machine to 127.0.0.1 or to one of the public ips in a server in /etc/hosts ? === dendro-afk is now known as dendrobates [15:07] Pupeno: A hostname is a "name" so do not use an IP address like 127.0.0.1. You can add a name (hostname) to the public IP if you want - and associate that interface with the chosen name. [15:09] sherr: recently it happened to me that hostname was xen2, and /etc/hosts only contained 127.0.0.1 localhost, so sudo complained about xen2 not found and hostname -f failed miserably. If I assign xen2 to 8.8.8.8 instead of 127.0.0.1; wouldn't I have similar failures if the public key of that machine changes? [15:16] ttx: https://bugs.edge.launchpad.net/ubuntu/+source/bacula/+bug/515274 [15:20] zul: ok, removing milestone until this gets accepted [15:27] <_ruben> Pupeno: you running google's public recursive dns server? if not, i dont see why you'd use 8.8.8.8 as ip address [15:32] mathiaz: how can I go on the UEC testing blueprint (tests on B1 candidate)? [15:33] mathiaz: as I understood you yesterday, the test rig is not running beta1 [15:33] hggdh: well - it's running what's the latest from the archive [15:33] hggdh: what needs to be done is to install from the *iso* [15:34] hggdh: on tamarind, you can see in the topologie.yaml file that you can select the media to install from [15:34] hggdh: one of the example is to set a url for an iso [15:34] hggdh: you can set the url to the iso candidates from cdimages [15:34] mathiaz: OK. Any specific topol to be used? [15:35] hggdh: all of the topologies need to be tested [15:35] hggdh: so start by updating the iso rule, then update the installer-media reference in the yaml file [15:35] hggdh: and then ./set_topo.py lucid-amd64-topo1 [15:36] mathiaz: have 5 minutes ? [15:36] ttx: I hope so! [15:37] mathiaz: see pm [15:37] mathiaz: roger wilco === dendrobates is now known as dendro-afk === dendro-afk is now known as dendrobates === dendrobates is now known as dendro-afk [15:57] how can I make the fs-mounting initscript wait until cryptdisk is finished? [16:01] _ruben: it was an example obviously. === robbiew_ is now known as robbiew [16:05] <_ruben> Pupeno: then use example ip addresses (192.0.2.0/24 is reserved for that) :) not existing ones [16:08] _ruben: ok, I'll try to remember it the next time, so nobody gets confused about my examples, but I probably won't. [16:09] <_ruben> Pupeno: the "problem" with 8.8.8.8 as a "random example" is that it's a quite "known" existing ip address, being google's open resolver :) [16:10] _ruben: ok. 
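To tie off the /etc/hosts thread from earlier: Ubuntu's installer convention is to give the machine's own name a 127.0.1.1 entry, which keeps sudo and hostname -f happy without pinning the name to a public address. The names and addresses below are invented, using the 192.0.2.0/24 documentation range _ruben mentions:

    127.0.0.1   localhost
    127.0.1.1   xen2.example.com xen2
    # or, if you would rather bind the name to the public interface:
    # 192.0.2.10  xen2.example.com xen2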
[16:10] <_ruben> ok, enough nitpicking for today .. time to head home .. weeeekend === dendro-afk is now known as dendrobates [16:39] Hi can anyone help me with this : http://pastebin.com/mzfHRJnK [16:40] grepFu: Unless you are tying to do something special, try purging dovecot-common dovecot-imapd and dovecot-pop3d before running the install again [16:41] I did, it outputs similar errors again. [16:41] After purge? How did you purge? [16:41] sudo apt-get purge [16:42] http://pastebin.com/rhLKJWjk [16:42] if I do the install again, i get the same errors as before, bit of a looping problem. [16:42] Can't uninstall, can't install... [16:43] purge really ought to forget that you intentionally deleted the config files. [16:43] Was following this - https://help.ubuntu.com/community/Postfix [16:43] Maybe hunt through and purge *all* the dovecot stuff you have installed, and reinstall? [16:43] It wasn't working, and is pretty hard to follow so decided to start again, and follow my official server book instead. [16:44] So did apt-get remove on the apt-get installs [16:44] and reinstalled PostFix and then tried to do DoveCot, and this is what happened ;( [16:44] Somehow it believes you manually deleted some configuration files. [16:44] Can I not just put them back in, so it'll shut up and reinstall? [16:45] You could. Do you know what they contain? [16:45] Also, unless you manage to put back the *exact* files, it won't automatically update them on upgrade. [16:45] no idea [16:46] I just want to fresh install the package. [16:47] mathiaz, fyi, you're elite yaml skilzz were helpful to me. https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/541907/comments/3 [16:47] purge + install is supposed to do that. Report a bug against the package. [16:47] thank you for a.) suggesting use of yaml b.) showimg me about aliases (./topologies.yaml) [16:47] smoser: yeah - yaml is an awesome language for configuration IMO [16:48] smoser: it just has the right syntax for doing good configuration [16:48] damn.. ok thanks for the help [16:48] smoser: but isn't a full blown langage [16:48] smoser: I really like the references [16:48] yeah. #include missing. [16:48] smoser: right - includes are missing [16:50] the bug system is a bit wierd [16:52] smoser: is ami-ad09e6c4 the beta1 EMI? [16:52] smoser: *AMI* [16:53] yes. [16:53] us-east-1 ami-ad09e6c4 ubuntu-images-milestone/ubuntu-lucid-10.04-beta1-i386-server-20100317 [16:53] smoser: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3102 [16:54] smoser: ^^ still list 9.10 in the AMI manifest [16:54] fixing [16:54] :) [16:55] smoser: I haven't checked the other web page though [16:55] needs moar uec [16:56] managed to fix it myself [16:56] :D [16:56] Made the folder /etc/dovecot [16:56] then made the 3 conf files (as blank) [16:56] then purged, then reinstalled. [16:57] Ah. So the purge wasn't completing because the directory was missing? Please file a bug : it should be smarter than that. [16:57] ok [16:58] Thanks again for your help, wouldn't have figured it out otherwise... 2 hours of my life I won't get back, loggin the bug now. [16:58] mathiaz, its fixed now.. i actually think you might have had a cache or something . but it does your's show 20100117 in the manifest path ? [16:58] (before reloading) [16:59] smoser: ubuntu-images-milestone-us/ubuntu-lucid-9.10-beta1-i386-server-201003XX.manifest.xml [16:59] grepFu: Thanks for figuring out the real reason : had me stumped. Let's hope that gets fixed before someone else gets stuck. 
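A rough reconstruction of grepFu's workaround, for anyone who hits the same purge/install loop: recreate the configuration files dpkg believes you deleted (empty files are enough to give it something to remove), then purge and reinstall. The log only names the directory, so take the exact file names from the apt/dpkg error output rather than from this sketch:

    sudo mkdir -p /etc/dovecot
    sudo touch /etc/dovecot/dovecot.conf   # plus whichever other files dpkg complains about
    sudo apt-get purge dovecot-common dovecot-imapd dovecot-pop3d
    sudo apt-get install dovecot-common dovecot-imapd dovecot-pop3d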
[16:59] smoser: ^^ this is what AMI Manifest showed [17:00] hm.. oh well. it shoudl b good now. [17:00] thanks, mathiaz also noticed the '9.10' in there and fixed that. [17:01] smoser: are you using a template to generate the content? [17:01] smoser: or is it manual? [17:01] smoser: (ie the process of creating the amazon entry) [17:01] well, mostly manual. but i bzr it. [17:02] you have to edit a form . in aws to update it. [17:02] with lots of fields, so its easy to miss one even if you have the correct data in front of you. [17:02] smoser: the security uses a similar workflow to publish USN IIRC [17:02] smoser: well - kind of similar [17:03] smoser: IIRC they use a template to generate the text and then copy'n'paste into the ubuntu.com website [17:03] smoser: it may not apply to the AWS website though [17:03] smoser: especially if there are multiple fields involved [17:03] yeah. i've just not gotten to generating the content. [17:04] https://code.launchpad.net/~smoser/ubuntu-on-ec2/ami-pages is the branch [17:16] mathiaz, if you wanted to, i'd take a patch to modify AWS web pages programmatically. [17:16] since you don't have anything else to do [17:16] :) [17:16] smoser: :) [17:16] smoser: ETA: 22nd March 3010 [17:17] sweet. [17:30] mathiaz: with a schedulling policy of greedy one one NC should have 2 instances (following the test), right? [17:31] hggdh: I think so [17:31] mathiaz: I do not see it (at least, I think so). Can you check dempedak? [17:41] hggdh: http://paste.ubuntu.com/397914/ [17:41] hggdh: seems like the test is failing [17:51] what's that thing people liketo use to manage their servers from teh web? [17:52] webmin [17:57] ssh [17:57] ;) [17:58] technically, ssh is not web [17:59] but i guess no one cares [18:02] I care that ssh is not web lenios. I care a lot. And frankly I've been losing sleep over it lately, I haven't shaved or showered or gone to work in days contemplating the wide reaching and deep impacting ramifications of that very statement. [18:02] :) [18:03] anyway, some people can't tell the difference between google and internet [18:05] wait until they have chromeos [18:10] mathiaz: bug 542075 opened === maxb_ is now known as maxb [18:59] I have a problem with my ubuntu server [18:59] I am using version 9.10, have in it Dhcp server, apache server, mysql, iptables to NAT the school's internet to other pcs on the network (conected to the server through a switch) [18:59] the computer I'm using is a dell server [19:00] uses a Single Core XEON Processor by intel [19:00] running at 2Ghz, and 1GB of ram [19:00] problem is, everyday [19:00] it hangs, completelly [19:00] any sugestions? [19:03] what time is it hanging, look through files in /var/log for things happening at that time [19:04] madcat1990: Nothing in /var/log/syslog? [19:05] madcat1990: Can you put a serial console on it? Might display an error. [19:05] hmm [19:05] will check that log then [19:05] let me take note [19:08] thanks, after I see it [19:08] i will be back === Airells is now known as airells [19:55] How can I prevent a dynamic interface updating from updating /etc/resolv.conf while still getting other IP info from DHCP? [19:58] Hello everyone. I would like to achieve auto-login using ssh to another server but no luck. I have copied the public key to the remote server but I still get prompted for a password, any help? 
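For a daily hang like madcat1990's, the advice above amounts to seeing what was logged just before each crash. Assuming the default log locations on 9.10, something along these lines shows the last kernel messages before each boot banner and checks whether the out-of-memory killer fired, a plausible suspect on a 1 GB box running dhcpd, apache and mysql (though nothing in the log confirms that):

    # lines logged shortly before each "Linux version ..." boot banner
    sudo zgrep -B20 "Linux version" /var/log/kern.log* | less
    # did the OOM killer fire at any point?
    sudo zgrep -iE "oom-killer|out of memory" /var/log/kern.log* /var/log/syslog*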
[20:00] geneticx_wrk: I always find this to be the best guide by far http://sial.org/howto/openssh/publickey-auth/ [20:01] geneticx_wrk: have you tried running ssh -vvv [20:01] geneticx_wrk: ime the problems is usually one of permissions :) [20:02] if any of the files are too weak on permissions, the ssh client or server will ignore them [20:05] VSpike: this is what my ssh -vvv looks like http://pastebin.com/9fHniHPL but still can't figure it out [20:07] geneticx_wrk: you're doing it as root? [20:08] yup [20:09] so does /root/.ssh/id_rsa exist? [20:10] or /root/.ssh/id_dsa [20:12] VSpike: only known_hosts [20:15] geneticx_wrk: then I don't think you've correctly set things up. Try following the procedure on the link I gave. You just need to do stuff under "Public Key Setup". Ignore the stuff about ssh-agent for now. [20:20] VSpike: Ok I will, but when it asks to copy the contents of id_rsa.pub >> authorized_keys .. in ubuntu i have known_hosts so should I create another file called authorized_keys or just append it to known_hosts? [20:22] geneticx_wrk_: known_hosts is definitely the wrong file [20:23] geneticx_wrk_: the command it says is cat ~/id_rsa.pub >> ~/.ssh/authorized_keys [20:23] That will do the right thing [20:23] It will create the file if it doesn't exist [20:27] VSpike: ok, hummm..now I'm wondering what the purpose of the known_hosts file is in ubuntu and how it's used with ssh [20:27] known_hosts records hostnames and ip addresses against certs [20:28] YOu know the message when you ssh somewhere for the first time that says "are you sure this what you think it is? yes/no" [20:28] When you say yes, the machine gets added to the list [20:28] if the cert ever changes, ssh will quit with a warning that someone is maybe doing something nasty [20:28] ah yes, you are correct [20:29] geez brain fart [20:30] if it does that and you want to remove the cert and start again (i.e. you know everything is OK), a good tip rather than editing the filed direct is to use ssh-keygen -R [20:31] VSpike: have you ever used ssh-copy-id instead? [20:32] nope, not come across it [20:32] ah neat.. looks handy [20:32] I kept meaning to put all the stuff from that site into a script :) [20:33] Looks like someone got there first === Maletor_ is now known as Maletor [20:33] VSpike: hehe..yeah [20:38] VSpike: my intended plan is to use rsync and auto-login to the source machine [21:08] hi, I have a problem for install proftpd-mysql, I can't install it with apt-get so with dpgk & the .deb, I had a depedency problem [21:10] neurostim: why not use apt-get? what's the problem? [21:11] E: Aucun paquet ne correspond au paquet proftpd-mysql [21:11] * no package found [21:13] neurostim, try proftpd-mod-mysql [21:13] lenios : thx [21:14] lenios : It's worked great :-D [21:15] aptitude search proftpd* :) [21:16] lenios : thx too ;) [21:21] How can I prevent a dynamic interface updating from updating /etc/resolv.conf while still getting other IP info from DHCP? [21:32] aha.. figured it out [21:44] VSpike: what's the solution? === erichammond1 is now known as erichammond [23:11] lullabud: add 'supersede domain-name-servers 127.0.0.1;' to /etc/dhcp3/dhclient.conf ... in my case, inside 'interface "eth0" { ... }' although that's not required [23:12] very handy. i was expecting something more along the lines of /etc/resolve.d, which seems to be the direction most things are going. [23:17] lullabud: yeah.. 
another way would have been a post-up command or script to just overwrite /etc/resolv.conf but that would have been more kludgy
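For reference, the dhclient fix lullabud describes looks roughly like this in /etc/dhcp3/dhclient.conf (the path and interface name match this lucid-era setup; newer releases keep the file under /etc/dhcp/, and the interface block can be dropped if you want the supersede applied to every interface):

    interface "eth0" {
        supersede domain-name-servers 127.0.0.1;
    }

The post-up alternative mentioned at the end would overwrite /etc/resolv.conf from /etc/network/interfaces after the link comes up, which works until dhclient rewrites the file again, hence "kludgy".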