=== n0ts is now known as n0ts_off
[00:47] Hi everyone
[00:49] I am having issues with ubuntu server 12.04 and this network card: intel Corporation 82562V-2 10/100
[00:50] It completely freezes after one or two hours working, and I have to reboot the server for the network card to work again.
=== n0ts_off is now known as n0ts
=== n0ts is now known as n0ts_off
=== n0ts_off is now known as n0ts
=== n0ts is now known as n0ts_off
=== Lcawte is now known as Lcawte|Away
=== LordOfTime is now known as TheLordOfTime
=== n0ts_off is now known as n0ts
[04:27] New bug: #1003231 in vm-builder (universe) "vmbuilder generates many "method not found" errors" [High,Expired] https://launchpad.net/bugs/1003231
[04:43] Hey guys, i need some help with ufw, i'm wondering if i should turn off stateful support, and if so how.
[04:44] The machine will have around 800k simultaneous connections, i'm assuming stateful will increase the memory requirements a lot, but not sure.
=== MikaT_ is now known as MikaT
[05:34] i've used /sbin/sysctl -w net.ipv4.netfilter.ip_conntrack_max=1048576 to set my conntrack higher, but still unclear what the positives of a stateful firewall are and if i should have it set on or off.
=== obelus|2 is now known as obelus
=== jussio1 is now known as jussi
[07:38] Why do I get a frozen VirtualBox vm every other time when I restart my ubuntu guest?
[08:51] New bug: #1036093 in nova (main) "nova volume-attach with high device name keeps volume in state "attaching"" [Undecided,New] https://launchpad.net/bugs/1036093
[09:31] New bug: #1030943 in python-swiftclient (universe) "[MIR] python-swiftclient" [Undecided,Fix released] https://launchpad.net/bugs/1030943
=== disposab1e is now known as disposable
[10:38] hello, i'm having trouble with squirrelmail on 12.04, when i compose a message and click send it never browses away from the compose screen, even though the message does get sent
[10:39] tried on ie and chrome. the system is all updated and there are no php errors in the apache logs
[10:40] nada when i google "squirrelmail compose send" and similar
[10:48] w00t, the dudes in #ubuntu got me a fix: http://comments.gmane.org/gmane.mail.squirrelmail.user/38887
=== n0ts is now known as n0ts_off
=== Lcawte|Away is now known as Lcawte
=== cpg is now known as cpg|away
[12:41] Hello all... Is it normal to have fs.nr_open = 1048576?
[12:43] 'sudo lsof | wc -l' shows 2390 open files/descriptors. And I am getting 'Too Many Open Files' crashes (I've raised limits to 65536 files).
[13:10] jdstrand: ping, cinder should be good for main now
[13:22] zul: yeah, it is on my list after I read email
[13:22] jdstrand: ok cool
[14:05] zul: can you confirm a pep8 backport fixes this FTBFS, https://launchpad.net/~ubuntu-cloud-archive/+archive/folsom-staging/+packages ?
[14:06] Daviey: sure, just a sec
[14:08] (hint, zul - don't confirm by uploading :)
[14:08] Daviey: well duh :)
[14:09] * zul quickly hits control-c
[14:09] heh
[14:16] New bug: #1036206 in google-perftools (universe) "powerpc test suite execution fails" [Undecided,New] https://launchpad.net/bugs/1036206
=== fire_ is now known as codingenesis
=== codingenesis is now known as fire_
[14:21] zul: any news?
[14:21] Daviey: not yet, still building
[14:26] utlemming, walinuxagent now in precise-proposed BTW
[14:26] jamespage: awesome :)
[14:30] utlemming, do you have a handy way of testing it? we can then nudge it through to -updates ASAP
[14:31] jamespage: yeah, I can give that a test rather easily
[14:31] utlemming, marvellous!
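For the ufw/conntrack sizing question above ([04:43]-[05:34]): a minimal sketch, assuming a 12.04-era kernel, of making the larger table persist across reboots. The file name under /etc/sysctl.d is illustrative, and net.ipv4.netfilter.ip_conntrack_max as used above is the legacy alias of net.netfilter.nf_conntrack_max.

    # /etc/sysctl.d/60-conntrack.conf : raise the connection-tracking table size
    net.netfilter.nf_conntrack_max = 1048576

    # apply it now and verify, without waiting for a reboot
    sudo sysctl -p /etc/sysctl.d/60-conntrack.conf
    sudo sysctl net.netfilter.nf_conntrack_max

As a rough rule of thumb, each tracked connection costs a few hundred bytes of kernel memory, so a table sized for about a million entries is on the order of a few hundred MB; turning stateful tracking off avoids that cost but gives up connection-state matching in the firewall rules.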
[14:40] Daviey: confirmed
[14:42] zul: confirmed it fixes it?
[14:42] Daviey: confirmed it fixes it
[14:43] zul: okay.. what version is it?
[14:43] Daviey: 1.2 from quantal
[14:44] zul: wait, i thought 1.2 was evil?
[14:44] for folsom?
[14:45] Daviey: it is...not for f1 though
[14:48] zul: wow, that much fail got introduced for >f1 ?
[14:48] Daviey: yeah
[14:48] xnox, I need to do something with dumbo for you today, don't I
[14:49] hmmm.... jamespage you could =)
[14:49] xnox, branch?
[14:49] jamespage: i have packaging done, but debian/copyright
[14:49] it's not done yet.
[14:49] let me push it to lp.net
[14:50] xnox, as it's a PPA, not too worried about d/copyright
[14:50] * jamespage slaps himself
[14:50] zul: well, there is a reasonable chance we might need to fall back to 1.1 for folsom
[14:50] well for the time being at least
[14:50] so doing the same for the cloud archive is reasonable
[14:50] Daviey: ack
[14:51] zul: can you upload a dsc and friends somewhere?
[14:51] Daviey: for pep8?
[14:51] zul: yeah
[14:52] Daviey: hold on
[14:52] jamespage: two branches: lp:~dmitrij.ledkov/+junk/typedbytes and lp:~dmitrij.ledkov/dumbo/packaging
[14:52] it's two small python packages.
[14:53] jamespage: feel free to repush to a more appropriate ~person
[14:53] and if/when it's in the ppa, I can adjust juju charms to optionally include those
[14:53] there is also pydumbo, but it's slower and I have no experience with it. And dumbo is sufficient so far.
[14:53] although pydumbo has dfs bindings....
[14:54] zul: so.. the first line of the changelog for nova, i set to - nova (2012.2~f1-0ubuntu1~cloud0) precise-folsom; urgency=low .. .changes = "Distribution: precise" .. does that make sense?
[14:54] Daviey: yeah, iirc that's what we agreed to
[14:55] Daviey: pep8 stuff is at: http://people.canonical.com/~chucks/tmp/
[14:56] Daviey: because eventually you are going to have precise-grizzly, precise-h, precise-i, etc, etc
[14:56] zul: right
[15:01] xnox, I've pushed them both to the dev PPA
[15:02] xnox, all of the hadoop related charms support use of dev|test|stable PPAs for that team
[15:02] jamespage: cool, thanks =)
[15:03] xnox, I really like the idea of not having to write stuff in Java
[15:05] jamespage: ideally i want to jujufy discoproject map-reduce
[15:05] which uses tags instead of folders for dfs
[15:06] and python instead of java for mapreduce
[15:06] but the server part is written in erlang and relies on DNS being available for the nodes
[15:06] but HPCloud doesn't support DNS at the moment
[15:07] so I'm stuck with both discoproject and HPCloud lacking a feature: dns-less setups or dns setup, respectively =)
[15:08] * jamespage sighs
[15:11] Repeating after a few hours: Is it normal to have fs.nr_open = 1048576?
[15:11] 'sudo lsof | wc -l' shows 2390 open files/descriptors. And I am getting 'Too Many Open Files' crashes (I've raised limits to 65536 files).
[15:12] zul: re cinder> commented in the bug
[15:15] jamespage: it looks like walinuxagent hasn't landed in the archive yet...as soon as I see it, I'll test
[15:16] hm... seems when I reboot this machine, some drives in my raid come up as "missing" during initial bootup, and I get kicked into busybox. just exiting busybox works, and after that, I can mdadm --stop && mdadm --assemble and mount it - any idea how I can "slow down" this detection or increase the timeouts to avoid this problem?
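A minimal sketch of the manual recovery described in the last message above, assuming the array is /dev/md0 and normally mounts at /srv (device name and mount point are illustrative, not taken from the log):

    cat /proc/mdstat           # see which members the kernel actually found
    mdadm --stop /dev/md0      # tear down the partially assembled array
    mdadm --assemble /dev/md0  # reassemble once all member disks have appeared
    mount /dev/md0 /srv

The -proposed mdadm mentioned just below adds an extra wait for udev before dropping to busybox, which addresses the boot-time race itself.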
[15:16] utlemming, should be - it's in precise-proposed - https://launchpad.net/ubuntu/+source/walinuxagent/1.0~git20120606.c16f5e9-0ubuntu2~12.04.1
[15:17] if it's not there after 4 days we have a problem
[15:17] RoyK: if you are using precise, please upgrade to mdadm from -precise, as I commented on your bug report?
[15:17] from -proposed, that is.
[15:17] it has an extra timeout to wait for udev to finish processing events before dropping into busybox, which helps most people.
[15:17] jamespage: duh, my apt sources.list was wrong
[15:18] lol
[15:20] xnox: how can I upgrade to that from -proposed?
[15:20] jdstrand: damn it...*grumble* *grumble*
[15:21] xnox: this is precise, btw
[15:21] New bug: #1036240 in cinder (universe) "cinder-common fails to install" [High,New] https://launchpad.net/bugs/1036240
[15:21] !proposed
[15:22] RoyK: https://wiki.ubuntu.com/Testing/EnableProposed
[15:22] xnox: thanks
=== dendrobates is now known as dendro-afk
[15:27] xnox: \o/
=== dendro-afk is now known as dendrobates
[15:52] zul: did a qa-regression-test run just for sanity's sake, all still looks good. just lettin' you know cause i'm sure you're unable to sleep at nights worrying about it
[15:53] hallyn: libvirt?
[15:53] zul: yeah
[15:53] hallyn: coolness
[15:53] zul: do you know of anything we still need to do to libvirt during q?
[15:54] hallyn: nope, just make sure it doesn't break
[15:54] hallyn: although i hope we can get the new libvirt-lxc stuff in for q
[15:54] which new stuff?
[15:55] hallyn: like the lxc reboot
[15:55] do you know where that went in? is it in 0.9.14?
[15:56] i think it is in trunk
[15:56] cause i assume that went in after those 500 'let's rename stuff for fun' patches, so forget about backporting
[15:56] jamespage: confirmed
[15:56] utlemming, great - nice one
[15:56] hallyn: yeah that's why i want trunk :)
[15:56] jamespage: I fired up a couple of instances to be sure.
[15:56] by trunk you mean git head?
[15:57] (not trying to be pedantic, just not sure what you mean)
[15:59] hallyn: ack
[15:59] k
[16:09] Hi, I have a little "legal" licensing problem in my company, so the lawyer is asking me for the licenses of Ubuntu Server, apache and subversion. What does he need? GPLv3 and Apache License "texts" and that's all?
=== skaet_ is now known as skaet
[17:18] Anyone have a spare intel matrix raid controller?
[17:19] souliaq: tarball of /usr/share/common-licenses/ as well as /usr/share/doc/*/copyright
[17:27] xnox: is that real raid or fakeraid?
[17:28] looks like fakeraid to me
[17:28] better use software raid :)
[17:28] RoyK: it's not real-real; it's usually managed with dmraid, but recent mdadm can store external metadata using the intel matrix format
[17:29] and i want to test that, cause I am about to update mdadm in precise
[17:29] ok
[17:37] xnox: will I have to update mdadm manually when you're done with the precise update, currently using the one in proposed?
[17:38] RoyK: no you won't. The one in -proposed will be promoted into the -updates pocket, such that everyone will get it and it will be included in 12.04.1
[17:39] thanks
[17:39] RoyK: you're welcome =)
[17:41] * RoyK wonders slightly if bcache will make it into the upstream kernel ;)
[17:44] hm.. i have this utility http://smoser.brickies.net/git/?p=tildabin.git;a=blob;f=make-seed-disk;hb=HEAD
[17:45] that i'd like to have packaged. cloud-utils seems a reasonable place for it
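For RoyK's question above about pulling the fixed mdadm from -proposed: a minimal sketch, assuming a precise system. The mirror URL and component list are the usual ones rather than anything stated in the log, and the wiki page linked above additionally recommends apt pinning so only hand-picked packages come from -proposed.

    echo "deb http://archive.ubuntu.com/ubuntu/ precise-proposed main restricted universe multiverse" | \
        sudo tee /etc/apt/sources.list.d/precise-proposed.list
    sudo apt-get update
    sudo apt-get install -t precise-proposed mdadm   # pull just this one package from -proposed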
[17:46] but it would add a depends on genisoimage (and probably a 'Suggests:' for mtools)
[17:46] i was going to name it "cloud-localds" (local datasource)
[17:46] anyone have a better idea than its own binary package of cloud-utils?
[17:51] utlemming, you want to do that^ ?
[17:51] i cannot do it today for sure.
[17:52] smoser: yeah...I think I can give it a shot...are we thinking of a subpackage of "cloud-utils-localds" to the cloud-utils package?
[17:52] or just adding it in
[17:55] https://bugs.launchpad.net/cloud-utils/+bug/1036312
[17:55] Launchpad bug 1036312 in cloud-utils "please add cloud-localds from make-seed-disk" [Undecided,New]
[17:55] utlemming, ^ i think a subpackage is best
[17:56] smoser: ack, we're on the same page
=== tyhicks` is now known as tyhicks
=== higgs_ is now known as Guest46041
=== cpg|away is now known as cpg
[18:47] rk
=== smw_ is now known as smw_work
[18:50] jdstrand: so how would you handle that cinder.conf bug?
=== fire_ is now known as codingenesis
=== codingenesis is now known as fire_
[18:57] zul: well, I don't know the issue intimately-- seems we should be shipping our own cinder.conf or patching the one in source before moving it into place.
[18:57] jdstrand: i was thinking something like ucf
[18:57] well, it does say this:
[18:57] The root_helper option (which lets you specify a root wrapper different from cinder-rootwrap, and defaults to using sudo) is now deprecated. You should use the rootwrap_config option instead.
[18:58] zul: did you you root_helper instead of rootwrap_config?
[18:58] s/you you/you use/
[18:58] jdstrand: it's in the cinder.conf for the new version of cinder
[19:01] zul: you misunderstood
[19:02] zul: your installed cinder.conf uses:
[19:02] [DEFAULT]
[19:02] root_helper = sudo /usr/sbin/cinder-rootwrap
[19:02] right
[19:02] the error says that root_helper is deprecated. use rootwrap_config instead
[19:02] jdstrand: ahhhhhh....duh :)
[19:03] so: s/root_helper/rootwrap_config/ in cinder.conf (doing whatever else you need to use rootwrap_config)
[19:03] heh, right :)
[19:16] jdstrand: okies, fixed
[19:18] cool
[19:21] is there a way to have upstart run stuff as other users yet?
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[19:49] are Ubuntu JeOS and vmbuilder still actively developed? I ask because the vm-builder launchpad site has a 20 month old download link and there also doesn't seem to be a JeOS 12.04 image anywhere... Just curious
[19:51] jeos isn't a separate iso any longer
[19:54] Hi, I have a hardware related query. can anyone suggest server hardware under USD $500 to run ubuntu server at home?
[19:54] I am looking for something that is portable too, like the box shape of a mac mini.
[19:54] ssvss: ##hardware would be your channel.
[19:56] Thanks, I will ask in the ##hardware channel
[19:57] RoyK: thanks
[19:59] ssvss: raspberry pi.
[19:59] ssvss: a bit hard to use sata devices on a pi
[20:01] How much ram should lamp be using without any traffic?
[20:02] wrapids: lamp is apache, mysql, php, and may be using variable amounts of memory
[20:02] RoyK: Yes.
[20:02] wrapids: for a small database, mysql won't be using much. php may be using a lot, depending on the code
[20:02] RoyK: That's assuming I have traffic
[20:02] which I dont.
[20:02] apache isn't that heavy on memory
[20:03] say, 50 megs will go a long way without too much work
[20:03] php shouldn't be using much of anything as nothing is being executed. There are no queries going on in the db either
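Going back to the cinder.conf fix agreed on above: a minimal sketch of the edit. The /etc/cinder/rootwrap.conf path is an assumption for illustration, not something stated in the log.

    # replace the deprecated root_helper key with rootwrap_config in [DEFAULT]
    sudo sed -i 's|^root_helper *=.*|rootwrap_config = /etc/cinder/rootwrap.conf|' /etc/cinder/cinder.conf
    sudo service cinder-volume restart   # restart whichever cinder services are running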
[20:03] Would the database size affect the mysql service's usage if it's not getting any queries?
[20:03] once php starts running things, and mysql starts buffering things, say, 512MB should normally do well
[20:04] but then, you can't say unless you know the database size and the php code
[20:04] RoyK: Wouldn't running/buffering require traffic?
[20:04] Yes, Raspberry Pi is not what I am looking for; I was thinking something close to the size of a mac mini in which I can have 2 sata disks
[20:04] nothing is buffered unless it is accessed
[20:04] I'm trying to figure out why I'm using nearly 512mb with 0 traffic, nothing being executed, no queries.
[20:04] ssvss: you can get some mini itx boards quite cheap with SATA
[20:06] hrm, after a reboot it's doing better
[20:06] ssvss: or pico itx or pc/104 or ...
[20:06] 188mb with no traffic?
[20:06] wrapids: is that RSS or DRS?
[20:07] RoyK: I'm not sure how to determine
[20:07] ps axfv
[20:07] top also shows that
[20:07] RoyK: there is always cubox
[20:07] I've been using top
[20:08] top shows VIRT and RES and SHR
[20:08] mysql is using about 50mb idling, apache using about 25mb idling
[20:08] what you want to look for is RES
[20:08] Sorry, about 40 for apache
[20:09] should be fine
[20:09] It was running in the several hundreds before I rebooted
[20:09] free -m was giving me 11 free with only apache/mysql using above .5%
[20:09] resident or virtual?
[20:09] res
[20:09] free usually shows very low "free" memory
[20:09] most of the memory is spent on caching
[20:09] It was fairly accurate compared to the top results
[20:10] Mem: 8178284 8056176 122108 0 469068 6906568
[20:10] apache had 6-10 processes using about 5-10% each
[20:10] that's close to zero free
[20:10] which is fine
[20:10] according to top
[20:10] because you want linux to spend its memory on caching
[20:11] wrapids: seriously - if you don't have a performance issue, don't care about how much memory is spent
[20:11] RoyK: I do have a performance issue when it starts doing that
[20:11] does it start swapping?
[20:11] I have no idea
[20:11] how much memory do you have?
[20:11] 512
[20:11] It's just a dev server
[20:11] not a whole lot
[20:11] the problem is that it starts eating ram like that with nothing going on, I get very delayed response from the ssh interface
[20:12] no idea why
[20:12] hrm
[20:12] check the logs
[20:12] and check swap use
[20:17] wrapids: apache uses prefork with php, so each of the processes weren't using 5-10% each, they were probably sharing most of that
[20:17] ah
=== bitmonk_ is now known as bitmonk
[20:45] I have some interfaces configured for bonding. I have to bring up each of the slave interfaces before bringing up the bond0 interface or it times out waiting for slaves to be available
[20:46] https://gist.github.com/3343969
[20:46] e.g. Waiting for a slave to join bond0 (will timeout after 60s)
[20:46] if I do ifup eth2; ifup eth3; ifup bond0; it works
[21:03] hey all. can anyone help me shrink an LVM partition?
[21:03] i booted a livecd and tried to shrink it thru gparted, but i guess gparted doesn't support LVM
[21:04] so then i figured i had to remove it from LVM to get gparted to see it... that hasn't worked out so far.
[21:05] i removed and added back a logical volume... and i have a feeling it's FUBAR now. i can't seem to get the system to boot.
[21:05] is there a way to recover it, or should i just reinstall?
[21:07] arrrghhh: what do you actually want to resize and what is it stacked on top of?
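For the bonding timeout question above ([20:45]): a minimal /etc/network/interfaces sketch of a layout commonly used with the ifenslave package on 12.04, where each slave declares bond-master so a plain ifup bond0 (or the boot-time ifup -a) can enslave them without waiting. Only the interface names come from the question; the address, netmask and bond-mode are illustrative.

    # /etc/network/interfaces (excerpt) : requires the ifenslave package
    auto eth2
    iface eth2 inet manual
        bond-master bond0

    auto eth3
    iface eth3 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-mode active-backup
        bond-miimon 100
        bond-slaves none

With the slaves marked manual and pointed at bond0, the bond no longer sits in "Waiting for a slave to join bond0" at boot, because bringing up either slave pulls it into the bond.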
[21:07] the whole chain
[21:07] so there's a set of physical SAS disks
[21:08] then i have physical volumes set up, and logical volumes underneath
[21:08] i'd like to shrink one logical AND physical volume
[21:08] then shrink the actual amount provided to the OS
[21:08] this is in an ESXi environment, and i'd like to reclaim a bit i've allocated
[21:08] i fear i've already done too much. i removed the LV, and readded a smaller one - now the OS won't boot, and I'm not sure if it can be recovered.
[21:09] makes me wish i had snapshotted it before doing all this... oy
[21:09] arrrghhh: you are doing it the wrong way around
[21:09] OK
[21:09] first you shrink the OS filesystem.
[21:09] then you shrink the logical volume
[21:09] then you shrink the physical volume
[21:09] then you can shrink the partition
[21:09] my issue was #1 - i couldn't shrink the OS filesystem when it's mounted
[21:10] so i went to a liveCD, and that didn't support LVM
[21:10] ok.
[21:10] (gparted doesn't support LVM rather)
[21:10] in the livecd you install the lvm2 package
[21:10] ok, done :)
[21:10] then you scan lvm groups
[21:10] yes
[21:10] then you mount the logical volume you want to shrink
[21:10] ok let me try
[21:10] then you start shrinking that filesystem
[21:11] then lvresize the logical volume
[21:11] or lvreduce
[21:11] and so on downwards
[21:11] good night
[21:11] thanks
[21:11] hrm. xnox are you leaving?
[21:12] i'll take that as a yes. can anyone else lend a hand with LVM?
[21:12] i am trying to learn about it the hard way, as usual.
[21:13] i have a LV name of /dev/ubuntu/root in lvs, but i can't seem to mount it...
[21:14] anyone? is the data still on the physical volume perhaps? can i just remove LVM and use all the data on the disk?
=== dendrobates is now known as dendro-afk
[21:25] perhaps someone can help me with that?
[21:25] remove LVM, preserve data
[21:26] perhaps restore LVM down the road once i get more comfy with it ;)
[21:29] smoser: hey, have we ever considered putting the cloud images into the archive as packages?
[21:29] utlemming: ^^
[21:30] SpamapS, no.
[21:30] there was a thread once on debian-devel (or maybe ubuntu-devel)
[21:30] about "appliance" packages
[21:30] We're trying to solve the "how to have everything cached for LXC on install" problem
[21:30] i forget who started it
[21:30] it's just yucky
[21:30] well the thinking is that users are used to downloading things with the package manager
[21:30] well, we certainly want to make downloading of those simple and "cached"
[21:31] there is a plan for that.
[21:31] oh do tell
[21:31] and things like update-manager are pretty good at downloading in the background and stuff...
[21:34] smoser utlemming: so we've been throwing around a couple of ideas...
[21:35] doing nothing will result in the juju local provider being confusing to use (due to the initial "stealth" download of the lxc image on the first deploy)
[21:36] one idea was to bust the juju package up into 'juju' and 'juju-local-provider'... the latter downloads the lxc image during postinst
[21:36] with a couple of variations on that theme
[21:38] smoser utlemming: these all suck... I want an easy (or at least idiomatic to packaging) way to download images
[21:38] m_3, i'm sorry. i really have to run right now.
[21:39] smoser: no prob... lemme know if you think of anything pls
[21:39] m_3: when you say "package" are you looking for a package that does the download?
[21:39] utlemming: sure.. or even a package that _is_ the download; 'juju-local-provider-data' would work
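A minimal sketch of the shrink order xnox walks through above, run from a live environment with the lvm2 package installed. The LV name matches the one mentioned in the log; the 20G/25G sizes, the /dev/sda5 PV name, and the ext4 filesystem are illustrative assumptions, and shrinking is destructive if the sizes are chosen wrong, so back up first.

    sudo apt-get install lvm2                    # in the live environment
    sudo vgscan && sudo vgchange -ay             # discover and activate the volume group
    sudo e2fsck -f /dev/ubuntu/root              # filesystem must be clean and unmounted
    sudo resize2fs /dev/ubuntu/root 20G          # 1. shrink the filesystem first
    sudo lvreduce -L 20G /dev/ubuntu/root        # 2. then the logical volume
    sudo pvresize --setphysicalvolumesize 25G /dev/sda5   # 3. then the physical volume
    # 4. finally shrink the partition itself with parted/fdisk, never below the new PV size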
[21:40] I'd prefer the package to *contain the images*
[21:40] my reasoning being that postinsts doing downloads is counter-intuitive when we have *package managers* to do downloads.
[21:40] SpamapS: yikes....that would mean SRU'ing each and every new spin of the images.
[21:41] good with the least sucky variation at this point :)
[21:41] utlemming: MRE would be pretty easy to get given the contents are just the same packages already SRU'd ;)
[21:41] SpamapS: right now we spin up new images as needed, and next cycle we are heavily considering a 3-week new release cadence
[21:41] it might be able to live in a ppa
[21:42] SpamapS: what does a packaging of the image offer over a post-install that downloads and verifies?
=== cpg is now known as cpg|away
[21:42] utlemming: uniformity and discoverability
[21:43] utlemming: it's useful as a Suggests: in many cases (glance.. virt-manager)
[21:43] utlemming: the download behind the scenes is very mysterious. Downloading a new version of the cloud images as a package is obvious.
[21:44] SpamapS: I'm not opposed to this, I'm just unsure about how to make this atomic.
[21:44] utlemming: after you publish the cloud image, you run a package build including get-orig-source which downloads, dch's, and uploads the updated -data package.
[21:45] SpamapS: So if you update the package to a new spin of the images, then all of a sudden you have people that were wanting YYYYMMDD now getting a different YYYYMMDD
[21:45] right, I understand that bit
[21:45] it's what users are going to expect, and what they get
[21:47] utlemming: it's really more like the kernel than regular packages. I could see ubuntu-cloud-images-current depending on the latest cloud image.. but each one would get its own package (ubuntu-cloud-images-20120813)
[21:48] it would be worse than that....it would have to be ubuntu-cloud-images--
[21:49] since not all images are built on the same day and the build serial is almost never the same between releases.
[21:49] * m_3 nods
[21:51] utlemming: I don't see that as "worse"
[21:51] utlemming: just different, so each release would have its own meta
[21:51] well, it's worse because this is a manual process
[21:51] ubuntu-cloud-image-precise would -> ubuntu-cloud-image-precise-YYYYMMDD
[21:51] manual would be out of the question
[21:51] we are looking at automating all aspects of the builds next cycle
[21:52] It should be an idempotent thing that gets run after images are published
[21:52] I'm showing my packaging ignorance here, but is there a way to automate this?
[21:52] utlemming: totally!
[21:52] docs?
[21:52] * SpamapS points to the packaging guide
[21:53] utlemming: all packaging is automatable.
[21:53] you're just used to doing it the most manual way
[21:53] because that is less error prone
[21:53] but if all you are doing is bumping the upstream version.. very simple
[21:54] right, that bit makes sense. Do we give upload rights to bots?
[21:55] debian/rules get-orig-source && dch -v 12.04.1-20120813 'New Upstream Release' && dpkg-buildpackage
[21:55] something like that
[21:55] utlemming: who signs your cloud images?
[21:56] that is an automated process
[21:56] that key is pretty much entrusted with all cloud users' safety.. :)
[21:56] this is true
[21:56] so yes, that would be trustable
[21:56] okay, so I am not opposed to this plan
[21:56] let's ping Mr. Rosales and talk about this next team meeting
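A sketch of how the automated version bump being discussed above might be scripted after each image publication; the packaging branch name, version string, and PPA target are placeholders, not decisions from the log.

    #!/bin/sh -e
    # rebuild and upload the image -data package once a new cloud image is published
    cd ubuntu-cloud-image-precise                  # hypothetical packaging branch
    debian/rules get-orig-source                   # fetch and repack the newly published image
    dch -v 12.04.1-20120813 -D precise "New upstream cloud image build"
    dpkg-buildpackage -S -sa                       # source-only build (add -k<keyid> to pick the signing key)
    dput ppa:cloud-image-bot/staging ../ubuntu-cloud-image-precise_12.04.1-20120813_source.changes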
[21:57] utlemming: aye
[21:59] utlemming: uscan & uupdate also make new upstream versions pretty easy
=== Ursinha` is now known as Ursinha
[22:08] has anyone set up a PXE Install server with FOG? i've got a question for those who have
[22:19] What does it take for Ubuntu to automatically "discover" its correct fqdn?
[22:20] rather than setting it in /etc/hosts
[22:25] zastern, dnsmasq?
[22:25] arrrghhh: so I have to use some sort of private dns server?
[22:25] zastern, just trying to think of how to solve that issue
[22:26] you'd need some sort of a DNS server in order for the FQDN to be automatically propagated
[22:28] what's the default username for cobbler on an Ubuntu install? The community docs say the install prompts for the password, but it does not.
[22:28] I found a way to "define" the password, http://openskill.info/topic.php?ID=201
[22:30] three18ti, cobbler/cobbler?
[22:30] no love. that's what all the docs for fedora say though...
[22:30] `htdigest /etc/cobbler/users.digest "Cobbler" cobbler`
[22:31] hrm
[22:31] allowed me to "reset" the password though.
[22:31] maybe there's a way to update the docs?
[22:32] (actually, it's all coming back to me, this is not the first time I've chased myself in circles following the Ubuntu cobbler docs)
[22:32] lol
[22:32] ;)
[22:32] never heard of cobbler before, just googled hoping i could help ;)
[22:33] from what I've read cobbler is pretty cool. I've looked at a number of bare-metal provisioning systems and so far it looks the best.
[22:33] fancy
[22:34] just looked it up myself
[22:34] FAI, cobbler, linmin, OpenQRM (though openqrm is a whole 'nuther ball game)
[22:34] so based on the install guide, you set the password on install
[22:34] https://help.ubuntu.com/community/Cobbler/Installation
[22:34] yea, that's what I'm saying, the install guide is wrong.
[22:34] okie :)
[22:35] ah, i missed that line. haha.
[22:35] lol. :)
=== zyga is now known as zyga-afk
[22:55] so can anyone help me save my LVM setup?
[23:03] *crickets*
[23:05] arrrghhh: yes but I am using a public DNS already
[23:05] with NS records etc
[23:06] i wonder if there's a way to do it with public dns
[23:06] zastern, not that i know of, how would that work?
[23:06] arrrghhh: no idea :)
[23:06] everyone populating their local DNS friendly names to the wide internet?
[23:06] no
[23:06] im setting it manually of course
[23:06] i was just hoping there was a way for ubuntu to pull thast in
[23:06] that in* from the dns server where i set it
[23:07] hrm
[23:13] crap. looks like i should just start over.
[23:16] can anyone help me start over? lol. i just want to make sure my other untouched and fine LVM is restored
[23:19] http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html
[23:19] looks like that covers it.
[23:40] ok i just need to be able to send an email from this server...
[23:40] i have an account on an email server and an smtp server address to use
[23:40] i would like to set up a smarthost...
[23:40] i got to tell you, the step by step configuration verbiage in dpkg-reconfigure exim4 makes no sense
=== Lcawte is now known as Lcawte|Away
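For the exim4 smarthost question at the end: a minimal sketch of the non-interactive equivalent of dpkg-reconfigure exim4-config on Debian/Ubuntu; the smarthost name, port, and credentials are placeholders.

    # point exim at the smarthost in /etc/exim4/update-exim4.conf.conf
    sudo sed -i \
        -e "s/^dc_eximconfig_configtype=.*/dc_eximconfig_configtype='smarthost'/" \
        -e "s/^dc_smarthost=.*/dc_smarthost='smtp.example.com::587'/" \
        /etc/exim4/update-exim4.conf.conf

    # SMTP auth credentials for the smarthost (format: host:login:password)
    echo 'smtp.example.com:myaccount:mypassword' | sudo tee -a /etc/exim4/passwd.client

    sudo update-exim4.conf && sudo service exim4 restart
    echo "test body" | mail -s "test subject" someone@example.com   # mail(1) comes from e.g. the mailutils package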