[00:47] <francisvgarcia> Hi everyone
[00:49] <francisvgarcia> I am having issues with ubuntu server 12.04 and this network card: Intel Corporation 82562V-2 10/100
[00:50] <francisvgarcia> It completely freezes after one or two hours working, and I have to reboot the server for the network card to work again.
[04:43] <lfactor> Hey guys, i need some help with ufw, i'm wondering if i should turn off stateful support, and if so how.
[04:44] <lfactor> The machine will have around 800k simultaneous connections, i'm assuming stateful will increase the memory requirements a lot, but not sure.
[05:34] <lfactor> i've used /sbin/sysctl -w net.ipv4.netfilter.ip_conntrack_max=1048576 to set my conntrack higher, but still unclear what the positives of a stateful firewall are and if i should have it set on or off.
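A persistence note on the sysctl above: `sysctl -w` only lasts until reboot. A minimal sketch of making it stick and of checking actual table usage (the key is spelled `net.ipv4.netfilter.ip_conntrack_max` on older kernels and `net.netfilter.nf_conntrack_max` on newer ones; check which your kernel exposes):

```shell
# Make the larger conntrack table survive a reboot (run as root).
echo 'net.netfilter.nf_conntrack_max = 1048576' >> /etc/sysctl.conf
sysctl -p

# Sanity-check how full the table actually gets under load; if the count
# stays far below the max, stateful tracking is probably affordable.
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
```

Each conntrack entry costs roughly a few hundred bytes, so ~1M entries is on the order of a few hundred MB of kernel memory, which is the trade-off being asked about.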
[07:38] <zaggynl> Why do I get a frozen virtual box vm every other time when I restart my ubuntu guest
[10:38] <JacKnife_> hello, i'm having trouble with squirrelmail on 12.04, when i compose a message and click send it never browses away from the compose screen, even though the message does get sent
[10:39] <JacKnife_> tried on ie and chrome.  the system is all updated and there are no php errors in the apache logs
[10:40] <JacKnife_> nada when i google "squirrelmail compose send" and similar
[10:48] <JacKnife_> w00t, the dudes in #ubuntu got me a fix: http://comments.gmane.org/gmane.mail.squirrelmail.user/38887
[12:41] <Lachezar> Hello all... Is it normal to have fs.nr_open = 1048576
[12:43] <Lachezar> 'sudo lsof | wc -l' shows 2390 open files/descriptors. And I am getting 'Too Many Open Files' crashes (I've raised limits to 65536 files).
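For context on those numbers: fs.nr_open = 1048576 is the stock kernel default (it is the ceiling for per-process descriptor limits, not a count of anything), so that value by itself is normal. What usually triggers "Too Many Open Files" is the per-process soft limit of the crashing process. A sketch of where to look (the current shell stands in for the failing process):

```shell
# System-wide ceiling on per-process fd limits (1048576 is the stock default):
cat /proc/sys/fs/nr_open
# The limit that actually triggers "Too many open files" is per-process;
# inspect the failing process (here: the current shell, as an example):
grep 'open files' /proc/$$/limits
ulimit -n    # same thing, for the current shell
```

Also note that `sudo lsof | wc -l` is a system-wide (and thread-duplicated) count, so it cannot tell you whether one particular process is near its own limit; and limits.conf only applies to PAM sessions, so a daemon started from an init script needs the limit raised in its own job.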
[13:10] <zul> jdstrand: ping cinder should be good for main now
[13:22] <jdstrand> zul: yeah, it is on my list after I read email
[13:22] <zul> jdstrand: ok cool
[14:05] <Daviey> zul: can you confirm a pep8 backport fixes this FTBFS, https://launchpad.net/~ubuntu-cloud-archive/+archive/folsom-staging/+packages ?
[14:06] <zul> Daviey: sure just a sec
[14:08] <Daviey> (hint, zul - don't confirm by uploading :)
[14:08] <zul> Daviey: well duh :)
[14:09]  * zul quickly hits control-c
[14:09] <Daviey> heh
[14:21] <Daviey> zul: any news?
[14:21] <zul> Daviey:  not yet still building
[14:26] <jamespage> utlemming, walinuxagent now in precise-proposed BTW
[14:26] <utlemming> jamespage: awesome :)
[14:30] <jamespage> utlemming, do you have a handy way of testing it? we can then nudge it through to -updates ASAP
[14:31] <utlemming> jamespage: yeah, I can give that a test rather easily
[14:31] <jamespage> utlemming, marvellous!
[14:40] <zul> Daviey: confirmed
[14:42] <Daviey> zul: confirmed it fixes it?
[14:42] <zul> Daviey: confirmed it fixes it
[14:43] <Daviey> zul: okay.. what version is it?
[14:43] <zul> Daviey: 1.2 from quantal
[14:44] <Daviey> zul: wait, i thought 1.2 was evil?
[14:44] <Daviey> for folsom?
[14:45] <zul> Daviey:  it is...not for f1 though
[14:48] <Daviey> zul: wow, that much fail got introduced for >f1 ?
[14:48] <zul> Daviey:  yeah
[14:48] <jamespage> xnox, I need to do something with dumbo for you today, don't I
[14:49] <xnox> hmmm.... jamespage you could =)
[14:49] <jamespage> xnox, branch?
[14:49] <xnox> jamespage: i have packaging done, but debian/copyright
[14:49] <xnox> it's not done yet.
[14:49] <xnox> let me push it to lp.net
[14:50] <jamespage> xnox, as it's a PPA, not too worried about d/copyright
[14:50]  * jamespage slaps himself
[14:50] <Daviey> zul: well, there is a reasonable chance we might need to fall back to 1.1 for folsom
[14:50] <jamespage> well for the time being at least
[14:50] <Daviey> so doing the same for the cloud archive is reasonable
[14:50] <zul> Daviey: ack
[14:51] <Daviey> zul: can you upload a dsc and Friends somewhere?
[14:51] <zul> Daviey: for pep8?
[14:51] <Daviey> zul: yeah
[14:52] <zul> Daviey: hold on
[14:52] <xnox> jamespage: two branches: lp:~dmitrij.ledkov/+junk/typedbytes and lp:~dmitrij.ledkov/dumbo/packaging
[14:52] <xnox> it's two small python packages.
[14:53] <xnox> jamespage: feel free to repush to a more appropriate ~person
[14:53] <xnox> and if/when it's in the ppa, I can adjust juju charms to optionally include those
[14:53] <xnox> there is also pydumbo, but it's slower and I have no experience with it. And dumbo is sufficient so far.
[14:53] <xnox> although pydumbo has dfs bindings....
[14:54] <Daviey> zul: so.. first line of the changelog for nova, i set to -  nova (2012.2~f1-0ubuntu1~cloud0) precise-folsom; urgency=low .. and the .changes has "Distribution: precise" .. does that make sense?
[14:54] <zul> Daviey: yeah iirc thats what we agreed to
[14:55] <zul> Daviey: pep8 stuff is at: http://people.canonical.com/~chucks/tmp/
[14:56] <zul> Daviey: because eventually you are going to have precise-grizzly, precise-h, precise-i, etc ,etc
[14:56] <Daviey> zul: right
[15:01] <jamespage> xnox, I've pushed them both to the dev PPA
[15:02] <jamespage> xnox, all of the hadoop related charms support use of dev|test|stable PPA's for that team
[15:02] <xnox> jamespage: cool, thanks =)
[15:03] <jamespage> xnox, I really like the idea of not having to write stuff in Java
[15:05] <xnox> jamespage: ideally i want to jujufy discoproject map-reduce
[15:05] <xnox> which uses tags instead of folders for dfs
[15:06] <xnox> and python instead of java for mapreduce
[15:06] <xnox> but server part is written in erlang and relies on DNS available for the nodes
[15:06] <xnox> but HPCloud doesn't support DNS at the moment
[15:07] <xnox> so I'm stuck with both discoproject and HPCloud lacking a feature: dns-less setup or dns support respectively =)
[15:08]  * jamespage sighs
[15:11] <Lachezar> Repeating after a few hours: Is it normal to have fs.nr_open = 1048576
[15:11] <Lachezar> 'sudo lsof | wc -l' shows 2390 open files/descriptors. And I am getting 'Too Many Open Files' crashes (I've raised limits to 65536 files).
[15:12] <jdstrand> zul: re cinder> commented in the bug
[15:15] <utlemming> jamespage: it looks like walinuxagent hasn't landed in the archive yet...as soon as I see it, I'll test
[15:16] <RoyK> hm... seems when I reboot this machine, some drives in my raid come up as "missing" during initial bootup, and I get kicked into busybox. just exiting busybox works, and after that, I can mdadm --stop && mdadm --assemble and mount it - any idea how I can "slow down" this detection or increase the timeouts to avoid this problem?
[15:16] <jamespage> utlemming, should be - its in precise-proposed - https://launchpad.net/ubuntu/+source/walinuxagent/1.0~git20120606.c16f5e9-0ubuntu2~12.04.1
[15:17] <jamespage> if its not there after 4 days we have a problem
[15:17] <xnox> RoyK: if you are using precise, please upgrade to mdadm from -precise, as I commented on your bug report?
[15:17] <xnox> from -proposed that is.
[15:17] <xnox> it has an extra timeout to wait for udev to finish processing events, before dropping into busybox, which helps most people.
[15:17] <utlemming> jamespage: duh, my apt sources.list was wrong
[15:18] <jamespage> lol
[15:20] <RoyK> xnox: how can I upgrade to that from -proposed?
[15:20] <zul> jdstrand: damn it...*grumble* *grumble*
[15:21] <RoyK> xnox: this is precise, btw
[15:21] <xnox> !proposed
[15:22] <xnox> RoyK: https://wiki.ubuntu.com/Testing/EnableProposed
[15:22] <RoyK> xnox: thanks
[15:27] <RoyK> xnox: \o/
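For the record, the wiki page linked above boils down to roughly this (mirror and pinning details may differ from the page):

```shell
# Enable precise-proposed and pull just mdadm from it, not everything:
echo 'deb http://archive.ubuntu.com/ubuntu precise-proposed main universe' |
    sudo tee /etc/apt/sources.list.d/proposed.list
sudo apt-get update
sudo apt-get install -t precise-proposed mdadm
```

Without pinning, a plain `apt-get upgrade` would also pull other packages from -proposed; the wiki page covers pinning -proposed low so that only explicit installs use it.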
[15:52] <hallyn> zul: did a qa-regression-test run just for sanity's sake, all still looks good.  just lettin' you know cause i'm sure you're unable to sleep at night worrying about it
[15:53] <zul> hallyn: libvirt?
[15:53] <hallyn> zul: yeah
[15:53] <zul> hallyn: coolness
[15:53] <hallyn> zul: do you know of anything we still need to do to libvirt during q?
[15:54] <zul> hallyn: nope just make sure it doesnt break
[15:54] <zul> hallyn: although i hope we can get the new libvirt-lxc stuff in for q
[15:54] <hallyn> which new stuff?
[15:55] <zul> hallyn: like the lxc reboot
[15:55] <hallyn> do you know where that went in? is it in 0.9.14?
[15:56] <zul> i think it is in trunk
[15:56] <hallyn> cause i assume that went in after those 500 'let's rename stuff for fun' patches, so forget about backporting
[15:56] <utlemming> jamespage: confirmed
[15:56] <jamespage> utlemming, great - nice one
[15:56] <zul> hallyn: yeah thats why i want trunk :)
[15:56] <utlemming> jamespage: I fired up a couple of instances to be sure.
[15:56] <hallyn> by trunk you mean git head?
[15:57] <hallyn> (not trying to be pedantic, just not sure what you mean)
[15:59] <zul> hallyn: ack
[15:59] <hallyn> k
[16:09] <souliaq> Hi, I have a little "legal" licensing problem in my company, so the lawyer is asking me for the licenses of Ubuntu-Server, apache and subversion. What does he need? GPLv3 and Apache License "texts" and that's all?
[17:18] <xnox> Anyone has a spare intel matrix raid controller?
[17:19] <xnox> souliaq: tarball of /usr/share/common-licenses/ as well as /usr/share/doc/*/copyright
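That answer as a command, roughly (paths are the stock Debian/Ubuntu locations; each installed package carries its own copyright file under /usr/share/doc):

```shell
# Bundle the common license texts plus the copyright files of the
# packages the lawyer asked about:
tar czf license-review.tar.gz \
    /usr/share/common-licenses \
    /usr/share/doc/apache2*/copyright \
    /usr/share/doc/subversion/copyright
```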
[17:27] <RoyK> xnox: is that real raid or fakeraid?
[17:28] <RoyK> looks like fakeraid to me
[17:28] <RoyK> better use software raid :)
[17:28] <xnox> RoyK: it's not real-real, but it's usually managed with dmraid but recent mdadm can store external metadata using intel matrix format
[17:29] <xnox> and i want to test that, cause I am about to update mdadm in precise
[17:29] <RoyK> ok
[17:37] <RoyK> xnox: will I have to update mdadm manually when you're done with the precise update, currently using the one in proposed?
[17:38] <xnox> RoyK: no you wont. The one in -proposed will be promoted into -updates pocket, such that everyone will get it and it will be included in the 12.04.1
[17:39] <RoyK> thanks
[17:39] <xnox> RoyK: you're welcome =)
[17:41]  * RoyK wonders slightly if bcache will make it into upstream kernel ;)
[17:44] <smoser> hm.. i have this utility http://smoser.brickies.net/git/?p=tildabin.git;a=blob;f=make-seed-disk;hb=HEAD
[17:45] <smoser> that i'd like to have packaged. cloud-utils seems a reasonable place for it
[17:46] <smoser> but it would add a depends on genisoimage (and probably a 'Suggests:' for mtools)
[17:46] <smoser> i was going to name it "cloud-localds" (local datasource)
[17:46] <smoser> anyone have a better idea than its own binary package of cloud-utils?
[17:51] <smoser> utlemming, you want to do that^ ?
[17:51] <smoser> i cannot do it today for sure.
[17:52] <utlemming> smoser: yeah...I think I can give it a shot...are we thinking of a subpackage of "cloud-utils-localds" to the cloud-utils package?
[17:52] <utlemming> or just adding it in
[17:55] <smoser> https://bugs.launchpad.net/cloud-utils/+bug/1036312
[17:55] <smoser> utlemming, ^ i think a subpackage is best
[17:56] <utlemming> smoser: ack, we're on the same page
[18:47] <smw_> rk
[18:50] <zul> jdstrand: so how you would you handle that cinder.conf bug?
[18:57] <jdstrand> zul: well, I don't know the issue intimately-- seems we should be shipping our own cinder.conf or patching the one in source before moving it into place.
[18:57] <zul> jdstrand: i was thinking something like ucf
[18:57] <jdstrand> well, it does say this:
[18:57] <jdstrand> The root_helper option (which lets you specify a root wrapper different from cinder-rootwrap, and defaults to using sudo) is now deprecated. You should use the rootwrap_config option instead.
[18:58] <jdstrand> zul: did you you root_helper instead of rootwrap_config?
[18:58] <jdstrand> s/you you/you use/
[18:58] <zul> jdstrand: its in the cinder.conf for the new version of cinder
[19:01] <jdstrand> zul: you misunderstood
[19:02] <jdstrand> zul: your installed cinder.conf uses:
[19:02] <jdstrand> [DEFAULT]
[19:02] <jdstrand> root_helper = sudo /usr/sbin/cinder-rootwrap
[19:02] <zul> right
[19:02] <jdstrand> the error says that root_helper is deprecated. use rootwrap_config instead
[19:02] <zul> jdstrand: ahhhhhh....duh :)
[19:03] <jdstrand> so: s/root_helper/rootwrap_config/ in cinder.conf (doing whatever else you need use rootwrap_config)
[19:03] <jdstrand> heh, right :)
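Concretely, the edit being described looks something like this (the rootwrap.conf path shown is the upstream default, assumed here rather than confirmed in the discussion):

```ini
# /etc/cinder/cinder.conf
[DEFAULT]
# deprecated:
#root_helper = sudo /usr/sbin/cinder-rootwrap
# replacement -- points at a rootwrap config file instead of a command:
rootwrap_config = /etc/cinder/rootwrap.conf
```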
[19:16] <zul> jdstrand: okies fixed
[19:18] <jdstrand> cool
[19:21] <antihero> is there a way to have upstart run stuff as other users yet?
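On that question: yes, as of upstart 1.4 (12.04 ships a newer upstart than that) there are `setuid`/`setgid` stanzas, so a job can drop privileges without the old `su -c`/`start-stop-daemon` workarounds. A minimal hypothetical job file (name, user, and path are all made up):

```
# /etc/init/myapp.conf
description "run myapp as an unprivileged user"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid www-data
setgid www-data
exec /usr/local/bin/myapp
```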
[19:49] <hdave> are Ubuntu JeOS and vmbuilder still actively developed?  I ask because the vm-builder launchpad site has a 20-month-old download link and there also doesn't seem to be a JeOS 12.04 image anywhere... Just curious
[19:51] <RoyK> jeos isn't a separate iso any longer
[19:54] <ssvss> Hi, I have a hardware related query. can anyone suggest server hardware under USD $500 to run ubuntu server at home?
[19:54] <ssvss> I am looking for something that is portable too, like the box shape of a mac mini.
[19:54] <Psi-Jack> ssvss: ##hardware would be your channel.
[19:56] <ssvss> Thanks, I will ask in the ##hardware channel
[19:57] <hdave> RoyK: thanks
[19:59] <cheez0r> ssvss: raspberry pi.
[19:59] <RoyK> ssvss: a bit hard to use sata devices on a pi
[20:01] <wrapids> How much ram should lamp be using without any traffic?
[20:02] <RoyK> wrapids: lamp is apache, mysql, php, and may be using variable amounts of memory
[20:02] <wrapids> RoyK: Yes.
[20:02] <RoyK> wrapids: for a small database, mysql won't be using much. php may be using a lot, depending on the code
[20:02] <wrapids> RoyK: That's assuming I have traffic
[20:02] <wrapids> which I dont.
[20:02] <RoyK> apache isn't that heavy on memory
[20:03] <RoyK> say, 50 megs will go a long way without too much work
[20:03] <wrapids> php shouldn't be using much of anything as nothing is being executed. There are no queries going on in the db either
[20:03] <wrapids> Would the database size affect the mysql services usage if it's not getting any queries?
[20:03] <RoyK> once php starts running things, and mysql starts buffering things, say, 512MB should normally do well
[20:04] <RoyK> but then, you can't say unless you know the database size and the php code
[20:04] <wrapids> RoyK: Wouldn't running/buffering require traffic?
[20:04] <ssvss> Yeah, a Raspberry Pi is not what I am looking for, I was thinking something close to the size of a mac mini in which I can have 2 sata disks
[20:04] <RoyK> nothing is buffered unless it is accessed
[20:04] <wrapids> I'm trying to figure out why I'm using nearly 512mb with 0 traffic, nothing being executed, no queries.
[20:04] <RoyK> ssvss: you can get some mini itx boards quite cheap with SATA
[20:06] <wrapids> hrm, after a reboot it's doing better
[20:06] <RoyK> ssvss: or pico itx or pc/104 or ...
[20:06] <wrapids> 188mb with no traffic?
[20:06] <RoyK> wrapids: is that RSS or DRS?
[20:07] <wrapids> RoyK: I'm not sure how to determine
[20:07] <RoyK> ps axfv
[20:07] <RoyK> top also tells that
[20:07] <raubvogel> RoyK: there is always cubox
[20:07] <wrapids> I've been using top
[20:08] <RoyK> top shows VIRT and RES and SHR
[20:08] <wrapids> mysql is using about 50mb idling, apache using about 25mb idling
[20:08] <RoyK> what you want to look for is RES
[20:08] <wrapids> Sorry, about 40 for apache
[20:09] <RoyK> should be fine
[20:09] <wrapids> It was running in the several hundreds before I rebooted
[20:09] <wrapids> free -m was giving me 11 free with only apache/mysql using above .5%
[20:09] <RoyK> resident or virtual?
[20:09] <wrapids> res
[20:09] <RoyK> free usually shows very low "free" memory
[20:09] <RoyK> most of the memory is spent on caching
[20:09] <wrapids> It was fairly accurate compared to the top results
[20:10] <RoyK> Mem:       8178284    8056176     122108          0     469068    6906568
[20:10] <wrapids> apache had 6-10 processes using about 5-10% each
[20:10] <RoyK> that's close to zero free
[20:10] <RoyK> which is fine
[20:10] <wrapids> according to top
[20:10] <RoyK> because you want linux to spend its memory on caching
[20:11] <RoyK> wrapids: seriously - if you don't have a performance issue, don't care about how much memory is spent
[20:11] <wrapids> RoyK: I do have a performance issue when it starts doing that
[20:11] <RoyK> does it start swapping?
[20:11] <wrapids> I have no idea
[20:11] <RoyK> how much memory do you have?
[20:11] <wrapids> 512
[20:11] <wrapids> It's just a dev server
[20:11] <RoyK> not a whole lot
[21:11] <wrapids> the problem is that it starts eating ram like that with nothing going on, and I get very delayed responses from the ssh session
[20:12] <RoyK> no idea why
[20:12] <wrapids> hrm
[20:12] <RoyK> check the logs
[20:12] <RoyK> and check swap use
[20:17] <RoyK> wrapids: apache uses prefork with php, so each of the processes weren't using 5-10% each, they were probably sharing most of that
[20:17] <wrapids> ah
[20:45] <jkyle> I have some interfaces configured for bonding. I have to bring up each of the slave interfaces before bringing up the bond0 interface or it times out waiting for slaves to be available
[20:46] <jkyle> https://gist.github.com/3343969
[20:46] <jkyle> e.g. Waiting for a slave to join bond0 (will timeout after 60s)
[20:46] <jkyle> if I do, ifup eth2; ifup eth3;ifup bond0; it works
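A common fix for that symptom on 12.04 is to declare the slaves with `bond-master` so bringing up any of the interfaces pulls the whole bond together, instead of bond0 timing out waiting for slaves that were never ifup'd. A sketch (addressing and bond mode are placeholders, not taken from the gist):

```
# /etc/network/interfaces
auto eth2
iface eth2 inet manual
    bond-master bond0

auto eth3
iface eth3 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-miimon 100
    bond-mode active-backup
    bond-slaves none
```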
[21:03] <arrrghhh> hey all.  can anyone help me shrink an LVM partition?
[21:03] <arrrghhh> i booted a livecd and tried to shrink it thru gparted, but i guess gparted doesn't support LVM
[21:04] <arrrghhh> so then i figured i had to remove it from LVM to get gparted to see it... that hasn't worked out so far.
[21:05] <arrrghhh> i removed and added back a logical volume... and i have a feeling it's FUBAR now.  i can't seem to get the system to boot.
[21:05] <arrrghhh> is there a way to recover it, or should i just reinstall?
[21:07] <xnox> arrrghhh: what do you actually want to resize and what is it stacked on top of?
[21:07] <xnox> the whole chain
[21:07] <arrrghhh> so there's a set physical SAS disks
[21:08] <arrrghhh> then i have physical volumes setup, and logical volumes underneath it
[21:08] <arrrghhh> i'd like to shrink one logical AND physical volume
[21:08] <arrrghhh> then shrink the actual amount provided to the OS
[21:08] <arrrghhh> this is in an ESXi environment, and i'd like to reclaim a bit i've allocated
[21:08] <arrrghhh> i fear i've already done too much.  i removed the LV, and readded a smaller one - now the OS won't boot, and I'm not sure if it can be recovered.
[21:09] <arrrghhh> makes me wish i had snapshotted it before doing all this... oy
[21:09] <xnox> arrrghhh: you are doing it the wrong way around
[21:09] <arrrghhh> OK
[21:09] <xnox> first you shrink the OS filesystem.
[21:09] <xnox> then you shrink logical volume
[21:09] <xnox> then you shrink physical volume
[21:09] <xnox> then you can shrink the partition
[21:09] <arrrghhh> my issue was #1 - i couldn't shrink the OS filesystem when it's mounted
[21:10] <arrrghhh> so i went to a liveCD, and that didn't support LVM
[21:10] <xnox> ok.
[21:10] <arrrghhh> (gparted doesn't support LVM rather)
[21:10] <xnox> in livecd you install lvm2 package
[21:10] <arrrghhh> ok, done :)
[21:10] <xnox> then you scan lvm groups
[21:10] <arrrghhh> yes
[21:10] <xnox> then you mount the logical volume you want to shrink
[21:10] <arrrghhh> ok let me try
[21:10] <xnox> then you start shrinking that filesystem
[21:11] <xnox> then lvresize the logical volume
[21:11] <xnox> or lvreduce
[21:11] <xnox> and so on downwards
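That shrink order, written out for an ext4 root LV (names and sizes are illustrative; everything here is destructive, and the filesystem must be unmounted, so run it from the livecd and back up first):

```shell
e2fsck -f /dev/ubuntu/root               # resize2fs insists on a clean fs
resize2fs /dev/ubuntu/root 18G           # 1. shrink the fs below the target
lvreduce -L 20G /dev/ubuntu/root         # 2. shrink the LV to the target
resize2fs /dev/ubuntu/root               # 3. grow the fs back to fill the LV
pvresize --setphysicalvolumesize 25G /dev/sda5   # 4. shrink the PV
# 5. finally shrink the partition (parted/fdisk), never below the PV size
```

Shrinking the filesystem slightly below the target and regrowing it afterwards avoids the off-by-a-few-extents case where the LV ends up smaller than the filesystem.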
[21:11] <xnox> good night
[21:11] <arrrghhh> thanks
[21:11] <arrrghhh> hrm.  xnox are you leaving?
[21:12] <arrrghhh> i'll take that as a yes.  can anyone else lend a hand with LVM?
[21:12] <arrrghhh> i am trying to learn about it the hard way, as usual.
[21:13] <arrrghhh> i have a LV name of /dev/ubuntu/root in lvs, but i can't seem to mount it...
[21:14] <arrrghhh> anyone?  is the data still on the physical volume perhaps?  can i just remove LVM and use all the data on the disk?
[21:25] <arrrghhh> perhaps someone can help me with that?
[21:25] <arrrghhh> remove LVM, preserve data
[21:26] <arrrghhh> perhaps restore LVM down the road once i get more comfy with it ;)
[21:29] <SpamapS> smoser: hey, have we ever considered putting the cloud images into the archive as packages?
[21:29] <SpamapS> utlemming: ^^
[21:30] <smoser> SpamapS, no.
[21:30] <smoser> there was a thread once on debian-devel (or maybe ubuntu-devel)
[21:30] <smoser> about "appliance" packages
[21:30] <SpamapS> We're trying to solve the "how to have everything cached for LXC on install" problem
[21:30] <smoser> i forget who started it
[21:30] <smoser> it's just hacky
[21:30] <SpamapS> well the thinking is that users are used to downloading things with the package manager
[21:30] <smoser> well, we certainly want to make downloading of those simple and "cached"
[21:31] <smoser> there is a plan for that.
[21:31] <m_3> oh do tell
[21:31] <SpamapS> and things like update-manager are pretty good at downloading in the background and stuff...
[21:34] <m_3> smoser utlemming: so we've been throwing around a couple of ideas...
[21:35] <m_3> doing nothing will result in the juju local provider being confusing to use (due to the initial "stealth" download of the lxc image on the first deploy)
[21:36] <m_3> one idea was to bust the juju package up into 'juju' and 'juju-local-provider'... the latter downloads the lxc image during postinst
[21:36] <m_3> with a couple of variations on that theme
[21:38] <m_3> smoser utlemming: these all suck...  I want an easy (or at least idiomatic to packaging) way to download images
[21:38] <smoser> m_3, i'm sorry. i really have to run right now.
[21:39] <m_3> smoser: no prob... lemme know if you think of anything pls
[21:39] <utlemming> m_3: when you say "package" are you looking for a package that does the download?
[21:39] <m_3> utlemming: sure.. or even a package that _is_ the download... 'juju-local-provider-data' would work
[21:40] <SpamapS> I'd prefer the package to *contain the images*
[21:40] <SpamapS> my reasoning being that postinsts doing downloads is counter-intuitive when we have *package managers* to do downloads.
[21:40] <utlemming> SpamapS: yikes....that would mean SRU'ing each and every new spin of the images.
[21:41] <m_3> good with the least sucky variation at this point :)
[21:41] <SpamapS> utlemming: MRE would be pretty easy to get given the contents are just the same packages already SRU'd ;)
[21:41] <utlemming> SpamapS: right now we spin up new images as needed, and next cycle we are heavily considering a 3-week new release cadence
[21:41] <m_3> it might be able to live in a ppa
[21:42] <utlemming> SpamapS: what does a packaging of the image offer over a post-install that downloads and verifies?
[21:42] <SpamapS> utlemming: uniformity and discoverability
[21:43] <SpamapS> utlemming: its useful as a Suggests: in many cases (glance.. virt-manager)
[21:43] <SpamapS> utlemming: the download behind the scenes is very mysterious. Downloading a new version of the cloud images as a package is obvious.
[21:44] <utlemming> SpamapS: I'm not opposed to this, I'm just unsure about how to make this atomic.
[21:44] <SpamapS> utlemming: after you publish the cloud image, you run a package build including get-orig-source which downloads, dch's, and uploads the updated -data package.
[21:45] <utlemming> SpamapS: So if you update the package to a new spin of the images, then all of a sudden people that were wanting YYYYMMDD are now getting a different YYYYMMDD
[21:45] <utlemming> right, I understand that bit
[21:45] <utlemming> it's what users are going to expect, vs what they get
[21:47] <SpamapS> utlemming: its really more like the kernel than regular packages. I could see ubuntu-cloud-images-current depending on the latest cloud image.. but each one would get its own package (ubuntu-cloud-images-20120813)
[21:48] <utlemming> it would be worse than that....it would have to be ubuntu-cloud-images-<release>-<build serial>
[21:49] <utlemming> since not all images are built on the same day and the build serial is almost never the same between releases.
[21:49]  * m_3 nods
[21:51] <SpamapS> utlemming: I don't see that as "worse"
[21:51] <SpamapS> utlemming: just different, so each release would have its own meta
[21:51] <utlemming> well, it's worse because this is a manual process
[21:51] <SpamapS> ubuntu-cloud-image-precise would -> ubuntu-cloud-image-precise-YYYYMMDD
[21:51] <SpamapS> manual would be out of the question
[21:51] <utlemming> we are looking at automating all aspects of the builds next cycle
[21:52] <SpamapS> It should be an idempotent thing that gets run after images are published
[21:52] <utlemming> I'm showing my packaging ignorance here, but is there a way to automate this?
[21:52] <SpamapS> utlemming: totally!
[21:52] <utlemming> docs?
[21:52]  * SpamapS points to the packaging guide
[21:53] <SpamapS> utlemming: all packaging is automatable.
[21:53] <SpamapS> you're just used to doing it the most manual way
[21:53] <SpamapS> because that is less error prone
[21:53] <SpamapS> but if all you are doing is bumping upstream version.. very simple
[21:54] <utlemming> right, that bit makes sense. Do we give upload rights to bots?
[21:55] <SpamapS> debian/rules get-orig-source && dch -v 12.04.1-20120813 'New Upstream Release' && dpkg-buildpackage
[21:55] <SpamapS> something like that
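Expanded slightly, the automated step sketched above could look like this (the package name, version, PPA name, and signing-key placeholder are all hypothetical):

```shell
set -e
cd ubuntu-cloud-image-precise            # the hypothetical -data source package
debian/rules get-orig-source             # fetch the just-published image
dch -v 12.04.1-20120813 'New upstream image build'
dpkg-buildpackage -S -k<image-signing-key-id>   # signed source-only build
dput my-ppa ../ubuntu-cloud-image-precise_12.04.1-20120813_source.changes
```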
[21:55] <SpamapS> utlemming: who signs your cloud images?
[21:56] <utlemming> that is an automated process
[21:56] <SpamapS> that key is pretty much entrusted with all cloud users' safety.. :)
[21:56] <utlemming> this is true
[21:56] <SpamapS> so yes, that would be trustable
[21:56] <utlemming> okay, so I am not opposed to this plan
[21:56] <utlemming> lets ping Mr. Rosales and talk about this next team meeting
[21:57] <SpamapS> utlemming: aye
[21:59] <Daviey> utlemming: uscan & uupdate also make new upstream versions pretty easy
[22:08] <Vamps-AFK> has anyone set up a PXE install server with FOG? i've got a question for those who have
[22:19] <zastern> What does it take for Ubuntu to automatically "discover" its correct fqdn?
[22:20] <zastern> rather than setting it in /etc/hosts
[22:25] <arrrghhh> zastern, dnsmasq?
[22:25] <zastern> arrrghhh: so I have to use some sort of private dns server?
[22:25] <arrrghhh> zastern, just trying to think of how to solve that issue
[22:26] <arrrghhh> you'd need some sort of a DNS server in order for the FQDN to be automatically propagated
[22:28] <three18ti> what's the default username for cobbler on an Ubuntu install?  The community docs say the install prompts for the password, but it does not.
[22:28] <three18ti> I found a way to "define" the password, http://openskill.info/topic.php?ID=201
[22:30] <arrrghhh> three18ti, cobbler/cobbler?
[22:30] <three18ti> no love.  that's what all the docs for fedora say though...
[22:30] <three18ti> `htdigest /etc/cobbler/users.digest "Cobbler" cobbler`
[22:31] <arrrghhh> hrm
[22:31] <three18ti> allowed me to "reset" the password though.
[22:31] <three18ti> maybe there's a way to update the docs?
[22:32] <three18ti> (actually, it's all coming back to me, this is not the first time I've chased myself in circles following the Ubuntu cobbler docs)
[22:32] <arrrghhh> lol
[22:32] <three18ti> ;)
[22:32] <arrrghhh> never heard of cobbler before, just googled hoping i could help ;)
[22:33] <three18ti> from what I've read cobbler is pretty cool.  I've looked at a number of bare-metal provisioning systems and so far it looks the best.
[22:33] <arrrghhh> fancy
[22:34] <arrrghhh> just looked it up myself
[22:34] <three18ti> FAI, cobbler, linmin, OpenQRM (though openqrm is a whole 'nuther ball game)
[22:34] <arrrghhh> so based on the install guide, you set the password on install
[22:34] <arrrghhh> https://help.ubuntu.com/community/Cobbler/Installation
[22:34] <three18ti> yea, that's what I'm saying, the install guide is wrong.
[22:34] <arrrghhh> okie :)
[22:35] <arrrghhh> ah, i missed that line.  haha.
[22:35] <three18ti> lol. :)
[22:55] <arrrghhh> so can anyone help me save my LVM setup?
[23:03] <arrrghhh> *crickets*
[23:05] <zastern> arrrghhh: yes but I am using a public DNS already
[23:05] <zastern> with nsrecords etc
[23:06] <zastern> i wonder if theres a way to do it with public dns
[23:06] <arrrghhh> zastern, not that i know of, how would that work?
[23:06] <zastern> arrrghhh: no idea :)
[23:06] <arrrghhh> everyone populating their local DNS friendly names to the wide internet?
[23:06] <zastern> no
[23:06] <zastern> im setting it manually of course
[23:06] <zastern> i was just hoping there was a way for ubuntu to pull thast in
[23:06] <zastern> that in* from the dns server where i set it
[23:07] <arrrghhh> hrm
[23:13] <arrrghhh> crap.  looks like i should just start over.
[23:16] <arrrghhh> can anyone help me start over?  lol.  i just want to make sure my other untouched and fine LVM is restored
[23:19] <arrrghhh> http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html
[23:19] <arrrghhh> looks like that covers it.
[23:40] <grendal> ok i just need to be able to send an email from this server...
[23:40] <grendal> i have an account on an email server and an smtp server address to use
[23:40] <grendal> i would like to set up a smarthost...
[23:40] <grendal> i got to tell you, the step-by-step configuration verbiage in dpkg-reconfigure exim4 makes no sense
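For what it's worth, the dpkg-reconfigure questions just populate /etc/exim4/update-exim4.conf.conf, which can be edited directly. A minimal smarthost sketch (hostnames, port, and credentials are placeholders):

```
# /etc/exim4/update-exim4.conf.conf
dc_eximconfig_configtype='smarthost'
dc_other_hostnames='myserver.example.com'
dc_smarthost='smtp.example.com::587'
dc_local_interfaces='127.0.0.1'
dc_hide_mailname='true'

# SMTP auth goes in /etc/exim4/passwd.client, one line per smarthost:
#   smtp.example.com:myaccount:mypassword
# then apply with: update-exim4.conf && service exim4 restart
```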