[00:05] hi, filezilla wont delete files from the server because of a permissions problem, how do i access permissions to delete and upload files [00:11] WeThePeople: understanding unix permissions is integral to being able to use linux well.. you may wish to study these three quick overviews: http://www.sal.ksu.edu/faculty/tim/unix_sg/nonprogrammers/file_sys/permissions.html http://oldfield.wattle.id.au/luv/permissions.html http://en.wikibooks.org/wiki/A_Quick_Introduction_to_Unix/Permissions [00:11] (as a side-note, it's amazing how many 'introduction to unix permissions' guides don't cover _directory_ permissions, which are just as important as file permissions. sigh.) [00:19] sarnold, i think i need to chown / [00:20] with -R [00:20] you could, if you wanted to totally destroy all permissions on that server [00:26] ok, is " cd /var/www/ " the correct way to cd into a dir. on the server? [00:27] i cant get past /var [00:27] into www [00:28] WeThePeople: you might want to pastebin the results of ls -ld / /var /var/www /var/www/* -- and describe what you're trying to do [00:32] sarnold, i am trying to delete the index.html from /home/var/www/ >>> http://imgh.us/Screenshot_from_2013-05-13_17:31:29.png [00:32] from filezilla, and am getting permission probs === Sargun_ is now known as Sargun [00:35] it is a chown issue [00:43] ok i think filezilla doesnt have the correct permissions then [00:45] how do i set the permissions for filezilla? [00:59] WeThePeople: what account did you use to log in? [00:59] WeThePeople: .. and why not just sudo rm /var/www/index.html ? [00:59] sarnold, i did [00:59] i am working on a solution to upload now [01:00] aha. what user account will you use to upload? [01:00] what i ssh into [01:00] ace? [01:00] idk what user account [01:00] yes [01:00] ace [01:01] sudo chown -R ace:ace /var/www [01:01] that'll change /var/www and all its child directories to be owned by ace [01:02] sarnold, i did that and still filezilla would not let me delete that file [01:04] WeThePeople: really? o_O [01:04] yes [01:14] its a command issue in filezilla, its only "ls" listing it [01:17] sarnold, >>> http://imgh.us/Screenshot_from_2013-05-13_18:15:23.png [01:18] WeThePeople: figure out for certain what user account you're using in filezilla.. [01:28] im using ace [01:28] sarnold, im using ace [01:29] WeThePeople: pastebin the ls -ld / /var /var/www again... [01:30] really, just add ace to the www-data group [01:30] patdk-l2: it was owned root:root before I suggested ace:ace ... [01:30] oh? odd [01:31] patdk-l2: indeed. (not that I like www-data owning files, but I'm sure you're sick of hearing that particular rant :) [01:37] patdk-l2, how do i do that [01:39] patdk-l2, ^^ [01:41] sarnold, http://paste.ubuntu.com/5663113/ [01:42] WeThePeople: aha. your chown didn't actually work. [01:42] WeThePeople: it's still root:root [01:42] ah yes i see [01:42] interesting [01:42] sarnold, well, www-data owning files depends, but ya [01:43] WeThePeople: time to run :) you can either fix things up as patdk-l2 recommends or you can change everything to be owned by ace. either way. [01:43] thanks for the help [01:44] WeThePeople: just be sure to spend some time with those three introduction-to-unix-permissions links I pasted earlier -- knowing how to fix this stuff is important, even if it does take a bit to understand initially...
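For reference, the fix being discussed above boils down to a few shell commands. This is only a sketch, assuming the FileZilla/SSH login user really is "ace" and the web root really is /var/www (both taken from the conversation; adjust to match your setup):

    # check who currently owns the directory chain
    ls -ld / /var /var/www /var/www/*

    # option 1 (sarnold): hand the web root and everything under it to the upload user
    sudo chown -R ace:ace /var/www

    # option 2 (patdk-l2): keep the ownership, add the user to the www-data group,
    # and make the tree group-writable
    sudo adduser ace www-data
    sudo chgrp -R www-data /var/www
    sudo chmod -R g+w /var/www

    # one-off alternative for the stubborn file
    sudo rm /var/www/index.html

Note that a group change made with adduser only takes effect at the next login, so FileZilla needs to disconnect and reconnect before retrying.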
[01:44] have fun and good luck :) === Ursinha is now known as Ursinha-zzz [03:11] is it possible to have a gui come up in ubuntu-server [03:56] WeThePeople, it's possible to install a desktop, but then that's not ubuntu server anymore [03:56] thanks === flebel is now known as Guest71621 [06:39] zul: http://people.canonical.com/~agandelman/ca/grizzly/python-glanceclient/ [06:40] jamespage: ^ [06:58] <_PehdeN_> anyone here [06:58] Just you. [06:59] <_PehdeN_> Good. Error duplicat sources. [06:59] <_PehdeN_> lol [06:59] <_PehdeN_> I dont remember how to clear the cache I think thats what i need to do [06:59] ...wat [07:00] <_PehdeN_> apt-get cache or some there was a command that clears the cache [07:00] Your DNS cache, your disk cache, your web cache, your proxy cache, your LDAP cache, your apt cache, etc... [07:00] apt-get clean? [07:01] <_PehdeN_> right [07:01] <_PehdeN_> apt cache [07:04] <_PehdeN_> i asked you then remembered man. lol [07:06] <_PehdeN_> Can i pm you Corey [07:06] <_PehdeN_> or [07:07] <_PehdeN_> nvm here > https://pastee.org/h2s23 [07:07] <_PehdeN_> im lost === racedo` is now known as racedo [07:08] <_PehdeN_> thats the only ones that fail, i am not sure what the issue is it seems like everything else tuns smooth [07:08] _PehdeN_: Your sources list may be nutty, check /etc/apt/sources.list and sources.list.d/ for duplicates. [07:11] <_PehdeN_> https://pastee.org/r522g [07:11] <_PehdeN_> corey ^ [07:11] <_PehdeN_> looks like there is something odd here [07:12] <_PehdeN_> https://pastee.org/dht4q [07:13] <_PehdeN_> the second is sources.list.d [07:13] <_PehdeN_> have to love clex right === Guest80863 is now known as BenyG [07:37] Hi, server packages are supported for 5 years and desktop packages for 3 years. But how can I find a list of which packages are considered "server packages"? [07:40] can anyone help with upstart [07:41] i am trying to create an instance [07:41] env PIDFILE="/var/www/shared/tmp/pids/resque_worker_0_instance_$ID.pid" [07:41] alkisg: IIRC, everything in 'main' [07:41] but the file being created with the $ID not being parsed [07:42] Jeeves_: apt-cache policy kde-l10n-el => main, apt-cache show kde-l10n-el => Supported: 18m [07:42] But some packages don't have a "supported" entry in apt-cache show... :-/ [07:46] alkisg: So that's correct then [07:47] Jeeves_: so there are packages in main that are supported for 5 years, other packages in main supported only for 18 months, and also some packages don't have a "supported" entry in their control file... ...so I'm at a loss on how to tell which packages are supported for 5 years and which not [07:49] alkisg: If it's in main, and you're on a desktop: 18m [07:49] 18m? [07:49] LTS desktop packages are supposedly supported for 3 years [07:49] That's not correct :) [07:49] Not for 18 months, although that's what the package says, 18 months [07:49] Miscalculation on my part [07:50] On 12.04: $ apt-cache show kde-l10n-el | grep Supported [07:50] Supported: 18m [07:50] $ apt-cache show kde-baseapps | grep Supported [07:50] Supported: 5y [07:51] ...I don't understand the difference there :-/ [07:53] dpkg -l | awk '/^ii/ { print $2 }' | xargs apt-cache show | grep ^Supported | sort -u [07:53] Supported: 18m [07:53] Supported: 5y [07:53] ...I don't have any package at all that is supported for 3y === Adri2000_ is now known as Adri2000 [08:39] How do I make it possible to send mail from my server.. when I have another mail relay server I can point the configuration to? 
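The mail-relay question at the end of this stretch goes unanswered in the log. With Postfix (an assumption, since the MTA isn't named) the usual approach is to point relayhost at the existing relay; smtp.example.com below is a placeholder:

    sudo postconf -e 'relayhost = [smtp.example.com]:587'
    sudo service postfix reload

    # quick smoke test (the mail command needs the mailutils or bsd-mailx package)
    echo "relay test body" | mail -s "relay test" you@example.com

If the relay requires authentication, the smtp_sasl_* settings also need to be configured.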
=== ENOSLEEP is now known as greppy [09:08] jamespage: I guess https://blueprints.launchpad.net/ubuntu/+spec/servercloud-r-seeded-qa-workflow needs refreshing? [10:05] Hi guys === Ben66 is now known as Ben64 [10:50] hi again [11:00] can i paste real ips or it is unallowed? [11:02] http://pastebin.com/LpZFfnRK [11:02] hey? here is a real IP - 1.2.3.4 [11:02] mardraum: :) [11:02] i dont see the probelm [11:02] the 12345 error is not (i bet) [11:02] you have told postfix you are running a milter on localhost:12345 [11:02] but you are not [11:03] fix your config [11:03] yes, i just did it [11:03] but thats the reason because i get my emails as spam? [11:04] I don't follow what you are getting at, sorry [11:04] what exactly is the problem? [11:04] i have a server with various domains, the main domain is sudoers.so which i use for send emails, conoced.me is an other domain i want to send emails from [11:05] but when i send emails from conoced.me they are treated as spam [11:05] by whom? [11:07] i send an email from conoced.me trought sudoers.so to librepensamiento.es (which is really a gmail account) [11:08] just for testing, i dont have more accounts [11:08] anyway i tried some @gmail.com and i get the same result, is the pastebin [11:12] hXm: I don't see any spam problems in the pastebin? [11:13] either me! but the mail is stored in the spam folder in the gmail [11:13] instead the normal inbox [11:14] try some different text in the email? [11:17] http://cl.ly/OwHf still [11:17] oh wait [11:17] it uses ipv6 now [11:18] Received-SPF: fail (google.com: domain of testing@conoced.me does not designate 2001:41d0:8:3d62::1 as permitted sender) client-ip=2001:41d0:8:3d62::1; [11:18] i used telnet this time [11:18] anyway i added this to the dns Non-authoritative answer: [11:18] conoced.me text = "v=spf1 a ptr ip6:2001:41d0:8:3d62::1 -all" [11:18] conoced.me text = "v=spf1 a ptr ip4:176.31.118.98 -all" [11:19] I don't think submitting an email via telnet is a good test of the gmail spam system somehow [11:19] is because i dont a client email for this domain [11:23] sending this email body: hello this is a testing is also treated as spam [11:24] I'd treat you telnetting as spam too if I were gmail. [11:24] it stinks of some custom code written by a spammer they got to run on a botnet [11:25] ok so i go to configure a thunderbird for send [11:26] hi all, i am trying to restrict authorized_keys to use just rsync, can someone explain me this line? command="rsync --server --sender -vlogDtprz . /var/backup" [11:26] actually the --server and the . [11:29] sk1pper: man rsync the search for server by typing "/--server", and the . refers to "here", ie, pwd === unreal_ is now known as unreal [11:30] mardraum: thanks, is it possible to use to restrict the ssh key to just command="rsync" without any parameters? [11:41] sk1pper: authorized_keys has nothing to do with rsync afaik? [11:41] wait, i cant send emails to internet without tls? [11:41] you probably want some sort of restricted shell [11:43] sk1pper: rssh [11:43] anyway, you seem to be rsyncing inside a system, why not just use cron? [11:44] no need to connect remotely to run a local command. [11:59] adam_g, reviewed and uploaded - thanks! 
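On the SPF problem above: publishing two separate SPF TXT records for conoced.me, as shown in the pasted lookup output, is itself a problem -- a domain is only supposed to have one. A merged record covering both the IPv4 and IPv6 addresses quoted in the conversation would look roughly like this (zone-file syntax; the deprecated ptr mechanism is best dropped):

    conoced.me.   IN TXT   "v=spf1 a ip4:176.31.118.98 ip6:2001:41d0:8:3d62::1 -all"

    # after the zone change propagates, confirm what resolvers actually see
    dig +short TXT conoced.me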
[12:03] zul, oh great "/usr/src/modules/openvswitch-datapath/openvswitch/datapath/linux/datapath.c:65:2: error: #error Kernels before 2.6.18 or after 3.8 are not supported by this version of OpenvSwitch" [12:03] saucy #bang [12:04] jamespage: im lauging so hard that im crying [12:06] zul, I'll poke it with 0.10.0 release and see if that fixes stuff [12:06] jamespage: by the end of the release cycle i can call you a kernel hacker then ;) [12:06] zul, I think you could call me that already [12:07] had to hack it last cycle as well! [12:09] im going to go get saucy built this morning [12:11] zul, 1.10 might not be the right thing todo; 1.9.x is the lts release [12:11] I suspect its just a quick patch to fixup the kernel version check [12:13] jamespage: im sure it is [12:14] http://cloud-images.ubuntu.com/releases/ down? [12:14] smoser, utlemming: ^^ re cloud-images [12:15] swaT30, let me see what I can find out [12:15] jamespage: thanks === Psi-Jack_ is now known as Psi-Jack [12:38] swaT30, should be back now - was impacted by some datacenter issues earlier today [12:38] jamespage: cool, just wanted to make sure you guys were aware [12:39] thanks! [12:39] swaT30, thanks for reporting the issue - much appreciated! [12:39] no worries! === wedgwood_away is now known as wedgwood [12:43] zul, adam_g, jamespage: Who is running servercloud-s-openstack-havana ? [12:43] (vUDS session) [12:43] me i think [12:43] Daviey, zul is [12:43] generally Drafter == Lead [12:44] Daviey, we start in just over 1 hour right? [12:44] is there any way how you can update all packages BUT ONE...or all but certain group ? [12:44] Daviey/jamespage: apparently i can run the qa session as well [12:44] w00t [12:44] gyre007, make it sticky [12:44] patdk-l2: how ? [12:44] by setting it's priority [12:44] so pinning [12:44] ? [12:45] yep [12:45] pinning in Ubuntu is UTTER pain [12:45] literally [12:45] but yeah [12:45] thats the option [12:45] heh? thought it was pretty simple [12:45] cheers [12:45] atleast every time I have done it, it is [12:45] patdk-l2: hah! if you have like 6 different PPAs and each provide the same package..different versions etc.. [12:45] no fun [12:46] i had my share of this fun... [12:46] dunno how that makes a difference [12:46] trust me it does ... [12:46] you increase the prority of the one you want, done [12:46] zul, someone already proposed a 3.9 kernel fix upstream - I'll let that land and then pull into saucy [12:46] patdk-l2: http://serverfault.com/questions/506772/prioritise-repositories-in-ubuntu/506938?noredirect=1#506938 [12:46] you could always rebuild it into your own ppa, adjusting the version number [12:46] jamespage: cool [12:47] jamespage: cmd2 version mismatch with cliff so thats why quantum is failing === zeppo_ is now known as zeppo [12:48] zul, great [12:48] oh well - it gonna be like this for a bit yet! [12:51] jamespage, did someone fix ? swaT30 i guess? [12:51] smoser, there was a datacenter issue earlier - fixed now [12:54] jamespage: kick off plenary starts in 1:54, but i don't think our prescience is required. === Overand_ is now known as Overand [13:04] jamespage: yay we have a clean cinder (no more patches) [13:08] has anyone ever setup ubuntu server as a network virus scanner to scan all PC's on a domain plus network shares [13:14] Daviey: server meeting @ 16:00GMT right? [13:14] server team meeting* [13:14] * TheLordOfTime kicks his computer [13:14] TheLordOfTime: Erm, about that. [13:15] Daviey: cancelled, changed, etc.? 
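Picking up gyre007's "update all packages but one" question from earlier in this stretch of the log, a rough sketch of the two usual approaches, using the placeholder package name mypkg:

    # simplest: put the package on hold so apt-get upgrade skips it
    echo mypkg hold | sudo dpkg --set-selections
    sudo apt-get update && sudo apt-get upgrade
    echo mypkg install | sudo dpkg --set-selections   # release the hold later
    # (newer apt also offers: sudo apt-mark hold mypkg / sudo apt-mark unhold mypkg)

    # pinning variant, handy when several PPAs ship the same package; in
    # /etc/apt/preferences.d/mypkg (the origin string below is an example only):
    #   Package: mypkg
    #   Pin: release o=LP-PPA-example-ppa
    #   Pin-Priority: 1001

    apt-cache policy mypkg                             # verify which version now wins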
[13:15] TheLordOfTime: I forgot we canceled it, due to having virtual UDS [13:15] TheLordOfTime: Sorry about that. [13:15] Daviey: no problem, i wasn't up *just* for the server meeting, just making sure of things :) [13:15] Daviey: TBH I forgot about vUDS o.O [13:16] and that's uncommon since i'm usuallly keeping track of those [13:16] Daviey: when's the next server team meeting, i assume sometime after vUDS [13:16] TheLordOfTime: same time next week [13:16] i'll make sure to be around :) [13:16] no idea on a samba / clamav network virus scanner? [13:18] Ugh, vUDS times are all out by 1hr. It's all off by 1hr. [13:18] jamespage / smoser ^ [13:18] oh? [13:18] what now? [13:18] the schedule is not showing utc you mean ? [13:19] Daviey, what did you mean [13:19] smoser: No, the UTC timings are correct. Many people thought it was starting now. [13:20] ah. [13:20] Daviey: got a link to the vUDS schedule? [13:20] * TheLordOfTime can't find it even though he looked [13:20] http://summit.ubuntu.com/uds-1305/2013-05-14/ [13:20] i blame my crappy cache [13:20] smoser: thanks [13:20] you can see server/cloud only also [13:21] go "up" to http://summit.ubuntu.com/uds-1305/ [13:21] hello all [13:21] guess I have to figure it out myself [13:22] smoser: i actually was looking for everything, i occasionally attend non-server stuff :) [13:22] sudobash: we might just be busy and not have gotten around to answering you [13:22] !patience > sudobash [13:22] sudobash, please see my private message [13:29] rbasak: did you see foundations-1305-checkbox-arm-server ? [13:30] how I upgrade the driver for HP Smart Array Controller P420? [13:30] * rbasak looks [13:33] Daviey: thanks - I'll attend. Not sure why it's in Foundations. I guess there's no specific QA track? [13:33] rbasak: QA is EVERY track :) [14:01] hi all [14:04] Daviey: do we still have the server team mailing address? [14:04] s/address/list/ === Ursinha-zzz is now known as Ursinha [14:06] jamespage: interesting in cinder if i do fakeroot debian/rules clean setup.cfg gets blown away === mahmoh1 is now known as mahmoh [14:15] TheLordOfTime, were you looking for ubuntu-server@lists.ubuntu.com ? [14:15] probably [14:15] :P [14:16] but i'll wait to the next server team meeting :) [14:18] smoser: Have you lower 3rd "How To" handy? [14:32] has anyone ever integrated clamav and samba for a network virus scanner? === _croop is now known as croop === jiriki- is now known as jiriki === mikehale_ is now known as mikehale [14:42] ppetraki: doe the problem lie in md or in udev? I wonder if new libudev in saucy is meant to fix that [14:43] hallyn, no, it's just plain incomplete [14:43] hallyn, we use udev to respond to things like lvm, but don't use event driven scanning for any other block devices [14:44] hallyn, once an md array starts, you would have to inspect whatever new disk was hotplugged, determine which array it belongs to and insert it [14:44] ppetraki: but where are the races comin from? who's trigging two device up events? [14:44] hallyn, scsi probe is async, it can scan N buses, the first one can complete before the md scan, the next one can complete a year from now [14:44] (this might be better discussed on ubuntu-devel) === koolhead17 is now known as koolhead17|afk [14:45] and so the dups are for the same device on different channels? [14:45] at different completion times? 
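The samba/clamav question above never gets an answer. One low-tech way to get a scheduled scan of the shares, without hooking Samba itself, is simply to run clamscan from cron. A rough sketch, assuming the shares live under /srv/samba (a placeholder path):

    sudo apt-get install clamav clamav-freshclam
    sudo freshclam                                          # pull current signatures
    sudo clamscan -r -i --log=/var/log/clamscan.log /srv/samba

    # run it nightly, e.g. from /etc/cron.d/clamscan-shares:
    #   0 2 * * * root clamscan -r -i --log=/var/log/clamscan.log /srv/samba

On-access scanning of writes going through Samba is also possible via a virus-scanning VFS module, but that needs extra packages and configuration beyond this sketch.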
[14:45] let me try my hack first :) http://pastebin.ubuntu.com/5664678/ [14:45] ok [14:46] hallyn, they are unique devices, on different channels, might as well be different hbas [14:46] hallyn, and they all are required for this RAID 10 I built, sometimes I just get half, other times I don't get enough to start [14:47] hm, why the upstart job, as opposed to adding scsi_wait_scan to your initramfs or something? is there atrick you're doing there? [14:49] hallyn, mdadm runs as rc.d script, so this should be good enough, or I read it wrong [14:50] hallyn, you're probably right, I should address this in ramdisk [14:52] ppetraki: ok - that can be worried about later then, i was just wondering if i was missing something cool [14:53] hallyn, we could make something cool :) === blitzkrieg3 is now known as jmleddy [15:07] hallyn: a quick ping...is netcf ready for promotion to unstable from experimental yet? [15:08] ahs3: I should think so [15:08] we've been using it in ubuntu for some time now [15:08] hallyn: nod. now that the Debian freeze is over, i'll likely do that this week [15:08] ahs3: cool, thanks [15:09] happily, not much going on upstream there for now :) [15:09] :-) [15:09] (it does what it needs to - /me doesn't enjoy needless churn) [15:10] ack === resno_ is now known as resno === dannf` is now known as dannf === yofel_ is now known as yofel === jiriki- is now known as jiriki === NomadJim_ is now known as NomadJim === k1ng440 is now known as k1ng === medberry is now known as med_ [16:16] I have 2 LUKS volume groups, OS & cinder-volumes, but only OS prompts for decryption at boot. How do I make it prompt for the other, too? === wizonesolutions is now known as lefnire === lefnire is now known as wizonesolutions === wizonesolutions is now known as lefnire === lefnire is now known as wizonesolutions === wizonesolutions is now known as lefnire === blitzkrieg3 is now known as jmleddy [16:58] NginUS: why did you not put them in the same volume? you will need to do it manually, which is not a biggie [16:58] write a shell script [17:00] http://askubuntu.com/questions/21025/mount-a-luks-partition-at-boot [17:02] adam_g, zul, jamespage: bug 1179750 [17:02] Launchpad bug 1179750 in python-glanceclient "python-glanceclient requires python-keystoneclient <0.2 but 0.2.3 is installed" [Undecided,Confirmed] https://launchpad.net/bugs/1179750 [17:03] The latest comment confuses me [17:04] is it that glance needs to depend on glanceclient? [17:04] but.. surely it would have updated regardless? [17:06] Daviey: im confused as well ill take a look [17:07] Daviey, thats what prompted my question re dh_python2/overrides [17:08] Daviey, glanceclient functions fine even with the unsatisfied hard version requirement in requires.txt [17:08] interesting. [17:09] Daviey, i noticed the same thing yesterday with cinder, which has a requires.txt of paramiko > what is installed. does not error out with the standard distutils errors [17:10] adam_g: Have you identified why this passed ok in CI? [17:12] Daviey, no, as i said.. none of this is causing any functional errors [17:12] :( [17:14] pmatulis_: I found it in /etc/crypttab [17:15] Why doesn't my WiFi show up after bootup? It's only present if I have the wired connection plugged in at boot, which defeats the purpose. [17:15] Daviey, at least not with new installs.t he bug states there is an issue upgrading glance and it not pulling in the correct, newer glanceclient version. 
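For the two-LUKS-volumes question above, the /etc/crypttab route NginUS eventually found is the standard one. A minimal sketch, assuming the second volume lives on /dev/sdb1 and should unlock as "cinder-volumes" (device, name and UUID are placeholders):

    sudo blkid /dev/sdb1                      # note the UUID of the LUKS partition

    # /etc/crypttab -- one line per volume to unlock at boot; "none" means prompt
    #   cinder-volumes  UUID=<uuid-from-blkid>  none  luks

    # test the entry without rebooting
    sudo cryptdisks_start cinder-volumes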
[17:16] Daviey, that sounds like a legit bug (d/control doesnt specify a version requirement on python-glanceclient) but that still doesn't explain why everything works when requires.txt deps are not satisfied (or overridden in pydist-overrides) [17:18] yeah [17:22] NginUS: nice [17:24] I just wish my WiFi would work now [17:25] Ok it was the F2 key, on linux you have to reboot after toggling the radio [17:26] hooray my WiFi works again === blkperl_ is now known as blkperl === blitzkrieg3 is now known as jmleddy [17:46] I am having a horrible time enabling multicast (well, receiving it) on a multi-homed machine. I have both NICs configured with static IPs (p4p1 has the default gateway, p1p1 is to receive multicast). I added a static route "route add -net 224.0.0.0 netmask 240.0.0.0 dev p1p1". AppArmor is disabled (/etc/default/apport) enabled=0. When trying to listen using my app bound to p1p1 I am not getting any mcast data. When running netstat -g I see "p1p1 1 224.0.25.67". [17:46] I do not have SELinux installed [17:47] guma: apparmor has nothing to do with apport. different tools. [17:47] Also I have another machine with just one NIC connected to the same router as p1p1 and this machine can get mcast data just fine with the same application and settings [17:47] sarnold: ok. So how do you go about finding out what is wrong? I am on 12.10 x64 [17:48] guma: are you confident you need to be manipulating the multicast routes by hand? [17:48] (it's out of my experience either way, but it sounds odd..) [17:48] I am kind of out of ideas. Been reading docs and can't find anything. I just moved from CentOS and my app server was working. Well, different Linux... [17:49] sarnold: At this point my confidence is very low :) I am really out of ideas... [17:50] Hi, I'm installing for the first time the server version of Ubuntu onto a Dell Poweredge 840 server, where can I find out if the hardware is fully supported? [17:50] I do not have iptables running.... [17:51] Is there a better channel to ask such a question? [17:51] Dandalion: which ubuntu version? what problems are you seeing? [17:52] 13.04, I haven't installed it yet, I'm just trying to read before I install it since this is my first time installing server. [17:52] Does server have a GUI also or just shell? [17:52] Dandalion: servers generally don't have a GUI - what's the use? [17:52] Dandalion: also, I'd recommend using an LTS release, 12.04, for a server [17:52] I want to create a nagios server [17:53] Just shell. [17:53] in order to monitor our servers and notify me by email when any of them go down [17:53] Dandalion: then use LTS, really [17:53] Ok [17:53] 13.04 has 9 months worth of support IIRC [17:53] LTS is 5 years [17:55] Dandalion: that server is pretty old, so it should be well supported. chipset support issues usually happen on newer stuff [17:56] smoser, did the jstack stuff ever end up being useful for real ostack dev? [17:56] no. [17:56] 3 things stop it [17:57] smoser, ignoring screen/tmux for the moment. [17:57] a.) io killed it [17:57] BTW what is the TLS release cycle? When is the next TLS release? Just wondering. I am new to ubuntu... [17:57] jamespage, i know we're busy but to get a head start on the new cadence: http://people.canonical.com/~agandelman/ca/grizzly/2013.1.1/ & http://people.canonical.com/~agandelman/ca/folsom/2012.2.4/ i have (quantal, raring)-proposed versions of each, ready to go into queue there as well [17:57] zul, Daviey ^ [17:57] b.)
openstack components (nova and others) expect basically full access to hardware and to kernel (modprobe) [17:58] adam_g: ill have a look in a bit, about to head into another session [17:58] so the only way that you could really do this stuff is have the charm declare "I need these modules, and access to these devices" [17:58] and have juju set that up during deployment [17:58] anyway... [17:58] hazmat: thanx. is TLS considered more stable for production? Or just longer support [17:59] smoser, i ask b/c the guy asking about btrfs/juju is apparently trying out openstack w/ local provider juju [17:59] guma: we're less likely to make crazy decisions during a cycle if we're going to be releasing an LTS [18:00] I see. [18:01] guma, fwiw, most people deploy production on LTS. [18:01] smoser, thanks [18:02] My personal servers are running the latest releases, but at work I stick to LTSes. [18:03] Also what solutions (possibly free tools) are available or recommended to stage/update servers to some version or, better, a date. Let's say I have one dev server and two small prod servers. I always want to update dev first and do some testing. That takes some time, so when ready I would like to update prod to the same version/time as dev instead of the latest, which could be newer at that time. [18:06] guma: LTS is rather conservative, when 14.04 arrives, 12.04.x installations won't upgrade to 14.04 with do-release-upgrade until 14.04.1 is released, some months later [18:06] guma: meaning it'll be generally safer, although not bulletproof, of course [18:07] RoyK: sure. You're talking about major upgrades. I got it. But what about my above point related to apt-get update or distro upgrade [18:07] guma: stuff like redhat/centos/scientificlinux is very conservative, so is debian, but then, they lack new stuff added later [18:08] guma: generally, an LTS release doesn't add much new, mostly bugfixes [18:08] that is, the updates to an LTS release [18:08] RoyK: That is why I am giving it a spin :) and seeing what is going on here ... [18:08] in 12.04.1, a new kernel was added for new installs, though, to add better hardware support for new stuff [18:09] guma: I somewhat doubt that machine will have problems running 12.04 [18:09] guma: you could install without -updates or -security, and then later, install your second system without -updates or -security -- but that feels entirely too conservative to me, you'd probably want at least the security updates, and probably the normal updates as well [18:09] but even bug fixes in system updates could possibly break or uncover my app server updates. [18:09] obviously, yes [18:09] but that rarely happens [18:11] So what you're saying is it is "ok", when you're on TLS, to update to the latest, even though updating the prod TLS box might happen a little later than the initial dev TLS box. [18:11] TLS == transport level security, LTS == long term support ;) [18:12] guma: that is the goal and so far as I can tell, the reality as well :) [18:12] RoyK: I was looking for some sort of mirroring service or app that can be added to the dev box, and then prod points to it and updates to the same version that the dev box is on. The dev box is also very controlled. So no one is messing with it. [18:12] guma: rsync? [18:12] opps LTS :) [18:13] oof, rsync feels wrong for that :) [18:13] should i use ldap or salt for user management? [18:13] guma: just don't turn on automatic updates and test on a dev box first. usually that's paranoia, but if you have specialised applications it may be needed
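On guma's wish to bring the prod boxes to exactly the state the dev box was tested at: apt has no "update to a date" switch, but installing the dev box's exact package list on prod gets close. A rough sketch, which assumes the recorded versions are still downloadable (superseded versions vanish from the official archive, so a local mirror or cache, e.g. built with apt-mirror or reprepro, is the robust route):

    # on the dev box, after testing: record exactly what is installed
    dpkg-query -W -f='${Package}=${Version}\n' > dev-package-list.txt

    # on each prod box: install exactly those versions
    sudo apt-get update
    xargs -a dev-package-list.txt sudo apt-get install -y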
[18:14] I never used rsync for that. But I got a bad feeling about it. But then again i like to hear what works and what does not. [18:14] guma: what sort of systems do you use on this thing? [18:14] it is a price feed [18:14] well, java, php, what? [18:14] C++ [18:15] then there really shouldn't be a problem. never seen API changes on LTS updates [18:15] with php, perhaps, since php is rather slack on version control, but c++? not likely [18:16] ok thanx for info. too bad apt-get does not have an option like update to a specific date/time [18:16] guma: have you experienced apps broken by an apt-get update? [18:16] so other machines can be updated to the same date/time, dropping anything newer. [18:17] guma: I've been using debian/ubuntu for more than a decade, and I've never experienced userspace stuff broken by an update that wasn't major (as in do-release-upgrade) [18:17] Once, a while back on CentOS. Well, it was really a problem in my app. But still found out too late :) That is why I am extra careful now ... [18:19] for release upgrades I prefer a "full clean reinstall", it is quite quick for me since I keep it to a minimum. So that is my preference. Perhaps overkill. But I feel more safe... I just realized how paranoid I am LOL [18:25] if someone remembers me because im configuring a smtp for life, i just want to say: the main domain can send emails without any spam mark, so im so happy [18:25] but the second domain is still filtered [18:25] at this point i can only imagine it is a txt record in my dns? [18:26] btw i configured dkim too [18:27] hXm: excellent :) [18:28] thanks, it is a step [18:29] but i still need to send emails from the secondary domain name because it is the main project === TREllis_ is now known as TREllis [18:29] and i still won't think about dovecot and roundcube [18:30] adam_g, do you think we should lock-step the entry of packages into each pipeline [18:31] i.e. the CA package only gets accepted into -proposed once the associated SRU upgrade does [18:32] jamespage, i think they should both be uploaded in lock step (to ubuntu queue and CA staging), and we can use the acceptance into ubuntu -proposed as the trigger to promote to CA -proposed. [18:32] smoser, do you intend on attending the Kernel topics session in the next slot? [18:32] adam_g, agreed [19:24] does postfix require a special configuration for multiple domains? [19:24] in $myorigin [19:24] it only allows one domain though [19:25] hXm: the origin shouldn't matter much - the From: header in the envelope sets that and whatever the MTA does shouldn't be an issue [19:29] using this tool for the second domain http://www.kitterman.com/spf/validate.html? all tests are ok but i get this [19:29] Results - None SPF records must start with 'v=spf1' please use the back button your browser and try the Mail From record again. [19:30] which im not sure if thats an error or just an info, the last message is this HELO/EHLO Results - PASS sender SPF authorized [19:32] dunno - try #postfix === bradm_ is now known as bradm [19:59] jamespage: fyi, bug #1180084 [19:59] Launchpad bug 1180084 in nova "nova-conductor should be in main" [Undecided,New] https://launchpad.net/bugs/1180084 [19:59] jdstrand, gah - yes - of course [19:59] I'll sort that out [19:59] thanks [20:00] jdstrand, thanks for pointing it out [20:00] np, I was setting up grizzly and couldn't figure out why nova-manage service list wasn't listing compute.
bingo :) === aarcane_ is now known as aarcane === Bass10 is now known as Bass10_ === Bass10_ is now known as Bass10 === JanC_test_ is now known as JanC_test === wedgwood is now known as wedgwood_away [21:22] On a standard apache install, are logs rotated by default by apache, or is log rotation handled by another config somewhere else? [21:24] oh, nm. I just saw /etc/logrotate.d/apache2 === JonnyNomad_ is now known as JonnyNomad [21:43] Hello, I am attempting to have nfs home directories mounted upon users login (users are authing against sun LDAP) I am able to Create the directory using pam module common-session "session required pam_mkhomedir.so umask=0022 skel=/etc/skel" LDAP works but the NFS mount doesnt work until a restart. I would like it to work without a restart because I am building a private cloud. Thanks so much === huats_ is now known as huats [22:33] wdilly: Have you tried autofs? Bit antiquated, but I use it at home on my network. works ok. [22:35] GrueMaster, I am using autofs. [22:35] GrueMaster, I have elaborated on my issue here: https://answers.launchpad.net/ubuntu/+question/228898 [22:35] thanks [22:38] Not sure why it wouldn't work. It may be something in your autofs configuration. Have you tried logging in as a local user (no LDAP) and seeing if you can ls a NFS directory forcing an automount? [22:39] * GrueMaster hasn't setup ldap. Just a simple autofs for different mirror mount points. [22:44] GrueMaster, yes local user automounting works, and it works for the ldap user after restarting the system [22:49] Are you using nfs4 or nfs3? [22:52] GrueMaster, thanks for your help, unfortunately gotta pick kids up [22:52] nfs3 [22:52] I'll dig around and see if I can find any solutions. [22:53] yhx [22:53] thx
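On the autofs/LDAP home-directory issue that closes the log, a minimal sketch of the map layout usually used for NFS home directories, assuming the file server is called nfsserver and exports /export/home (both placeholders):

    # /etc/auto.master
    #   /home   /etc/auto.home   --timeout=60

    # /etc/auto.home -- the & substitutes the looked-up username
    #   *   -fstype=nfs,rw   nfsserver:/export/home/&

    # reload the maps and test without a reboot
    sudo service autofs reload
    ls /home/someldapuser        # should trigger the automount

If mounts only start working after a reboot, it is also worth checking that autofs comes up after networking and the LDAP client, since a map read before LDAP answers can leave the mount point unusable until a restart.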