[00:47] <galore> Hello, a trivial question. I have problems with traceroute with ufw enabled or disabled, ping works just fine. (12.10 server)
[00:47] <galore> traceroute just times out, it's really weird
[00:48] <galore> works just fine from a macbook behind same firewall
[00:50] <patdk-lap> traceroute uses a udp port
[00:50] <patdk-lap> so you must allow the udp port it uses
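For context: classic traceroute sends UDP probes to high destination ports starting at 33434, one port per probe, and the replies come back as ICMP time-exceeded messages. A ufw rule for the probes might look like this (the range is an assumption - it depends on max hops and probes per hop; 30 hops x 3 probes is the default):

```shell
# Allow outbound traceroute UDP probes (default base port 33434).
# 33434:33523 covers the default 30 hops * 3 probes per hop; widen
# the range if you raise -m (max TTL) or -q (queries per hop).
ufw allow out proto udp to any port 33434:33523
```

The ICMP replies are normally accepted by ufw's stateful rules, which is why plain ping can work even while traceroute fails.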
[00:51] <galore> patdk-lap: hmm ok, nothing shows in ufw.log though
[00:51] <patdk-lap> heh?
[00:51] <patdk-lap> should something show?
[00:52] <galore> you mean allow it in the firewall ? How come it works perfect from a macbook behind same fwall
[00:52] <galore> i turned off ufw, the problem persists
[00:53] <patdk-lap> well, as you haven't given much description, how should I know?
[00:53] <patdk-lap> is the ufw this firewall? or different system?
[00:53] <galore> i am trying to do a traceroute from my 12.10 server , it times out
[00:53] <patdk-lap> it times out at what part?
[00:53] <galore> firewall is a hardware firewall for all machines in the /24
[00:54] <galore> at hop 2
[00:54] <patdk-lap>  hop 1 works?
[00:54] <galore> yeah, hop 1 is the router/fwall, works
[00:54] <patdk-lap> then it's not that machine that is the issue
[00:54] <patdk-lap> must be the hardware firewall then
[00:55] <galore> ok
[00:55] <patdk-lap> is it some home router thing?
[00:55] <patdk-lap> I have seen so many of them that can't nat udp/icmp correctly
[00:55] <galore> but how can a traceroute from a macbook work perfectly but not from the 12.10? they're on the same subnet, same router/fwall
[00:55] <patdk-lap> easily
[00:55] <patdk-lap> it's called a bug
[00:56] <galore> lol
[00:56] <patdk-lap> the ones I have seen
[00:56] <patdk-lap> will send ALL results back to the first one to do it
[00:56] <patdk-lap> so when you test from a second machine, it never sees the results
[00:56] <patdk-lap> cause they keep going to the first one
[00:56] <patdk-lap> but that's just one issue I have seen though
[00:57] <galore> hmm ok, sounds weird though but it can be that of course
[00:57] <patdk-lap> you can always watch the endless flood of udp packets my server gets from bad nat routers
[00:57] <patdk-lap> I would see if there is a firmware update for it
[00:58] <patdk-lap> just most people don't feel like fixing these issues, cause all the users care about is if web works
[00:58] <galore> when you say 'the first one' what exactly do you mean by that ?
[00:58] <patdk-lap> exactly what I said
[00:58] <galore> traceroute from macbook works perfect, traceroute from 12.10 times out at 2nd hop
[00:59] <galore> they go through same router which is also the fwall
[00:59] <patdk-lap> turn off macbook, powercycle router, and try again on 12.10
[00:59] <galore> i'll do that
[00:59] <galore> patdk-lap: thanks a lot for helping, i was going nuts over this
[01:00] <galore> patdk-lap: that ipv6 tunnel, HE net ?
[01:05] <galore> nm
[01:10] <galore> i hope i can change isp soon anyhow, to fiber. static IPs and superior hardware
[01:12] <galore> ok, gnite
[02:06] <jgcampbell300> is there anyone here that could help me out with fail2ban
[02:18] <cfhowlett> turtal
[04:26] <vedic> Hey friends, I want to remove (purge) popularity contest from my Ubuntu Lucid 64bit. Is there any harm in removing it via apt? I don't want cron job running this and if it is disabled, why should I keep it on the server?
[04:27] <vedic> I have simulated purging popularity-contest and I see it also removes ubuntu-standard
[04:34] <vedic> Hey friends, I want to remove (purge) popularity contest from my Ubuntu Lucid 64bit. Is there any harm in removing it via apt? I don't want cron job running this and if it is disabled, why should I keep it. So how to remove it?
[04:55] <holstein> vedic: i would just look in the package manager of your choice and remove it.. check and see what all is going to be removed... you can always just see that it is not running...
[04:58] <vedic> holstein: It removes ubuntu-standard and debian-goodies
[04:58] <holstein> vedic: you can always just see that it is not running
[04:58] <vedic> holstein: I see the cron job running on a weekly basis
[04:59] <holstein> cool.. so you can disable that then, if you want, or remove the package
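A sketch of the two lighter-weight options holstein is pointing at, which avoid purging the package (and dragging ubuntu-standard out with it); paths are the stock ones shipped by the popularity-contest package:

```shell
# Option 1: tell popularity-contest not to participate; the weekly
# cron job checks this flag and exits immediately when it isn't "yes".
sudo sed -i 's/^PARTICIPATE=.*/PARTICIPATE="no"/' /etc/popularity-contest.conf

# Option 2: disable the cron job itself without touching the package.
sudo chmod -x /etc/cron.weekly/popularity-contest
```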
[08:12] <Daviey> Morning all!
[08:12] <cfhowlett> Daviey, yowza
[08:14] <histo> morning
[09:13] <Plizzo> I had a RAID5 with three 2TB drives, making 4TB of available space. A few days ago I added another drive to the array using mdadm --add /dev/md0 /dev/sde, and then I ran mdadm --grow /dev/md0 --raid-devices=4 or similar. It then took 15 hours for the RAID to reorganize before I started using it again, but I still have 4TB of total storage, how can this be?
[09:14] <Plizzo> Do I also need to run: mdadm --grow /dev/md1 --size=max?
[09:14] <Plizzo> md0*
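This symptom usually means the array grew but the filesystem on top of it didn't: --grow --raid-devices=4 enlarges the md device, while the filesystem keeps its old size until it is resized too. A sketch, assuming an ext3/ext4 filesystem directly on /dev/md0:

```shell
# After the 15-hour reshape, the array itself is already larger:
mdadm --detail /dev/md0 | grep 'Array Size'

# ...but df still reports the old 4TB, because the filesystem
# hasn't been told about the new space. Grow it (online for ext3/4):
resize2fs /dev/md0
```

--size=max addresses a different case - growing the array into larger *member* devices - so it isn't what's missing after adding a fourth disk.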
[09:17] <yolanda> Daviey, jamespage, i'm having this problem on the nova node i added using openstack charm: http://paste.ubuntu.com/1676247/
[09:22] <jamespage> yolanda, context would be good - which version of openstack, which version of ubuntu etc....
[09:22] <yolanda> jamespage, grizzly - raring
[09:23] <jamespage> yolanda, from the archive or from the trunk PPA?
[09:23] <yolanda> jamespage, from the testing ppa
[09:23] <jamespage> yolanda, ooo - looks like a bug then!
[09:24] <jamespage> yolanda, I've not seen one like that before; I would have to go dig!
[09:24] <yolanda> :(
[09:24] <yolanda> i had that working before, then i tore down the node and created a new one, to integrate with the ceilometer node, maybe i did something wrong?
[09:31] <alkisg> With the new lts-quantal server, smbd only works once for me. A user tries to connect to a share, he succeeds. After that all connections fail with http://paste.ubuntu.com/1675846/ - no other errors in dmesg nor in syslog. Apport tries to report a bug but after the samba-related questions it just exits with no messages to the user.
[09:31] <alkisg> After many minutes, e.g. 30, one smbd connection can be made again. After that connection smbd dies again for all subsequent retries.
[09:31] <alkisg> Any hints?
[09:32] <alkisg> *the new lts-quantal **kernel**, not server
[09:33] <jamespage> yolanda, probably not - but I suspect you got a new version of nova!
[09:34] <yolanda> jamespage, i have that one: 2013.1.a4408.gae888be+git201302161902~raring-0ubuntu1
[09:40] <yolanda> jamespage, do you know about some workaround i could apply manually? i was just playing with openerp and found that my instances were down
[09:44] <jamespage> yolanda, not off the top of my head - sorry!
[09:47] <jamespage> yolanda, actually
[09:47] <jamespage> yolanda, you could try upgrading all nodes; I would suspect some sort of API version mismatch between cloud-controller and compute maybe
[09:48] <yolanda> i found that in conductor/rpcapi.py
[09:48] <yolanda>     def instance_get_all_by_host(self, context, host, node=None):
[09:48] <yolanda>         msg = self.make_msg('instance_get_all_by_host', host=host, node=node)
[09:48] <yolanda>         return self.call(context, msg, version='1.32')
[09:48] <yolanda> that 1.32 is the cause of that
[09:48] <yolanda> i'll try to upgrade nova-cloud, let me see
[09:49] <jamespage> Daviey, zul uploaded oslo-config to raring over the weekend
[09:49] <jamespage> Daviey, its in the new queue; I said if there where any problems today I would pickup
[09:49] <jamespage> Daviey, quantum has switched and I suspect other projects will do over the next few days prior to g3
[09:50] <Daviey> jamespage: yeah, he asked me to NEW review it
[09:50] <jamespage> Daviey, great!
[10:02] <yolanda> jamespage, i upgraded nova-cloud node, and now i receive that error trying to start nova-compute
[10:02] <yolanda> http://paste.ubuntu.com/1676338/
[10:10] <jamespage> yolanda, can you paste your nova.conf please
[10:10] <jamespage> yolanda, something todo with this - https://github.com/openstack/nova/commit/78ebb445b349e9407c60c3fe2d21552977a3c299
[10:11] <yolanda> jamespage, yes, vif_type is not present on my nova.conf file
[10:11] <jamespage> yolanda, it won't be
[10:11] <jamespage> yolanda, that is set somewhere in nova based on what type of networking you are using
[10:12] <jamespage> yolanda, I'm more interested in libvirt_vif_driver
[10:13] <yolanda> jamespage, http://paste.ubuntu.com/1676360/
[10:13] <jamespage> yolanda, OK - so it should be using the default generic driver
[10:14] <yolanda> what's wrong?
[10:14] <yolanda> is that wrong? compute_driver=libvirt.LibvirtDriver
[10:17] <jamespage> yolanda, TBH without spending time on it I have no idea
[10:18] <jamespage> yolanda, is this happening when trying to create new instances? or is it just on startup?
[10:18] <yolanda> jamespage, it cannot start nova-compute
[10:18] <jamespage> yolanda, feels like an upgrade issue
[10:33] <yolanda> i'll try tearing down the node and installing it again
[11:10] <gordonjcp> hi there
[11:11] <gordonjcp> how can I stop /etc/resolv.conf getting clobbered on every boot?
[11:11] <RoyK> gordonjcp: add the settings to /etc/network/interfaces instead, which is a more proper way to do it :P
[11:11] <RoyK> as in 'dns-nameservers x.x.x.x y.y.y.y' and 'dns-search adsf.com'
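Putting RoyK's two lines together, a minimal static stanza would look like this (addresses and interface name are placeholders; the dns-* options are fed to resolvconf, which then writes /etc/resolv.conf for you instead of clobbering your manual edits):

```shell
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-search asdf.com
```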
[11:12] <gordonjcp> RoyK: oh, okay
[11:13] <gordonjcp> I did try that but it didn't seem to help
[11:24] <jamespage> Daviey, also 2013.1b3 is going to create issues I think - should be 2013.1~b3
[11:24] <jamespage> as 2013.1 < 2013.1b3
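dpkg can confirm the point directly: in Debian version ordering, '~' sorts before everything including end-of-string, so 2013.1~b3 correctly precedes the final 2013.1, while 2013.1b3 sorts after it and would block the upgrade:

```shell
# With the tilde, the beta correctly precedes the final release:
dpkg --compare-versions '2013.1~b3' lt '2013.1' && echo 'pre-release < final'

# Without it, the beta sorts *after* the final release:
dpkg --compare-versions '2013.1b3' gt '2013.1' && echo 'beta > final (broken)'
```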
[11:30] <Daviey> jamespage: agreed
[11:30] <iclebyte_work> can anyone with a largish installation tell me how they manage updates across servers? we've looked at landscape but the cost is prohibitive.
[11:31] <jamespage> Daviey, OK - I'll fix that as well
[11:31] <ikonia> shell script ?
[11:31] <ikonia> update manager as a daemon ?
[11:32] <iclebyte_work> well we obviously don't want to allow automatic updates on the network so we're looking for a method to review updates before they are applied. i suppose this is as much a question around process as specific technology/software
[11:33] <ikonia> test the updates
[11:33] <ikonia> pick a test host
[11:33] <Daviey> iclebyte_work: Have you chatted to the landscape folks.. not just read the page?  failing that, debmarshall is reasonable
[11:33] <Daviey> or reprepro
[11:33] <ikonia> then if they they work, set up a sync script to pull down just those updates
[11:33] <ikonia> or host your own repo of "approved updates" and move them into place
[11:33] <gordonjcp> right, so now my server gets stuck at "Waiting for network configuration"
[11:33] <ikonia> then set all your servers to update
[11:33] <gordonjcp> how do I stop it doing that?
[11:34] <ikonia> iclebyte_work: also look at trying puppet ?
[11:34] <iclebyte_work> we do currently host our own repo anyway but so far it's only been used for deploying our own code and for masking of specific packages e.g. deploying puppet 2.7 to ubuntu 10.04LTS nodes.
[11:35] <iclebyte_work> we are using puppet for configuration management of specific services, but hadn't really considered it for the application of updates.
[11:35] <ikonia> iclebyte_work: have a test box, pick a "safe repo", test the updates you want, then move them to the "safe repo" then set all your other servers to auto update from that safe repo
[11:35] <ikonia> iclebyte_work: keep it separate from your custom code repo
[11:36] <iclebyte_work> i suppose with a combination of our own repo/packages specified in puppet with 'latest' against them we could devise something but I can't help but feel it's a bit dirty and must have been better solved before.
[11:36] <iclebyte_work> is this a common way around the problem?
[11:36] <ikonia> just an idea
[11:36] <ikonia> I've used puppet with debian and rhel type situations as you want
[11:37] <iclebyte_work> it's interesting that I can't seem to find many articles around the process/management of large installations
[11:37] <ikonia> iclebyte_work: either a.) enterprise tools, like landscape/satellite/spacewalk etc, or custom in house things
[11:38] <iclebyte_work> I have looked at spacewalk although we've not yet tried it out. just still looking for options at this stage
[11:38] <iclebyte_work> i must say, when we looked at satellite/spacewalk we almost wished we'd gone the CentOS route with our farm.
[11:39] <ikonia> iclebyte_work: spacewalk isn't really for debian based distros
[11:39] <ikonia> just using it as an example
[11:39] <iclebyte_work> ikonia, no i know, although deb's are supported.
[11:39] <ikonia> iclebyte_work: they aren't really
[11:40] <ikonia> "supported" and "'supported'"
[11:40] <iclebyte_work> :)
[11:40] <iclebyte_work> it is one area where RHEL/CentOS does have an upper hand
[11:41] <iclebyte_work> i'm surprised landscape hasn't been opened up a bit.
[11:42] <Daviey> iclebyte_work: Have you contacted the landscape team?  I would be quite surprised if the quote they give you is prohibitive
[11:42] <jamespage> Daviey, oslo-config uploaded with changes discussed (zul FYI)
[11:43] <Daviey> jamespage: ok, will review after luncheon
[11:43] <jamespage> Daviey, ack
[11:43] <iclebyte_work> Daviey, no i haven't. just read on line
[11:45] <Daviey> iclebyte_work: might be worth a quick chat.
[11:49] <ikonia> Daviey: are there development licenses for landscape, eg: 2-3 hosts being managed for test/poc/development
[11:50] <ikonia> I mean development of landscape platforms/solutions, rather than software development
[11:56] <violinappren> Hello all. Is there a way to notify Apache2 to reread the htpasswd file without having to 'service apache2 reload' as root?
[11:58]  * melmoth bets on kill -HUP
[12:02] <violinappren> perhaps a configuration option to ask apache2 to reread the file every time an auth is needed
[12:07] <melmoth> none that i am aware of... what is the problem with reloading apache anyway?
[12:08] <maxb> Apache *does* reread the htpasswd file every time anyway
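maxb is right for the plain-file backend: mod_authn_file opens AuthUserFile on each request that needs authentication, so htpasswd edits take effect immediately with no reload at all. A sketch (paths and names are examples, not from the channel):

```shell
# Add or update a user; picked up on the very next request, no reload:
htpasswd -b /etc/apache2/.htpasswd alice s3cret

# The matching directory config that is consulted per-request:
# <Directory /var/www/protected>
#     AuthType Basic
#     AuthName "Restricted"
#     AuthUserFile /etc/apache2/.htpasswd
#     Require valid-user
# </Directory>
```

The cost is one file open per authenticated request; for large user lists that is what the dbm backend (htdbm / AuthDBMUserFile) exists for.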
[13:27] <balboah> anyone doing unattended-upgrades for a PPA?
[13:28] <balboah> I'm wondering how to figure out the correct origin tag to enlist
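The origin tag can be read out of apt's own metadata rather than guessed; a sketch, where ppa:someuser/someppa and the release name are hypothetical placeholders:

```shell
# Launchpad PPAs publish Origin: LP-PPA-<owner>-<name>; the o= field
# in the release line below the PPA URL shows it:
apt-cache policy | grep -A1 'ppa.launchpad.net'

# Then allow it in /etc/apt/apt.conf.d/50unattended-upgrades, e.g.:
# Unattended-Upgrade::Allowed-Origins {
#     "LP-PPA-someuser-someppa:precise";
# };
```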
[13:31] <ikonia> brave
[13:31] <ikonia> auto updates from a PPA
[14:57] <RZAFC> can someone help? I can't compile a C program with gcc
[14:59] <melmoth> ?
[15:00] <melmoth> well, either you are not giving gcc the right option, or your code is wrong.
[15:00] <melmoth> or a library is missing
[15:00] <melmoth> or the disk is full, or there's a hardware failure
[15:00] <melmoth> it's kind of difficult to guess without more info
[15:01] <iclebyte_work> :)
[15:01] <RZAFC> it says internal error in set_offset, at ../../gold/output.cc:4622
[15:02] <melmoth> so, it's not about "compiling C programs" in general, but compiling this one in particular.
[15:03] <melmoth> try to find some error before, like may be some hint about a lib missing, a symbol unknown, something
[15:03] <melmoth> and read line 4622 of output.cc, you never know
[15:03] <RZAFC> collect2: ld returned 1 exit status
[16:52] <liltedot> hello, I want to host several sites on my 12.04 server, with an FTP account for each site. can you point me in the right direction? I'm tired of googling and can't find what I need
[16:53] <liltedot> I already have LAMP working
[17:39] <eutheria> hello, what would give me a quick dns server for a local network?
[17:53] <RoyK> eutheria: bind9
[17:54] <eutheria> too much :)
[17:54] <RoyK> it's pretty light...
[17:54] <eutheria> but heavy management :)
[17:54] <RoyK> not really
[17:54] <eutheria> yes it is
[17:55] <eutheria> i wanted something i would spend 5 mins on
[17:55] <eutheria> dnsmasq will probably do
[17:56] <RoyK> if you just want a caching nameserver, bind9 works as installed by "apt-get"
[17:57] <eutheria> i will have to setup things
[17:57] <eutheria> like the network subnet, the dhcp to dns hostname updates etc
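For the record, eutheria's dnsmasq hunch really is a five-minute job: it does caching DNS, DHCP for the subnet, and automatic DNS registration of DHCP hostnames in one small config. A sketch of /etc/dnsmasq.conf (subnet and domain are placeholders):

```shell
# /etc/dnsmasq.conf - caching DNS + DHCP for one local subnet
domain=lan                                 # local search domain
local=/lan/                                # resolve *.lan locally, never forward
expand-hosts                               # append the domain to /etc/hosts names
dhcp-range=192.168.1.50,192.168.1.150,12h
# Clients that send a hostname in their DHCP request are added to DNS
# automatically - the "dhcp to dns hostname updates" asked for above.
```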
[17:58] <RoyK> you didn't say you needed dynamic dns ;)
[17:59] <eutheria> in my world local network implies this ;) caching dns server implies not local network
[18:00] <RoyK> another world, then
[18:01] <eutheria> could be ;)
[19:20] <rapitivo> Hi there. how is it possible for the load to be so high while the server (mysql+apache+bind) stays fast?
[19:22] <RoyK> probably some process in D or Z state
[19:24] <rapitivo> there is one mysql on D state
[19:24] <blkperl> anyone have a good method to measure samba performance/stats  ?
[19:25] <rapitivo> RoyK: Should I kill it?
[19:32] <RoyK> if it's in one of those states, it may be hard to kill it
[19:32] <RoyK> rapitivo: what does ps axf have to say?
[19:34] <rapitivo> RoyK: There is no D or Z states. I was looking at htop before.
[19:36] <rapitivo> The load now is 11.50. It was more than 30 before and I could still access my system online (it was a little bit slow).
[19:36] <RoyK> that's high
[19:37] <RoyK> can you pastebin "ps axfv" ?
[19:38] <rapitivo> if there are a lot of people trying to access the site but apache has no available connections, does it make the load high?
[19:38] <RoyK> run "sar 1 10"
[19:38] <RoyK> if the i/o load is high, it'll make a lot of processes/threads hang around waiting
[19:44] <RoyK> rapitivo: it shows a lot of apache processes in D state, meaning I/O is saturated
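The check RoyK ran can be reproduced with a one-liner; D is uninterruptible sleep, which on Linux almost always means waiting on I/O, and processes in that state count toward the load average without using any CPU (which is how a load of 30 can coexist with a responsive box):

```shell
# List processes stuck in uninterruptible (D) sleep - usually I/O wait.
# A persistently non-empty list plus a high load average means the
# load is I/O-bound, not CPU-bound.
ps axo stat,pid,comm | awk 'NR > 1 && $1 ~ /^D/ {print}'
# Corroborate with "sar 1 5" and watch the %iowait column.
```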
[19:45] <RoyK> rapitivo: can I post the ps output here?
[19:45] <rapitivo> yes
[19:45] <RoyK> http://pastebin.com/UjXkbdcs
[19:46] <RoyK> what sort of webserver is this running?
[19:46] <RoyK> it seems rather heavily loaded
[19:48] <rapitivo> there are 30 websites
[19:49] <rapitivo> a web based system for real estates
[19:52] <rapitivo> how can I find which PHP script is doing too much IO? I can't see how it's possible for it to be so high. :/
[20:35] <RoyK> rapitivo: I guess something like a database connection is the source
[21:17] <rapitivo> RoyK: What do you mean?
[21:42] <pythonirc1011> if i have 4 disks and i make a raid 10 on ubuntu, how can i install the OS, so that if one/two drives fail, the machine still boots without any problem? (or extra tinkering)
[21:43] <RoyK> pythonirc1011: I don't think you can install on anything but plain disks or raid-1
[21:44] <RoyK> pythonirc1011: if you have big drives, stick to raid-5 or -6 and use some smallish drive for the root
[21:44] <pythonirc1011> What I am asking can't be done then?
[21:45] <RoyK> just trying to help here
[21:45] <RoyK> how large are the drives?
[21:45] <pythonirc1011> 2TB each
[21:45] <RoyK> then don't use them for the root
[21:45] <RoyK> get some cheap 80GB drive or even a thumb drive for the root
[21:46] <RoyK> use large drives for data
[21:46] <RoyK> always separate data and system
[21:47] <RoyK> and don't use raid-10 - it's not flexible
[21:47] <RoyK> rather raid-5 or raid-6
[21:47] <RoyK> those can be grown
[21:47] <RoyK> raid-6 for safety, raid-5 for a wee bit more space
[21:48] <RoyK> if you search the ubuntu-raid mailing list, there are numerous messages there "how to rescue a raid-5 from a double disk failure"
[21:48] <RoyK> so better use raid-6
[21:49] <RoyK> pythonirc1011: point taken?
[22:03] <ajp> I'm trying to get my upstart script (http://pastebin.com/xjACPjk4) to work, it launches this script (http://pastebin.com/zUjJ9G8c) but when I reboot the server (12.04 headless) the script does not work. I had it working a few weeks ago.
[22:10] <ajp> anyone know upstart?
[22:18] <ajp> I'm trying to get my upstart script (http://pastebin.com/xjACPjk4) to work, it launches this script (http://pastebin.com/zUjJ9G8c) but when I reboot the server (12.04 headless) the script does not work. I had it working a few weeks ago.
[22:18] <ajp> is there a command for upstart to rescan its .conf files?
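There is; on Upstart the usual debugging loop for a job in /etc/init looks like this ('myjob' is a placeholder for ajp's actual job name):

```shell
# Re-parse /etc/init/*.conf without rebooting:
sudo initctl reload-configuration

# Confirm upstart sees the job and the file parses (upstart 1.x):
initctl list | grep myjob
init-checkconf /etc/init/myjob.conf

# Start it by hand and read its log:
sudo start myjob
cat /var/log/upstart/myjob.log
```

If the job only fails at boot, a common 12.04 gotcha is a "start on" event that fires before the network or filesystems the script needs are available; "start on runlevel [2345]" is the conservative choice.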
[22:34] <escott> ajp, perhaps this is an update-initramfs issue?
[22:34] <escott> ajp, is your $HOME encrypted?
[22:35] <escott> ajp, in general it is considered rather ugly/bad practice to have root messing around with scripts in individuals $HOMES
[22:35] <escott> ajp, put your script in /usr/local/bin or /opt
[22:41] <ajp> thanks for the help escott :D
[22:58] <xnox> pythonirc1011: yeah, just use the server cd to install.
[23:07] <JanC> RoyK: RAID-6 doesn't help in case of a triple disk failure though  ;)
[23:38] <RoyK> JanC: no, but that is rather rare
[23:41] <utter> Hello, i have a weird problem with traceroute, it always times out after hop 2. I guess i tried everything: turning ufw off, turning the router fwall off, still the problem persists.
[23:42] <JanC> RoyK: I'm not sure why it would be much more uncommon than double disk failure in certain cases (when you use disks from the same & apparently bad series)
[23:43] <escott> google published an in-the-field study of correlation effects of disk failures. that will provide some hard numbers, but there is a fairly high correlation
[23:49] <lifeless> JanC: because
[23:50] <lifeless> JanC: two-disk failures are dependent on the failure-rate of the drives *and* the time to repair the array
[23:51] <lifeless> JanC: a 4TB drive for instance, with a repair rate of 100MBps, or 10s/GB will take 4000*10s to repair a single drive, which is ~11 hours
[23:51] <lifeless> JanC: the chance of a single disk failure is only dependent on the failure rate of the drives.
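lifeless's numbers check out; as a quick sketch of the repair window:

```shell
# 100 MB/s sequential = 10 s per GB; a 4 TB member is 4000 GB.
seconds=$(( 4000 * 10 ))             # 40000 s to resync one drive
hours=$(( seconds / 3600 ))          # integer hours
echo "${seconds} s ~ ${hours} h"     # -> 40000 s ~ 11 h
```

A second disk has to fail inside that window for the array to be lost, which is why the joint probability is small for *independent* failures - and why correlated failures (same batch, shared controller) are the real risk JanC is describing.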
[23:52] <JanC> lifeless: let's say I saw three drives from the same batch/series fail within a couple hours after each other
[23:52] <lifeless> JanC: so you multiply the failure rate of three drives together - that gets you a very low number, * 11 hours
[23:53] <JanC> apparently some "feature" of that batch minimized their "life" in a very similar way (maybe because they were in the same RAID)
[23:54] <utter> Is the lrwxrwxrwx 1 root root    28 Feb  6 00:22 traceroute -> /etc/alternatives/traceroute a broken link, 12.10
[23:55] <utter> ?
[23:55] <utter> its highlighted in red
[23:56] <JanC> utter: if it runs, it's not broken...
[23:56] <utter> because tracepath works and traceroute does not, on a 12.10 server edition.
[23:56] <lifeless> JanC: so IIRC the google report wasn't about manufacturing defects, but about good drives running in arrays
[23:56] <utter> something is borked with traceroute on the 12.10 server
[23:57] <JanC> utter: or something is borked with your second or third "hop"
[23:58] <lifeless> JanC: point being that that sort of failure - say a bad raid controller over-voltaging the drives
[23:58] <utter> JanC: no its not, works with tracepath and from all other systems behind same firewall
[23:58] <lifeless> JanC: or an actual broken batch
[23:58] <JanC> utter: tracepath doesn't use the same ICMP messages AFAIK
[23:58] <lifeless> JanC: will both affect ~all the drives, irrespective of size.
[23:58] <utter> JanC: hmm ok