[01:04] sarnold, what, there's crashes elsewhere in 13.10 too? [01:04] because this person upgraded to the PPA< then downgraded, and now is crashing nginx [01:04] * TheLordOfTime thinks it's because weirdness === peter is now known as Guest60640 [02:15] question that I'm not sure how to format for relevant results on the goog.... have a 12.04LTS x64 server.. the route cache is acting weird.. whenever one of my remote sites VPN drops (usually the isp being down), when the vpn restores, i have to actually manually login to the 12.04 server and do an "ip route flush cache" to get it to be able to route across the tunnel again.. very lame, and is annoying as all hell.. === wedgwood is now known as Guest48850 === jrib is now known as Guest10239 [02:46] may I restate my question? [02:46] question that I'm not sure how to format for relevant results on the goog.... have a 12.04LTS x64 server.. the route cache is acting weird.. whenever one of my remote sites VPN drops (usually the isp being down), when the vpn restores, i have to actually manually login to the 12.04 server and do an "ip route flush cache" to get it to be able to route across the tunnel again.. very lame, and is annoying as all hell.. can someon [02:55] dustinspringman: you're cut off at "can someon" [02:56] sarnold: orly? [02:56] can someone point me to what I should research to resolve this? I've tried numerous winded searches, but the results are all over the place... thanks in advance [02:57] sarnold: any ideas? [02:57] sarnold: I am happy to research a solution, but I don't know how to ask this question without putting it into literal speach.. [02:57] dustinspringman: no kidding, it wouldn't be easy to search for [02:59] sarnold: pisser is, it worked flawlessly for over a year... then all the sudden.. pooched.. [03:00] dustinspringman: my understanding is that the NIC bounces, routes get dropped, and the VPN doesn't handle bouncing NICs well.. [03:02] sarnold: close, but no.. the NIC itself doesn't bounce.... Server->ethernet->main location router........vpn.......remote-site.... [03:03] whenever the VPN drops (usually isp failure, or power outage at the remote site or some similar issue) I lose routing capability only from this Ubuntu server to that remote-site... [03:03] other machines re-establish the vpn fine? o_O [03:04] **I lose routing because its down, obviously.. the challenge is that when the vpn restores (sometimes in minutes, or like today when the isp failed hard and it was down for 8hrs), the routing still never restores... the route cache is effectively not detecting the reachability of the remote-site... [03:05] yes, because its a site-to-site vpn, all the hosts including the ubuntu-server use the main-site-router as the gateway.. the other machines and the main-site-router pick right back up with no special need.. but the ubuntu-server, every damn time, I have to do an "ip route flush cache" to get it to restore... [03:06] its super annoying and causing a lot of false positives/headaches as this ubuntu server is running my xymon monitoring system... =/ [03:07] real messed up thing is, i have another xymon on ubuntu server x64 12.04 LTS (exact same OS) hosted on AWS that never has this issue... i think something in the route cache settings or ethernet/route config settings is pooched here... [03:09] dustinspringman: time for me to quit.. if you get it sorted out, I'd be curious to know the solution :) good luck, have fun :) [03:10] arrgh.. thanks man, will do. 
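One way the manual fix above could be automated while the root cause is chased down: a tiny watchdog, run from cron, that flushes the route cache whenever a host behind the tunnel stops answering. This is only a sketch; the remote address and script path are placeholders, not details from the discussion.

    #!/bin/sh
    # /usr/local/sbin/flush-stale-routes.sh (hypothetical path)
    # If a host behind the site-to-site VPN is unreachable, flush the
    # route cache: harmless while the tunnel is genuinely down, and it is
    # exactly the manual fix needed once the tunnel comes back up.
    REMOTE=10.8.0.10    # placeholder: any always-on host at the remote site
    if ! ping -c 1 -W 2 "$REMOTE" >/dev/null 2>&1; then
        ip route flush cache
    fi

Dropped into /etc/cron.d/ to run every minute or two, that would at least stop the monitoring box from needing a manual login after every outage.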
gnite [05:03] Hey guys, is there a way to disable TLS compression system-wide in Ubuntu 12.04 server? [05:04] on CentOS6, this can be obtained by running: export OPENSSL_NO_DEFAULT_ZLIB=1 [05:04] I've seen guides for disabling it in Apache and Nginx but nothing for Squid (which is what I'm running) [06:41] 1 ubuntu server sharing a folder via samba | 2 clients: a) runs ubuntu and its locale is set to ca_ES@utf8 - b) runs crunchbang and its locale is set to ca_ES@UTF-8 | a) mounts shared folder correctly with no charset or codepage set to command - b) does not, even if i set all possible iocharsets to command...it never shows characters properly....WHAT CAN I DO? === styol__ is now known as styol === Nigel_ is now known as G [09:21] sgran, all of the rc's are now in the havana updates pocket; the only bit that is missing is mongodb - just working on a build failure associated with that [09:21] sgran, I'll stick out an announce on the openstack lists today [09:26] hello everyone [09:27] i have sort of a problem... probably something to do with my network configuration ..but anywayz... I get very slow ssh sessions on my ubuntu-server .. sometimes even stalling ... any common issues/fixes related? [09:28] My ISP gave me a modem/router to which i am connected to the internet using a dynamic ip - directly (not NAT), and the server is connected to a different port on the modem with a static ip adress... [09:28] i have no LAN connection between the server and the client machine [09:28] could that be the issue? [09:29] connecting through ssh via the external static ip address of my server? [09:29] also - the computer running the server is a slow 1.7 ghz celeron computer with 512mb ram [09:31] jamespage: \o/ [09:31] it used to work flawlessly when i had gentoo installed on it ... the issue started as i started using ubuntu-server 12.04 ... i've searched the forums with no luck of finding similar problems..... ppl only complaining about slow ssh LOGIN ... but my entire session is slow as hell [09:32] if i do a simple command like 'ps -aux' ... it shows half of the output, then stalls for about 30 secs. and then it shows the rest of it [09:32] it's pretty anoying... not to talk about file transfer - that's even slower [09:35] i've read on the internet that it could be a bad switch ... but the switch worked fine when i had the previous configuration at home [09:35] i use a switch as an extension because i have 2 short UTP cables instead of using 1 long ... [09:36] using the switch in between as a hub... [09:37] anyone? [09:46] ok [09:47] i just did a memory check command [09:47] and i got a reply that only 6 MB of ram is free [09:47] wtf [09:47] why does ubuntu-server with no GUI eat up so much ram? [09:48] are there that many processes that need to be running? [09:48] i'm seriously thinking of switching back to gentoo and have a nervous brakedown everytime i need to config smth, as i'm used to debian [09:51] chemist^ can you please paste the output of: free -m [09:54] /var/www$ free -m [09:54] total used free shared buffers cached [09:54] Mem: 495 488 7 0 33 328 [09:54] -/+ buffers/cache: 125 369 [09:54] Swap: 509 0 509 [09:56] you have 369 MB free not 7 [09:57] oh... [09:57] so what's the problem [09:57] why is my ssh connection so faulty [09:57] maybe a problem with my server's 'hostname' ? [09:58] i entered a random word as hostname when it asked me during the installation [09:58] "faulty"? 
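The samba locale question from earlier (the ca_ES client that never shows characters properly) never gets an answer in this log. For reference, a hedged sketch of the usual knobs, assuming the crunchbang client is doing a cifs mount; the server, share and user names below are placeholders:

    # Server side, /etc/samba/smb.conf, [global] section:
    #   unix charset = UTF-8
    #
    # Client side (needs cifs-utils and the nls_utf8 kernel module):
    sudo modprobe nls_utf8
    sudo mount -t cifs //server/share /mnt/share \
        -o username=user,iocharset=utf8

One caveat: if the file names were created under a different charset on the server in the first place, a client-side mount option alone may not be enough.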
[09:58] it's slow [09:58] chemist^: try connecting via ip address [09:58] sometimes it freezes [09:59] hitsujiTMO i am [09:59] how far away is the server? [09:59] and when putty or the terminal with ssh session freezes [09:59] my whole internet gets a little stalled at home [09:59] as i said...it's probably a network config issue [09:59] then you have packet loss or something [09:59] nothing to do with ubuntu [10:00] cause i have a server in the same room as my client machine [10:00] but not connected through lan [10:00] there is no lan [10:00] you are making zero sense [10:00] what do u mean there is no lan? [10:00] ok wait .. i'll explain [10:00] i have 5 ports on my ISP modem/router [10:00] 4 of them are bridged and 1 is NAT [10:01] i have 2 dynamic ips and one static provided by my ISP [10:01] if i want to use either i need to be connected to the bridge port [10:01] if i connect to the NAT port i get a local ip from the router [10:02] if i connect to the bridge port i get an ip directly from my ISP [10:02] so i have my client machine connected with automatic dhcp to the bridge port -> getting a dynamic ip from my ISP [10:02] and the server connected to another bridge port with static ip settings [10:02] chemist^ can you tracert to the server [10:03] so when i'm connecting via ssh to my server i enter my static ip address [10:03] how do you do that exactly? :) [10:03] your client windows? [10:03] no [10:04] ubuntu [10:04] desktop [10:04] traceroute ip [10:05] i've read somewhere on the internet that it might be a bad switch issue.... but i don't think so, cause my switch worked fine before reinstalling the system (although it had a different role that time) [10:05] installing traceroute ... wait a sec. [10:06] 'Name or service not known' [10:07] Cannot handle "host" cmdline arg `xxx.xxx.xxx.xxx' on position 1 (argc 1) [10:07] i x-ed out the ip address [10:07] oops [10:07] wrong ip...wait :D [10:08] ok it's doing it now [10:08] it got to 8 [10:08] and now just showing *** [10:08] *** [10:08] till 30 [10:09] hitsujiTMO :) [10:09] wtf is going on here [10:09] looks like your going half way around the world to ssh to a machine a few metres away from you [10:09] that is correct [10:10] is that the cause for stalling? and freezing my entire internet connection even on the client-side [10:10] can you XXX out your ips and post the output? [10:11] i don't have a monitor connected to my server and i don't want to carry one in the other room everytime i need to make a change to the system [10:11] i would really like to be able to do that via ssh [10:11] hitsujiTMO i'll post it in private so i don't get kicked for flooding [10:12] chemist^: paste.ubuntu.com [10:12] use that [10:13] hitsujiTMO here you go [10:14] hitsujiTMO did you get my notice? [10:14] yup [10:14] ok [10:14] looking now [10:15] erm, are you using a vpn also? [10:15] do you think that if i try to connect from anywhere else it would give me same problems? [10:15] hitsujiTMO ammm... i don't think so... or if i do, not to my knowing... [10:16] ok, your connection is coming from 'godaddy' [10:16] what does that mean? : [10:16] :) [10:16] actually never mond [10:16] mind* [10:16] :P [10:17] u know what...i'll try and connect via ssh with my mobile phone (mobile 3g internet) and do a simple command like ps -aux and see if it stalls as from my comp. [10:18] yeah, just seems your isp sucks, its routing packets all over europe before getting back to you [10:18] yeah... 
[10:18] they all suck [10:18] :D [10:18] prob hitting packet loss along the way [10:18] have problems with isps all the time [10:18] how many ethernet ports you got on the server? [10:18] do you think maybe they could do smth about it? [10:19] 2 ethernet cards [10:19] only 1 in use [10:19] buy a switch and connect your free port to that and ssh with that [10:19] before...i had my server running as a router/firewall also... so i had used both at that time, with no issues whatsoever [10:20] i have a switch which is in use now as an extension, as my cable was too short ;D i used 2 and a switch in between [10:20] could that be the issue? [10:20] it shouldn't... [10:20] doubt it [10:21] how did you configure the server as a router? with iptables? [10:21] yes [10:21] it worked well back then [10:22] ok [10:22] the response [10:22] to ps -aux [10:22] from my slow-connection mobile phone internet [10:22] works flawlessly [10:22] fast reply from the server [10:22] with no stopping at the middle [10:22] or stalling [10:23] shit man... :/ [10:23] is there a way to fix this... other than connecting physically to my server? [10:23] get a new isp would be the fix [10:24] chemist^: you have an mtu problem [10:24] do you think that the idiot-technitians of my isp could fix this? [10:24] adjust the mtu of your uplink interface on the server to 1450 [10:24] hitsujiTMO i just switched to them :D [10:24] they have fast internet [10:24] 100 mbit [10:24] optics [10:25] sgran u sure that's the issue? ... why does my mobile phone communicate normally then? shouldn't the MTU affect the comm with the phone as well? [10:25] chemist^ I'd certainly contact them to see if they can fix the issue [10:26] ok i'll do that right away [10:26] path mtu is negotiated for each new connection. Perhaps your phone connection has a lower mtu than the one that doesn't work, or perhaps pmtu discovery isn't broken between your phone and your server [10:26] who can say [10:27] but if a connection freezes when you're passing a large chunk of data back, but works for small data transfers [10:27] experience has shown it to be mtu [10:27] i think that the problem is actually as hitsujiTMO said...the connection hopping half of europe before returning to my server [10:27] ok, you should fix that then :) [10:27] sgran the problem is even with small data transfers [10:28] the size of the data transfer actually does not change the stall-time [10:28] or the actual connection freeze - sometimes [10:32] zul, bug 1231982 is probably worth a poke pre-release [10:32] Launchpad bug 1231982 in novnc "novnc crashes due undefined variable" [Medium,Confirmed] https://launchpad.net/bugs/1231982 [10:32] looks like sucky upstream orig.tar.gz from our upstream [10:35] hitsujiTMO [10:35] yo [10:35] do you think this could work.... [10:35] if i used [10:35] a wireless router (not in use currently) to create a LAN, and use wireless to connect to the server locally? [10:36] can i have 2 network connections running at the same time? [10:36] is that even possible? [10:36] i would connect the server with a cable to the router and my client computer through wifi [10:37] or just use wifi on both computers [10:37] chemist^ you can as long as the default gateway is set on one connection only [10:37] and they are 2 different subnets ofc [10:37] so if i connect to the wifi with my server i leave the gateway entry blank? 
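A sketch of the MTU suggestion above, assuming the server's uplink is eth0; the interface name and addresses are guesses, not details from the discussion:

    # Try it live first:
    sudo ip link set dev eth0 mtu 1450

    # If that cures the stalls, persist it in /etc/network/interfaces:
    #   auto eth0
    #   iface eth0 inet static
    #       address 203.0.113.10    # placeholder static IP from the ISP
    #       netmask 255.255.255.0
    #       gateway 203.0.113.1
    #       mtu 1450

For the second, LAN-only interface being discussed just below, the same file would simply carry another static stanza with no gateway line at all, which keeps the default route on the uplink.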
[10:37] or do i use automatic dhcp [10:38] should work [10:38] the router will not be connected to the internet [10:38] just local [10:38] use static, dhcp will give a gateway most likely [10:38] ok [10:38] i'll try that [10:38] now i must go pick up my GF at work and go eat smth [10:39] i'll let you know later if u'll be online [10:41] zul, adam_g, smoser: I reviewed all current cloud-archive bugs and poked things accordingly - nothing aside from the novnc issue above that I can see right now for Havana [10:55] rbasak, around? I have an arm build failure for precise for golang which feels familiar but I can't remember the fix! - http://paste.ubuntu.com/6221908/ [10:55] jamespage: yes, looking [10:55] ^^thats on armhf [10:55] rbasak, thanks [10:56] jamespage: that's on Saucy? [10:56] * rbasak looks for the previous bug [10:56] rbasak, no - thats on 12.04 [10:57] but I think we saw the same bug on saucy - we are carrying a patch that fixes this on saucy but its not doing the magic on 12.04 [10:58] Ah. Yes, that'd be expected I think. Bug 1187722. dpkg-shlibdeps is making assumptions about the sf/hf-ness of the binary produced by golang toolchain since that toolchain wasn't using the ELF header flags that we were expecting and do with the gcc toolchain. [10:58] Launchpad bug 1187722 in golang "dpkg-shlibdeps fails on armhf ELF binaries that do not define architecture specific information" [High,Fix released] https://launchpad.net/bugs/1187722 [10:59] rbasak, do we need an associated dpkg change as well? [11:00] reading that bug it sounds like it [11:00] jamespage: sort of, yes. We did make one. I think it's an impedance mismatch that could in theory be fixed either side. [11:00] jamespage: I presume this is for the cloud-tools pocket and we'd prefer not to change dpkg there? [11:01] rbasak, preferably yes [11:01] and it is [11:01] I need to backport golang for armhf for the juju team as well so this will block in both locations [11:02] jamespage: davecheney is working on the fix upstream. He has done https://codereview.appspot.com/10171043 which I guess isn't complete but perhaps we can backport that? [11:02] (if completed) [11:03] Looks like someone else wrote it actually [11:04] * rbasak looks for the dpkg change [11:06] jamespage: http://launchpadlibrarian.net/144462697/dpkg_1.16.10ubuntu2_1.16.10ubuntu3.diff.gz [11:06] jamespage: do you have a built tree handy; could we see what "readelf -h" gives us? [11:07] Well I suppose it would likely be the same as the Saucy build actually. [11:11] rbasak, I would suspect so - but I don't have a handy built tree I'm afraid [11:15] jamespage: looks like it from the saucy armhf binary. I wonder if the dpkg fix would be considered SRUable. What do you think? [11:15] I guess that might change build behaviour on a wide variety of packages [11:15] So maybe too risky === Guest10239 is now known as jrib [11:16] rbasak, possibly - I don't really want to hold that in the cloud-archive particularly; I guess we could backport it in isolation and just use that as a build-dependency for the PPA's [11:16] that way we levarage it during build but don't actually ship it for the CA [11:17] * rbasak wonders if there's some way to patch the build to get the same effect [11:21] jamespage: would a modification to the golang package that is needed for Precise only be acceptable to the cloud-tools pocket? [11:21] rbasak, yes - that's OK [11:22] jamespage: I have two possible really horrible hacks in mind. 
[11:22] rbasak, I'd buy anything right now if it works us around this problem [11:23] jamespage: 1) modify the ELF binaries themselves, to manually give them the flags dpkg-shlibdeps is looking for. [11:23] jamespage: 2) wrap readelf, to provide what dpkg-shlibdeps is looking for but only during the dpkg-shlibdeps run [11:23] On armhf, we know it to be true, so if we make the forgery work only for armhf ELF binaries we know we're safe. It'd break cross-building but we don't care about that. [11:25] Wrap readelf -A to call readelf -h first, and if it says 0x500402, then return VFP registers. [11:25] Then modify PATH in the build process [11:33] jamespage: where can I get the ctools golang source that failed, please? I don't see it in https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-staging/+packages [11:33] rbasak, its exactly what is in saucy right now [11:34] OK [11:47] I need to open a bug against isc-dhcp-client available into the U.C.A, which package should I use ? [11:48] when installing MAAS 1.4* on precise, it installs isc-dhcp-client from U.C.A which depends on iproute2 which is unavailable in precise [11:48] hence it breaks the network if main interface is using dhcp [11:48] caribou, please use ubuntu-bug - it will end up in the right place (cloud-archive project) [11:48] jamespage: ok, will do [11:48] caribou, iproute2 should be in the cloud-archive [11:48] cloud tools that is [11:48] * jamespage looks [11:48] jamespage: lemme check [11:49] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/cloud-tools_versions.html [11:49] caribou, shows on the report which comes direct for the archive [11:50] jamespage: well, I need to investigate this one further then [11:50] caribou, OK - it only landed in the last 24 hrs [11:51] jamespage: but This is totally reproducible : I start on a pristine 12.04.03 VM, install maas+dhcp+dns, reboot and I no longer have network [11:51] jamespage: hmm, let me see which archive I'm using [11:52] jamespage: hmm, my VM was created before that,maybe that's why [11:53] caribou, bug reports work saved then! [11:55] jamespage: indeed, works well now and isc-dhcp-client does install correctly [11:56] jamespage: but it _was_ a problem yesterday when I started [11:56] jamespage: thanks! [11:56] jamespage: is golang a straight backport to precise or does it have any build-deps that needed backporting first? ie. should I be able to reproduce in a straight precise sbuild and save for this bug is that expected to work? [11:58] rbasak, straight backport [11:58] OK thanks. [11:59] Doing a build now to get me my build tree. Then see if I can implement this hack. [12:20] Does anyone in here happen to know a resource as to how to configure an ubuntu server to act as a broadband remote access server dealing with a DSLAM in a DSL environment? [12:21] dealing with a DSLAM? [12:21] you want to auth, eg RADIUS? [12:22] Somewhat along those lines, yes. It's a lab setting where I already have a DSLAM and a server, just no idea how to make the two talk. [12:22] Though it'd be lovely if it'd be just as easy as pointing the DSLAM to the server which just has to run RADIUS or something. [12:23] well, are you looking or authentication or something else? [12:25] I assume authentication. The thing is, I don't really know the technology that well. All I do know is that the DSLAM uses the BRAS to authenticate the users connecting to it, but I don't really know how exactly - and more importantly, how to configure that. 
[12:51] Cups is giving the error '**** Unable to open the initial device, quitting.' but this error is not in the cups source code, any body know which library has this error ? [12:52] I don't think it's that simple. I think the BRAS is the ppp logical termination for the end users [12:52] something like http://www.klick.us/?page_id=492 looks about right [12:53] which was, incidentally, the second or third hit in a search for 'linux bras server' [12:55] jamespage: zul can i start testing the recent pkgs for doc? [12:55] koolhead17: sure [12:56] zul: so i will use the testing repo from cloud archive correct? [12:56] can you pastebin that link for me [12:56] koolhead17, please do! [12:56] koolhead17, no - use the actual cloud archive repository [12:56] koolhead17: https://wiki.ubuntu.com/ServerTeam/CloudArchive [12:56] koolhead17, its in the email I just sent to list [12:57] thats the one! [12:57] jamespage: just in time. [13:01] sgran: Ah, thanks. Yeah, I searched for various combinations of "ubuntu" but never thought of just trying "linux". [13:18] jamespage: I think I have something that works. But I think there's a catch: you'll need my workaround in every package that produces golang toolchain binaries. Like, I presume, juju. [13:19] It's not too bad though. Just one file and a one line override_dh_shlibdeps in debian/rules. [13:19] just juju right now [13:19] sounds ugly but lets take a look [13:24] jamespage: I have yet to do a full build test on this. But it seems to work in principle anyway. http://paste.ubuntu.com/6222380/ [13:25] For some reason I am very much amused by this hack. [13:26] omg [13:26] 1 ubuntu server sharing a folder via samba | 2 clients: a) runs ubuntu and its locale is set to ca_ES@utf8 - b) runs crunchbang and its locale is set to ca_ES@UTF-8 | a) mounts shared folder correctly with no charset or codepage set to command - b) does not, even if i set all possible iocharsets to command...it never shows characters properly....WHAT CAN I DO? [13:27] Other options are: fix golang to actually produce the arch-specific information queried by "readelf -A". This is upstream bug http://code.google.com/p/go/issues/detail?id=5640 and we could backport a fix, but one isn't ready yet. [13:28] Or, fix dpkg-shlibdeps to do something different. But that involves a dpkg SRU or carrying it in the cloud archive build PPA or something. [13:28] I don't think there are any other options. [13:30] I should also probably additionally check that it is an ARM binary actually [13:30] As there's weird cross stuff happening in this build too [13:33] jamespage: building glance locally === chmurifree is now known as chmuri [13:48] Is it possible to use LXC containers as provisionned nodes on MAAS ? [13:52] jamespage: https://code.launchpad.net/~zulcss/glance/2013.2.rc2/+merge/190663 [13:53] caribou: directly? No, because MAAS expects to be able to run d-i or curtin on a node, and that requires a machine with a block device. If I understand you're asking. virtual maas uses KVM, AIUI. And you can do LXC using juju's container support. 
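The real wrapper is in the paste linked above; purely as an illustration of the approach rbasak describes (fake the arch-specific attributes for armhf golang binaries, but only while dh_shlibdeps runs), it could look roughly like this, with hypothetical paths and the flag value taken from the discussion:

    #!/bin/sh
    # debian/wrappers/readelf (hypothetical) -- put first on PATH only
    # for the dh_shlibdeps step. For "readelf -A <binary>", check the ELF
    # header flags with the real readelf; if they match the armhf value
    # mentioned above, emit the VFP-args attribute a hard-float binary
    # would normally carry, so dpkg-shlibdeps is satisfied.
    if [ "$1" = "-A" ] && /usr/bin/readelf -h "$2" | grep -q "Flags:.*0x500402"; then
        echo "  Tag_ABI_VFP_args: VFP registers"
        exit 0
    fi
    exec /usr/bin/readelf "$@"

    # And the one-line override in debian/rules (recipe line is a tab):
    # override_dh_shlibdeps:
    #         PATH=$(CURDIR)/debian/wrappers:$$PATH dh_shlibdeps

As the later discussion notes, backporting the dpkg-shlibdeps fix into the cloud-tools pocket ended up looking like the cleaner route.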
[13:53] rbasak: yeah, I just found out about juju-local, which is mostly what I wanted to test [13:53] zul, niggle [13:54] jamespage: bah [13:54] rbasak, caribou: the MAAS provider in >= 1.14.1 can manage LXC containers on physical servers [13:54] juju add-machine lxc:0 add's a new lxc container to machine 0 [13:54] jamespage: fixed [13:55] zul, my only concern is that we reference no bugs [13:55] but hey - lets see how it goes [13:59] jamespage: ok uploaded === rdw200169_ is now known as rdw200169 [15:12] hello. [15:12] i need to know if my computer supports ubuntu server. [15:12] you see it's from 2004 [15:12] or at least some time around that [15:13] figgycity50: probably [15:14] celeron d processor? [15:14] worst processor ever [15:14] figgycity50: boot a 32-bit desktop live cd on it [15:14] i know [15:14] that's what i am gonna do [15:14] i have no cds tho [15:15] i DO have a usb stick [15:15] mp3 to be exact [15:15] livecd is interchangeable with "LiveUSB" [15:15] i have a sandisk but i dunno where it is [15:15] but i don't think yo ucan use your MP3 player as a LiveUSB [15:15] it has usb tho [15:15] and i can access the files [15:16] an alba 4gb [15:16] doesn't mean it can actually handle being a LiveUSB [15:16] will it fit?? [15:16] fit? probably. boot? probably not [15:16] i'll see [15:16] it's got no uefi [15:16] and i can get into the bios settings [15:16] ahh [15:16] i see some dvds [15:16] i never said uefi or the bios were the issue. [15:17] dvd-rws [15:17] those work too :p [15:17] should i use those? [15:17] i would, i don't trust MP3 players to be decent LiveUSBs [15:17] ok [15:17] iso nearly done.. [15:17] although i'm going to refine what mdeslaur said... [15:17] any decent iso burners [15:17] and suggest an Lubuntu LiveCD [15:17] im using windows 8 [15:18] i'm using server [15:18] because Ubuntu 32bit Desktop LiveCD is ehhhh [15:18] [13/10/11 11:14:38] figgycity50: boot a 32-bit desktop live cd on it [15:18] there is no "server" LiveCD last i looked [15:18] because i'm gonna run minecraft server [15:18] there is [15:18] you can run minecraft on a GUI server. [15:18] ubuntu.com/server [15:18] which links to the server ISOs, which as I understand them... [15:18] (1) don't come wiht a GUI [15:18] ik [15:18] (2) odn't come with a live environment [15:18] but i don't need a gui [15:19] 70 [15:19] (3) are the installer [15:19] yes [15:19] i will partition it from the xp [15:19] god [15:19] these cds are mixed up [15:19] * TheLordOfTime points at the enter button. Don't constantly use it. [15:19] the cd-r cases have dvd-rws and the dvd-rws cases have cd-rs [15:20] weird right? [15:20] and i have a date with a pot of coffee... back in 5 minutes [15:21] (BTW, you don't need to run the ubuntu server edition to run a minecraft server, and in fact if you're new to the whole server thing I highly suggest you install Lubuntu, then work from the GUI terminal emulator to run the Minecraft server, if you're a newbie to the command line) [15:21] * TheLordOfTime doesn't know if you're a Linux CLI expert or not [15:21] okay, now seriously, i need my coffee, back in a few [15:22] i am not a cli newbie [15:22] i know ls [15:22] wget, cd, rm, touch, nano, apt-get, apt-cache, aptitude [15:22] loads more [15:22] and definitly sudo [15:22] again, the enter key. [15:22] don't constantly use it. [15:23] you can put more than one thought in a line. :) [15:23] ... grrrrrrrrr, stupid segfault crash bugs... [15:23] is 700mb enough [15:24] for ubuntu server install? 
[15:24] don't blame me for enter that time, i was putting in my cd [15:25] 700MB actually is about 658 - 698 MB on CDs, and you might have enough space for the Ubuntu Server ISO to fit, but... you also might not depending on the exact size of the CD [15:26] weird [15:26] it says 0 bytes [15:26] oh winndows cant access the disk [15:26] disk dead, getting another [15:27] http://www.ubuntu.com/download/desktop/burn-a-dvd-on-windows btw is your most helpful resource for burning the ISO to a disk [15:28] as for your disk being dead if all your disks return 0 byte size, that's an indication your CD/DVD Reader/Writer is broken [15:28] 'course that page explains it for win7 i dunno if win8 still has the same functionality [15:28] because windows 8 is worse than win7 [15:30] i know windo [15:30] burning [15:30] windows 8 has a built in iso burner [15:31] and i found a 4.7gb dvd-rw [15:31] and the disk is reading === wedgwood_ is now known as wedgwood [15:39] jamespage: as expected juju-core needed the same hack, but builds with my hacked golang. Next steps? We need to test the produced binaries on both Intel and ARM I think. Do you think it's OK to do that from the staging PPA? And what are your thoughts on the hack? [15:39] * rbasak wonders what tests we have for juju-core anyway [15:40] * rbasak finds the dep8 test [15:45] TheLordOfTime? [15:47] this burning is becoming a pain. why? ITS NOT WORKING [15:49] anyone got instructions for cd burning? [15:58] I want to set up a new email server and test it out. Is there some spam filter proxy or SMTP proxy that forward email to 2 backend servers, or filter by user. Ie. all mail for @domain.com goes to server1 except bob@doman.com goes to server2? [15:59] Breetai: doing that is more complicated than it seems, due to the need to avoid spam backscatter. [16:00] Breetai: for experimentation it's easier to use a test domain or a subdomain. [16:02] rbasak: with setting up postfix, dovecot, opendkim, spamd, z-push, postgrey, and roundcube, I thought it might be easier to set it up for the domain I will want to use it for, instead of for subdomain and then having to change the configs for all of those subsystems later. [16:03] Breetai: you should parameterise the domain to avoid that problem. [16:03] rbasak: any docs you can point me to on how to do that? [16:04] I have never heard of "parameterise the domain" before [16:04] Breetai: heard of "devops"? For such a complex set of pieces you certainly want to script and automate the deployment process. [16:05] rbasak: heard of yes, used no. [16:06] rbasak: Essentially I should use a deployment script. more work to set up, but if I set up a script that can deploy "domain.com" I can change "domain.com" in 1 place to "mycompany.com" and run it, and it will be bulletproof correct. [16:07] Breetai: right [16:08] rbasak: Since I am doing this on a lxc container, on top of zfs, tearing it down and doing it again should be very simple [16:21] rbasak, hmm [16:21] rbasak, I'm not over the moon about it but other than backport a patch to dpkg for the cloud-tools pocket [16:21] I can't think of another way around it [16:28] rbasak, what about the other approach? backporting the fix to dhshlibdeps to work correctly for the cloud-tools pocket? [16:28] the scope of potential impact is quite limited [16:29] jamespage: if it was restricted to the cloud-tools pocket, then I agree the regression risk is limited. [16:29] It's a relatively trivial backport, too. 
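Going back to the mail-server thread above: a minimal sketch of what "parameterise the domain" can look like in practice, so the test domain can be swapped for the real one by changing a single value. The template markers, file names and package list are illustrative only, not a recommended layout.

    #!/bin/sh
    # deploy-mail.sh (hypothetical) -- re-runnable, e.g. inside a fresh LXC
    set -e
    DOMAIN=${1:-test.example.com}   # switch to the production domain later

    export DEBIAN_FRONTEND=noninteractive
    apt-get install -y postfix dovecot-imapd spamassassin postgrey

    # Render each component's config from a template that uses @@DOMAIN@@:
    sed "s/@@DOMAIN@@/$DOMAIN/g" templates/postfix-main.cf > /etc/postfix/main.cf
    sed "s/@@DOMAIN@@/$DOMAIN/g" templates/dovecot-local.conf > /etc/dovecot/local.conf

    service postfix restart
    service dovecot restart

Tearing down the container and re-running the script, as planned above, then doubles as the proof that the domain really is defined in only one place.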
[16:29] rbasak, well I think thats a more sensible approach [16:29] rbasak, how about we sort that out Monday :-) [16:29] Sure [16:29] unless you are gunning for a friday evening of hacking.... [16:29] :-) [16:30] The disadvantage of that approach is that it's more complicated because it suddenly brings in the need to care about the environment you're building and testing in, and whether you have that backport in your build deps or not [16:33] rbasak, I have a helper wrapper for sbuild that configured the build to use the staging ppa's [16:34] jamespage: OK, if you're fine with that then we can backport the dpkg fix on Monday. It's trivial. [16:35] And then golang/juju-core should just build fine [16:35] (given that my hack works in its current form) [17:24] greetings [17:24] I seem to have lost my crontab - when I -e or -l it, there is nothing... yet cron still runs the jobs I had in there previously. [17:37] what files can i safely remove from the /boot partition manually? the fact that it's full has prevented me from running apt-get remove or purge for the old kernels [17:37] so i need to manually remove one of them or something to free up enough to properly remove the rest [17:40] irv maybe delete the oldest initrd.img- in /boot ... i'd also touch it before purging [17:42] hitsujiTMO: moved it to another drive, still not enough space.. gonna move a few more of 'em [17:42] thx [17:44] what should i do after apt-get -f install [17:44] like to properly remove those old kernels [17:44] or will it recognize that i manually moved the initrd files? [17:45] or do i need to move them back one by one and remove the corresponding kernel as they're back [17:45] apt-get purge the ones that you don't need that are still there, them move back those files and apt-get purge their respective packages [17:48] any ideas on how to get my crontab back? the jobs are definitely still running, I can see their effects - and the results in syslog... [17:48] hitsujiTMO: now i'm getting that linux-server depends on: linux-image-server = 3.2.0.52.62, but 3.2.0.54.64 is installed [17:48] and linux-headers-server same [17:49] is there a way i can tell it to manually install those versions? [17:49] cause -f is still failing [17:49] even with 102mb free on /boot [17:49] heh [17:50] irv: try dpkg --purge [17:51] sarnold: which packages? like remove the newer one all together? or [17:51] or which was that a response to :p [17:51] irv: I'd try first one of the ones you've already removed some of the files, make the package database happy and it won't cost you any more backup kernels :) [17:52] but like which actual package am i telling it to purge? linux-server ? [17:52] or the just the initrd bits [17:53] irv: ah, one of the linux-image-server-3.2.mumble... [17:53] irv: make sure you don't delete the current running kernel, and it'd be best to leave the newest installed kernel, and make sure to keep at least two kernels :) [17:53] i only have linux-image-3.2.0-39-generic and a bunch more and then one called linux-image-server [17:53] no other linux-image-server-xx [17:54] so just some of the generic ones, ya? [17:54] irv: can you pastebin your dpkg -l | grep ^linux output? (pastebinit is a nice tool for automating pastebins..) [17:54] linux-image- should be enough [17:54] just removed a few of those [17:54] sec [17:54] irv: ah, sure, those have changed names enough that I don't know them all any longer, hehe [17:55] :] it's goin' [17:55] yay [17:55] how do i run this pastebinit with the cmd [17:55] just pipe into it? 
[17:55] irv: dpkg -l | grep ^linux | pastebinit [17:55] pipe it [17:55] i love linux [17:55] oops, that won't work, my fault.. [17:55] http://paste.ubuntu.com/6223543 [17:56] dpkg -l | grep "^ii linux" [17:56] hoooray you fixed my stupid :) [17:56] :p [17:56] k now running autoremove [17:57] hoping it works [17:57] 1086mb to be freed [17:57] woo [17:57] gah, how big should i bemaking the /boot partitions? [17:57] irv: you can also clean up the linux-headers-* packages once you've removed their linux-image-* package... [17:57] assuming i would go in and update/remove old kernels once every 6 months or so [17:57] i guess it should never grow that big if i remove the old ones as i'm updating them [17:57] sarnold: awesome, thx [17:58] will apt-get autoremove not take care of those too? [17:58] woohoo, upgrading is working now =) [17:59] irv: somewhere along the way I think apt just takes care of it without hassle.. again, more details I've forgotten :( [17:59] that dpkg --purge the linux-images worked wonders [17:59] irv: I've got six installed kernels now, /boot takes 247 megabytes.. and I haven't done much manual maintenance of /boot data in ages... [17:59] how big is a normal /boot partition on the server verison? [17:59] i think mine is 200mb or so [17:59] the partition [18:00] irv: .. but I don' have a separate /boot on this system, so it might go way over that amount of space while doing upgrades and so forth [18:00] ahh, gotcha [18:02] sarnold: I usually reserve 500MB for /boot/ but sometimes use 100MB on limited devices - just have to keep the upgrades controlled [18:02] oops... that was for irv! ... my meds taking effect :D [18:03] 512mb is pretty standard [18:04] I think when I made a separate /boot I used 256; that was a few years back, today I'd probably go with 512 as well. that gives some room to breathe :) [18:05] Yeah, those upgrades come thick and fast [18:06] TJ-: lol, thanks [18:06] considering distros should only keep 3 kernels, max, 512mg is crazy [18:06] how about this, is there a way now to increase from 200mb to 500 or so [18:06] it's a virtual server, and i have plenty of storage [18:06] I dont think I've ever seen a properly managed distro go beyond 120mb for /boot [18:06] 512mb for /boot is not standard [18:07] ikonia: only three? no thanks, I've seen problems require way more than three kernels to troubleshoot and find solutions... [18:07] irv: Sure, resize the volume that contains it [18:07] i suppose it shouldnt' increase now that i've fixed that heh [18:07] sarnold: very doubtful that is the norm [18:07] ikonia: indeed [18:07] i had just been upgrading without removing any old ones [18:07] cause i got lazy [18:07] :p [18:07] irv: it should auto remove [18:07] ikonia: but the day you need it, you don't want to be swearing at a tiny /boot :) [18:08] Just wear even tinnier socks [18:28] jamespage, any idea why keystone installation would fail to start the service post-inst with: invoke-rc.d: policy-rc.d denied execution of start. ? [18:36] so... cron? I can't get crontab to load my jobs, although they still run [18:36] crontab -e shows nothing, -l says no crontab... 
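For reference, the /boot cleanup that irv and sarnold work through above, collected into one sequence; the kernel version shown is an example only, and the running kernel plus at least one spare should always be kept:

    # What is installed, and what is running right now?
    uname -r
    dpkg -l | grep linux-image | pastebinit    # handy when asking for help

    # When /boot is too full for apt to work at all, force one old image out:
    sudo dpkg --purge linux-image-3.2.0-39-generic   # example version, never the running one

    # Then let apt repair itself and clear the remaining old kernels and headers:
    sudo apt-get -f install
    sudo apt-get autoremove --purge

If /boot itself is simply too small, resizing the volume underneath it, as suggested above, is the longer-term fix.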
[18:40] arrrghhh: check also /etc/cron* files, the jobs might be defined there [18:43] I found some daily jobs [18:44] but there's other jobs I had placed in root's crontab [18:44] I usually just sudo crontab -e [18:44] and it's empty :( [18:44] I spose something changed, but I am not sure what [18:45] arrrghhh: check out /var/spool/cron/ -- you might find what you need in there [18:45] hm ok [18:56] I just installed Splunk and the windows forwarders on the windows servers, and all I'm getting is this from the windows boxes http://pastebin.com/tYBExJw2 What am I doing wrong? [18:58] hello [19:06] hallyn: 1.1.3 will be available for testing in https://launchpad.net/~zulcss/+archive/libvirt-testing before pushing to ubuntu-virt === tom[] is now known as help === help is now known as tom[] [21:10] Anyone awake? [21:11] Got a weird question... If we disregard that file permissions is acted upon by the lovely bits set, can we instead of performing the lookup locally, use any modules for "remote management" of permissions? [21:23] ewook: what are you trying to accomplish? [21:40] sarnold: Not me. Question from a friend. I think the goal is to have a central point for file access control. [21:40] I [21:40] sarnold, thar? I just have a 'crontabs' folder in the /var/spool/cron folder... [21:40] am sorry for a badly formated question, but I am not sure how to go about it at all.. [21:41] arrrghhh: cron lives in /etc/[cron.d / cron.daily / etc etc etc]/ . Whata [21:41] arrrghhh: anything under that? [21:41] what is the issue [21:42] ewook: arrrghhh's cronjobs get executed but he can't find them with crontab -l or crontab -e -- a bit confusing :) hehe [21:42] sarnold: aah. yeah, scrolled up :p. [21:42] it's really confusing, I've never seen it happen. sarnold I had to be root, but it's empty [21:43] I can see in the syslog the jobs are running... and some affect the system in ways that are pretty obvious lol. so they are still workin... [21:45] arrrghhh: what is in /etc/crontab ? [21:46] there's my system crontab stuff [21:46] let me pastebin [21:46] /var/spool/cron/crontabs is like sarnold said, where [21:46] stuff is saved. [21:46] darnit, sorry for breaking lines, new keyboard. [21:46] http://hastebin.com/ladowubono.md [21:46] exactly. [21:47] that poiunts to the /etc/cron. [21:47] ... /etc/cron.stuff [21:47] and I see like my trim job in /etc/cron.daily [21:47] but there's several lines I have in the regular ole crontab for root [21:48] and the /var/spool/cron/crontabs contained nothing? [21:48] empty... [21:48] O_o [21:49] I've done a few updates recently, but nothing major [21:49] 12.04 [21:49] .3 [21:49] /var/spool is on tmpfs? [21:49] is that normal? [21:50] no [21:50] or... [21:50] wait [21:50] well, not on my system :p. [21:50] can you df to make sure I'm not crazy [21:51] I only do /run on tmpfs. [21:51] my /var/spool is on / -- dunno if that's a good idea for a server but seemed fine for a laptop [21:51] http://hastebin.com/dixagigagu.hs [21:51] those are all the weird things I have not mounted [21:51] tmp and /var/tmp make sense [21:51] but /var/spool on tmpfs... does not [21:52] sarnold: have you seen var being mounted on tmpfs? [21:52] I don´t think it is standard. [21:52] might be the reason that you cannot see any content on crontabs spool. [21:53] is it in your fstab? [21:53] let me look [21:53] ohgod [21:53] wow, that's ... odd. [21:53] was I drunk and hacking my fstab? [21:53] /var/spool/mail and so forth? 
holdplease [21:54] http://hastebin.com/muyisagomo.vala [21:54] this is at the bottom of my fstab [21:54] wat have I done. removing it... [21:54] mkay. stickybits good. kill off the var/spool :p. [21:54] arrrghhh: hey, maybe if you umount /var/spool, you'll get all your data back on the underlying /var/spool directory... including the crontabs :) [21:54] and just unmount it. [21:55] sarnold: bingo ;) [21:55] yay [21:55] that was interesting [21:55] tmpfs [21:55] no kidding, most interesting thing in here in a long time ;) [21:56] darn keyboard! [21:56] ewook: haha, man, good luck. :) [21:56] tmpfs on spool is a new one for me at least :p [21:56] as usual, a self-inflicted wound. thx for the help :) [21:56] sarnold: yeah.. right shift is smaller, and "`" key is moved. thus, hitting enter... [21:57] arrrghhh: good catch =). Remember why you tmpfs´ed spool? [21:57] well I know there were some things I was doing to try and improve build times [21:57] wait.. it isn´t moved. it is gone. [21:57] I don't remember editing fstab, but hey [21:57] hahahah [21:57] ouchie. [21:57] there might have been some scotch involved [21:58] done that... that is how I confed my postfix last time...... [21:58] works like a charm, but cannot remember... well, much. [21:58] lol [21:58] but it works, who cares [21:59] grabbed the conf, so yeah ;). === Maple__ is now known as Guest95952 === Guest95952 is now known as Mapley === funkyHat_ is now known as funkyHat [23:08] I've been updating my cluster of servers (about 10 nodes) via a disk image. [23:08] Should I be looking at a combination of maas and juju? Has anyone here used it? [23:09] If not that, are there other tools which work well these days? I need the tool to update hostname on each node. Mount the 2 local drives. And add ceph and moosefs on each node. [23:09] pytrade: another option to investigate would be openstack and juju [23:10] openstack seems to be too heavy! [23:10] I would not know where to start. [23:10] heh, I can understand that, one of the guides starts with "to run a fully HA openstack you'll need 28 machines..." [23:12] pytrade: maas sure looks cool, and I know juju is cool, but I've never been responsible for maintaining such a setup. [23:13] pytrade: you'll probably want to run the 'juju-core' version of juju, from their PPA; it's under active development, and cool new features are being added regularly. The new juju-core version is also being deployed inside canonical for production services, so support ought to be good. :)
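Circling back to the disappearing-crontab mystery solved above, a quick hedged way to check for the same self-inflicted wound, a stray tmpfs shadowing the spool directory:

    # Is anything mounted over /var/spool, and is it declared in fstab?
    mount | grep /var/spool
    grep /var/spool /etc/fstab

    # If a tmpfs line turns up: delete it from /etc/fstab, then unmount
    # to expose the real on-disk contents again (crontabs included):
    sudo umount /var/spool
    ls /var/spool/cron/crontabs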