[01:04] <TheLordOfTime> sarnold, what, there's crashes elsewhere in 13.10 too?
[01:04] <TheLordOfTime> because this person upgraded to the PPA, then downgraded, and now nginx is crashing
[01:04]  * TheLordOfTime thinks it's because weirdness
[02:15] <dustinspringman> question that I'm not sure how to format for relevant results on the goog.... have a 12.04LTS x64 server.. the route cache is acting weird.. whenever one of my remote sites VPN drops (usually the isp being down), when the vpn restores, i have to actually manually login to the 12.04 server and do an "ip route flush cache" to get it to be able to route across the tunnel again.. very lame, and is annoying as all hell.. 
[02:46] <dustinspringman>  may I restate my question?
[02:46] <dustinspringman> question that I'm not sure how to format for relevant results on the goog.... have a 12.04LTS x64 server.. the route cache is acting weird.. whenever one of my remote sites VPN drops (usually the isp being down), when the vpn restores, i have to actually manually login to the 12.04 server and do an "ip route flush cache" to get it to be able to route across the tunnel again.. very lame, and is annoying as all hell.. can someon
[02:55] <sarnold> dustinspringman: you're cut off at "can someon"
[02:56] <dustinspringman> sarnold: orly?
[02:56] <dustinspringman> can someone point me to what I should research to resolve this? I've tried numerous winded searches, but the results are all over the place... thanks in advance
[02:57] <dustinspringman> sarnold: any ideas?
[02:57] <dustinspringman> sarnold: I am happy to research a solution, but I don't know how to ask this question without putting it into literal speech..
[02:57] <sarnold> dustinspringman: no kidding, it wouldn't be easy to search for
[02:59] <dustinspringman> sarnold: pisser is, it worked flawlessly for over a year... then all the sudden.. pooched..
[03:00] <sarnold> dustinspringman: my understanding is that the NIC bounces, routes get dropped, and the VPN doesn't handle bouncing NICs well..
[03:02] <dustinspringman> sarnold: close, but no.. the NIC itself doesn't bounce.... Server->ethernet->main location router........vpn.......remote-site....
[03:03] <dustinspringman> whenever the VPN drops (usually isp failure, or power outage at the remote site or some similar issue) I lose routing capability only from this Ubuntu server to that remote-site...
[03:03] <sarnold> other machines re-establish the vpn fine? o_O
[03:04] <dustinspringman> **I lose routing because its down, obviously.. the challenge is that when the vpn restores (sometimes in minutes, or like today when the isp failed hard and it was down for 8hrs), the routing still never restores... the route cache is effectively not detecting the reachability of the remote-site...
[03:05] <dustinspringman> yes, because its a site-to-site vpn, all the hosts including the ubuntu-server use the main-site-router as the gateway.. the other machines and the main-site-router pick right back up with no special need.. but the ubuntu-server, every damn time, I have to do an "ip route flush cache" to get it to restore...
[03:06] <dustinspringman> its super annoying and causing a lot of false positives/headaches as this ubuntu server is running my xymon monitoring system... =/
[03:07] <dustinspringman> real messed up thing is, i have another xymon on ubuntu server x64 12.04 LTS (exact same OS) hosted on AWS that never has this issue... i think something in the route cache settings or ethernet/route config settings is pooched here...
[03:09] <sarnold> dustinspringman: time for me to quit.. if you get it sorted out, I'd be curious to know the solution :) good luck, have fun :)
[03:10] <dustinspringman> arrgh.. thanks man, will do. gnite
[05:03] <paulz111> Hey guys, is there a way to disable TLS compression system-wide in Ubuntu 12.04 server?
[05:04] <paulz111> on CentOS6, this can be obtained by running: export OPENSSL_NO_DEFAULT_ZLIB=1
[05:04] <paulz111> I've seen guides for disabling it in Apache and Nginx but nothing for Squid (which is what I'm running)
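For a daemon like Squid, the CentOS trick above translates to exporting the variable in whatever environment starts the process. Heavily hedged: `OPENSSL_NO_DEFAULT_ZLIB` is a Red Hat openssl patch, and nothing in this log establishes that Ubuntu 12.04's openssl honours it at all; the `/etc/default/squid3` path also assumes the init script sources that file. Worth verifying both before relying on this:

```sh
# /etc/default/squid3 -- assuming Ubuntu's init script sources this file
# before launching squid, the variable reaches the daemon's environment:
export OPENSSL_NO_DEFAULT_ZLIB=1

# For login sessions system-wide, /etc/environment is read by PAM instead
# (no "export" keyword in that file):
#   OPENSSL_NO_DEFAULT_ZLIB=1
```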
[06:41] <CharSet> 1 ubuntu server sharing a folder via samba | 2 clients: a) runs ubuntu and its locale is set to ca_ES@utf8 - b) runs crunchbang and its locale is set to ca_ES@UTF-8 | a) mounts shared folder correctly with no charset or codepage set to command - b) does not, even if i set all possible iocharsets to command...it never shows characters properly....WHAT CAN I DO?
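For the crunchbang client above, the usual knobs are the cifs `iocharset` mount option plus making sure the client kernel's UTF-8 NLS module is actually available; `iocharset=utf8` misbehaves quietly when `nls_utf8` is missing. A sketch with placeholder server, share, and mountpoint names (`codepage` is an option of the obsolete smbfs, not cifs):

```sh
# Load the NLS module first (check with: lsmod | grep nls_utf8):
#   modprobe nls_utf8
#
# /etc/fstab entry -- all names are placeholders:
//server/share  /mnt/share  cifs  iocharset=utf8,credentials=/etc/cifs-creds  0  0
```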
[09:21] <jamespage> sgran, all of the rc's are now in the havana updates pocket; the only bit that is missing is mongodb - just working on a build failure associated with that
[09:21] <jamespage> sgran, I'll stick out an announce on the openstack lists today
[09:26] <chemist^> hello everyone
[09:27] <chemist^> i have sort of a problem... probably something to do with my network configuration ..but anywayz... I get very slow ssh sessions on my ubuntu-server .. sometimes even stalling ... any common issues/fixes related?
[09:28] <chemist^> My ISP gave me a modem/router through which i am connected to the internet using a dynamic ip - directly (not NAT), and the server is connected to a different port on the modem with a static ip address...
[09:28] <chemist^> i have no LAN connection between the server and the client machine
[09:28] <chemist^> could that be the issue?
[09:29] <chemist^> connecting through ssh via the external static ip address of my server?
[09:29] <chemist^> also - the computer running the server is a slow 1.7 ghz celeron computer with 512mb ram
[09:31] <sgran> jamespage: \o/
[09:31] <chemist^> it used to work flawlessly when i had gentoo installed on it ... the issue started as i started using ubuntu-server 12.04 ... i've searched the forums with no luck finding similar problems..... ppl only complaining about slow ssh LOGIN ... but my entire session is slow as hell
[09:32] <chemist^> if i do a simple command like 'ps -aux' ... it shows half of the output, then stalls for about 30 secs. and then it shows the rest of it
[09:32] <chemist^> it's pretty annoying... not to mention file transfer - that's even slower
[09:35] <chemist^> i've read on the internet that it could be a bad switch ... but the switch worked fine when i had the previous configuration at home
[09:35] <chemist^> i use a switch as an extension because i have 2 short UTP cables instead of using 1 long ...
[09:36] <chemist^> using the switch in between as a hub...
[09:37] <chemist^> anyone?
[09:46] <chemist^> ok
[09:47] <chemist^> i just did a memory check command
[09:47] <chemist^> and i got a reply that only 6 MB of ram is free
[09:47] <chemist^> wtf
[09:47] <chemist^> why does ubuntu-server with no GUI eat up so much ram?
[09:48] <chemist^> are there that many processes that need to be running?
[09:48] <chemist^> i'm seriously thinking of switching back to gentoo and having a nervous breakdown every time i need to config smth, as i'm used to debian
[09:51] <hitsujiTMO> chemist^ can you please paste the output of: free -m
[09:54] <chemist^> /var/www$ free -m
[09:54] <chemist^>              total       used       free     shared    buffers     cached
[09:54] <chemist^> Mem:           495        488          7          0         33        328
[09:54] <chemist^> -/+ buffers/cache:        125        369
[09:54] <chemist^> Swap:          509          0        509
[09:56] <hitsujiTMO> you have 369 MB free not 7
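hitsujiTMO's reading can be checked mechanically: on this 12.04-era `free -m` layout, the headline "free" column counts kernel buffers and page cache as used, while the `-/+ buffers/cache` row shows memory actually available to applications. A small sketch against the exact numbers pasted above:

```shell
# Recreate the pasted "free -m" output and extract the figure that
# matters: the "free" column of the -/+ buffers/cache row, i.e. memory
# available once reclaimable buffers and page cache are discounted.
free_output='             total       used       free     shared    buffers     cached
Mem:           495        488          7          0         33        328
-/+ buffers/cache:        125        369
Swap:          509          0        509'

available=$(printf '%s\n' "$free_output" | awk '/buffers\/cache/ {print $4}')
echo "${available}MB actually available"   # prints "369MB actually available"
```

On later releases `free` replaced this row with a single "available" column, but the arithmetic being demonstrated is the same.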
[09:57] <chemist^> oh...
[09:57] <chemist^> so what's the problem
[09:57] <chemist^> why is my ssh connection so faulty
[09:57] <chemist^> maybe a problem with my server's 'hostname' ?
[09:58] <chemist^> i entered a random word as hostname when it asked me during the installation
[09:58] <mardraum> "faulty"?
[09:58] <chemist^> it's slow
[09:58] <hitsujiTMO> chemist^: try connecting via ip address
[09:58] <chemist^> sometimes it freezes
[09:59] <chemist^> hitsujiTMO i am
[09:59] <mardraum> how far away is the server?
[09:59] <chemist^> and when putty or the terminal with ssh session freezes
[09:59] <chemist^> my whole internet gets a little stalled at home
[09:59] <chemist^> as i said...it's probably a network config issue
[09:59] <mardraum> then you have packet loss or something
[09:59] <mardraum> nothing to do with ubuntu
[10:00] <chemist^> cause i have a server in the same room as my client machine
[10:00] <chemist^> but not connected through lan
[10:00] <chemist^> there is no lan
[10:00] <mardraum> you are making zero sense
[10:00] <hitsujiTMO> what do u mean there is no lan?
[10:00] <chemist^> ok wait .. i'll explain
[10:00] <chemist^> i have 5 ports on my ISP modem/router
[10:00] <chemist^> 4 of them are bridged and 1 is NAT
[10:01] <chemist^> i have 2 dynamic ips and one static provided by my ISP
[10:01] <chemist^> if i want to use either i need to be connected to the bridge port
[10:01] <chemist^> if i connect to the NAT port i get a local ip from the router
[10:02] <chemist^> if i connect to the bridge port i get an ip directly from my ISP
[10:02] <chemist^> so i have my client machine connected with automatic dhcp to the bridge port -> getting a dynamic ip from my ISP
[10:02] <chemist^> and the server connected to another bridge port with static ip settings
[10:02] <hitsujiTMO> chemist^ can you tracert to the server
[10:03] <chemist^> so when i'm connecting via ssh to my server i enter my static ip address
[10:03] <chemist^> how do you do that exactly? :)
[10:03] <hitsujiTMO> your client windows?
[10:03] <chemist^> no
[10:04] <chemist^> ubuntu
[10:04] <chemist^> desktop
[10:04] <hitsujiTMO> traceroute ip
[10:05] <chemist^> i've read somewhere on the internet that it might be a bad switch issue.... but i don't think so, cause my switch worked fine before reinstalling the system (although it had a different role that time)
[10:05] <chemist^> installing traceroute ... wait a sec.
[10:06] <chemist^> 'Name or service not known'
[10:07] <chemist^> Cannot handle "host" cmdline arg `xxx.xxx.xxx.xxx' on position 1 (argc 1)
[10:07] <chemist^> i x-ed out the ip address
[10:07] <chemist^> oops
[10:07] <chemist^> wrong ip...wait :D
[10:08] <chemist^> ok it's doing it now
[10:08] <chemist^> it got to 8
[10:08] <chemist^> and now just showing ***
[10:08] <chemist^> ***
[10:08] <chemist^> till 30
[10:09] <chemist^> hitsujiTMO :)
[10:09] <chemist^> wtf is going on here
[10:09] <hitsujiTMO> looks like you're going half way around the world to ssh to a machine a few metres away from you
[10:09] <chemist^> that is correct
[10:10] <chemist^> is that the cause for stalling? and freezing my entire internet connection even on the client-side
[10:10] <hitsujiTMO> can you XXX out your ips and post the output?
[10:11] <chemist^> i don't have a monitor connected to my server and i don't want to carry one in the other room everytime i need to make a change to the system
[10:11] <chemist^> i would really like to be able to do that via ssh
[10:11] <chemist^> hitsujiTMO i'll post it in private so i don't get kicked for flooding
[10:12] <hitsujiTMO> chemist^: paste.ubuntu.com
[10:12] <hitsujiTMO> use that
[10:13] <chemist^> hitsujiTMO here you go
[10:14] <chemist^> hitsujiTMO did you get my notice?
[10:14] <hitsujiTMO> yup
[10:14] <chemist^> ok
[10:14] <hitsujiTMO> looking now
[10:15] <hitsujiTMO> erm, are you using a vpn also?
[10:15] <chemist^> do you think that if i try to connect from anywhere else it would give me same problems?
[10:15] <chemist^> hitsujiTMO ammm... i don't think so... or if i do, not to my knowing...
[10:16] <hitsujiTMO> ok, your connection is coming from 'godaddy'
[10:16] <chemist^> what does that mean? :
[10:16] <chemist^> :)
[10:16] <hitsujiTMO> actually never mond
[10:16] <hitsujiTMO> mind*
[10:16] <chemist^> :P
[10:17] <chemist^> u know what...i'll try and connect via ssh with my mobile phone (mobile 3g internet) and do a simple command like ps -aux and see if it stalls as from my comp.
[10:18] <hitsujiTMO> yeah, just seems your isp sucks, it's routing packets all over europe before getting back to you
[10:18] <chemist^> yeah...
[10:18] <chemist^> they all suck
[10:18] <chemist^> :D
[10:18] <hitsujiTMO> prob hitting packet loss along the way
[10:18] <chemist^> have problems with isps all the time
[10:18] <hitsujiTMO> how many ethernet ports you got on the server?
[10:18] <chemist^> do you think maybe they could do smth about it?
[10:19] <chemist^> 2 ethernet cards
[10:19] <chemist^> only 1 in use
[10:19] <hitsujiTMO> buy a switch and connect your free port to that and ssh with that
[10:19] <chemist^> before...i had my server running as a router/firewall also... so i had used both at that time, with no issues whatsoever
[10:20] <chemist^> i have a switch which is in use now as an extension, as my cable was too short ;D i used 2 and a switch in between
[10:20] <chemist^> could that be the issue?
[10:20] <chemist^> it shouldn't...
[10:20] <hitsujiTMO> doubt it
[10:21] <hitsujiTMO> how did you configure the server as a router? with iptables?
[10:21] <chemist^> yes
[10:21] <chemist^> it worked well back then
[10:22] <chemist^> ok
[10:22] <chemist^> the response
[10:22] <chemist^> to ps -aux
[10:22] <chemist^> from my slow-connection mobile phone internet
[10:22] <chemist^> works flawlessly
[10:22] <chemist^> fast reply from the server
[10:22] <chemist^> with no stopping at the middle
[10:22] <chemist^> or stalling
[10:23] <chemist^> shit man... :/
[10:23] <chemist^> is there a way to fix this... other than connecting physically to my server?
[10:23] <hitsujiTMO> get a new isp would be the fix
[10:24] <sgran> chemist^: you have an mtu problem
[10:24] <chemist^> do you think that the idiot-technicians of my isp could fix this?
[10:24] <sgran> adjust the mtu of your uplink interface on the server to 1450
[10:24] <chemist^> hitsujiTMO i just switched to them :D
[10:24] <chemist^> they have fast internet
[10:24] <chemist^> 100 mbit
[10:24] <chemist^> optics
[10:25] <chemist^> sgran u sure that's the issue? ... why does my mobile phone communicate normally then? shouldn't the MTU affect the comm with the phone as well?
[10:25] <hitsujiTMO> chemist^ I'd certainly contact them to see if they can fix the issue
[10:26] <chemist^> ok i'll do that right away
[10:26] <sgran> path mtu is negotiated for each new connection.  Perhaps your phone connection has a lower mtu than the one that doesn't work, or perhaps pmtu discovery isn't broken between your phone and your server
[10:26] <sgran> who can say
[10:27] <sgran> but if a connection freezes when you're passing a large chunk of data back, but works for small data transfers
[10:27] <sgran> experience has shown it to be mtu
[10:27] <chemist^> i think that the problem is actually as hitsujiTMO said...the connection hopping half of europe before returning to my server
[10:27] <sgran> ok, you should fix that then :)
[10:27] <chemist^> sgran the problem is even with small data transfers
[10:28] <chemist^> the size of the data transfer actually does not change the stall-time
[10:28] <chemist^> or the actual connection freeze - sometimes
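sgran's MTU workaround, made persistent. A sketch only: the interface name `eth0` is assumed, the addresses are placeholders, and 1450 is the value suggested in the discussion rather than a measured path MTU (a quick one-off test is `ip link set dev eth0 mtu 1450` before committing it to config):

```sh
# /etc/network/interfaces fragment -- clamp the uplink MTU as suggested.
auto eth0
iface eth0 inet static
    address 203.0.113.10      # placeholder static IP from the ISP
    netmask 255.255.255.0
    gateway 203.0.113.1
    mtu 1450
```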
[10:32] <jamespage> zul, bug 1231982 is probably worth a poke pre-release
[10:32] <jamespage> looks like sucky upstream orig.tar.gz from our upstream
[10:35] <chemist^> hitsujiTMO
[10:35] <hitsujiTMO> yo
[10:35] <chemist^> do you think this could work....
[10:35] <chemist^> if i used
[10:35] <chemist^> a wireless router (not in use currently) to create a LAN, and use wireless to connect to the server locally?
[10:36] <chemist^> can i have 2 network connections running at the same time?
[10:36] <chemist^> is that even possible?
[10:36] <chemist^> i would connect the server with a cable to the router and my client computer through wifi
[10:37] <chemist^> or just use wifi on both computers
[10:37] <hitsujiTMO> chemist^ you can as long as the default gateway is set on one connection only
[10:37] <hitsujiTMO> and they are 2 different subnets ofc
[10:37] <chemist^> so if i connect to the wifi with my server i leave the gateway entry blank?
[10:37] <chemist^> or do i use automatic dhcp
[10:38] <hitsujiTMO> should work
[10:38] <chemist^> the router will not be connected to the internet
[10:38] <chemist^> just local
[10:38] <hitsujiTMO> use static, dhcp will give a gateway most likely
[10:38] <chemist^> ok
[10:38] <chemist^> i'll try that
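hitsujiTMO's two conditions (default gateway on one interface only, two distinct subnets) would look like this in `/etc/network/interfaces` on the server. A sketch with placeholder addresses; the interface names are assumptions:

```sh
# Internet-facing NIC keeps the default gateway; the second NIC joins
# the wifi router's LAN on a different subnet with *no* gateway line,
# so local ssh traffic never leaves the house.
auto eth0
iface eth0 inet static
    address 198.51.100.20     # ISP-assigned static IP (bridge port)
    netmask 255.255.255.0
    gateway 198.51.100.1      # the only default gateway

auto eth1
iface eth1 inet static
    address 192.168.50.2      # LAN-only, second subnet, no gateway
    netmask 255.255.255.0
```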
[10:38] <chemist^> now i must go pick up my GF at work and go eat smth
[10:39] <chemist^> i'll let you know later if u'll be online
[10:41] <jamespage> zul, adam_g, smoser: I reviewed all current cloud-archive bugs and poked things accordingly - nothing aside from the novnc issue above that I can see right now for Havana
[10:55] <jamespage> rbasak, around? I have an arm build failure for precise for golang which feels familiar but I can't remember the fix! - http://paste.ubuntu.com/6221908/
[10:55] <rbasak> jamespage: yes, looking
[10:55] <jamespage> ^^thats on armhf
[10:55] <jamespage> rbasak, thanks
[10:56] <rbasak> jamespage: that's on Saucy?
[10:56]  * rbasak looks for the previous bug
[10:56] <jamespage> rbasak, no - thats on 12.04
[10:57] <jamespage> but I think we saw the same bug on saucy - we are carrying a patch that fixes this on saucy but its not doing the magic on 12.04
[10:58] <rbasak> Ah. Yes, that'd be expected I think. Bug 1187722. dpkg-shlibdeps is making assumptions about the sf/hf-ness of the binary produced by golang toolchain since that toolchain wasn't using the ELF header flags that we were expecting and do with the gcc toolchain.
[10:59] <jamespage> rbasak, do we need an associated dpkg change as well?
[11:00] <jamespage> reading that bug it sounds like it
[11:00] <rbasak> jamespage: sort of, yes. We did make one. I think it's an impedance mismatch that could in theory be fixed either side.
[11:00] <rbasak> jamespage: I presume this is for the cloud-tools pocket and we'd prefer not to change dpkg there?
[11:01] <jamespage> rbasak, preferably yes
[11:01] <jamespage> and it is
[11:01] <jamespage> I need to backport golang for armhf for the juju team as well so this will block in both locations
[11:02] <rbasak> jamespage: davecheney is working on the fix upstream. He has done https://codereview.appspot.com/10171043 which I guess isn't complete but perhaps we can backport that?
[11:02] <rbasak> (if completed)
[11:03] <rbasak> Looks like someone else wrote it actually
[11:04]  * rbasak looks for the dpkg change
[11:06] <rbasak> jamespage: http://launchpadlibrarian.net/144462697/dpkg_1.16.10ubuntu2_1.16.10ubuntu3.diff.gz
[11:06] <rbasak> jamespage: do you have a built tree handy; could we see what "readelf -h" gives us?
[11:07] <rbasak> Well I suppose it would likely be the same as the Saucy build actually.
[11:11] <jamespage> rbasak, I would suspect so - but I don't have a handy built tree I'm afraid
[11:15] <rbasak> jamespage: looks like it from the saucy armhf binary. I wonder if the dpkg fix would be considered SRUable. What do you think?
[11:15] <rbasak> I guess that might change build behaviour on a wide variety of packages
[11:15] <rbasak> So maybe too risky
[11:16] <jamespage> rbasak, possibly - I don't really want to hold that in the cloud-archive particularly; I guess we could backport it in isolation and just use that as a build-dependency for the PPA's
[11:16] <jamespage> that way we leverage it during build but don't actually ship it for the CA
[11:17]  * rbasak wonders if there's some way to patch the build to get the same effect
[11:21] <rbasak> jamespage: would a modification to the golang package that is needed for Precise only be acceptable to the cloud-tools pocket?
[11:21] <jamespage> rbasak, yes - that's OK
[11:22] <rbasak> jamespage: I have two possible really horrible hacks in mind.
[11:22] <jamespage> rbasak, I'd buy anything right now if it works us around this problem
[11:23] <rbasak> jamespage: 1) modify the ELF binaries themselves, to manually give them the flags dpkg-shlibdeps is looking for.
[11:23] <rbasak> jamespage: 2) wrap readelf, to provide what dpkg-shlibdeps is looking for but only during the dpkg-shlibdeps run
[11:23] <rbasak> On armhf, we know it to be true, so if we make the forgery work only for armhf ELF binaries we know we're safe. It'd break cross-building but we don't care about that.
[11:25] <rbasak> Wrap readelf -A to call readelf -h first, and if it says 0x500402, then return VFP registers.
[11:25] <rbasak> Then modify PATH in the build process
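rbasak's readelf-wrapping hack can be sketched end to end. Everything here is hypothetical scaffolding around the idea in the log: a stub stands in for the real readelf so the sketch runs anywhere, and the flags value `0x500402` is taken verbatim from the conversation rather than verified against a real armhf binary:

```shell
# Self-contained demo of the proposed wrapper. In the real build the
# wrapper would shadow /usr/bin/readelf via PATH during the
# dpkg-shlibdeps run only.
workdir=$(mktemp -d)

# Stub "real" readelf: like the golang binaries under discussion, it
# reports ELF header flags for -h but no build attributes for -A.
cat > "$workdir/readelf.real" <<'EOF'
#!/bin/sh
[ "$1" = "-h" ] && echo '  Flags:                             0x500402, Version5 EABI'
exit 0
EOF
chmod +x "$workdir/readelf.real"

# The wrapper itself: on -A, peek at the ELF header first and, if the
# flags match the armhf value quoted in the log, forge the
# Tag_ABI_VFP_args attribute that dpkg-shlibdeps looks for.
cat > "$workdir/readelf" <<'EOF'
#!/bin/bash
REAL_READELF=$(dirname "$0")/readelf.real
if [ "$1" = "-A" ]; then
    flags=$("$REAL_READELF" -h "${!#}" | awk '/Flags:/ {print $2}')
    if [ "${flags%,}" = "0x500402" ]; then
        echo '  Tag_ABI_VFP_args: VFP registers'
        exit 0
    fi
fi
exec "$REAL_READELF" "$@"
EOF
chmod +x "$workdir/readelf"

"$workdir/readelf" -A some-golang-binary   # prints the forged VFP attribute
```

As rbasak notes later in the log, a real version should additionally confirm the file is actually an ARM ELF binary before forging anything.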
[11:33] <rbasak> jamespage: where can I get the ctools golang source that failed, please? I don't see it in https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-staging/+packages
[11:33] <jamespage> rbasak, its exactly what is in saucy right now
[11:34] <rbasak> OK
[11:47] <caribou> I need to open a bug against isc-dhcp-client available into the U.C.A, which package should I use ?
[11:48] <caribou> when installing MAAS 1.4* on precise, it installs isc-dhcp-client from U.C.A which depends on iproute2 which is unavailable in precise
[11:48] <caribou> hence it breaks the network if main interface is using dhcp
[11:48] <jamespage> caribou, please use ubuntu-bug - it will end up in the right place (cloud-archive project)
[11:48] <caribou> jamespage: ok, will do
[11:48] <jamespage> caribou, iproute2 should be in the cloud-archive
[11:48] <jamespage> cloud tools that is
[11:48]  * jamespage looks
[11:48] <caribou> jamespage: lemme check
[11:49] <jamespage> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/cloud-tools_versions.html
[11:49] <jamespage> caribou, shows on the report which comes direct for the archive
[11:50] <caribou> jamespage: well, I need to investigate this one further then
[11:50] <jamespage> caribou, OK - it only landed in the last 24 hrs
[11:51] <caribou> jamespage: but this is totally reproducible : I start on a pristine 12.04.03 VM, install maas+dhcp+dns, reboot and I no longer have network
[11:51] <caribou> jamespage: hmm, let me see which archive I'm using
[11:52] <caribou> jamespage: hmm, my VM was created before that, maybe that's why
[11:53] <jamespage> caribou, bug reports work saved then!
[11:55] <caribou> jamespage: indeed, works well now and isc-dhcp-client does install correctly
[11:56] <caribou> jamespage: but it _was_ a problem yesterday when I started
[11:56] <caribou> jamespage: thanks!
[11:56] <rbasak> jamespage: is golang a straight backport to precise or does it have any build-deps that needed backporting first? i.e. should I be able to reproduce this in a straight precise sbuild and, save for this bug, is that expected to work?
[11:58] <jamespage> rbasak, straight backport
[11:58] <rbasak> OK thanks.
[11:59] <rbasak> Doing a build now to get me my build tree. Then see if I can implement this hack.
[12:20] <Rasmus`> Does anyone in here happen to know a resource as to how to configure an ubuntu server to act as a broadband remote access server dealing with a DSLAM in a DSL environment?
[12:21] <mardraum> dealing with a DSLAM?
[12:21] <mardraum> you want to auth, eg RADIUS?
[12:22] <Rasmus`> Somewhat along those lines, yes. It's a lab setting where I already have a DSLAM and a server, just no idea how to make the two talk.
[12:22] <Rasmus`> Though it'd be lovely if it'd be just as easy as pointing the DSLAM to the server which just has to run RADIUS or something.
[12:23] <mardraum> well, are you looking for authentication or something else?
[12:25] <Rasmus`> I assume authentication. The thing is, I don't really know the technology that well. All I do know is that the DSLAM uses the BRAS to authenticate the users connecting to it, but I don't really know how exactly - and more importantly, how to configure that.
[12:51] <bananapie> Cups is giving the error '**** Unable to open the initial device, quitting.' but this error is not in the cups source code, anybody know which library has this error ?
[12:52] <sgran> I don't think it's that simple.  I think the BRAS is the ppp logical termination for the end users
[12:52] <sgran> something like http://www.klick.us/?page_id=492 looks about right
[12:53] <sgran> which was, incidentally, the second or third hit in a search for 'linux bras server'
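If the BRAS role is, as sgran says, the PPP termination point for end users, a lab setup on Ubuntu usually means rp-pppoe's `pppoe-server` with pppd handing authentication to RADIUS. A hedged sketch: the interface name, the IP range, and the radius plugin being installed and configured are all assumptions, not details from this log:

```sh
# /etc/ppp/pppoe-server-options -- hypothetical pppd options for a lab BRAS,
# launched with something like:
#   pppoe-server -I eth1 -L 10.0.0.1 -R 10.0.0.100
require-chap
lcp-echo-interval 10
lcp-echo-failure 2
ms-dns 10.0.0.1
plugin radius.so          # defer authentication to the RADIUS server
```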
[12:55] <koolhead17> jamespage: zul can i start testing the recent pkgs for doc?
[12:55] <zul> koolhead17:  sure
[12:56] <koolhead17> zul: so i will use the testing repo from cloud archive correct?
[12:56] <koolhead17> can you pastebin that link for me
[12:56] <jamespage> koolhead17, please do!
[12:56] <jamespage> koolhead17, no - use the actual cloud archive repository
[12:56] <zul> koolhead17:   https://wiki.ubuntu.com/ServerTeam/CloudArchive
[12:56] <jamespage> koolhead17, its in the email I just sent to list
[12:57] <jamespage> thats the one!
[12:57] <koolhead17> jamespage: just in time.
[13:01] <Rasmus`> sgran: Ah, thanks. Yeah, I searched for various combinations of "ubuntu" but never thought of just trying "linux".
[13:18] <rbasak> jamespage: I think I have something that works. But I think there's a catch: you'll need my workaround in every package that produces golang toolchain binaries. Like, I presume, juju.
[13:19] <rbasak> It's not too bad though. Just one file and a one line override_dh_shlibdeps in debian/rules.
[13:19] <jamespage> just juju right now
[13:19] <jamespage> sounds ugly but lets take a look
[13:24] <rbasak> jamespage: I have yet to do a full build test on this. But it seems to work in principle anyway. http://paste.ubuntu.com/6222380/
[13:25] <rbasak> For some reason I am very much amused by this hack.
[13:26] <jamespage> omg
[13:26] <CharSet> 1 ubuntu server sharing a folder via samba | 2 clients: a) runs ubuntu and its locale is set to ca_ES@utf8 - b) runs crunchbang and its locale is set to ca_ES@UTF-8 | a) mounts shared folder correctly with no charset or codepage set to command - b) does not, even if i set all possible iocharsets to command...it never shows characters properly....WHAT CAN I DO?
[13:27] <rbasak> Other options are: fix golang to actually produce the arch-specific information queried by "readelf -A". This is upstream bug http://code.google.com/p/go/issues/detail?id=5640 and we could backport a fix, but one isn't ready yet.
[13:28] <rbasak> Or, fix dpkg-shlibdeps to do something different. But that involves a dpkg SRU or carrying it in the cloud archive build PPA or something.
[13:28] <rbasak> I don't think there are any other options.
[13:30] <rbasak> I should also probably additionally check that it is an ARM binary actually
[13:30] <rbasak> As there's weird cross stuff happening in this build too
[13:33] <zul> jamespage:  building glance locally
[13:48] <caribou> Is it possible to use LXC containers as provisionned nodes on MAAS ?
[13:52] <zul> jamespage:  https://code.launchpad.net/~zulcss/glance/2013.2.rc2/+merge/190663
[13:53] <rbasak> caribou: directly? No, because MAAS expects to be able to run d-i or curtin on a node, and that requires a machine with a block device - if I understand what you're asking. virtual maas uses KVM, AIUI. And you can do LXC using juju's container support.
[13:53] <caribou> rbasak: yeah, I just found out about juju-local, which is mostly what I wanted to test
[13:53] <jamespage> zul, niggle
[13:54] <zul> jamespage:  bah
[13:54] <jamespage> rbasak, caribou: the MAAS provider in >= 1.14.1 can manage LXC containers on physical servers
[13:54] <jamespage> juju add-machine lxc:0 adds a new lxc container to machine 0
[13:54] <zul> jamespage:  fixed
[13:55] <jamespage> zul, my only concern is that we reference no bugs
[13:55] <jamespage> but hey - lets see how it goes
[13:59] <zul> jamespage:  ok uploaded
[15:12] <figgycity50> hello.
[15:12] <figgycity50> i need to know if my computer supports ubuntu server.
[15:12] <figgycity50> you see it's from 2004
[15:12] <figgycity50> or at least some time around that
[15:13] <mdeslaur> figgycity50: probably
[15:14] <figgycity50> celeron d processor?
[15:14] <figgycity50> worst processor ever
[15:14] <mdeslaur> figgycity50: boot a 32-bit desktop live cd on it
[15:14] <figgycity50> i know
[15:14] <figgycity50> that's what i am gonna do
[15:14] <figgycity50> i have no cds tho
[15:15] <figgycity50> i DO have a usb stick
[15:15] <figgycity50> mp3 to be exact
[15:15] <TheLordOfTime> livecd is interchangeable with "LiveUSB"
[15:15] <figgycity50> i have a sandisk but i dunno where it is
[15:15] <TheLordOfTime> but i don't think you can use your MP3 player as a LiveUSB
[15:15] <figgycity50> it has usb tho
[15:15] <figgycity50> and i can access the files
[15:16] <figgycity50> an alba 4gb
[15:16] <TheLordOfTime> doesn't mean it can actually handle being a LiveUSB
[15:16] <figgycity50> will it fit??
[15:16] <TheLordOfTime> fit? probably.  boot?  probably not
[15:16] <figgycity50> i'll see
[15:16] <figgycity50> it's got no uefi
[15:16] <figgycity50> and i can get into the bios settings
[15:16] <figgycity50> ahh
[15:16] <figgycity50> i see some dvds
[15:16] <TheLordOfTime> i never said uefi or the bios were the issue.
[15:17] <figgycity50> dvd-rws
[15:17] <TheLordOfTime> those work too :p
[15:17] <figgycity50> should i use those?
[15:17] <TheLordOfTime> i would, i don't trust MP3 players to be decent LiveUSBs
[15:17] <figgycity50> ok
[15:17] <figgycity50> iso nearly done..
[15:17] <TheLordOfTime> although i'm going to refine what mdeslaur said...
[15:17] <figgycity50> any decent iso burners
[15:17] <TheLordOfTime> and suggest an Lubuntu LiveCD
[15:17] <figgycity50> im using windows 8
[15:18] <figgycity50> i'm using server
[15:18] <TheLordOfTime> because Ubuntu 32bit Desktop LiveCD is ehhhh
[15:18] <TheLordOfTime> [13/10/11 11:14:38] <mdeslaur> figgycity50: boot a 32-bit desktop live cd on it
[15:18] <TheLordOfTime> there is no "server" LiveCD last i looked
[15:18] <figgycity50> because i'm gonna run minecraft server
[15:18] <figgycity50> there is
[15:18] <TheLordOfTime> you can run minecraft on a GUI server.
[15:18] <figgycity50> ubuntu.com/server
[15:18] <TheLordOfTime> which links to the server ISOs, which as I understand them...
[15:18] <TheLordOfTime> (1) don't come wiht a GUI
[15:18] <figgycity50> ik
[15:18] <TheLordOfTime> (2) don't come with a live environment
[15:18] <figgycity50> but i don't need a gui
[15:19] <Pici> 70
[15:19] <TheLordOfTime> (3) are the installer
[15:19] <figgycity50> yes
[15:19] <figgycity50> i will partition it from the xp
[15:19] <figgycity50> god
[15:19] <figgycity50> these cds are mixed up
[15:19]  * TheLordOfTime points at the enter button.  Don't constantly use it.
[15:19] <figgycity50> the cd-r cases have dvd-rws and the dvd-rws cases have cd-rs
[15:20] <figgycity50> weird right?
[15:20] <TheLordOfTime> and i have a date with a pot of coffee... back in 5 minutes
[15:21] <TheLordOfTime> (BTW, you don't need to run the ubuntu server edition to run a minecraft server, and in fact if you're new to the whole server thing I highly suggest you install Lubuntu, then work from the GUI terminal emulator to run the Minecraft server, if you're a newbie to the command line)
[15:21]  * TheLordOfTime doesn't know if you're a Linux CLI expert or not
[15:21] <TheLordOfTime> okay, now seriously, i need my coffee, back in a few
[15:22] <figgycity50> i am not a cli newbie
[15:22] <figgycity50> i know ls
[15:22] <figgycity50> wget, cd, rm, touch, nano, apt-get, apt-cache, aptitude
[15:22] <figgycity50> loads more
[15:22] <figgycity50> and definitely sudo
[15:22] <TheLordOfTime> again, the enter key.
[15:22] <TheLordOfTime> don't constantly use it.
[15:23] <TheLordOfTime> you can put more than one thought in a line.  :)
[15:23] <TheLordOfTime> ... grrrrrrrrr, stupid segfault crash bugs...
[15:23] <figgycity50> is 700mb enough
[15:24] <figgycity50> for ubuntu server install?
[15:24] <figgycity50> don't blame me for enter that time, i was putting in my cd
[15:25] <TheLordOfTime> 700MB actually is about 658 - 698 MB on CDs, and you might have enough space for the Ubuntu Server ISO to fit, but... you also might not depending on the exact size of the CD
[15:26] <figgycity50> weird
[15:26] <figgycity50> it says 0 bytes
[15:26] <figgycity50> oh winndows cant access the disk
[15:26] <figgycity50> disk dead, getting another
[15:27] <TheLordOfTime> http://www.ubuntu.com/download/desktop/burn-a-dvd-on-windows btw is your most helpful resource for burning the ISO to a disk
[15:28] <TheLordOfTime> as for your disk being dead: if all your disks return 0 byte size, that's an indication your CD/DVD reader/writer is broken
[15:28] <TheLordOfTime> 'course that page explains it for win7 i dunno if win8 still has the same functionality
[15:28] <TheLordOfTime> because windows 8 is worse than win7
[15:30] <figgycity50> i know windo
[15:30] <figgycity50> burning
[15:30] <figgycity50> windows 8 has a built in iso burner
[15:31] <figgycity50> and i found a 4.7gb dvd-rw
[15:31] <figgycity50> and the disk is reading
[15:39] <rbasak> jamespage: as expected juju-core needed the same hack, but builds with my hacked golang. Next steps? We need to test the produced binaries on both Intel and ARM I think. Do you think it's OK to do that from the staging PPA? And what are your thoughts on the hack?
[15:39]  * rbasak wonders what tests we have for juju-core anyway
[15:40]  * rbasak finds the dep8 test
[15:45] <figgycity50> TheLordOfTime?
[15:47] <figgycity50> this burning is becoming a pain. why? ITS NOT WORKING
[15:49] <figgycity50> anyone got instructions for cd burning?
[15:58] <Breetai> I want to set up a new email server and test it out. Is there some spam filter proxy or SMTP proxy that forwards email to 2 backend servers, or filters by user? I.e. all mail for @domain.com goes to server1 except bob@domain.com goes to server2?
[15:59] <rbasak> Breetai: doing that is more complicated than it seems, due to the need to avoid spam backscatter.
[16:00] <rbasak> Breetai: for experimentation it's easier to use a test domain or a subdomain.
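[Editor's note: for the split-by-user half of the question, Postfix's transport table is the usual mechanism. A hedged sketch follows — the bracketed server names are placeholders, a real setup also needs `transport_maps` pointed at the table in main.cf plus a `postmap` run, and rbasak's backscatter warning still applies.]

```shell
# Write a sample transport table: one per-recipient override for bob,
# then a catch-all entry for the rest of the domain.
cat > transport.sample <<'EOF'
bob@domain.com   smtp:[server2.example.internal]
domain.com       smtp:[server1.example.internal]
EOF

# On a real box you would install and index it, roughly:
#   cp transport.sample /etc/postfix/transport
#   postmap /etc/postfix/transport
cat transport.sample
```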
[16:02] <Breetai> rbasak: with setting up postfix, dovecot, opendkim, spamd, z-push, postgrey, and roundcube, I thought it might be easier to set it up for the domain I will want to use it for, instead of for a subdomain and then having to change the configs for all of those subsystems later.
[16:03] <rbasak> Breetai: you should parameterise the domain to avoid that problem.
[16:03] <Breetai> rbasak: any docs you can point me to on how to do that?
[16:04] <Breetai> I have never heard of "parameterise the domain" before
[16:04] <rbasak> Breetai: heard of "devops"? For such a complex set of pieces you certainly want to script and automate the deployment process.
[16:05] <Breetai> rbasak: heard of yes, used no.
[16:06] <Breetai> rbasak: Essentially I should use a deployment script. more work to set up, but if I set up a script that can deploy "domain.com" I can change "domain.com" in 1 place to "mycompany.com" and run it, and it will be bulletproof correct.
[16:07] <rbasak> Breetai: right
[16:08] <Breetai> rbasak: Since I am doing this on a lxc container, on top of zfs, tearing it down and doing it again should be very simple
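[Editor's note: rbasak's "parameterise the domain" suggestion can be as small as one shell variable driving every generated config file. A minimal sketch — the file name and settings below are illustrative, not a complete Postfix configuration.]

```shell
#!/bin/sh
# The one place to change when domain.com becomes mycompany.com:
DOMAIN="domain.com"

# Generate a fragment of postfix's main.cf from the parameter; the same
# variable would feed dovecot, opendkim, etc. in a fuller script.
cat > main.cf.fragment <<EOF
mydomain = $DOMAIN
myhostname = mail.$DOMAIN
virtual_mailbox_domains = $DOMAIN
EOF
cat main.cf.fragment
```

Re-running with `DOMAIN="mycompany.com"` regenerates every file consistently, which is the "change it in 1 place" property Breetai described.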
[16:21] <jamespage> rbasak, hmm
[16:21] <jamespage> rbasak, I'm not over the moon about it but other than backport a patch to dpkg for the cloud-tools pocket
[16:21] <jamespage> I can't think of another way around it
[16:28] <jamespage> rbasak, what about the other approach? backporting the fix to dh_shlibdeps to work correctly for the cloud-tools pocket?
[16:28] <jamespage> the scope of potential impact is quite limited
[16:29] <rbasak> jamespage: if it was restricted to the cloud-tools pocket, then I agree the regression risk is limited.
[16:29] <rbasak> It's a relatively trivial backport, too.
[16:29] <jamespage> rbasak, well I think that's a more sensible approach
[16:29] <jamespage> rbasak, how about we sort that out Monday :-)
[16:29] <rbasak> Sure
[16:29] <jamespage> unless you are gunning for a friday evening of hacking....
[16:29] <jamespage> :-)
[16:30] <rbasak> The disadvantage of that approach is that it's more complicated: it suddenly brings in the need to care about the environment you're building and testing in, and whether you have that backport in your build deps or not
[16:33] <jamespage> rbasak, I have a helper wrapper for sbuild that configured the build to use the staging ppa's
[16:34] <rbasak> jamespage: OK, if you're fine with that then we can backport the dpkg fix on Monday. It's trivial.
[16:35] <rbasak> And then golang/juju-core should just build fine
[16:35] <rbasak> (given that my hack works in its current form)
[17:24] <arrrghhh> greetings
[17:24] <arrrghhh> I seem to have lost my crontab - when I -e or -l it, there is nothing... yet cron still runs the jobs I had in there previously.
[17:37] <irv> what files can i safely remove from the /boot partition manually? the fact that it's full has prevented me from running apt-get remove or purge for the old kernels
[17:37] <irv> so i need to manually remove one of them or something to free up enough to properly remove the rest
[17:40] <hitsujiTMO> irv maybe delete the oldest initrd.img- in /boot ... i'd also touch it before purging
[17:42] <irv> hitsujiTMO: moved it to another drive, still not enough space.. gonna move a few more of 'em
[17:42] <irv> thx
[17:44] <irv> what should i do after apt-get -f install
[17:44] <irv> like to properly remove those old kernels
[17:44] <irv> or will it recognize that i manually moved the initrd files?
[17:45] <irv> or do i need to move them back one by one and remove the corresponding kernel as they're back
[17:45] <hitsujiTMO> apt-get purge the ones that you don't need that are still there, then move back those files and apt-get purge their respective packages
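[Editor's note: the selection logic behind this advice — never touch the running kernel, keep the newest as a spare, purge the rest — can be sketched as below. The package list and version numbers are simulated; on a real system you would feed in `dpkg -l | awk '/^ii  linux-image-[0-9]/{print $2}'` and `uname -r` instead.]

```shell
#!/bin/sh
# Simulated inputs (illustrative version numbers, not from a real box):
running="3.2.0-54-generic"
installed="linux-image-3.2.0-39-generic
linux-image-3.2.0-52-generic
linux-image-3.2.0-54-generic"

# Drop the running kernel, sort by version, and hold back the newest
# remaining one as a spare; whatever prints is a purge candidate.
printf '%s\n' "$installed" \
  | grep -v "$running" \
  | sort -V \
  | head -n -1
```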
[17:48] <arrrghhh> any ideas on how to get my crontab back?  the jobs are definitely still running, I can see their effects - and the results in syslog...
[17:48] <irv> hitsujiTMO: now i'm getting that linux-server depends on: linux-image-server = 3.2.0.52.62, but 3.2.0.54.64 is installed
[17:48] <irv> and linux-headers-server same
[17:49] <irv> is there a way i can tell it to manually install those versions?
[17:49] <irv> cause -f is still failing
[17:49] <irv> even with 102mb free on /boot
[17:49] <irv> heh
[17:50] <sarnold> irv: try dpkg --purge
[17:51] <irv> sarnold: which packages? like remove the newer one all together? or
[17:51] <irv> or which was that a response to :p
[17:51] <sarnold> irv: I'd first try one of the ones you've already removed some of the files from; make the package database happy and it won't cost you any more backup kernels :)
[17:52] <irv> but like which actual package am i telling it to purge? linux-server ?
[17:52] <irv> or the just the initrd bits
[17:53] <sarnold> irv: ah, one of the linux-image-server-3.2.mumble...
[17:53] <sarnold> irv: make sure you don't delete the current running kernel, and it'd be best to leave the newest installed kernel, and make sure to keep at least two kernels :)
[17:53] <irv> i only have linux-image-3.2.0-39-generic and a bunch more and then one called linux-image-server
[17:53] <irv> no other linux-image-server-xx
[17:54] <irv> so just some of the generic ones, ya?
[17:54] <sarnold> irv: can you pastebin your dpkg -l | grep ^linux  output?  (pastebinit is a nice tool for automating pastebins..)
[17:54] <hitsujiTMO> linux-image- should be enough
[17:54] <irv> just removed a few of those
[17:54] <irv> sec
[17:54] <sarnold> irv: ah, sure, those have changed names enough that I don't know them all any longer, hehe
[17:55] <irv> :] it's goin'
[17:55] <sarnold> yay
[17:55] <irv> how do i run this pastebinit with the cmd
[17:55] <irv> just pipe into it?
[17:55] <sarnold> irv: dpkg -l | grep ^linux | pastebinit
[17:55] <hitsujiTMO> pipe it
[17:55] <irv> i love linux
[17:55] <sarnold> oops, that won't work, my fault..
[17:55] <irv> http://paste.ubuntu.com/6223543
[17:56] <sarnold> dpkg -l | grep "^ii  linux"
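[Editor's note: the reason the first grep matched nothing is that `dpkg -l` puts the state flags ("ii" installed, "rc" removed but config remains) in the first column, so package names never start the line. A quick demo on simulated output:]

```shell
# Two simulated lines of `dpkg -l` output (version/description columns abridged):
sample='ii  linux-image-3.2.0-54-generic  3.2.0-54.82  Linux kernel image
rc  linux-image-3.2.0-39-generic  3.2.0-39.62  Linux kernel image'

# Anchoring on the package name misses everything -- count is 0:
printf '%s\n' "$sample" | grep -c '^linux' || true

# Anchoring on the installed-state marker finds only installed packages:
printf '%s\n' "$sample" | grep '^ii  linux'
```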
[17:56] <sarnold> hoooray you fixed my stupid :)
[17:56] <irv> :p
[17:56] <irv> k now running autoremove
[17:57] <irv> hoping it works
[17:57] <irv> 1086mb to be freed
[17:57] <sarnold> woo
[17:57] <irv> gah, how big should i be making the /boot partitions?
[17:57] <sarnold> irv: you can also clean up the linux-headers-* packages once you've removed their linux-image-* package...
[17:57] <irv> assuming i would go in and update/remove old kernels once every 6 months or so
[17:57] <irv> i guess it should never grow that big if i remove the old ones as i'm updating them
[17:57] <irv> sarnold: awesome, thx
[17:58] <irv> will apt-get autoremove not take care of those too?
[17:58] <irv> woohoo, upgrading is working now =)
[17:59] <sarnold> irv: somewhere along the way I think apt just takes care of it without hassle.. again, more details I've forgotten :(
[17:59] <irv> that dpkg --purge the linux-images worked wonders
[17:59] <sarnold> irv: I've got six installed kernels now, /boot takes 247 megabytes.. and I haven't done much manual maintenance of /boot data in ages...
[17:59] <irv> how big is a normal /boot partition on the server version?
[17:59] <irv> i think mine is 200mb or so
[17:59] <irv> the partition
[18:00] <sarnold> irv: .. but I don't have a separate /boot on this system, so it might go way over that amount of space while doing upgrades and so forth
[18:00] <irv> ahh, gotcha
[18:02] <TJ-> sarnold: I usually reserve 500MB for /boot/ but sometimes use 100MB on limited devices - just have to keep the upgrades controlled
[18:02] <TJ-> oops... that was for irv! ... my meds taking effect :D
[18:03] <hitsujiTMO> 512MB is pretty standard
[18:04] <sarnold> I think when I made a separate /boot I used 256; that was a few years back, today I'd probably go with 512 as well. that gives some room to breathe :)
[18:05] <TJ-> Yeah, those upgrades come thick and fast
[18:06] <irv> TJ-: lol, thanks
[18:06] <ikonia> considering distros should only keep 3 kernels, max, 512MB is crazy
[18:06] <irv> how about this, is there a way now to increase from 200mb to 500 or so
[18:06] <irv> it's a virtual server, and i have plenty of storage
[18:06] <ikonia> I don't think I've ever seen a properly managed distro go beyond 120MB for /boot
[18:06] <ikonia> 512MB for /boot is not standard
[18:07] <sarnold> ikonia: only three? no thanks, I've seen problems require way more than three kernels to troubleshoot and find solutions...
[18:07] <TJ-> irv: Sure, resize the volume that contains it
[18:07] <irv> i suppose it shouldn't increase now that i've fixed that heh
[18:07] <ikonia> sarnold: very doubtful that is the norm
[18:07] <sarnold> ikonia: indeed
[18:07] <irv> i had just been upgrading without removing any old ones
[18:07] <irv> cause i got lazy
[18:07] <irv> :p
[18:07] <ikonia> irv: it should auto remove
[18:07] <sarnold> ikonia: but the day you need it, you don't want to be swearing at a tiny /boot :)
[18:08] <TJ-> Just wear even tinier socks
[18:28] <adam_g> jamespage,  any idea why keystone installation would fail to start the service post-inst with: invoke-rc.d: policy-rc.d denied execution of start. ?
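[Editor's note: `invoke-rc.d` consults `/usr/sbin/policy-rc.d` before starting anything, and exit status 101 means "action forbidden by policy"; chroots, containers, and image-build environments often install exactly that guard, which would explain a keystone postinst being denied like this. A sketch of the conventional deny-all policy script, written to a sample file here rather than the real path:]

```shell
#!/bin/sh
# The deny-all policy script commonly dropped into chroots/build images:
cat > policy-rc.d.sample <<'EOF'
#!/bin/sh
# 101 = "action forbidden by policy" in invoke-rc.d's convention
exit 101
EOF

# Demonstrate the exit status invoke-rc.d would see:
status=0
sh policy-rc.d.sample || status=$?
echo "policy exit status: $status"
```

Removing (or neutering) that file on the affected machine lets `invoke-rc.d` start services again.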
[18:36] <arrrghhh> so... cron?  I can't get crontab to load my jobs, although they still run
[18:36] <arrrghhh> crontab -e shows nothing, -l says no crontab...
[18:40] <sarnold> arrrghhh: check also /etc/cron* files, the jobs might be defined there
[18:43] <arrrghhh> I found some daily jobs
[18:44] <arrrghhh> but there's other jobs I had placed in root's crontab
[18:44] <arrrghhh> I usually just sudo crontab -e
[18:44] <arrrghhh> and it's empty :(
[18:44] <arrrghhh> I spose something changed, but I am not sure what
[18:45] <sarnold> arrrghhh: check out /var/spool/cron/ -- you might find what you need in there
[18:45] <arrrghhh> hm ok
[18:56] <jeeves_moss> I just installed Splunk and the windows forwarders on the windows servers, and all I'm getting is this from the windows boxes http://pastebin.com/tYBExJw2   What am I doing wrong?
[18:58] <jarkinox> hello
[19:06] <zul> hallyn:  1.1.3 will be available for testing in https://launchpad.net/~zulcss/+archive/libvirt-testing before pushing to ubuntu-virt
[21:10] <ewook> Anyone awake?
[21:11] <ewook> Got a weird question... If we disregard that file permissions are acted upon by the lovely bits set, can we, instead of performing the lookup locally, use any modules for "remote management" of permissions?
[21:23] <sarnold> ewook: what are you trying to accomplish?
[21:40] <ewook> sarnold: Not me. Question from a friend. I think the goal is to have a central point for file access control.
[21:40] <ewook> I
[21:40] <arrrghhh> sarnold, thar?  I just have a 'crontabs' folder in the /var/spool/cron folder...
[21:40] <ewook>  am sorry for a badly formatted question, but I am not sure how to go about it at all..
[21:41] <ewook> arrrghhh: cron lives in /etc/[cron.d / cron.daily / etc etc etc]/ . Whata
[21:41] <sarnold> arrrghhh: anything under that?
[21:41] <ewook> what is the issue
[21:42] <sarnold> ewook: arrrghhh's cronjobs get executed but he can't find them with crontab -l or crontab -e   -- a bit confusing :) hehe
[21:42] <ewook> sarnold: aah. yeah, scrolled up :p.
[21:42] <arrrghhh> it's really confusing, I've never seen it happen.  sarnold I had to be root, but it's empty
[21:43] <arrrghhh> I can see in the syslog the jobs are running... and some affect the system in ways that are pretty obvious lol.  so they are still workin...
[21:45] <ewook> arrrghhh: what is in /etc/crontab ?
[21:46] <arrrghhh> there's my system crontab stuff
[21:46] <arrrghhh> let me pastebin
[21:46] <ewook> /var/spool/cron/crontabs is like sarnold said, where
[21:46] <ewook> stuff is saved.
[21:46] <ewook> darnit, sorry for breaking lines, new keyboard.
[21:46] <arrrghhh> http://hastebin.com/ladowubono.md
[21:46] <ewook> exactly.
[21:47] <ewook> that points to the /etc/cron.
[21:47] <ewook> ... /etc/cron.stuff
[21:47] <arrrghhh> and I see like my trim job in /etc/cron.daily
[21:47] <arrrghhh> but there's several lines I have in the regular ole crontab for root
[21:48] <ewook> and the /var/spool/cron/crontabs contained nothing?
[21:48] <arrrghhh> empty...
[21:48] <ewook> O_o
[21:49] <arrrghhh> I've done a few updates recently, but nothing major
[21:49] <arrrghhh> 12.04
[21:49] <arrrghhh> .3
[21:49] <arrrghhh> /var/spool is on tmpfs?
[21:49] <arrrghhh> is that normal?
[21:50] <ewook> no
[21:50] <ewook> or...
[21:50] <ewook> wait
[21:50] <ewook> well, not on my system :p.
[21:50] <arrrghhh> can you df to make sure I'm not crazy
[21:51] <ewook> I only do /run on tmpfs.
[21:51] <sarnold> my /var/spool is on / -- dunno if that's a good idea for a server but seemed fine for a laptop
[21:51] <arrrghhh> http://hastebin.com/dixagigagu.hs
[21:51] <arrrghhh> those are all the weird things I have not mounted
[21:51] <arrrghhh> tmp and /var/tmp make sense
[21:51] <arrrghhh> but /var/spool on tmpfs... does not
[21:52] <ewook> sarnold: have you seen var being mounted on tmpfs?
[21:52] <ewook> I don´t think it is standard.
[21:52] <ewook> might be the reason that you cannot see any content on crontabs spool.
[21:53] <ewook> is it in your fstab?
[21:53] <arrrghhh> let me look
[21:53] <arrrghhh> ohgod
[21:53] <sarnold> wow, that's ... odd.
[21:53] <arrrghhh> was I drunk and hacking my fstab?
[21:53] <sarnold> /var/spool/mail and so forth?
[21:53] <arrrghhh> holdplease
[21:54] <arrrghhh> http://hastebin.com/muyisagomo.vala
[21:54] <arrrghhh> this is at the bottom of my fstab
[21:54] <arrrghhh> wat have I done.  removing it...
[21:54] <ewook> mkay. stickybits good. kill off the var/spool :p.
[21:54] <sarnold> arrrghhh: hey, maybe if you umount /var/spool, you'll get all your data back on the underlying /var/spool directory... including the crontabs :)
[21:54] <ewook> and just unmount it.
[21:54] <ewook> sarnold: bingo ;)
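[Editor's note: the detective work above condenses to a one-liner — field 3 of fstab is the filesystem type, so awk can list every tmpfs mount point. Shown here on sample lines patterned after arrrghhh's paste; on a real system you would read `/etc/fstab` instead of the here-string.]

```shell
# Sample fstab lines (illustrative, mirroring the pasted fstab tail):
fstab='tmpfs /tmp       tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/tmp   tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0'

# Print the mount point of every tmpfs entry; /var/spool is the one that
# shadowed root's crontab in /var/spool/cron/crontabs until it was umounted.
printf '%s\n' "$fstab" | awk '$3 == "tmpfs" { print $2 }'
```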
[21:55] <arrrghhh> yay
[21:55] <arrrghhh> that was interesting
[21:55] <ewook> tmpfs
[21:55] <sarnold> no kidding, most interesting thing in here in a long time ;)
[21:56] <ewook> darn keyboard!
[21:56] <sarnold> ewook: haha, man, good luck. :)
[21:56] <ewook> tmpfs on spool is a new one for me at least :p
[21:56] <arrrghhh> as usual, a self-inflicted wound.  thx for the help :)
[21:56] <ewook> sarnold: yeah.. right shift is smaller, and "`" key is moved. thus, hitting enter...
[21:57] <ewook> arrrghhh: good catch =). Remember  why you tmpfs´ed spool?
[21:57] <arrrghhh> well I know there were some things I was doing to try and improve build times
[21:57] <ewook> wait.. it isn´t moved. it is gone.
[21:57] <arrrghhh> I don't remember editing fstab, but hey
[21:57] <ewook> hahahah
[21:57] <ewook> ouchie.
[21:57] <arrrghhh> there might have been some scotch involved
[21:58] <ewook> done that... that is how I confed my postfix last time......
[21:58] <ewook> works like a charm, but cannot remember... well, much.
[21:58] <arrrghhh> lol
[21:58] <arrrghhh> but it works, who cares
[21:59] <ewook> grabbed the conf, so yeah ;).
[23:08] <pytrade> I've been updating my cluster of servers (about 10 nodes) via a disk image.
[23:08] <pytrade> Should I be looking at a combination of maas and juju? Has anyone here used it?
[23:09] <pytrade> If not that, are there other tools which work well these days? I need the tool to update the hostname on each node, mount the 2 local drives, and add ceph and moosefs on each node.
[23:09] <sarnold> pytrade: another option to investigate would be openstack and juju
[23:10] <pytrade> openstack seems to be too heavy!
[23:10] <pytrade> I would not know where to start.
[23:10] <sarnold> heh, I can understand that, one of the guides starts with "to run a fully HA openstack you'll need 28 machines..."
[23:12] <sarnold> pytrade: maas sure looks cool, and I know juju is cool, but I've never been responsible for maintaining such a setup.
[23:13] <sarnold> pytrade: you'll probably want to run the 'juju-core' version of juju, from their PPA; it's under active development, and cool new features are being added regularly. The new juju-core version is also being deployed inside canonical for production services, so support ought to be good. :)