[00:06] <harushimo> I didn't get a chance to install my gui
[00:06] <harushimo> I was wondering what is a good lightweight gui
[00:06] <harushimo> I was debating between openbox or fluxbox
[00:16] <stgraber> SpamapS: thanks!
[00:32] <techquila> hi all.. can someone offer any solution to this problem for me please: http://ubuntuforums.org/showthread.php?t=1990842
[02:18] <harushimo> someone told me two good sites for maas
[02:18] <harushimo> what was it
[02:18] <twb> !log
[02:18] <harushimo> !maas
[02:19] <bigjools> maas.ubuntu.com
[02:53] <Fidelix> Hello. I'm having a serious problem here on my server. My USB drive is not appearing, and dmesg has this: usb 3-1: new full speed USB device number 5 using ohci_hcd
[02:53] <Fidelix> Then: usb 3-1: device descriptor read/8, error -110
[02:53] <Fidelix> Can someone, PLEASE, help me fix this? I tried to reboot the server twice. One with "init 6" and another one with shutdown -r now
[02:54] <lifeless> does the drive work on a different machine?
[02:54] <Fidelix> lifeless, I don't know. I can't try it in another machine, it's all remote
[02:55] <lifeless> the symptoms you are reporting make me think 'hardware failure'
[02:57] <Fidelix> lifeless, you mean, physical damage?
[02:57] <Fidelix> It was working normally before
[02:58] <Fidelix> Then I was rsync'ing over 1TB of files, and in 60% it suddenly stopped
[03:03] <lifeless> yes, physical issue
[03:03] <lifeless> it's failing to read from the drive, I believe
[03:04] <twb> Try plugging it back in?
[03:04] <Fidelix> twb, and how do I do that?
[03:04] <Fidelix> I have no physical access to it
[03:05] <twb> Call the NOC monkey
[03:11] <JoeCoder> I'm using openssl as part of an installation script, but I can't find a way to make it unattended.
[03:12] <JoeCoder> in generating X.509 certs, it always asks me for country, city, etc.
[03:12] <twb> JoeCoder: because you need to mash your head on the keyboard to generate entropy?
[03:12] <twb> Oh that.  Just use certtool, it's much easier to get the hang of
[03:12] <JoeCoder> nope;  I'm actually not sure where it's getting entropy from
[03:12] <JoeCoder> oh, thanks.  I'll take a look
[03:12] <twb> JoeCoder: it gets it from headdesking
[03:13] <JoeCoder> in 12.04, it doesn't ask me for any entropy.
[03:13] <twb> Seriously I built some live PXE images of ubuntu, and they would hang during boot until you mashed the keyboard spastically
[03:13] <twb> because something during boot was generating keys (maybe SSH) and there was no entropy on the freshly booted system
[03:13] <twb> EPIC symptoms
[03:15] <JoeCoder> I don't really know what I'm doing or what the various crt, .pem, etc. files do (but I understand the concepts of private key cryptography), and openssl is already working well, so I'm nervous about switching.
[03:15] <JoeCoder> the tutorial I'm using has me making about 7 different files.
[03:15] <twb> The file extension doesn't really matter
[03:16] <twb> Are you familiar with how key-based auth works in SSH, or in GPG?
[03:16] <JoeCoder> I understand that the client is given a public key, it encrypts the message, and the server uses the private key to decrypt it.
[03:16] <twb> http://paste.debian.net/172286/ are some notes I made early on when learning TLS
[03:16] <JoeCoder> I'm using startssl as an authority
[03:19] <JoeCoder> ok, I follow your notes, but I don't understand why I need 7 different key files
[03:19] <JoeCoder> http://lowtek.ca/roo/2012/ubuntu-apache2-trusted-ssl-certificate-from-startssl/ is the tutorial I used.
[03:19] <JoeCoder> openssl gives me 2 files, I give one to startssl, and it gives me back 3 in return.
[03:19] <JoeCoder> then 2 more files are created by concatenating those first 5 together in various ways.
[03:20] <JoeCoder> startssl gives me ssl.crt, sub.class1.server.ca.pem, and ca.pem
[03:20] <JoeCoder> oh well, this is a side-rant and isn't as important.
[03:21] <JoeCoder> I'll take a  look at certtool
[03:21] <JoeCoder> if that fails, is there a general tool I can use to provide input to programs that ask questions?
[03:22] <JoeCoder> since I don't know what I'm doing, and openssl is already working well, that would be a faster route for me.
[03:23] <twb> Well there is always a private key and a public key.
[03:24] <twb> In TLS the public key is usually embedded in the cert
[03:24] <twb> A CA cert is a key signed by itself.  verisign et al keys are special only because people include them in their default trust list.
[03:25] <JoeCoder> that's part of the browser.
[03:25] <twb> When you want someone to sign your key, you give them a Certificate Signing Request (CSR).  That is basically your public key plus a note that says "please sign this"
[03:26] <JoeCoder> yep, I give the .csr file from openssl to startssl
[03:26] <JoeCoder> but I didn't know what csr stood for
[03:26] <twb> They take that and their CA private key, and sign your public key and send you back a cert
[03:27] <twb> So in total, you should have a private key, a CSR and a cert, and they should have a private key and a CA cert.  Not sure how you get to 7 files.
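[Editor's note: the private key → CSR → signed-cert round trip twb just outlined can be sketched end-to-end with openssl. Everything here (filenames, CN values, lifetimes) is illustrative, not the StartSSL flow itself:]

```shell
# 1. The CA side: a private key plus a self-signed cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Toy CA' \
    -keyout ca.key -out ca.crt -days 365

# 2. Your side: a private key and a CSR (your public key plus "please sign this").
openssl req -newkey rsa:2048 -nodes -subj '/CN=www.example.net' \
    -keyout server.key -out server.csr

# 3. Their side: sign the CSR with the CA key, hand back a cert.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -set_serial 1 -out server.crt -days 365

# The cert should now verify against the CA.
openssl verify -CAfile ca.crt server.crt
```

[That accounts for exactly the files twb lists: server.key, server.csr, server.crt on one side; ca.key, ca.crt on the other.]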
[03:27] <JoeCoder> startssl gives me ssl.crt.  But they also give me sub.class1.server.ca.pem and ca.pem
[03:27] <JoeCoder> so what are those .pem files?
[03:27] <patdk-lap> well, the csr is really just a temp file
[03:27] <JoeCoder> yeah
[03:27] <patdk-lap> you need all those
[03:27] <patdk-lap> you have your certificate, but you also need that certificate chain
[03:28] <patdk-lap> without that chain, a user can't trace back the trust path, back to the root certificate they trust
[03:28] <JoeCoder> those two pem files are concatenated together into the chain file.
[03:29] <JoeCoder> and that's given to apache as part of the SSLCertificateChainFile configuration setting.
[03:29] <patdk-lap> yep
[03:29] <patdk-lap> the ca.pem and sub*.pem
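[Editor's note: what patdk-lap describes, concatenating the two .pem files into the chain file Apache wants, is a single cat, with order mattering. Placeholder file contents are used here so the sketch runs standalone; the real inputs are the files StartSSL sends back:]

```shell
# Placeholder contents stand in for the real files from StartSSL.
printf '%s\n' 'intermediate (sub.class1) cert goes here' > sub.class1.server.ca.pem
printf '%s\n' 'StartSSL root cert goes here' > ca.pem

# Build the chain file: intermediate first, then the root it chains up to.
cat sub.class1.server.ca.pem ca.pem > startssl-chain.pem

# Apache is then pointed at it alongside the server cert and key:
#   SSLCertificateFile      ssl.crt
#   SSLCertificateChainFile startssl-chain.pem
```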
[03:30] <patdk-lap> you can check it's all good using: http://www.networking4all.com/en/support/tools/site+check/
[03:30] <JoeCoder> oh good.  I had just been loading it up in chrome.
[03:30] <JoeCoder> yes, it likes it!
[03:30] <patdk-lap> ya, using a webbrowser isn't good to check the chain
[03:30] <patdk-lap> cause the browsers cache it
[03:30] <twb> PEM is the encoding format, more or less like base64 or gzip
[03:31] <patdk-lap> well, pem is base64 der
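[Editor's note: patdk-lap's "pem is base64 der" can be demonstrated directly; armor lines plus base64 of the DER bytes is the whole framing. Demo bytes here, not a real certificate:]

```shell
# Demo bytes stand in for real DER; the framing is what this illustrates.
printf 'not-real-DER-just-demo-bytes' > demo.der

{ printf '%s\n' '-----BEGIN CERTIFICATE-----'
  base64 demo.der
  printf '%s\n' '-----END CERTIFICATE-----'
} > demo.pem

# Round-trip: strip the armor lines, base64-decode, and the DER comes back.
sed '1d;$d' demo.pem | base64 -d > roundtrip.der
cmp demo.der roundtrip.der && echo 'PEM round-trip OK'
```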
[03:31] <twb> openssl s_client -CApath /etc/ssl/certs/ -connect epoxy:443 <<<QUIT   # debug SSL
[03:31] <twb> gnutls-cli -s --crlf <hostname> -p <port>                    # Raw SSL connection using GNUTLS
[03:31] <twb> openssl s_client -crlf -quiet -connect <hostname>:<port>     # Raw SSL connection using OpenSSL
[03:31] <twb> ...those might help re testing
[03:32] <patdk-lap> ya, if you understand the ssl specs :)
[03:32] <JoeCoder> any advantage to those over networking4all ?
[03:32] <patdk-lap> you can look at more info
[03:32] <patdk-lap> ocsp stapling, and other fun things :)
[03:32] <patdk-lap> ssl session resume
[03:33] <twb> I have no idea what "networking4all" is
[03:33] <patdk-lap> the url I posted
[03:33] <JoeCoder> yes
[03:33] <twb> Oh
[03:33] <twb> Sorry it had scrolled offscreen :-)
[03:34] <patdk-lap> https://www.ssllabs.com/ssltest/index.html
[03:34] <patdk-lap> is an interesting one
[03:34] <twb> patdk-lap: I guess I never ran into the chain thing because I operate my own CA
[03:34] <patdk-lap> though I don't agree much with its *scoring*
[03:34] <patdk-lap> twb, you run a very flat CA then
[03:34] <patdk-lap> you don't use sub-ca's?
[03:35] <twb> Yes, one CA and then all the per-host keys
[03:35] <patdk-lap> ya, that is plain evil, but no chain needed
[03:35] <twb> Why is it evil?
[03:35] <patdk-lap> cause your CA is basically online all the time to sign stuff, and always exposed
[03:35] <patdk-lap> you should make your CA cert, and sign sub-CAs with it
[03:36] <patdk-lap> then you only have to invalidate one sub if it's compromised
[03:36] <twb> Why would it have to be online?
[03:36] <JoeCoder> what does PEM stand for?
[03:36] <twb> I could be doing it on an airgapped host
[03:36] <twb> JoeCoder: Privacy-Enhanced Mail or something.  Ask wikipedia
[03:36] <patdk-lap> twb, airgapped in a vault?
[03:36] <patdk-lap> airgapped isn't secure
[03:36] <patdk-lap> it's just internet secure :)
[03:36] <JoeCoder> http://acronyms.thefreedictionary.com/PEM
[03:36] <JoeCoder> wasn't sure which one :)
[03:37] <twb> patdk-lap: you said "basically online all the time"
[03:37] <twb> I don't see how my hierarchy implies that
[03:37] <patdk-lap> online == installed on a computer
[03:37] <patdk-lap> offline it, stored in a vault
[03:37] <twb> I could be doing that now
[03:38] <twb> As it happens I'm not, but I *could* do it
[03:38] <twb> Just take it out once every twelve months or so when I provision a new host
[03:38] <twb> Or rather, when the old certs expire :-)
[03:38] <patdk-lap> I'm just stuck to this, based on all security principles, and DOD rules
[03:38] <JoeCoder> hm, certtool asks a lot more questions when generating a certificate request.
[03:39] <twb> AFAICT in practice x509 is screwed /a priori/ in its common usage in browsers
[03:39] <JoeCoder> and it would only accept empty string for the domain name question
[03:39] <twb> But sure I would probably do what you suggest if I could be arsed.
[03:40] <twb> JoeCoder: you can supply those answers in a pre-written answer file, as described in certtool's info manual
[03:40] <JoeCoder> I had expected so; I'm still at the point of figuring out how to answer them
[03:41] <patdk-lap> twb, ya, it's a pain after the fact, not hard if you think about it when you do it
[03:41] <patdk-lap> I feel like I waste 2gb microsd cards
[03:41] <twb> Well in practice I could probably do it in half an hour.  I only have about 20 certs
[03:41] <patdk-lap> have hundreds of them
[03:41] <patdk-lap> many of them with just one cert on them
[03:42] <ScottK> patdk-lap: Or use an old laptop that doesn't get turned on except for this and fits in the safe.
[03:42] <JoeCoder> https://gist.github.com/2848539
[03:42] <twb> ScottK: until one day it doesn't turn on anymore :P
[03:42] <JoeCoder> the unindented questions are the ones I'm not sure about
[03:42] <patdk-lap> never liked that idea much
[03:42] <ScottK> twb: Sure.  Make a backup.
[03:42] <twb> I'm not even bothering to make a CSR, I just exploit the fact that the provisioning host can see inside all the containers
[03:42] <patdk-lap> but ya, we do one onsite and one offsite
[03:43] <JoeCoder> the dnsName question on line 8 would only accept an empty string.  I specified the domain name for Common name.
[03:43] <twb> Realistically I'm using TLS for wire encryption, not for x509 trust hierarchy
[03:43] <patdk-lap> twb, that is fine, if you don't bother to trust who you connect to :)
[03:43] <patdk-lap> or you setup static trust
[03:44] <twb> Well sadly some of it is built on DNS and I'm not doing dnssec yet
[03:44] <JoeCoder> Is there a tool I can use to supply arguments to a command line program that asks questions?  This would be much faster than figuring out certtool.
[03:44] <ScottK> For the hierarchy to work you have to assume the entire CA chain is secure and that's sadly often not the case.
[03:44] <JoeCoder> and it would be useful in the future
[03:44] <twb> And e.g. for wifi my attempts to use EAP-TLS fell down because hostapd doesn't implement a working CRL
[03:45] <twb> It would be nice to have all that stuff working but there are lower-hanging fruit IME re security, like "stop using PPTP" or "stop using PHP"
[03:46] <patdk-lap> never attempted eap-tls
[03:46] <patdk-lap> it was too immature back when I last played with eap
[03:47] <twb> patdk-lap: what's REALLY stupid is that EAP-TLS is the only WPA Enterprise that's required for WiFi Alliance certification... but x360 and printers don't support it, and n900 doesn't support it without replacing its wifi manager, and iphones don't support it without deploying a configuration management server, and ....
[03:47] <twb> Not to mention all the users whinging because you force them to generate a key and a CSR
[03:47] <patdk-lap> yep
[03:48] <twb> So what I am doing at the moment, which sucks, is to use WPA2 PSK with a per-MAC PSK list
[03:48] <twb> i.e. if you want wifi you tell me your MAC, I generate it a PSK and I add that pair to hostapd.
[03:48] <twb> Your client side just sees ordinary PSK and so everything Just Works
[03:48] <twb> But I can still, at least theoretically, revoke individual users' access
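[Editor's note: the per-MAC PSK scheme twb describes maps onto hostapd's wpa_psk_file option; a sketch, where the example MAC and passphrase are placeholders:]

```
# /etc/hostapd/hostapd.conf (excerpt)
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_psk_file=/etc/hostapd/wpa_psk

# /etc/hostapd/wpa_psk -- one "MAC passphrase" pair per line.
# Remove a line and reload hostapd to revoke that one device.
00:11:22:33:44:55 alices-per-device-passphrase
```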
[03:50] <twb> JoeCoder: FYI here is an answer file I used with certtool: http://paste.debian.net/172288/
[03:51] <JoeCoder> those don't match the questions I'm asked:  https://gist.github.com/2848539
[03:51] <JoeCoder> such as the uid, and it won't let me specify anything besides empty string for the domain name.
[03:53] <JoeCoder> is there a way to make openssl unattended?  a generic command line tool that will let me provide answers to programs that ask questions?  It seems like I've used something like that before, but I can't remember the name.
[03:54] <twb> JoeCoder: they do correspond, it's just not obvious from the long question vs. answer string
[03:55] <twb> e.g. signing_key = Will the certificate be used for signing (DHE and RSA-EXPORT ciphersuites)?
[03:55] <JoeCoder> ah, ok
[03:55] <JoeCoder> and it's ok to leave the dnsName blank?
[03:56] <twb> That depends on the client
[03:56] <JoeCoder> well, it won't let me specify one.
[03:56] <twb> IME all clients will believe you're you if DNS matches the dn, but some will also allow it if your DNS matches the dnsName
[03:56] <JoeCoder> no matter what I type, it keeps asking the same question until I enter empty string.
[03:56] <twb> JoeCoder: that's because you can have >1 dnsname
[03:56] <JoeCoder> ah, ok
[03:57] <twb> What that is for is, suppose you have a webserver called www.example.net but it also serves webmail.example.net and arthur.example.net
[03:57] <patdk-lap> cn?
[03:57] <patdk-lap> common name is the default
[03:57] <twb> So you have a dn of www.example.net, but also a dnsName for the other two
[03:57] <patdk-lap> unless you enable subjectalt, then dnsname overrides
[03:57] <JoeCoder> how do I pass a config file to certtools?
[03:57] <twb> Er, sorry, I think I mean cn not dn
[03:57] <twb> Too much LDAP :-/
[03:58] <patdk-lap> ya, and funny certificates use ldap type syntax too
[03:58] <JoeCoder> I'm primarily a developer, which is why I'm so confused about system administration.
[03:59] <JoeCoder> --template=
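[Editor's note: for reference, a minimal certtool template of the kind --template= expects. Every value here is a placeholder; the field names correspond to the questions certtool otherwise asks interactively:]

```
# cert.cfg
organization = "Example Org"
locality = "Springfield"
state = "Illinois"
country = US
cn = "www.example.net"
dns_name = "www.example.net"
dns_name = "webmail.example.net"
expiration_days = 365
tls_www_server
signing_key
encryption_key
```

[Then `certtool --generate-request --load-privkey key.pem --template cert.cfg --outfile req.csr` runs with no prompts at all.]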
[04:01] <twb> patdk-lap: do not talk to me about zimbra
[06:19] <obelus> Hey - I'm trying to get a local ubuntu mirror onto a machine that's behind a proxy, and download limits prevent it from mirroring the archive itself; I was thinking that I could mirror the archive with rsync or apt-mirror from home and transfer it via a portable hard drive, and use apt-mirror to update it after it's on the target machine.
[06:20] <obelus> Is this possible? And is there a better way to do it?
[07:18] <rbasak> obelus: that should work fine. debmirror can help you mirror a subset which might help too.
[07:19] <obelus> rbasak: the idea is to mirror all the packages that we'll need. I'm happy to do the full ~380GB, my only issue would be at the other end, is it enough to move it to the correct folder for apt-mirror and run the update command for it to pick up and use it as a base?
[07:22] <obelus> rbasak: looking up debmirror atm, it seems that it downloads for given architectures and releases, is that correct? Isn't that what apt-mirror does anyway?
[07:29] <rbasak> obelus: I'm not familiar enough with the different tools to answer your question, sorry. I'm pretty sure that most tools don't care about where the destination is providing that you move everything across identically - they should be able to resume from that point fine.
[07:33] <obelus> rbasak: Thanks for your help :) I'll be starting mirroring it on Monday afternoon when I get the hard drive. That'll be a long download before I know if it's going to work properly at the other end though ;p
[07:38] <_ruben> the stuff you mirror using debmirror is usually made available through a webserver so your (other) machines can use it as an (alternative) repo
[07:41] <twb> Or NFS
[07:42] <twb> Here is a debmirror wrapper script I use: http://paste.debian.net/172318/
[07:42] <twb> IIRC ubuntu main, one arch, one release, no sources, is only a few tens of GB
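[Editor's note: the pasted wrapper may go stale, so here is a sketch of the underlying debmirror call. The mirror path and the pocket/arch/section choices are illustrative, not twb's actual script:]

```shell
debmirror /srv/mirror/ubuntu \
    --host=archive.ubuntu.com --root=ubuntu --method=http \
    --dist=precise,precise-updates,precise-security \
    --arch=amd64,i386 \
    --section=main,restricted,universe,multiverse \
    --nosource --progress
```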
[08:09] <obelus> _ruben, twb: The plan is to make it available locally through http as the primary repository for all machines in the LAN
[08:10] <obelus> I want to grab amd64 and i386 for both current and lts versions (atm, only 12.04 because it's both)
[08:11] <obelus> All packages, extras, universe, multiverse, etc. And a few other 3rd party package archives.
[08:13] <twb> obelus: I'm not stopping you
[08:13] <obelus> twb: O.o I know. I just thought I'd mention it.
[08:25] <_ruben> I mirror gutsy-precise, i386+amd64, binary+source, main+universe+multiverse+restricted .. only close to 700G ;)
[08:28] <obelus> Holyshit. I don't need that much lol
[08:28] <ikonia> easy on the language please.
[08:29] <obelus> Just precise for me. I'm not quite sure at this point how to mirror only amd64 and i386 with apt-mirror, or if I'll have to use debmirror, but I have a bit of time before I need to work that out
[08:29] <obelus> ikonia: Sorry, I forgot that there's strict language rules in here, I'll be good.
[08:29] <_ruben> !info apt-mirror
[08:29] <ikonia> obelus: not a problem at all
[08:30] <_ruben> iirc apt-mirror only downloads what its clients request and then caches it, unless i'm mixing stuff up
[08:30] <_ruben> debmirror mirrors a complete pocket
[08:30] <obelus> I use apt-mirror currently to archive the google-chrome repositories
[08:30] <_ruben> been meaning to ditch debmirror in favor of plain rsync, but i dont think old-releases.ubuntu.com offers rsync access
[08:31] <obelus> I'd use rsync completely but I haven't been able to make rsync work through the HTTP auth proxy that the server's behind
[08:31] <_ruben> rsync won't let you sync just precise
[08:31] <obelus> Oh, no use then ;p I don't want EVERY release
[08:32] <obelus> I'd be happy with getting 10.04 and 12.04, as some people still like 10.04 there, and one server hasn't been upgraded yet
[08:32] <obelus> But I don't want all of them
[08:32] <lambda_engineer> hi there, got a question on bonding and vlans
[08:38] <twb> obelus: FYI, my metrics (cf. my script posted earlier): http://paste.debian.net/172324/
[08:40] <obelus> What's included in that ubuntu folder? I mean, is that the one release and arch or more than one?
[08:41] <twb> 17:42 <twb> Here is a debmirror wrapper script I use: http://paste.debian.net/172318/
[08:41] <lambda_engineer> I tried to outline the bonding-vlan problem here: http://pastebin.com/T6cK8xfQ
[08:41] <lambda_engineer> anyone up for some help ;)
[08:42] <twb> lambda_engineer: sorry, I'm too drunk to deal with that right now
[08:42] <obelus> twb: Ah, okay. That's really good that all that fits in 180gb.
[08:43] <obelus> I might make a debmirror script to do the mirror I'm doing. looks easy to customise one.
[08:43] <lambda_engineer> twb: bad for me, probably so else here can deal with it?=
[08:45] <obelus> lambda_engineer: Never tried using vlans on Ubuntu, so sorry, not really sure. I've only done vlans on cisco hardware.
[08:45] <twb> obelus: given I'm tracking 3×LTS releases and 2×arches, it's probably reasonable to expect to fit one release into, say, a bit over one sixth of that. Say about 40G.
[08:45] <ikonia> lambda_engineer: https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/352384
[08:46] <ikonia> lambda_engineer: there is a a similar issue raised on Fedora 8 instances.
[08:46] <ikonia> lambda_engineer: it appears to be a limitation of how the bond is formed.
[08:46] <obelus> I'm planning for 1 or 2 releases with 2 arches. Would it be okay if I use your script and modify it to limit it down a lot?
[08:46] <twb> IIRC there are several kinds of bonding, does it affect all of them?
[08:46] <twb> obelus: I don't care, go nuts
[08:46] <obelus> twb: thanks :)
[08:47] <ikonia> lambda_engineer: the suggestion in the ubuntu bug report should resolve your issue, although RedHat seem to be phasing in over a LONG period of time a more technical solution
[08:47] <twb> obelus: talk to #bash re scripting if necessary
[08:48] <obelus> twb: Thanks, but I should be fine to modify this. I'm pretty used to bash scripting
[08:49] <obelus> Is main/debian-installer for network installs?
[08:49] <twb> Yes
[08:49] <twb> It's for PXE installing
[08:49] <twb> If you have a CD installer that's >>20MB in size, you can ignore it
[08:50] <obelus> Awesome, thanks. My plan is to mirror releases too, so I can have a set of CDs (desktop, alternate, server) for i386 and amd64 and update from there.
[08:51] <twb> You can just build the CDs from the mirror using jigdo
[08:51] <twb> Well, maybe not the desktop one
[08:52] <obelus> twb: ... I should have thought of that, I've used it before. But yeah, I'll need to take copies of the desktop ones. Server/alternate will be good with jigdo though. One problem is, does debmirror only get the latest version of the packages? Because I think the jigdo templates need the same version that was available at their release
[08:55] <twb> When a release is released, its contents don't change
[08:55] <twb> New versions go into the -updates or so repo
[08:56] <obelus> twb: Ah, right. Thanks.
[08:56] <twb> (There are a few exceptions, e.g. IIUC sun-java was actively removed from releases because Oracle are so douche-y)
[08:56] <twb> But yeah I just use PXE installs and don't bother with a CD at all.
[08:57] <lambda_engineer> ikonia: thx a lot
[08:57] <twb> You can build the desktop versions in theory using live-build but I wouldn't want to guarantee it'll behave the same as the ones ubuntu roll
[08:58] <obelus> twb: I can't really use network boot because I don't control the DHCP server, and the server that's going to hold the mirror is going to be moving to a different subnet soon anyway. Ubuntu also doesn't provide the tiny netinstall images anymore do they?
[08:59] <twb> Yes they do.
[08:59] <twb> http://cyber.com.au/~twb/.bin/twb-d-i has links
[09:00] <twb> If you grovel around near those links, there are USB-HDD and CD versions as well as the PXE versions I am wgetting
[09:00] <twb> Also re DHCP server, you only really need the DHCP server to send the filename and next-server options, and you can host the TFTP server on another IP
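[Editor's note: those two options look like this in ISC dhcpd terms (all addresses hypothetical); note next-server can point at a different box than the DHCP server itself:]

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.10;   # the TFTP host
    filename "pxelinux.0";
}
```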
[09:01] <obelus> twb: I know, but the person that administers the DHCP server isn't going to set that. I'm pretty sure he'd rather that not every computer tries to boot into an ubuntu installer
[09:02] <twb> Fair enough
[09:02] <twb> Also obviously PXE can present a menu and timeout to booting off the local media...
[09:03] <obelus> Yep, but that's not going to be agreeable, I don't think. Although I'd like that idea very much.
[09:03] <obelus> I'll ask, but I think the answer will be no.
[09:06] <lambda_engineer> ikonia: seems like this is not my problem, this problem is only on top of bonding with LACP/802.3ad
[09:06] <lambda_engineer> ikonia: even when i do all the settings they supplied and worked for them... they don't in my case... still the error in /var/log/syslog
[09:06] <twb> You can also tell pxelinux to skip the menu based on MAC or IP address or IP network
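[Editor's note: pxelinux does this via its config lookup order: it tries pxelinux.cfg/01-<mac> (dash-separated, lowercase), then the client IP in uppercase hex with shrinking prefixes, then pxelinux.cfg/default. So a per-MAC file with no menu boots that one host straight into a target; paths below are hypothetical:]

```
# pxelinux.cfg/01-88-99-aa-bb-cc-dd
DEFAULT install
LABEL install
    KERNEL ubuntu-installer/amd64/linux
    APPEND initrd=ubuntu-installer/amd64/initrd.gz
```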
[09:07] <lambda_engineer> sooo still a problem with vlans on bonded interface: http://pastebin.com/T6cK8xfQ
[09:07] <lynxman> morning o/
[09:08] <twb> obelus: http://cyber.com.au/~twb/tmp/tmp.png
[09:08] <twb> For extra sexiness, that's a serial port on a headless router.
[09:10] <obelus> twb: Looks great, but the problem is that the area we'll be using the server in is a network lab, things are reinstalled constantly, and the person that runs the network lab prefers booting via ghost CDs to do network installs, and really doesn't like ubuntu. From experience, he's not going to change the settings to set our ubuntu server as the primary boot device for the entire netlab.
[09:10] <obelus> Though, I will set it up so it's possible and propose it, just in case he does.
[09:11] <twb> I understand.  The point was to make you go "wow cool" so you can show him and have him go "hmph, maybe it is worth a demo"
[09:11] <twb> ghost is pretty sucky by comparison
[09:12] <obelus> ;p I'm convinced it's awesome. And yeah, but the reason he uses it is that he deploys windows, freebsd and other stuff through it, with preinstalled apps and customisations.
[09:12] <obelus> And the multicast is helpful when we're doing it to several PCs at once.
[09:12] <_ruben> twb: what do you use for the menu stuff ?
[09:12] <twb> multicast doesn't actually save any bandwidth if you're on a flat switched network
[09:13] <twb> I checked because I am deploying IPTV to a 600 seat prison in the next 18 months
[09:13] <twb> (Over, btw, netbooting lucid desktops)
[09:13] <twb> _ruben: that's just pxelinux 4.xx menu.c32
[09:13] <twb> _ruben: there is vesamenu.c32 but it 1) requires vesa; and 2) is fugly
[09:14] <_ruben> twb: ah .. the stuff i never got around to diving into
[09:14] <_ruben> (as with many things)
[09:14] <_ruben> syntax seemed rather odd to me
[09:14] <_ruben> been ages since i looked into it tho
[09:14] <twb> The syntax used in the boot CDs is the ugly old way
[09:14] <_ruben> care to share yours? :)
[09:15] <twb> http://paste.debian.net/172326/
[09:16] <obelus> Multicast does help though with the disk i/o on the server when it's sending 7gb images ;o
[09:16] <twb> The 01-* stuff is a custom PXE-booting OS for extra extra sexy.
[09:16] <twb> obelus: hm, good point
[09:16] <twb> obelus: but if your OS needs 7 flipping GB it is about 36 times too big
[09:16] <obelus> He has custom builds of FreeBSD on there that are pretty big.
[09:17] <obelus> That and the Windows images. Those are only about 4gb for the Win7 ones with compression though.
[09:18] <twb> obelus: http://paste.debian.net/172327/
[09:19] <twb> obelus: he clearly sucks at saving space
[09:19] <obelus> Hah, yes. Compared to that, most people do. We do need stuff on it that's a bit bigger than 71mb though. But 7gb is excessive.
[09:21] <_ruben> twb: that looks way cleaner than i seem to recall
[09:21] <twb> The host that OS is for, has "only" 512MB RAM and since I'm copying the entire OS into RAM over TFTP I riced the size down a bit
[09:21] <rbasak> lambda_engineer: my understanding of the newest docs is that you should no longer define bond_slaves or auto on bond0
[09:21] <_ruben> auto generating stuff like this does make sense, given the numerous repetitions
[09:22] <_ruben> twb: the "live" menu entries are dummies right? failing to see where it'd actually boot into a live os
[09:22] <twb> rbasak: hmm, http://paste.debian.net/172329/ is a bond I'm using, but no tagging
[09:22] <twb> _ruben: they worked until I deleted the backend files
[09:22] <rbasak> twb: it has changed: http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/precise/ifenslave-2.6/precise/view/head:/debian/README.Debian
[09:23] <twb> rbasak: OK.  My paste is from lucid
[09:23] <_ruben> twb: ok, then i'm not misinterpreting things :)
[09:23] <twb> Oh yes the live *menus* are autobuilt and they can't find any files to add to them
[09:24] <twb> I was thinking of the two "awesome" entries in default file
[09:24] <_ruben> ah
[09:27] <lambda_engineer> rbasak: which documentation are you talking about? this one: https://help.ubuntu.com/12.04/index.html ??
[09:28] <rbasak> lambda_engineer: /usr/share/doc/ifenslave I think. You can see it online here: http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/precise/ifenslave-2.6/precise/view/head:/debian/README.Debian
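[Editor's note: a sketch of the precise-style /etc/network/interfaces that README describes, with a tagged VLAN on top of the bond. NIC names, addresses and the VLAN id are placeholders; the key change is that the slaves point at the bond via bond-master instead of the bond listing bond_slaves:]

```
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100

auto bond0.42
iface bond0.42 inet static
    address 192.168.42.2
    netmask 255.255.255.0
    vlan-raw-device bond0
```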
[09:39] <lambda_engineer> rbasak: arghtzzpftsss...problem solved... actually i missed the line "auto bond0", just retyped it because i'm on the machine with video redirection, so no copy-paste
[09:40] <lambda_engineer> rbasak: thx
[09:40] <rbasak> np
[09:49] <bau-> hi all, how can I add ssl on my ubuntu server?
[09:51] <andol> bau-: For what daemon/service?
[09:52] <bau-> andol, I need to run a fb app on my server
[11:42] <mattcen> Hi all. I'm tearing my hair out trying to work out why my Ubuntu 10.04 i386 install isn't showing libvirt-bin as available in the repo, when I know it's there. Anybody got any ideas? I suspect it's user error, but have no idea where to look.
[11:49] <mattcen> Nevermind. Looks like my package cache was corrupt... or something :S. All good now
[12:27] <ea1het_sleep> morning
[12:28] <ea1het> i need help in terms of filesystems
[12:28] <ea1het> anyone can help
[12:28] <ikonia> ea1het: in what respect
[12:28] <ea1het> the doubt: best filesystem to hold virtual machines
[12:28] <ikonia> ea1het: apologies for missing you last night
[12:28] <ea1het> no worries ;)
[12:28] <ikonia> ea1het: well, it depends on a lot of things, but there isn't really a "best"
[12:28] <ikonia> ea1het: just use what meets your needs
[12:28] <ikonia> ea1het: what do you want to use ?
[12:29] <ea1het> KVM to hold 5 VMs.... approx. size per VM 10GB each
[12:30] <ea1het> and another filesystem to hold ISO images
[12:30] <ikonia> ok, what file system do you want to use ?
[12:30] <_ruben> give each vm its own lvm lv .. no need to deal with stacking fs's
[12:30] <ikonia> is there a reason you're doubting just using ext4 or something like that ?
[12:30] <ea1het> yes... the doubt is about whether or not to use LVM
[12:30] <ikonia> ok, lvm is not a file system
[12:31] <ea1het> yes...a volume manager
[12:31] <ea1het> on top of that... i thought XFS.... or EXT4
[12:31] <ikonia> so you are not asking about file systems, you're asking if you should use a volume manager ?
[12:31] <ikonia> ea1het: do you have a need for lvm ?
[12:31] <ea1het> ikonia: in fact... i want to ask for both :)
[12:31] <ea1het> i'm a big doubt myself :)
[12:31] <ikonia> ea1het: why do you think you'll need/want lvm ?
[12:31] <ea1het> i think my VMs will not grow over 10GB
[12:32] <ikonia> the size of the VM is not really relevent
[12:32] <ea1het> i think.... i mean... sure 100% i'm not
[12:32] <ikonia> the size of the file system that holds your VM's is
[12:32] <ikonia> ea1het: where are you storing your VM's eg: /mnt ?
[12:32] <ea1het> ok ,let me explain my idea... very quick
[12:32] <ea1het> yes
[12:32] <ikonia> ea1het: do you have a disk you're going to mount ?
[12:32] <ikonia> or a partition ?
[12:32] <ea1het> a dedicated /mount-point for VM, yes
[12:32] <ea1het> a partition
[12:33] <ea1het> it's better to have a disk itself?
[12:33] <ikonia> just one partition  ?
[12:33] <ikonia> it doesn't matter a disk or a partition
[12:33] <ikonia> ea1het: why not keep it simple, put a partition /mnt
[12:33] <ikonia> then just put two directories under it "images" and "media"
[12:33] <ikonia> then store virtual machines in "images" and install/data media in "media"
[12:34] <ea1het> right now this is the situation: /vm --> the VM store  && /images --> the ISO images to install from
[12:34] <ea1het> each one is a partition
[12:34] <ikonia> ea1het: ok, so they are hanging off your root file system partition correct ?
[12:34] <ikonia> ah, so each one has a partition that's mounted, correct ?
[12:34] <ea1het> yes
[12:35] <ea1het> yes
[12:35] <ikonia> ok, so what's the current problem ? what do you want to change and why ?
[12:36] <ea1het> Q1: The introduction of LVM will allow me to be more flexible in terms of filesystems and/or volumes ?
[12:36] <ikonia> ea1het: yes and no
[12:36] <ikonia> it will allow you to resize volumes based on how much storage is in the volume group
[12:37] <ikonia> ea1het: keep in mind if you put a 100G partition on /vm (for example) and you only use 5 machines at 10GB each, do you need to dynamically resize the /vm volume ?
[12:37] <ikonia> or are you very tight on space ?
[12:38] <ea1het> nice situation... close to reality... i'm asking because my current server has 2 HDDs but it can take up to 8. I thought that with LVM below the FS i can add disks and resize my /vm partition to increase its size....
[12:39] <ea1het> just in case my VM store raise...
[12:39] <ea1het> and... not... by now... not 250Gb on /vm and 5 x 10GB VM's
[12:39] <ea1het> not so tight
[12:40] <ikonia> you can use lvm in that way sure, but it's up to you if you want to
[12:40] <ea1het> i've never used LVM before. What is the administration learning curve for someone who has never used it ?
[12:40] <ikonia> to do what you want, not much to be honest
[12:41] <ikonia> but I wouldn't say do it unless you have a genuine need for it (in your opinion)
[12:42] <ea1het> To be honest i just want to have the whole picture in mind. My expected growth ratio is 1:7 (Hypervisor : VM's)
[12:42] <ea1het> More than that... i'm not sure... so right now... with my actual VM's i have plenty of free space....
[12:42] <ea1het> but i want to be ready to scale up if necessary
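The grow-later workflow ea1het is describing is only a handful of commands once LVM sits under the filesystem. A minimal sketch (requires root; the volume group `vg0`, logical volume `vm`, and new disk `/dev/sdc` are hypothetical names, not from the chat):

```shell
pvcreate /dev/sdc               # label the new disk for LVM use
vgextend vg0 /dev/sdc           # add it to the existing volume group
lvextend -L +500G /dev/vg0/vm   # grow the logical volume holding /vm
resize2fs /dev/vg0/vm           # grow ext4 online to fill the new space
```

With ext4 this resize can happen while /vm stays mounted, which is exactly the flexibility being asked about in Q1.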
[12:43] <ea1het> Q2: My actual FS probably is not the best for large files. What is the best FS in terms of large files like VM's ?
[12:45] <ikonia> ea1het: unless you've got a problem why not just use ext4
[12:45] <koolhead11> hi all
[12:45] <ea1het> hi koolhead11
[12:45] <koolhead11> hi ea1het
[12:45] <ea1het> ikonia: is it the best? have support for large files?
[12:46] <ea1het> I heard about XFS but i'm not sure... i don't know it
[12:46] <ikonia> ea1het: define larger files ?
[12:46] <ea1het> large files las virtual machines
[12:46] <ea1het> sorry
[12:46] <ikonia> ea1het: that's not a file size
[12:46] <ea1het> large files like virtual machines
[12:46] <ikonia> ea1het: have you actually tried ext4 ?
[12:47] <ikonia> ea1het: virtual machines is not a size, how big are you defining as "big"
[12:47] <ea1het> various GB
[12:47] <ikonia> ea1het: various GB......come on
[12:47] <ikonia> how big do you call big
[12:47] <ea1het> in general terms probably some TB but i'm only focussing on VM's
[12:47] <ikonia> ea1het: what ????
[12:48] <ikonia> ea1het: in your example - what is the size that you consider a big size
[12:48] <ikonia> this is not a hard question
[12:48] <ikonia> how big do you call a "big file"
[12:48] <ea1het> i understand you think EXT4 is nice in order to operate with 10Gb files
[12:48] <ea1het> a 10Gb file is "nice" enough
[12:48] <ikonia> ea1het: ok, so 10GB files are what you are calling as "big"
[12:48] <ea1het> in my case 10Gb might be the mid-point
[12:48] <ikonia> ea1het: I don't see you having any problems with multiple 10GB files using ext4
[12:49] <ea1het> ikonia: what do you understand for large files?
[12:49] <ea1het> just to learn, not kidding
[12:49] <ikonia> 100G +
[12:49] <ikonia> 50+ would be a "large file"
[12:49] <ikonia> 5 x 10G files is not a "large file"
[12:49] <ikonia> (for my view)
[12:49] <ea1het> i think i won't face that kind of large file. In the case of a 100Gb file... would EXT4 still be your main option?
[12:50] <ikonia> to be honest, just use ext4 unless you actually have a problem
[12:50] <ikonia> which I can't see you having a problem
[12:51] <ea1het> Q3: is there any incompatibility in terms of FS selection having in mind a NFS export would be used?
[12:51] <ikonia> no
[12:51] <ea1het> EXT4 fs exported over NFS--> OK?
[12:51] <ikonia> fine
[12:51] <ea1het> good!
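For the record, the on-disk filesystem type never appears in an NFS export, which is why ikonia answers "no" so quickly. A hypothetical `/etc/exports` for the layout discussed above (paths and the client subnet are examples only):

```shell
# /etc/exports -- export the VM store and the ISO store
/vm      192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
/images  192.168.1.0/24(ro,sync,no_subtree_check)
# after editing, reload the export table:
#   exportfs -ra
```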
[12:52] <ea1het> last one
[12:52] <ea1het> the difficult... (for me)
[12:53] <ea1het> Q4: KVM -> Move VM over a NFS export using a cross-over cable (1Gb or 10Gb Eth) between 2 servers  ----> Will run? Reliable?
[12:53] <Jeeves_> I've not tested that for a while
[12:54] <ikonia> not really
[12:54] <Jeeves_> It didn't work for me two years ago (using iscsi)
[12:54] <ikonia> I wouldn't suggest running vm's on a network-mounted file system
[12:54] <Jeeves_> The machine would migrate, and then crash
[12:54] <patdk-wk> wouldn't that highly depend on the kvm host machine to transparently do that move?
[12:54] <patdk-wk> why not use some kind of clustering filesystem?
[12:54] <patdk-wk> then put your vm's on top of that
[12:55] <patdk-wk> something like gfs, glusterfs, ...
[12:55] <Jeeves_> Because that would be slow?
[12:55] <ikonia> even then running them on a network file system is not good
[12:55] <patdk-wk> Jeeves_, slower than running the vm over a network anyways?
[12:56] <Jeeves_> Yes, and less reliable (imho)
[12:56] <patdk-wk> I haven't done that, just know many people doing it for email mainly
[12:56] <patdk-wk> I do use iscsi and nfs like nuts for vm's though
[12:56] <patdk-wk> and they run speedy
[12:57] <ikonia> patdk-wk: you run your kvm hosted vm's on NFS mounts ?
[12:57] <patdk-wk> currently I run all my vm's over fc or iscsi
[12:58] <patdk-wk> I only use nfs for mounting like iso's and things for the vm's
[12:58] <ikonia> ok, so no NFS mounts
[12:58] <patdk-wk> I know hundreds of people doing nfs though, cause they say iscsi is not stable for them
[12:58] <patdk-wk> at least for me, I found iscsi to be more stable than nfs
[12:58] <ea1het> cost-effectiveness matters in my situation, guys
[12:58] <patdk-wk> probably all depends on exact software versions and hardware
[12:59] <ikonia> I just can't see NFS on a remote host being network mounted on a kvm host to run a virtual machine's root disk as "good"
[12:59] <patdk-wk> now, none of them dares to attempt using nfs for vm's, except using 10g network
[12:59] <ea1het> ikonia: so your best cost-effective option would be a iSCSI, right?
[13:00] <patdk-wk> ikonia, it's a very common vmware config
[13:00] <ikonia> ea1het: again, depends on cost effective, I use local disks on a raid card or a fibre card attached to an array
[13:00] <patdk-wk> depends on the design goals, local is always going be faster
[13:00] <ikonia> patdk-wk: I've never seen it run well on VMware, NFS mounted root disks
[13:01] <ikonia> I've never seen NFS be acceptable let alone "faster"
[13:01] <patdk-wk> nfs mounted root disks?
[13:01] <ikonia> yes
[13:01] <ikonia> (for vm root disks I mean)
[13:02] <patdk-wk> it all depends on your workloads really
[13:02] <patdk-wk> nfs can be good, or it can definitely get in the way
[13:02] <ikonia> nfs is good, I don't believe it's effective for running vm root disks
[13:03] <patdk-wk> well, I personally didn't find it good
[13:03] <patdk-wk> but based on all the people I talked to, that are scaled much larger than me
[13:03] <patdk-wk> they had no issues
[13:03] <patdk-wk> so I don't discount it
[13:03] <patdk-wk> actually, vmware using nfs on top of netapp was ok
[13:04] <patdk-wk> if the netapp was speced better, I probably would still be doing that
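The migration in ea1het's Q4 is usually driven through libvirt. A hedged sketch: the guest name `myguest` and host `host2` are examples, and the first form assumes the disk image lives on storage both hosts can reach (NFS, iSCSI, FC):

```shell
# Live-migrate a KVM guest managed by libvirt to a second host.
virsh migrate --live myguest qemu+ssh://host2/system

# Without shared storage, libvirt can also stream the disk across
# during the migration (slower, needs a matching target volume):
virsh migrate --live --copy-storage-all myguest qemu+ssh://host2/system
```

Whether the guest stays healthy afterwards is exactly the reliability question Jeeves_ and ikonia are raising; the command working is not the same as the storage being a good idea.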
[13:08] <ea1het> we have to understand NFS is much less expensive than any iSCSI implementation... and far cheaper than any SAN...
[13:09] <patdk-wk> hmm? I find nfs and iscsi to be the same price
[13:09] <patdk-wk> unless you're getting an iscsi san
[13:10] <ea1het> patdk-wk: what kind of iSCSI implementation do you do?
[13:10] <patdk-wk> I am very iscsi heavy here though, moved lots of workstations, to diskless iscsi backed systems now
[13:11] <patdk-wk> well, I started with iet, but that didn't last long, played with scst some
[13:12] <patdk-wk> but have settled with openindiana now, iscsi + multipath works great
[13:13] <ea1het> any link i can follow to learn a bit?
[13:18] <patdk-wk> looks like LIO is coming along nicely
[13:19] <ea1het> LIO?
[13:20] <patdk-wk> it's supposed to replace all the target stacks in linux
[13:20] <_ruben> patdk-wk: what was your main reason(s) to move from scst to openindiana?
[13:22] <patdk-wk> well, mainly cause of the nfs stuff I needed
[13:22] <patdk-wk> and I needed snapshots of it
[13:22] <_ruben> ah
[13:22] <patdk-wk> and I needed some way to back it up, other than filecopy
[13:23] <patdk-wk> the fact it did iscsi was just a side benefit at the time
[13:24] <patdk-wk> on that system, all my vm's, about 20 of them, use about 200gigs of space, and hardly any iops at all, but nfs load is insane
[13:24] <patdk-wk> but that was my first openindiana test case
[13:24] <patdk-wk> on the system next to me now, it is only used for iscsi, and has high iops
[13:24] <_ruben> low iops due to zfs cache?
[13:25] <patdk-wk> low iops due to, the vm's never read/write anything ever
[13:25] <patdk-wk> all processing is done via nfs
[13:25] <patdk-wk> webservers/mailservers/...
[13:25] <patdk-wk> once they start, they don't produce disk activity, except for the content that is nfs mounted
[13:26] <patdk-wk> it's central logging, so no log disk i/o
[13:26] <_ruben> oh right
[13:27] <patdk-wk> it's kind of funny though
[13:27] <patdk-wk> my openindiana system peaks at 8k iops using nfs there
[13:28] <patdk-wk> another system, using fc backed san, peaks at 4k iops
[13:28] <patdk-wk> the openindiana system has 20disks, the san has 74 disks
[13:29] <_ruben> heh
[13:29] <patdk-wk> just the san maxes out at a few gigs of ram, so the cache is pretty much useless
[13:29] <patdk-wk> whereas the openindiana system cache just scales better, cause I can scale it
[13:30] <patdk-wk> 4k iops is maxed out on those 74disks in raid10
[13:30] <patdk-wk> whereas 1k iops would max out my 20disks
[13:31] <patdk-wk> or maybe it was around 1.5k
[13:31] <_ruben> i looked at nexenta(stor) ages ago .. did seem interesting .. tho adding solaris based stuff to our mix is not something i'm fond of .. cow-orkers have enough trouble already dealing with linux :p
[13:31] <patdk-wk> I haven't touch nexenta, though I hear it's debian like
[13:31] <patdk-wk> I think I have a good grip on solaris now, only started last sept
[13:32] <_ruben> i'll likely be sticking with scst for now .. unless i can find really compelling reasons to go nexenta/openindana/etc
[13:32] <patdk-wk> is LIO too unstable yet?
[13:33] <_ruben> haven't given that any attention yet really
[13:33] <patdk-wk> it looked promising, but was not really even beta when I started
[13:37] <_ruben> hmm .. enterprise edition has vaai support
[13:37] <_ruben> of lio that is
[13:38] <patdk-wk> ya, I didn't even know lio wasn't fully opensource till today and saw that
[13:38] <patdk-wk> kind of makes me wonder why that would be the official linux kernel one, unless they changed their minds again
[13:39] <jsmith-argotec> I'm confused about some changes that seemed to happen to logging when I upgraded from 10.04 to 12.04
[13:39] <jsmith-argotec> nothing is getting logged in /var/log/messages or daemon or others any longer
[13:39] <jsmith-argotec> and (coincidently??) logwatch doesn't report half of the items it used to before the upgrade..
[13:40] <jsmith-argotec> everything seems to be only in syslog
[13:40] <jsmith-argotec> anyone know if this is something by design that I missed in the release notes or something?
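One hedged guess at jsmith-argotec's symptom: both 10.04 and 12.04 use rsyslog, and in the stock `/etc/rsyslog.d/50-default.conf` the per-facility rules are commented out, so everything lands only in `/var/log/syslog`. If the upgrade replaced a previously customized copy of that file, uncommenting the relevant rules and restarting rsyslog would restore the old split, for example:

```shell
# /etc/rsyslog.d/50-default.conf (excerpt) -- uncomment lines like
# these, then run: service rsyslog restart
daemon.*                        -/var/log/daemon.log
*.=info;*.=notice;*.=warn;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none          -/var/log/messages
```

The leading `-` means asynchronous writes; logwatch reporting less is consistent with those per-facility files simply no longer being written.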
[13:42] <_ruben> i can't find anything about the enterprise edition... :p
[13:43] <patdk-wk> looks like it's been renamed TCM
[13:43] <ea1het> what FS is used on openindiana? ZFS?
[13:43] <patdk-wk> heh, looks more like, it's getting along very badly, even though they managed to get it shoved into the kernel
[13:43] <patdk-wk> yep
[13:43] <ea1het> what are Zones?
[13:45] <patdk-wk> kind of like lxc
[13:45] <genk1> hello all I have just installed a new Ubuntu server station, my only problem is that I can't bring my NIC up
[13:45] <genk1> when I do : ifconfig -a I got only the lo interface information
[13:46] <genk1> what steps should I follow to make this card work ?
[13:46] <genk1> thank you
[13:48] <ea1het> look in /etc/network/interfaces
[13:49] <genk1> ea1het only lo is configured there
[13:49] <ea1het> and do a dmesg to see if your os recognized the board
[13:50] <_ruben> patdk-wk: seems openindiana hasn't seen a stable release yet, that's a shame
[13:51] <patdk-wk> depends on what you call stable
[13:51] <patdk-wk> it feels good, in server mode
[13:51] <patdk-wk> there are issues in desktop mode still
[13:51] <_ruben> they label it themselves as being development releases
[13:51] <patdk-wk> yep
[13:52] <_ruben> then again, that doesn't always mean all that much
[13:52] <patdk-wk> I feel they are like me, it will always have bugs
[13:52] <_ruben> hehe
[13:52] <patdk-wk> at what point do you label it *stable*
[13:53]  * patdk-wk notes the *stable* crapsan that keeps crashing he has to deal with
[13:53] <_ruben> :P
[13:53] <_ruben> well .. features (dis)appearing every other (dev) release is not something i'd like, for instance ;)
[13:54] <patdk-wk> personally, only had oi foul up once, and I took the time to figure out what was going on instead of panicking during the emergency
[13:54] <streulma> hello, my system boots and stops after scripts/init-bottom
[13:54] <patdk-wk> it was a known issue, and over sysadmin stimulas
[13:55] <patdk-wk> streulma: you have to help us, we can't look at your screen and see why it stopped
[13:55] <patdk-wk> though normally it's cause it failed to mount root
[13:56] <_ruben> or any other mountpoint listed (as auto) in fstab
[13:56] <patdk-wk> ruben, no
[13:56] <patdk-wk> that would come after root started
[13:56] <patdk-wk> fstab isn't *mounted* yet in initrd
[13:57] <streulma> patdk-wk: fstab is fine
[13:57]  * patdk-wk notes he never blamed fstab
[13:58] <_ruben> i had one box halt at that point (iirc), and / was just fine .. was a mdadm volume with data that caused issues
[13:58] <patdk-wk> ya, mdadm mounts in inittab
[13:58] <patdk-wk> initrd that is
[13:58] <streulma> what to check ?
[13:58] <patdk-wk> the error messages?
[13:59] <streulma> /var/log/boot.log ?
[13:59] <patdk-wk> stuff doesn't just go wrong, without yelling
[13:59] <_ruben> tho i really need to reinstall that box .. it boots into initrd .. and boots on just fine after hitting ctrl-d .. and `halt` wont power down the box either (since upgrade to precise)
[13:59] <_ruben> now that i think of it, the power down issue i also have on vms
[14:00] <patdk-wk> hmm, not sure I tested poweroff yet on precise
[14:00] <patdk-wk> I'm just annoyed with the lucid->precise grub fail
[14:00] <_ruben> EDIT: After further testing, "halt -p", "shutdown -h now", and "poweroff" all correctly power-off the machine but "halt" (without parameters) does not. However, in Ubuntu 10.04, "halt" did power-off the machine. Is this simply a difference between the two versions of Ubuntu?
[14:01] <_ruben> should try that
[14:01] <JanC> might be unclean FS/raid because you stopped the machine with the "big red button"  ;)
[14:01] <_ruben> havent had any grub issues with upgrades
[14:01] <streulma> XServ, failed to open listener for inet6
[14:01] <patdk-wk> _ruben, I only have with vmware so far
[14:01] <patdk-wk> every single vmware guest fails to upgrade grub
[14:02] <patdk-wk> there is a bug about it, fix might exist too, but hasn't been pushed anywhere usable
[14:03] <_ruben> i think i upgraded a few vms from lucid to precise without issues
[14:03] <patdk-wk> did you do a fresh install of lucid on them?
[14:04] <_ruben> both upgrades and cleans i think
[14:04] <_ruben> havent upgraded many boxes to precise just yet, so i might just got lucky not running into it
[14:04] <patdk-wk> oh ya, I also had a fun e1000 driver issue
[14:05] <patdk-wk> I have done a few test upgrades
[14:05] <patdk-wk> mainly to check my stuff, but waiting for .1 for any real upgrades
[14:06] <patdk-wk> https://bugzilla.redhat.com/show_bug.cgi?id=754589
[14:06] <patdk-wk> I have that issue with precise
[14:06] <patdk-wk> on real hardware, not a vm
[14:07] <patdk-wk> both problem 1 and 2
[14:13] <streulma> plymouth stop terminated with status 1
[14:15] <streulma> can plymouth ntpd disabled ?
[14:21]  * _ruben tries to open bugreport
[14:35] <streulma> patdk-wk: filesystem is mounted read-only
[14:35] <patdk-wk> it normally should be at that point
[14:50] <streulma> patdk-wk: no errors on disk
[14:50] <patdk-wk> anything from dmesg?
[14:50] <streulma> profile replace mtpd
[14:51] <streulma> ntpd is not installed anymore...
[14:59] <stgraber> gary_poster: thanks for testing the SRU
[15:03] <hallyn> zul: do you have any objections to my pushing http://people.canonical.com/~serge/libvirt.debdiff ?
[15:03] <jamespage> hallyn, stupid question time - does ipxe support ARM architectures or is it just for x86?
[15:03] <zul> hallyn: checking
[15:03] <hallyn> jamespage: no idea.  lynxman might know
[15:03] <jhobbs> no ipxe for arm
[15:03] <jamespage> :-(
[15:03] <hallyn> I can look through the source in a bit otherwise and check
[15:04] <hallyn> there you go :)
[15:04] <jamespage> thanks jhobbs
[15:04] <jamespage> allenap, ^^
[15:04] <zul> hallyn: nope
[15:04] <hallyn> arm does uboot right, does it not even do pxe at all?
[15:04] <hallyn> zul: thanks (gonna test a bit more first)
[15:04] <jhobbs> uboot has pxelinux like support
[15:04] <jamespage> hallyn, kinda - I think it implements a subset
[15:04] <jhobbs> yeah, a subset of what pxelinux does, plus the normal dhcp parts of pxe
[15:04] <hallyn> jamespage: this is for maas?
[15:04] <zul> jamespage:  always assume arm is weird and does things non-standard
[15:05] <jamespage> hallyn, yes
[15:06] <gary_poster> welcome stgraber.  thanks for the huge improvements
[15:20] <streulma> patdk-wk kernel-panic it is
[15:21] <lynxman> jamespage: afaict it's only x86
[15:21] <lynxman> jamespage: at least the assembler parts are
[15:30] <streulma> patdk-wk: it doesn't update the logs
[15:35] <hallyn> zul: well this is odd.  dpkg -x libvirt-bin*.deb x shows x/etc/dnsmasq.d/libvirt-bin is there, but dpkg -i libvirt-bin*.deb does not create that file
[15:36] <zul> hallyn: hmm?
[15:36] <zul> zul: bug in your debian/rules perhaps
[15:37] <hallyn> hm, maybe dpkg was trying to be too smart.  i had installed dnsmasq after installing that libvirt-bin.  maybe dnsmasq deleted it, and then after that dpkg -r thought i had manually deleted it?
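hallyn's guess matches documented dpkg behaviour: a conffile that has been deleted from disk is treated as deliberately removed by the admin, so a plain reinstall will not recreate it. It can be brought back explicitly; the package name here is just the one from the chat:

```shell
# Ask dpkg to recreate conffiles that are missing on disk.
apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall libvirt-bin

# or, with the .deb already at hand:
dpkg -i --force-confmiss libvirt-bin_*.deb
```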
[15:38] <stgraber> hallyn: it looks like lxc nesting recently broke on 12.04, looking at it now
[15:38] <hallyn> stgraber: and that's with your custom policy?
[15:39] <stgraber> hallyn: yes
[15:39] <stgraber> hallyn: lxc fails to install in the container because of apparmor, then once forced, it refuses to start, still because of it being unable to load apparmor profiles
[15:40] <hallyn> stgraber: we don't still drop CAP_MAC_ADMIN from policy do we?
[15:40] <hallyn> in config that is
[15:41] <stgraber> hallyn: weird, can't reproduce on my laptop (also on 12.04)
[15:41] <hallyn> exact same kernel version?
[15:42] <stgraber> yeah, 3.2.0-24-generic on both
[15:42] <stgraber> diffing the apparmor profiles and lxc configs now
[15:45] <stgraber> hallyn: oh, and I just found a nasty bug in the SRU currently in -proposed ... lxc.devttydir isn't set properly
[15:46] <stgraber> hallyn: I'll fix in quantal and upload a fix to -proposed, at least we should just lose a day of testing, so not too bad
[15:46] <hallyn> stgraber: not set properly how?  what is it doing?
[15:47] <hallyn> (I"ll add a test to test suite if testable)
[15:47] <stgraber> hallyn: basically the logic is wrong, it's setting lxc.devttydir = lxc for releases that do NOT have /etc/init/container-detect.conf
[15:48] <stgraber> hallyn: that's a regression in quantal that was SRUed to precise :(
[15:48] <hallyn> ah!  i see
[15:49] <stgraber> hallyn: http://paste.ubuntu.com/1018119/
[15:50] <hallyn> i thought all the '$release = precise' checks were out of the template :)  drat
[15:51] <hallyn> btw i may email dlezcano soon to ask him about the lxc api.  i'm wondering whether he'd prefer to have a long-running daemon (like libvirt).  I assume not, but if he did it would require some changes
[15:52] <stgraber> hallyn: did you forward 0083-ubuntu-simplify-template yet? otherwise I'll simply patch the patch in quantal too
[15:54] <stgraber> hallyn: hallyn all the "release = <something>" checks are gone, it's now checking for presence of /etc/init/container-detect.conf instead (that patch I linked before clearly removes the release = check and replaces it with the init job check)
[15:54] <hallyn> i thought i had but i don't see it
[15:54] <hallyn> stgraber: well yes that patch did, which would have meant that the check was still there before that patch :)  got it now
[15:55] <stgraber> hallyn: do you mind me simply patching the patch then?
[15:55] <hallyn> sure
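The corrected logic stgraber describes (set `lxc.devttydir` only when the rootfs ships the container-detect upstart job) can be sketched outside any template like this; the scratch directory stands in for a real container rootfs:

```shell
# Decide lxc.devttydir from the presence of the upstart job in the
# container rootfs (illustrative paths, not the actual template code).
rootfs=$(mktemp -d)
mkdir -p "$rootfs/etc/init"
touch "$rootfs/etc/init/container-detect.conf"

if [ -e "$rootfs/etc/init/container-detect.conf" ]; then
    devttydir="lxc"    # release understands the lxc tty directory
else
    devttydir=""       # older release: leave lxc.devttydir unset
fi
echo "lxc.devttydir = ${devttydir:-<unset>}"
```

The regression in -proposed was exactly this test inverted: setting `devttydir` for releases *without* the job.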
[16:03] <stgraber> hallyn: wrt lxc api, I certainly hope we won't run an lxc daemon, but talking to daniel about the API is certainly a very good idea, we shouldn't start doing that kind of work behind upstream's back :)
[16:04] <hallyn> yup
[16:05] <hallyn> for now i've only started with the most rudimentary functions (create and free in-memory container image; and locking) with testing.  Unfortunately, I get mysterious segvs from libc :)
[16:05] <hallyn> I think sem_post is messing with me, tbh
[16:11] <stgraber> hallyn: fix uploaded to quantal and new sru in -proposed, hopefully it'll be approved soon, so users running with -proposed will stop creating containers that will fail to upgrade (main consequence of not having lxc.devttydir)
[16:12] <hallyn> stgraber: I should've asked you if you thought you'd be pushing anything before pushing -ubuntu3 to q two hours ago :)
[16:12] <hallyn> or, :(
[16:13] <hallyn> -ubuntu76, here we come
[16:20] <stgraber> hallyn: oh, and I think I found the reason for my weird apparmor issues with nesting, my template on that host dates back from around precise beta1 :)
[16:30] <stgraber> SpamapS: if you have a sec, would you mind reviewing the lxc currently in -proposed, it fixes a regression introduced in the previous SRU (I pushed the fix to quantal too), it's a one line change that I believe is "obviously right" :)
[16:32] <stgraber> SpamapS: (let me know if you're busy with other things and I'll go nag another SRU team member ;) the regression currently produces containers that won't be able to upgrade to 12.10, so even though it's only in -proposed, it's really quite bad)
[16:34] <hallyn> stgraber: hm, yeah, i'm not getting a console on my containers
[16:35] <stgraber> hallyn: yeah, I'm kind of surprised nobody saw it during the week it was in quantal...
[16:36] <hallyn> i need to get my little quantal lab up.  just haven't had time yet.
[16:36] <soren> bug 1000000
[16:37] <ogra_> lovely
[16:37] <stgraber> ;)
[16:37] <soren> Daviey: It works somewhat ^
[16:37] <Daviey> soren: but new bugs don't show.
[16:38] <SpamapS> stgraber: accepted
[16:38] <stgraber> SpamapS: thanks!
[16:41] <stgraber> hallyn, smoser: http://paste.ubuntu.com/1018196/
[16:42] <stgraber> just noticed that in a clean lxc container (12.04)
[16:42] <stgraber> that basically happens when you install lxc in lxc, but the problem is a file conflict between openssl and euca2ools
[16:42] <stgraber> not sure whether you're aware of it already
[16:44] <smoser> stgraber, i'm confused.
[16:44] <smoser> how do i have both openssl and euca2ools installed?
[16:44] <smoser> (no, i was not aware)
[16:45] <stgraber> smoser: it seems to only hit when installing both at the same time
[16:46] <stgraber> smoser: if I run that apt-get again, it succeeds
[16:46] <smoser> this is strange, no?
[16:46] <stgraber> oh no, actually it doesn't... running apt-get -f install fixes the situation though
[16:47] <stgraber> hmm, no, I'm confused ...
[16:47] <stgraber> I thought it would fix itself, but no, in lxc's case, both being recommends, openssl simply doesn't get installed when running apt-get -f install which "fixes it"
[16:48] <stgraber> and yeah, my machine also has both installed, but I can't reproduce that on a clean box
[16:49] <smoser> you got openssl from -updates
[16:50] <stgraber> smoser: yeah
[16:50] <smoser> so it's a regression at that version maybe?
[16:50] <stgraber> smoser: anyway, found a way of reproducing both the failing and working scenario
[16:50] <stgraber> works: install openssl, then install euca2ools
[16:50] <stgraber> fails: install euca2ools, then install openssl
[16:50] <smoser> well, stgraber maybe you can help. i'll bow to your packaging knowledge.
[16:51] <smoser> euca2ools installs that via debian/links
[16:51] <smoser> its just a link  usr/share/euca2ools/cert-ec2.pem etc/ssl/certs
[16:51] <smoser> err.. contents of debian/links are:
[16:51] <smoser>  usr/share/euca2ools/cert-ec2.pem etc/ssl/certs
[16:51] <smoser> so i really could not care less about the directory itself. it just needs to house a symlink appropriately.
[16:53] <stgraber> ok, I see what's wrong :)
[16:53] <stgraber> root@weblive:~# ls -l /etc/ssl
[16:53] <stgraber> total 0
[16:53] <stgraber> lrwxrwxrwx 1 root root 33 Mar 22 16:31 certs -> /usr/share/euca2ools/cert-ec2.pem
[16:53] <stgraber> you're missing a trailing / in your .install
[16:53] <stgraber> it should be etc/ssl/certs/
[16:53] <stgraber> otherwise when /etc/ssl/certs/ doesn't exist, it creates a symlink called /etc/ssl/certs pointing to /usr/share/euca2ools/cert-ec2.pem, instead of creating /etc/ssl/certs/ and putting the symlink in it
[16:54] <stgraber> smoser: ^
[16:54] <smoser> well, there is nothing in .install
[16:55] <smoser> you're meaning debian/install?
[16:55] <smoser> you meant debian/links
[16:55] <stgraber> oh, I mean debian/links, yeah
[16:55] <smoser> which makes sense. yeah.
[16:55] <smoser> care to open a bug?
[16:56] <stgraber> sure, will do that after lunch. I'll also test that the fix actually works, looking at dh_link, it's not clear whether it'll create any missing directory or not
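The bug stgraber diagnosed is plain `ln -s` behaviour and is easy to reproduce in a scratch directory, no packaging involved:

```shell
# Recreate the euca2ools symlink bug: the same ln -s either *becomes*
# the certs entry or lands *inside* it, depending on whether the
# target directory already exists.
work=$(mktemp -d)
mkdir -p "$work/usr/share/euca2ools" \
         "$work/bad/etc/ssl" \
         "$work/good/etc/ssl/certs"
touch "$work/usr/share/euca2ools/cert-ec2.pem"

# Buggy case: etc/ssl/certs does not exist, so "certs" itself
# becomes a symlink to the .pem file.
ln -s "$work/usr/share/euca2ools/cert-ec2.pem" "$work/bad/etc/ssl/certs"

# Fixed case: the directory is created first (debian/dirs), and the
# symlink is placed inside it.
ln -s "$work/usr/share/euca2ools/cert-ec2.pem" "$work/good/etc/ssl/certs/"

ls -l "$work/bad/etc/ssl" "$work/good/etc/ssl/certs"
```

The buggy case matches the `ls -l /etc/ssl` output pasted above, and explains why install order mattered: with openssl installed first, `/etc/ssl/certs` already existed.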
[17:06] <zoski> hello there ! I'm new and i have a question...
[17:07] <zoski> I run a ubuntu server and i can't find the command to see how much memory is left on my hard disk...
[17:08] <med_> df -h
[17:12] <zoski> thank you so much med_ !!
[17:17] <hdb2> hi everyone! I would like to deploy ubuntu to a number of machines in our small office, but I need the configurations to be consistent. I would prefer not to do that manually. clonezilla is an obvious option, but all my hard drive sizes are different, and I don't care for cloning very much (prefer good configs to OS images). is there some tool/method I can use to accomplish this? I love doing my RTFM, I just need some pointers as to what to look for. (if it
[17:17] <hdb2> helps, I'm very familiar with Linux and Debian…I'm not a guru, just not at n00bie level.)
[17:19] <RoyK> hdb2: perhaps this might help? https://help.ubuntu.com/12.04/installation-guide/i386/automatic-install.html
[17:20] <hdb2> RoyK wow! on first glance this looks like exactly the kind of pointer I needed.  thank you!
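The pointer RoyK gave is Debian-style preseeding: answers to the installer's questions are supplied from a file, so every machine gets the same configuration regardless of disk size. A tiny, incomplete fragment for illustration (all values are examples):

```shell
# preseed.cfg (fragment) -- fed to the installer via kernel command
# line or a network URL
d-i debian-installer/locale       string en_US.UTF-8
d-i partman-auto/method           string lvm
d-i partman-auto/choose_recipe    select atomic
d-i pkgsel/include                string openssh-server
d-i passwd/user-fullname          string Office User
```

Because partitioning is expressed as a recipe rather than an image, differing drive sizes stop being a problem, which is exactly hdb2's objection to clonezilla.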
[17:28] <stgraber> smoser: bug 1007533
[17:28] <stgraber> smoser: I'm uploading the fix now
[17:28] <smoser> stgraber, thanks.
[17:28] <smoser> was it just the trailing /
[17:29] <stgraber> smoser: nope, it's a tiny bit more complicated, you actually need to give the full path of the target and list /etc/ssl/certs in debian/dirs so that it's created if missing
[17:29] <stgraber> but yeah, final fix is just a two lines fix :)
[17:29] <smoser> hm..
[17:30] <stgraber> smoser: http://paste.ubuntu.com/1018276/
[17:31] <smoser> stgraber, thank you.
[17:51] <hallyn> stgraber: it occurs to me we never put any sort of 'create(template="ubuntu", release="current")' call in the api.
[17:51] <hallyn> did you think we'd want that?
[17:54] <ea1het> good evening
[17:55] <ea1het> anyone who has an adaptec raid controller on a ubuntu server working?
[17:58] <_ruben> got several of those, never had any issues with it .. the ubuntu+adaptec combo that is .. the adaptecs themselves have proven themselves not to be very trustworthy :/
[17:59] <_ruben> then again, we dont use supported disks on 'em, which adaptec claims as being part of issues we see, obviously
[18:00] <stgraber> hallyn: right, I seem to remember us briefly talking about it a while ago. Basically coming to the conclusion that it's a nice to have more than an initial requirement. Though I guess it doesn't hurt to have it in the API design from the beginning.
[18:00] <stgraber> hallyn: not sure whether it makes sense to have it part of the container struct though, probably makes more sense to have it out of it (similar to list())
[18:01] <ea1het> _ruben: adaptec model? disk brand?
[18:01] <ea1het> _ruben: i have in mind adaptec aar-2610sa with seagate 1tb disks
[18:01] <_ruben> ea1het: 3 and 5 series .. mostly 51245 and 51645 .. seagate desktop disks 1TB
[18:01] <stgraber> hallyn: so you'd call lxc_create(...) and then get the struct once it's done
[18:01] <_ruben> ah, the low end stuff
[18:02] <ea1het> yes :)
[18:02] <_ruben> no experience with the 2610sa, but we did use similar ones in the past .. tho not in ubuntu boxes i think
[18:03] <hallyn> stgraber: hm, i still thought it would make sense as part of the container.  So you do c = lxc_container_new('name'); set some settings; then c->create()
[18:03] <hallyn> stgraber: ok, we don't need to decide that now, i was just wondering.  (writing the email to dlezcano;  cc:ing you)
[18:05] <ea1het> _ruben: it seems you have experience with Adaptec controllers
[18:05] <ea1het> do you think the 2610sa is quite poor?
[18:05] <stgraber> hallyn: yeah, but then we have to decide what happens if you call create() on an existing container and what to do when you call create() before you have any config loaded. So definitely possible but we need to think about exactly what we want there.
[18:05] <_ruben> ea1het: did you 2410sa or actually 2610sa ?
[18:05] <_ruben> cant find the 2610sa
[18:05] <stgraber> hallyn: I can see myself using ->create() on an existing container to replace its rootfs, but that may be a bit confusing to some users ;)
[18:05] <hallyn> stgraber: agreed we need to decide that :)  agreed we don't need it immediately.
[18:05] <ea1het> for me seems to be cost-effective and it is for a small raid, for a small project...
[18:05] <ea1het> let me see.. one sec...
[18:06] <hallyn> i think i figured it would refuse to run if c->is_defined() returned true
[18:06] <_ruben> hmm .. it doesnt seem to be on adaptec site, but i do see other sites mentioning it
[18:06] <hallyn> meaning c->configfile exists
[18:06] <hallyn> stgraber: and really i figured you would be the one really wanting it for arkose :)
[18:06] <_ruben> ea1het: we used the 2410sa in some small windows based fileservers .. they did the job ok for the money
[18:06] <ea1het> under ubuntu?
[18:06] <hallyn> if we can punt on it, then we can punt on the thought of whether we wrap the lxc-create script or rewrite it all in c.  So I'm happy to delay
[18:07] <ea1het> ok... under windows... i realized now....
[18:07] <ea1het> i'm not sure if this board is in the HCL of ubuntu server
[18:07] <ea1het> ADAPTEC AAR-2410SA/64M S-ATA SATA RAID 0 1 5 10 CONTROLLER
[18:07] <ea1het> that is the board _ruben
[18:08] <_ruben> ea1het: ah ok, same we used a few of
[18:08] <_ruben> ea1het: what will you be using it for?
[18:08] <stgraber> hallyn: well, arkose doesn't use the templates at all, so I'd effectively never "create" an arkose container. I'd just do abc=container("tmp-name") => write the fstab file to do the overlay magic I need => set the lxc config keys for fstab and rootfs => start/run_command/stop
[18:10] <ea1het> _ruben: expected to be used for 4Tb mirrored (1Tb+1Tb mirrored to 1Tb+1Tb) for Virtual Machine store and run
[18:10] <_ruben> ea1het: if you hookup 4 1TB disks, you must realize it has a 2TB max for the raid volume, so raid0 wouldnt work (nor would i even consider it), raid10 would work, raid5 would work if you used one as hotspare
[18:10] <hallyn> stgraber: ah, ok, cool.  good to know
[18:10] <_ruben> ea1het: so raid10 i guess?
[18:10] <ea1het> yup
[18:11] <ea1het> _ruben: didn't realize before the 2Tb limitation
[18:11] <_ruben> ea1het: don't expect stellar performance, since it doesn't have the bbu option, enabling write-cache is dangerous .. and without write-cache, performance wouldn't be very good (write performance that is)
[18:12] <ea1het> _ruben: another controller supporte under ubuntu that would make this job?
[18:13] <_ruben> ea1het: lsi has very good cards, but they're a bit more expensive than adaptec i think .. and to get write-cache options, entry-level cards aren't an option
[18:13] <_ruben> ea1het: it really depends on how much disk io you want/need/expect/etc
[18:14] <_ruben> then again, with just 4 disks, with or without write-cache, performance won't be all that great either way :)
[18:14] <ea1het> _ruben: to be honest i don't know... the server, a hypervisor (vm host), will run as many as 7 VMs at a time....
[18:14] <_ruben> or actually, i have had several 4 disk raid10 volumes on 5 series adaptecs, and the performance was pretty ok
[18:15] <_ruben> the number of vms is much less important than how busy each vm is ;)
[18:15] <ea1het> _ruben: a raid 1 of a raid 5 ???
[18:15] <ea1het> _ruben: a raid 1 of a raid 0 ???
[18:16] <_ruben> ea1het: what's the question? :)
[18:16] <ea1het> _ruben: you said... several 4 disk raid10 volumes on a 5 series adaptecs.....
[18:16] <ea1het> that is a raid of a raid ?  :o
[18:17] <_ruben> oh, no .. just multiple separate raid volumes consisting of 4 disks each :)
[18:17] <ea1het> oops.... i thought you reinvented the wheel.... :) ... and of course wanted to know how....  :)
[18:17] <stgraber> hallyn: email looks good, thanks for sending it.
[18:19] <_ruben> ea1het: these 7 vms, do they exist already or will they be new ones?
[18:19] <ea1het> _ruben: new ones
[18:20] <_ruben> ea1het: ok, then determining the iops requirements will be very tricky
[18:20] <ea1het> _ruben: how can i do it?
[18:20] <_ruben> tho for just 7 vms, the card might just do the trick .. unless one or more of those vms has very disk intensive tasks
[18:20] <_ruben> ea1het: making educated guesses is as good as it gets in those cases :)
[18:21] <ea1het> _ruben: don't know how to query iops ... sorry :(
[18:21] <_ruben> a vm used to run an irc client doesnt require any iops at all .. a fileserver used for video editing on the other hand :)
[18:21] <ea1het> and 1 of the VM will be a Solr repository
[18:22] <ea1het> (Solr -> documental database)
[18:22] <_ruben> remember, 4 disks / 7 vms = roughly half the performance of a single disk for each vm
[18:22] <_ruben> in ideal world
[18:23] <_ruben> write performance is likely half of that (it needs to write to 2 disks, so it can stripe each write over only 2 disks)
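_ruben's ideal-world arithmetic, sketched below with an assumed ~150 IOPS per 7200rpm spindle (an illustrative figure, not from the log):

```shell
# Back-of-envelope for a 4-disk RAID10 split across 7 vms.
# per_disk_iops is an assumed typical 7200rpm SATA figure.
disks=4; per_disk_iops=150; vms=7
echo "raid10 read IOPS:  $((disks * per_disk_iops))"        # all 4 spindles can serve reads
echo "raid10 write IOPS: $((disks * per_disk_iops / 2))"    # each write hits both disks of a mirror pair
echo "per-vm read share: $((disks * per_disk_iops / vms))"  # ideal-world even split
```

The point of the sketch is the shape, not the numbers: halve for writes, divide by vm count, and the per-vm share shrinks fast.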
[18:24] <ea1het> _ruben: so you understand in some cases it is best to write only over 1 disk and use another solution like GFS or DRBD ?
[18:24] <_ruben> ea1het: no, but spec'ing up a raid volume is far from trivial .. especially when no existing performance data is available
[18:25] <_ruben> i'd start with this card .. and if it turns out to be slow, start saving for a higher end card and use this card for a simple (future) file server ;)
[18:25] <_ruben> gotta go now tho, g'luck :)
[18:26] <ea1het> _ruben: thanks
[18:35] <hallyn> aahahahaha.  figured out my bug.  malloc(sizeof(s)) instead of malloc(sizeof(*s)).  i told myself years ago not to do that.
[18:47] <akoumjian> Anyone else using repo supervisord on 12.04?
[18:53] <SpamapS> jamespage: tsk tsk.. you forgot -v in your merge of erlang from Debian
[19:02] <roaksoax> SpamapS: so I think that part was taken out from the documentation
[19:07] <SpamapS> roaksoax: which documentation? I think there are 2 documented ways to do merges
[19:07] <SpamapS> the merge-o-matic 'grab-merge' way gives you an automatic debuild command to use
[19:08] <SpamapS> the UDD way, I would suspect, might be the culprit
[19:09] <roaksoax> SpamapS: so with grab-merge you could still debuild -S -sa, but in the old complete packaging guide it used to say to use debuild -S -vXYZ -sa, which it doesn't anymore
[19:09] <roaksoax> SpamapS: in UDD, i thought there was the command that would generate the changes correctly when it comes to merges
[19:09] <roaksoax> SpamapS: bzr builddeb -S --package-merge --> (This will add the appropriate -v and -sa)
[19:12] <SpamapS> roaksoax: right, the UDD one does that right
[19:12] <SpamapS> roaksoax: so the packaging guide probably took out the -v unintentionally
[19:13] <SpamapS> roaksoax: with grab-merge I always just use the generated debuild script
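The two workflows being compared, sketched side by side (shown for illustration, not run here; the version after `-v` is a placeholder, not a real erlang version):

```shell
# merge-o-matic way: grab-merge also emits a ready-made debuild
# invocation; by hand the source build would be roughly:
debuild -S -v<last-version-merged-from-debian> -sa

# UDD way: bzr-builddeb picks the appropriate -v and -sa itself
bzr builddeb -S --package-merge
```

The `-v` matters because it makes the .changes file carry every changelog entry since the last Debian version merged, which is what went missing here.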
[19:27] <roaksoax> SpamapS: it does
[19:40] <hallyn> all right, that's much better.  lxcapicore branch fixed, now i just need to move it back to lxcwithapi branch
[20:07] <jamespage> SpamapS, so I did
[20:47] <mgw> Hi, is there any reason some server installs would have a sudo group and some not?
[20:47] <mgw> and some would have an admin group and some not?
[20:47] <mgw> (on 12.04)
[20:49] <RoyK> mgw: afaik that's a small change from lucid to precise
[20:49] <RoyK> from admin to sudo
[20:49] <mgw> hmm… so sudo should totally replace admin?
[20:49] <guntbert> mgw: the move from admin to sudo group happened from ... , ah what RoyK says
[20:50] <mgw> ok, so if I'm using ldap, is it safe to have an entry for 'sudo' group in ldap as well?
[20:50] <mgw> It seems to work
[20:50] <mgw> getent group shows two entries for 'sudo'
[20:50] <smoser> SpamapS, https://bugs.launchpad.net/ubuntu/+source/zookeeper/+bug/1007433
[20:50] <smoser> did you test that?
[20:50] <mgw> and anybody that's in either the local file or in ldap as a sudo user
[20:51] <mgw> … will have sudo privs
[20:52] <smoser> that seems like a red herring, as the problem is fixed/worked around by first installing the 'zookeeper' package (not zookeeperd), which surely doesn't create or modify that directory path.
[20:54] <mgw> guntbert: does that sound right?
[20:54] <mgw> That is, I can have two groups with the same names but different gids?
[20:54] <guntbert> mgw: I fear there will be conflicts
[20:55] <mgw> so maybe use a different name for our ldap admin group?
[20:55] <mgw> it looks like vmbuilder adds the admin group
[21:11] <SpamapS> smoser: the problem is the dir
[21:11] <smoser> doesnt make sense.
[21:12] <SpamapS> smoser: the dir ends up owned by root
[21:12] <SpamapS> zookeeperd does not run as root
[21:12] <SpamapS> fail
[21:12] <smoser> see my comment.
[21:12] <smoser> (just posted comment 6)
[21:13] <SpamapS> smoser: I'm not sure that makes any sense though. zookeeper Depends on default-jre-headless
[21:13] <SpamapS> smoser: so it is already installed and configured at that point
[21:13] <smoser> look at the apt install log
[21:13] <smoser> the alternatives get setup at the end.
[21:14] <SpamapS> by what package?
[21:14] <smoser> i'm not making this up (i dont think)
[21:14] <smoser> https://launchpadlibrarian.net/106616529/apt-get-install.log
[21:14] <SpamapS> I'm reading that
[21:14] <smoser> i'm certain that it "just works" if you install default-jre-headless in a separate 'apt-get install'
[21:14] <SpamapS> ah its a trigger
[21:15] <SpamapS> wait no
[21:15] <SpamapS> smoser: ok this is weird
[21:15] <SpamapS> smoser: ok agreed that its a red herring (but must be fixed anyway)
[21:16] <SpamapS> smoser: does not make *any* sense that dpkg configured default-jre-headless before openjdk-7-jre-headless
[21:17] <SpamapS> smoser: since default-jre-headless depends on openjdk-7-jre-headless
[21:18] <SpamapS> I wonder if there is a circular dep there somewhere
[21:19] <SpamapS> $ apt-cache show ca-certificates-java|grep Depend
[21:19] <SpamapS> Depends: ca-certificates (>= 20090814), openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless, libnss3-1d (>= 3.12.9+ckbi-1.82-0ubuntu3~)
[21:19] <SpamapS> I wonder if thats going to screw things up
[21:20] <smoser> jamespage had said he'd bother doku on monday. but you may be onto something.
[21:23] <SpamapS> smoser: its the only way I can resolve in my head why dpkg would do things in such a wrong order
[21:23] <SpamapS> smoser: its the | java6-runtime-headless
[21:24] <SpamapS> smoser: java6-runtime-headless *is* already set up
[21:24] <SpamapS> smoser: so, chalk it up to an incomplete transition
[21:24] <SpamapS> probably anything that does default-jre-headless | *java6* needs to be re-evaluated
[21:26] <SpamapS> smoser: it may even be that zookeeper doesn't work w/ java7.. testing that now
[21:27] <smoser> no. dont think so.
[21:27] <SpamapS> right, looks like it installed 7
[21:27] <smoser> but that could be it.
[21:27] <smoser> but i thought you could start it fine after the fact
[21:27] <smoser> anyway
[21:28] <smoser> i'm out for the day
[21:28] <SpamapS> well maybe I don't understand the bug
[21:28] <smoser> have a nice weekend all.
[21:28] <SpamapS> it shows zookeeperd running
[21:28] <smoser> fresh instance (with no java installed), 'apt-get install zookeeperd'
[21:28] <smoser> you'll reproduce
[21:28] <SpamapS> Yeah I'm doing that
[21:28] <SpamapS> but reproduce "what" ?
[21:28] <smoser> zookeeperd is not running.
[21:28] <smoser> status zookeeperd
[21:29] <smoser> i just terminated my instance.
[21:29] <smoser> i've got to run
[21:29] <smoser> later.
[21:31] <SpamapS> buhbye
[21:31] <SpamapS> start-stop-daemon: unable to stat /usr/bin/java (No such file or directory)
[22:12] <chmac> Quick random question. I've got access to a server for the next ~24 hours, I want to wipe the 2T disks, but /dev/urandom is too slow, and I can't get /dev/frandom to compile, no kernel headers or something.
[22:13] <chmac> Realistically, short of forensic analysis, `shred -n 0 -z /dev/sda` should do a good job of deleting data, right?
[22:13] <chmac> I mean, somebody else given the server after us without physical access to it isn't going to be able to recover anything, that's what I'm thinking.
[22:14] <ikonia> chmac: just write 0's to it
[22:14] <chmac> ikonia: That's what that command effectively does.
[22:14] <chmac> ikonia: But with the -v flag, it tells me how far along it is, so I can watch it... :-)
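A safe rehearsal of that wipe, run here against a scratch file instead of a real disk (file path and size are made up for the demo):

```shell
# Create a small file of random junk standing in for /dev/sda.
dd if=/dev/urandom of=/tmp/scratch.img bs=1M count=2 2>/dev/null
# chmac's command plus -v: -n 0 skips random passes, -z does one
# final zero pass, -v reports progress.
shred -v -n 0 -z /tmp/scratch.img
# Confirm nothing but zero bytes remain.
if ! tr -d '\000' < /tmp/scratch.img | grep -q .; then echo "all zeros"; fi
```

On a real device this is equivalent to writing zeros end to end, which is exactly ikonia's suggestion; the -v flag just makes the multi-hour wait watchable.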
[23:00] <maco3> i was just using do-release-upgrade to upgrade my server from oneiric to precise. when it got to the part where it updates python, byobu was killed. i can see that dpkg and such are still running, but now i have no way to see their output to answer debconf questions. can i reconnect to that process? or is it safe to kill dpkg and then run dpkg --configure -a?
[23:03] <maco3> hah, i can actually see that its asking me a question about postgresql right now because the whiptail process shows up when i grep ps for "upgrade"
[23:03] <maco3> don't know how to answer it though :-/
[23:12] <pmatulis> maco3: you should be able to reconnect to an ssh daemon
[23:13] <maco3> pmatulis: i never disconnected from ssh
[23:14] <maco3> if i try to do "screen -r" it tells me there is no screen to resume, but there is a dead screen instance to be wiped
[23:14] <pmatulis> ah
[23:15] <maco3> so i cant figure out how to answer the questions dpkg is trying to ask me so the upgrade can continue
[23:15] <pmatulis> i don't think you can tbh
[23:16] <maco3> so kill it and dpkg --configure -a... bleh, that sounds like losing the tweaks do-release-upgrade applies that make it recommended over change-sources.list-and-dist-upgrade
[23:20] <pmatulis> maco3: i would try the command a second time
[23:20] <maco3> pmatulis: which command? re-run do-release-upgrade?
[23:20] <maco3> that gets me "no new release found"
[23:20] <maco3> thinks im already there i guess
[23:22] <JanC> you might also need apt-get install -f
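A hedged sketch of the fallback being weighed above (shown, not run — these act on the live system; once the upgrader's session is lost, the remaining work is finished with plain dpkg/apt):

```shell
# Recovery after the upgrader's terminal died mid-run (not executed
# here; generic steps assembled from the discussion).
sudo pkill whiptail          # let the orphaned debconf prompt exit
sudo dpkg --configure -a     # configure everything already unpacked
sudo apt-get -f install      # repair any broken/missing dependencies
sudo apt-get dist-upgrade    # finish whatever the upgrade had queued
```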
[23:24] <pmatulis> maco3: ah ok
[23:28] <pmatulis> maco3: but you hit a snag that should be reported, do-release-upgrade should notice byobu is running and provide at least a warning.  dunno, maybe just file a bug against 'update-manager'
[23:30] <maco3> pmatulis: i'm talking to someone in #ubuntu-devel about it, and we're debating whether its a bug. theoretically you should close all running apps before doing a dist upgrade, but...
[23:31] <JanC> actually, running an upgrade inside screen is recommended
[23:32] <JanC> especially when you upgrade remotely
[23:32] <maco3> mm point
[23:32] <maco3> because then you can reconnect
[23:32] <maco3> so screen crapping itself when libs are upgraded is extra bad
[23:33] <JanC> right, that should never happen