[06:38] <cyclobs> hi all, I'm trying to do something which I'm not sure is entirely possible with PAM authing against a MySQL database. Is it possible to have PAM run the plaintext password through a hashing script that I made before it gets checked against the MySQL database?
[06:48] <sarnold> cyclobs: you may be able to get pam_exec to do it, but I'd be scared about doing it myself
[06:49] <sarnold> cyclobs: I'd be more inclined to take pam_userdb or a similar module and see if you can slightly modify it
[06:50] <cyclobs> ah pam_exec might do what i'm looking for. the next option really is to edit the source and add my own crypt function
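pam_exec can hand the plaintext password to an external program when invoked with the expose_authtok option (the token arrives on the program's stdin). A minimal sketch of the hashing half, assuming sha512sum is available; the script path and the MySQL comparison step are hypothetical:

```shell
#!/bin/sh
# Sketch of a pam_exec helper. A PAM line like
#   auth requisite pam_exec.so expose_authtok /usr/local/bin/pam-hash.sh
# would feed the plaintext password to this script on stdin.
# hash_token prints the SHA-512 hex digest of its argument; a real helper
# would read the token from stdin and compare the digest against the
# MySQL row, exiting non-zero on mismatch (pam_exec treats that as failure).
hash_token() {
  printf '%s' "$1" | sha512sum | awk '{print $1}'
}

# demo with a placeholder token (a real run would use: IFS= read -r tok)
hash_token 'hunter2'
```

pam_exec reports success or failure from the script's exit status, so the comparison step is where the real decision would live.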
[08:58] <lordievader> Good morning.
[09:00] <stemid> I patched a precise server last monday, january 5th, and for some reason the patching left me without the directory /opt/tivoli. this dir completely vanished and of course all TSM related services stopped working.
[09:00] <stemid> discovered it now.
[09:00] <stemid> did anyone else notice this?
[13:30] <^^rcaskey> hey all, the interactive menus on -server are just too tough for me to figure out, can I instead download the configuration options via http via some kind of argument passed by dhcp?
[13:33] <tehgooch> So I've got a client with Ubuntu Server 12.04 that hangs on boot. Last thing on console is the e1000 NIC showing up. There is an error about remounting / earlier in the boot, but it continues to boot. Initially there was an error about the encrypted swap so I commented it out in rescue mode and formatted it as regular swap. I'm sure I left details out, feel free to ask. I'm on my phone at the console.
[13:36] <tehgooch> Last thing in dmesg is saying eth0 link not ready
[14:06] <AdventureTime> Hello everyone
[14:06] <AdventureTime> I badly need help.
[14:12] <collizion> AdventureTime: What's up?
[14:13] <AdventureTime> Oh thanks. Well, I was just wondering what happened with this server. Why the php5-cgi has a high usage. Is it because of the 15,000+ visitors everyday? Here are the screenshots http://imgur.com/a/FSnGc and the specs of the server: http://www.serverloft.eu/rootservers/rootservers-compare.php?server=RootServer-L
[14:16] <collizion> AdventureTime: Depends on how PHP-heavy your application is.
[14:17] <AdventureTime> It uses Wordpress with MySQL.
[14:17] <AdventureTime> I’m thinking of upgrading to a newer distro but people from reddit said I better not upgrade to a newer distro, instead do a fresh install.
[14:17] <jamespage> coreycb, zul: we'll need todo a no-change rebuild on most openstack packages - I just fixed a problem with the upstart configuration generation
[14:19] <collizion> AdventureTime: What distro are you running at the moment? And "nuke and pave" is not always the best solution.
[14:19] <AdventureTime> I don’t understand what “nuke and pave” is.
[14:19] <AdventureTime> I think it is the LTS Ubuntu v10
[14:25] <collizion> AdventureTime: It's an Americanism for completely wiping out what's there and reinstalling a fresh system.
[14:25] <collizion> AdventureTime: Ubuntu 10.04 LTS?
[14:25] <AdventureTime> Yes, that is correct. Oh thanks for the FYI :)
[14:29] <coreycb> jamespage, ok
[14:30] <collizion> AdventureTime: If you're running a version THAT old, then a full reinstall might be a good idea.
[14:30] <coreycb> jamespage, want me to handle them?
[14:30] <jamespage> coreycb, sure - I'm working on neutron so stay clear of that one - but all others +1 that would be great
[14:30] <AdventureTime> Oh, sorry. They can’t afford a downtime. The site is fully functional.
[14:30] <coreycb> jamespage, ok will do.
[14:31] <AdventureTime> They are just concerned with the memory usage/processor usage.
[14:35] <collizion> AdventureTime: I'd look at optimizing the application itself. What are you running? Wordpress, Drupal, etc?
[14:36] <AdventureTime> just wordpress
[14:36] <AdventureTime> so this is not a server issue?
[14:36] <AdventureTime> did you see the screenshots
[14:37] <collizion> AdventureTime: It may not be. Just because you see high CPU usage in php doesn't mean it's a server problem. There could be something in the actual application itself generating that activity.
[14:37] <AdventureTime> but disabling the plugins in a production server will produce downtime. the owner of the site does not want that :(
[14:38] <collizion> AdventureTime: I hate to be blunt about this, but... tough? You've got a problem. That requires maintenance.
[14:38] <collizion> AdventureTime: You've also got the problem that 10.04 goes EOL in three months. You won't receive security updates after that, which is a Bad Thing for a web server.
[14:39] <collizion> (Someone else please back me up on that. EOL means no more security updates, right?)
[14:39] <maswan> yeah
[14:39] <collizion> Thanks.
[14:40] <AdventureTime> holy crap
[14:40] <AdventureTime> so downtime is needed?
[14:40] <maswan> or install a new server, and then move over the load
[14:55] <AdventureTime> yeah but they use Plesk.
[15:18] <coreycb> jamespage, can I get a +1 on this before moving on to the rest?  https://code.launchpad.net/~corey.bryant/ceilometer/2015.1-b1-0ubuntu4/+merge/246437
[15:18] <jamespage> coreycb, I'd be tempted to bump the version dependency on openstack-pkg-tools  to 21ubuntu6~
[15:18] <jamespage> that will make sure you get the fix irrespective of the order in which things are built
[15:19] <coreycb> jamespage, good point, will do
[15:22] <coreycb> jamespage, I'm seeing 21ubuntu5~ as the latest
[15:24] <jamespage> coreycb, yep that's the one
[15:24] <coreycb> jamespage, k
[15:32] <AdventureTime> do i have to install centos now?
[15:32] <AdventureTime> or debian maybe?
[15:32] <collizion> AdventureTime: If you like Ubuntu, use Ubuntu.
[15:33] <collizion> Just use a current version.
[16:12] <coreycb> jamespage, mp's for upstart generation rebuilds - http://pastebin.ubuntu.com/9749470/
[16:30] <jamespage> coreycb, ok most of those done
[16:30] <jamespage> sahara we can skip as its not built yet...
[16:32] <coreycb> jamespage, ok thanks
[16:33] <jamespage> coreycb, btw the cinder disabling of SSL based tests patch could be reworked to make them pass as I did for neutron
[16:34] <coreycb> jamespage, ok I can do that
[16:34] <jamespage> coreycb, https://review.openstack.org/#/c/145208/
[16:46] <jcastro> jamespage, this seem right? http://askubuntu.com/questions/573761/error-instaling-openstack-with-juju-due-to-kvm-ok-not-being-installed/573766
[16:49] <rbasak> jcastro: good shout that KVM will need to work inside there. But I don't think that failure would cause that error message. He should have kvm-ok installed OK and then see kvm-ok fail if that were the case.
[16:49] <rbasak> jcastro: sounds like a bug or at least a use case that should be investigated.
[16:50] <justus_> Hello everybody, I have a question concerning networking and routing. I have a vpn connection running on one machine. Now i want to connect 3 other machines to route all their traffic through the machine with the vpn running...
[16:52] <jcastro> rbasak, indeed
[16:53] <jcastro> rbasak, any idea which package I should file that bug in?
[16:53] <justus_> has anyone experience with routing or ip-forwarding?
[16:54] <rbasak> jcastro: I'm not sure. Is that the cloud-installer package he's using? I'd start there if so. It might need to be punted to Juju, but I'm not sure how it's setting up the local environment and that looks like the faulty bit.
[16:55] <rbasak> tych0: ^^ can you help?
[16:59] <jrwren> justus_: if the vpn connected machine is not the default route for the lan, you don't have many options. You might get away with proxy arp, but typically you'd need your VPN endpoint to be default route or a route along the way.
[17:01] <justus_> jwren: thank you for the answer. I'm not sure if I understood it correctly. the target machine itself is running a vpn (it is not an endpoint), but it is still reachable by other machines from the same network. I just want the other machines from the same network to use this machine to connect to the internet, so to say...
[17:06] <jrwren> justus_: you would need to change the default route on all those machines to be that vpn machine. It gets tricky, because that machine would then need to know to route for that subnet. Basically, this is not how ip routing works ;(
[17:07] <jrwren> justus_: it becomes easy if your node that is already your default route is the same node which does the VPN connection.
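What jrwren describes can be sketched in two steps; the tunnel interface name tun0 and the address 192.168.2.1 are placeholders, not details from the discussion:

```
# On the VPN machine: let it forward for the others and NAT their
# traffic out the tunnel, so the far side never needs a route back
# to the LAN subnet (tun0 is an assumed interface name)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# On each of the other machines: point the default route at the VPN
# machine's LAN address (192.168.2.1 is a placeholder)
ip route replace default via 192.168.2.1
```

The MASQUERADE rule is what sidesteps the "that machine would then need to know to route for that subnet" problem: replies come back addressed to the VPN machine itself.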
[17:11] <justus_> jrwren: it is not possible to run the vpn on the already configured default route. Do you think an ssh tunnel would be an easier solution? I thought it would be easy to set up a machine to just channel all incoming and outgoing traffic :/
[17:11] <jrwren> justus_: maybe we think different things are easy :)
[17:12] <jrwren> justus_: if you have limited services you are accessing, ssh tunnels might be easier, yes.
[17:12] <justus_> jrwren: hehe ^^ I actually have no clue about ip routing, but I am here to learn :)
[17:13] <justus_> jrwren: ok, the only problem i was having with an ssh tunnel was that it was not as stable as i might have wished. And as I do not need the traffic to be encrypted I thought there might be a better solution...
[17:14] <jrwren> justus_: no need for encryption? In that case, can you use ipv6 at both sides? :)
[17:15] <justus_> yes
[17:15] <justus_> jwren: yes
[17:16] <justus_> jrwren: yes (now i got it right...)
[17:16] <jrwren> justus_: then do that and you are done. :)
[17:16] <justus_> jrwren: do what? ^^
[17:16] <jrwren> justus_: use ipv6.
[17:18] <justus_> jrwren: how can i use ipv6 to route traffic from one machine to another?
[17:19] <jrwren> justus_: public ipv6. They should already have routes. That is the luxury of ipv6, there is no nat.
[17:19] <jrwren> justus_: I should have asked, do you have public ipv6.
[17:20] <justus_> I actually have public ipv4 addresses
[17:20] <jrwren> justus_: ah, ok, nevermind.
[17:20] <jrwren> justus_: do you control LAN at both sides of the connection?
[17:20] <justus_> jrwren: i only control the machines
[17:21] <jrwren> justus_: in that case, maybe each machine could connect to VPN?
[17:22] <justus_> jrwren: yes that is the actual problem :/ only one machine can connect to the vpn. that is the only reason why i want the other machines to use this machine to connect to the internet
[17:22] <jrwren> justus_: I see. I think it is possible with some tricks.
[17:22] <jrwren> justus_: you want all traffic to go through VPN, or only to certain subnet?
[17:23] <justus_> jrwren: I still need to be able to log into the machine via ssh. but that is already configured in the routes if that is sufficient
[17:25] <X123> Greetings
[17:26] <X123> I'm trying to track down some weird tcp stalling on initial connections.
[17:26] <X123> Has anyone seen an issue with that and 3.13+ kernel?
[17:27] <X123> (example ssh to 127.0.0.1 and put in password, and then it hangs for a minute and sometimes goes through and sometimes resets)
[17:28] <X123> same with http requests
[17:29] <jrwren> X123: is dns resolving quickly? is localhost in /etc/hosts and getting used?
[17:29] <X123> yeah
[17:30] <X123> that wouldn't stall curls to 127.0.0.1 though
[17:30] <jrwren> X123: you'd be surprised :)
[17:30] <X123> it only does it on 3.13+ kernel though lol
[17:33] <X123> hrm
[17:34] <X123> I'm also noticing that i can't open a listen socket
[17:34] <X123> basically rebooting the machine, there's no problem at all for 5-10 mins
[17:35] <X123> then the problem happens, and i can't even start a new service listening on a port
[17:35] <X123> and almost all connections hang forever before connecting, or they get reset after a while
[17:35] <X123> (Broken pipe, reset by peer)
[17:35] <X123> if i kill a bunch of processes that are listening on ports, i can then start the process that i was trying to start before and it listens
[17:36] <X123> but the delay /reset is still there
[17:36] <tych0> rbasak: jcastro: stokachu: just saw this; stokatchu is probably the right guy to help
[17:36] <X123> something is whacked with 3.13+ :)
[17:38] <X123> anyone else seeing this?
[17:40] <X123> 1:~# ssh ::1 root@::1's password: Write failed: Broken pipe
[17:40] <rbasak> Thanks tych0. I wasn't sure.
[17:45] <tych0> rbasak: sure, np
[17:59] <X123> sure is quiet in here :>
[18:13] <ertyi> hello there
[18:13] <ertyi> anyone tested with iscsi features ?
[18:21] <k2gremlin> Anyone around that runs a squid3 proxy transparent on ubuntu server?
[18:26] <numkem> what is the proper way of reloading /etc/sysctl.conf and /etc/sysctl.d/?
[18:27] <numkem> there is a file in /etc/sysctl.d/ that talks about using the procps service. But the service doesn't start, just says stopped
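For the record, a sketch of the usual ways to re-apply sysctl settings on upstart-era Ubuntu; the procps job is a one-shot task, so showing "stopped" after it has run is normal:

```
sudo sysctl -p               # reloads /etc/sysctl.conf only
sudo sysctl --system         # reloads /etc/sysctl.d/*.conf plus /etc/sysctl.conf
                             # (procps 3.3+; older releases only have -p)
sudo service procps start    # re-runs the one-shot upstart task, same effect
```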
[18:28] <lnxmen> Hello.
[18:28] <lnxmen> Could anyone help me with mail server configuration?
[18:28] <lnxmen> I can't send email to my domain from GMail.
[18:28] <lnxmen> relay=local, delay=0.08, delays=0.05/0/0/0.03, dsn=5.1.1, status=bounced (unknown user: "admin")
[18:29] <numkem> lnxmen: do you have a user with that name or with that alias?
[18:29] <lnxmen> I created admin@domain.com in ispconfig
[18:29] <lnxmen> numkem: So I have an alias.
[18:29] <numkem> can you send it locally?
[18:30] <lnxmen> I will check.
[18:31] <lnxmen> numkem: No, I can't
[18:31] <lnxmen> The same error.
[18:33] <numkem> lnxmen: have you tried doing a newaliases or something along those lines? I think your problem is the aliases aren't fresh
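The stale-aliases theory can be checked quickly. Note this only helps if "admin" is meant to be a local /etc/aliases entry; with MySQL-backed virtual mailboxes the lookup goes through virtual_mailbox_maps instead. The alias entry and the map file path below are illustrative assumptions, not details from lnxmen's setup:

```
# if "admin" should be a local alias, (re)build the hashed alias db:
echo 'admin: root' | sudo tee -a /etc/aliases    # example entry only
sudo newaliases                                  # rebuilds /etc/aliases.db

# for MySQL virtual mailboxes, query the map directly instead
# (path is an assumption):
postmap -q admin@domain.com mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
```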
[18:34] <numkem> ispconfig is some kind of webmin correct?
[18:34] <lnxmen> Yes, something like that.
[18:34] <lnxmen> I tried doing new ones.
[18:35] <lnxmen> But I want to create mailboxes rather than store everything on one account.
[18:35] <lnxmen> A z tym na razie ciężko. ;<
[18:35] <lnxmen> Uops, sorry.
[18:38] <numkem> I really don't know what your setup is like, MTA and such, or its configuration, if you did it with ispconfig
[18:39] <numkem> something that is rather standard is to have unix accounts as mailbox users too
[18:39] <k2gremlin> Squid3 transparent on Ubuntu 14 anyone?
[18:40] <lnxmen> numkem: It's mail server for site support.
[18:40] <numkem> but there are a million ways of configuring the MTA
[18:41] <jrwren> k2gremlin: i've used squid. Do you have a specific question?
[18:43] <lnxmen> numkem: Is there any file I can paste to let you know how mta is configured?
[18:45] <numkem> lnxmen: a list of your processes would be a good start
[18:45] <lnxmen> I'll find postfix, dovecot...
[18:45] <k2gremlin> jrwren, Im trying to setup a transparent squid. Right now I have a VM with squid running in non transparent.
[18:46] <k2gremlin> Im making another VM using 2 OTHER vswitches connected to 2 other physical ports.
[18:46] <k2gremlin> 1 of those ports is connected to a test laptop. the other port is connected on my normal router
[18:47] <k2gremlin> The part I can't for the life of me figure out is the iptables crap
[18:47] <k2gremlin> jrwren, I tried following this... http://ubuntuserverguide.com/2012/06/how-to-setup-squid3-as-transparent-proxy-on-ubuntu-server-12-04.html
[18:49] <jrwren> k2gremlin: you need to run the iptables rules on your default gateway for it to be transparent.
[18:50] <k2gremlin> Can't this server be the gateway for the lan?
[18:50] <jrwren> k2gremlin: maybe it could. you'd need to configure it correctly.
[18:50] <k2gremlin> jrwren, and therein lies the problem... me and iptables have never worked lol
[18:51] <jrwren> k2gremlin: :)  because packets are never getting to that VM running squid.
[18:51] <k2gremlin> jrwren, well they are.
[18:51] <jrwren> k2gremlin: how?
[18:52] <k2gremlin> My outside is 192.168.1.0   and the LAN side is 192.168.2.0
[18:52] <k2gremlin> sec ill pastebin my network/interfaces file
[18:53] <k2gremlin> jrwren, http://pastebin.com/bTkXECSD
[18:54] <k2gremlin> so the laptop is connected to eth0 directly.
[18:54] <jrwren> k2gremlin: and you want transparent to work only for the laptop?
[18:55] <k2gremlin> well this is just a test environment. Once I get it working... my router with all clients will be moved to that port
[18:55] <k2gremlin> and the eth1 port will plug into my cable modem
[18:55] <k2gremlin> if that makes sense
[18:55] <jrwren> k2gremlin: sure. these are test nets.
[18:56] <k2gremlin> correct. Ill probably leave the client net on 192.168.2.0, but the outside net will change to match my ISP
[18:56] <k2gremlin> Eth1 will probably need to change to dhcp as I don't own a static IP
[18:56] <k2gremlin> (home network) lol
[18:56] <jrwren> k2gremlin: lets say your laptop is 192.168.1.31. How is a connect request to 192.0.2.0:80 going to get to this VM running squid?
[18:57] <k2gremlin> the laptop is 192.168.2.2
[18:57] <k2gremlin> err 2.10
[18:57] <k2gremlin> but still
[18:57] <jrwren> ok, same question :)
[18:57] <k2gremlin> it is directly connected to the Eth0 interface on the server
[18:57] <jrwren> k2gremlin: can it talk to anything? because it really shouldn't be able to.
[18:57] <k2gremlin> Eth0 is on the VM running squid
[18:58] <k2gremlin> Ok right now, all I have configured on the VM is...
[18:58] <k2gremlin> those 2 interfaces...
[18:58] <jrwren> k2gremlin: how does DNS even work on laptop then?
[18:58] <k2gremlin> idk yet.. lol
[18:58] <jrwren> k2gremlin: I see.
[18:58] <k2gremlin> But basic install atm
[18:59] <k2gremlin> nics are setup and squid3 is in with initial install
[18:59] <k2gremlin> When I try to go to google.com, I get the squid3 block page
[18:59] <jrwren> k2gremlin: transparent squid doesn't substitute the need for working inet. Still need basic ipv4 for DNS and connectivity to that squid host.
[18:59] <k2gremlin> which is expected
[18:59] <jrwren> I'd not expect that given the config you have described, as I understand it.
[18:59] <k2gremlin> ill draw a visio up... maybe that will help
[18:59] <jrwren> k2gremlin: it may help to describe everything and maybe ask on askubuntu.com
[18:59] <k2gremlin> ok
[19:00] <RoyK> k2gremlin: look at the acl entries in /etc/squid3/squid.conf
[19:01] <jrwren> k2gremlin: also, a lot of us don't have access to visio, so maybe draw it in text :)
[19:01] <k2gremlin> RoyK, I know squid really well. I tried a VM 2 days ago and setup the ACL's and such in squid. once it's past the rules in squid the http requests die lol
[19:01] <k2gremlin> jrwren, I screen shot the visio :)
[19:01] <k2gremlin> 1 sec
[19:02] <sarnold> RoyK: jeeze the other day I wasted twenty minutes trying to figure out why my sed -i -e 's/anl.gov/pnl.gov/' for my apt sources failed
[19:02] <sarnold> RoyK: it culminated in finding that I had previously set acls on squid for the hosts it would cache :)
[19:03] <RoyK> sarnold: hehehe
[19:07] <k2gremlin> jrwren, Ok the top is what I have right now for testing. The bottom is the end result I eventually want. http://puu.sh/ew2LH/59f97f043e.png
[19:08] <jrwren> k2gremlin: I don't think it is possible the way you have documented it.
[19:09] <k2gremlin> WHOA...
[19:09] <k2gremlin> I set the acl for src 192.168.2.0/24
[19:09] <k2gremlin> and allow http_access for that acl
[19:09] <k2gremlin> it worked..
[19:09] <k2gremlin> NOTHING is configured for IP tables
[19:10] <k2gremlin> let me make sure the laptop isnt directed at squid for a proxy
[19:10] <k2gremlin> shit it is
[19:10] <k2gremlin> lemme uncheck lol
[19:10] <k2gremlin> and connection fails lol
[19:11] <k2gremlin> So I need iptables to pull traffic from eth1 and force it to squid... then squid to redirect the traffic to eth0
[19:11] <k2gremlin> but this is sort of working. Clients cant access the internet without having the proxy setup.
[19:11] <jrwren> k2gremlin: sounds like you are almost there.
[19:11] <k2gremlin> My current home setup, if the proxy isn't configured they go straight out to the net
[19:12] <k2gremlin> which I don't want them to be able to do.
[19:12] <k2gremlin> Ultimately, I want them to go through the proxy without having to configure the client
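The missing "pull traffic in" piece is essentially one nat-table rule on the squid VM once it is the LAN's gateway. A sketch using the interfaces from the discussion, assuming squid listens on port 3128 with `http_port 3128 transparent` in squid.conf (called `intercept` on newer squid versions):

```
# let the VM forward between eth0 (LAN, 192.168.2.0/24) and eth1 (WAN)
sysctl -w net.ipv4.ip_forward=1

# redirect web traffic arriving from the LAN into squid on this box
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# NAT everything else out the WAN side
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
```

With REDIRECT in place the clients need no proxy setting at all; their port-80 traffic is diverted before routing happens.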
[20:09] <lnxmen1> numkem: https://www.linode.com/docs/email/postfix/email-with-postfix-dovecot-and-mysql
[20:09] <lnxmen1> I configured this server with this tutorial
[20:26] <sarnold> utlemming: why do you attach your gpg key to every email message?
[20:28] <dasjoe> Why should one trust a GPG key received in that way?
[20:29] <sarnold> dasjoe: well, in some sense, it's better than just requesting a key from the servers with a 32 bit keyid -- you can inspect the headers of the email and make sure that they look similar to previous emails from the sender, the purported sender can complain if seeing the mails on a public list..
[20:30] <rbasak> Sign every email. Then the recipient doesn't need to inspect the headers - he can just verify that all previous emails were signed by the same key.
[20:31] <rbasak> That pushes any possible MITM attack back to before the first email.
[20:33] <sarnold> rbasak: hehe, yeah, I sometimes download a key from the servers with the 32 bit key id, filter mutt to show only messages from that person, and go verify a few dozen emails with it -- then lsign the thing :)
[20:33] <sarnold> I wish mutt had some kind of interface to let me know when keys change or someone who always signs neglects to sign... but it's a start.
[20:45] <keithzg> Hmm, I think I'm out of my depth in trying to limit the CPU usage of a libvirtd-run VM. I had assumed I could set a percentage or such, but in <cputune> one needs to set the <quota> in microseconds. I can't claim to have any idea of what a reasonable value would be!
[20:45] <RoyK> keithzg: perhaps playing with cgroups could help?
[20:46] <sarnold> keithzg: you could set it to something like 750000 -- if it is measured per-second, as I expected, that'd be a 75% quota..
[20:49] <keithzg> RoyK: Probably, I guess I just assumed via the KVM settings would be the easier way to go.
[20:49] <keithzg> sarnold: Thanks, I'll give that a shot.
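One caveat on the 750000 guess: libvirt's `<quota>` is microseconds of CPU time allowed per scheduling `<period>` (default 100000 µs), per vCPU, not per second, so a 75% cap with the default period would be 75000. A small illustrative helper (the function name is mine, not a libvirt API):

```python
DEFAULT_PERIOD_US = 100_000  # libvirt's default <cputune><period>, in microseconds

def quota_for_percent(percent: float, period_us: int = DEFAULT_PERIOD_US) -> int:
    """Return the <cputune><quota> value capping one vCPU at `percent`
    of a core, given the scheduling period in microseconds."""
    return int(period_us * percent / 100)

# 75% of a core with the default period:
print(quota_for_percent(75))   # 75000
# i.e. <cputune><period>100000</period><quota>75000</quota></cputune>
```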
[20:50] <keithzg> (this is a VM that still runs a CVSNT server for people to go back and check from time to time, because nobody can be bothered to just find the equivalent commits in SVN I guess :P And CVSNT being CVSNT, it sometimes chews inexplicably high CPU time, making all the other VMs on the same host intermittently slow)
[20:51] <sarnold> and it's not disk bandwidth?
[20:54] <keithzg> naw, it's cvslockd jumping up to 100% CPU usage.
[20:54] <keithzg> And then it just sticks there until I restart either the lock daemon or, if I'm feeling lazy, the entire VM.
[20:56] <sarnold> *nod*
[20:56] <sarnold> it probably needs the restart from time to time anyhow :)
[21:01] <keithzg> That's what I tell myself at least ;)
[21:21] <k2gremlin> is there any reason an iptables rule does not show up when I do iptables -L?
[21:21] <k2gremlin> the command I entered was "sudo iptables -t mangle -A PREROUTING -p tcp --dport 3128 -j DROP"
[21:23] <teward> k2gremlin: different tables set
[21:23] <k2gremlin> Yea I see now, iptables -L -t nat
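For reference, `iptables -L` lists only the filter table unless told otherwise, which is why a rule added with `-t mangle` seems to vanish:

```
iptables -L             # filter table (the default)
iptables -t mangle -L   # where -t mangle rules land (the PREROUTING DROP above)
iptables -t nat -L      # where -t nat rules land
```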
[21:23] <k2gremlin> but my proxy still not working lol
[21:24] <k2gremlin> trying to accept packets on one port... bring them through the proxy, then forward to another port
[21:30] <hoogeveen> i'm trying to use a working OEL/redhat kickstart process, which uses an NFS-based ISO install tree, for ubuntu, and the ubuntu installer keeps barfing on a missing CD.
[21:31] <hoogeveen> is there a way of telling ubuntu to get its files from an NFS location instead of a local cd?
[21:31] <hoogeveen> the doc for auto-install states that this is not supported: "Installation from an archive on a local hard disk or from an NFS archive. "
[21:31] <hoogeveen> but this doc may be old.
[21:35] <hoogeveen> the installer has created a dir /var/spool/kickseed/fetch/nfs/A.B.C.D/export/linux/ks/hosts
[21:36] <hoogeveen> which sort of implies that it is attempting to do something, NFS-wise, since A.B.C.D is the IP of my kickstart server
[21:37] <hoogeveen> ahhh, kickseed appears to be only the ks config file
[22:44] <bekks> hoogeveen: What does your kickstart file look like? And what does your boot entry for booting off that kickstart file look like?
[22:45] <hoogeveen> the kickstart file contains an nfs line
[22:45] <bekks> Please show us both files :)
[22:46] <hoogeveen> the nfs line looks like this:
[22:46] <hoogeveen> nfs --server=ni1central-228.us.oracle.com --dir=/export/linux/ubuntu/ubu14.04.1.tls
[22:46] <hoogeveen> that is in the kickstart file
[22:46] <AdventureTime> thanks to the guy who helped me out! whoever you are, send me a pm so that i can talk to you.
[22:46] <hoogeveen> i think you mean the command line for the kernel and that looks like:
[22:46] <sarnold> are you confident that DNS works at that stage of boot?
[22:46] <hoogeveen> ksdevice=eth0 ip=10.80.228.174 netmask=255.255.255.0 gateway=10.80.228.1 dns=192.135.82.132,130.35.249.41,130.35.249.52 ks=nfs:10.80.228.15:/export/linux/ks/hosts/tbrm-x86 load_ramdisk=1 initrd=pxelinux.cfg/ubu14.04.1.tls/initrd.gz network console=ttyS0,9600 BOOT_IMAGE=pxelinux.cfg/ubu14.04.1.tls/vmlinuz
[22:47] <hoogeveen> no, not with ubuntu, since it isn't working.  it works with redhat/oel
[22:47] <hoogeveen> however, i'm not getting dns errors, i'm getting "nothing loaded in cdrom" errors.
[22:47] <sarnold> ah, right
[22:47] <hoogeveen> two different auto-install docs mention that nfs doesn't work for the pkgs and people should use http instead.
[22:47] <hoogeveen> so, i wanted to verify that before i go down that road.
[22:48] <hoogeveen> i'm fairly sure that it is probably getting the kickstart file
[22:49] <hoogeveen> in that the log file mentions it.
[22:49] <hoogeveen> it is a little odd that the installer can nfs mount and fetch the ks.cfg file, but can't nfs mount and fetch a package.
[22:50] <hoogeveen> i'm not sure what the difference would be, unless it is just the front-end processing that is missing.
[22:50] <hoogeveen> the nfs support *appears* to be there...
[22:50] <hoogeveen> but, this is my first foray into network installs with nfs on ubuntu
[22:50] <hoogeveen> so i am quite unfamiliar with any restrictions that may be in place.
[22:51] <hoogeveen> other than the aforementioned two documents on auto-install which advise against nfs & archives
[22:51] <hoogeveen> do you still want the contents of the ks.cfg file?
[22:53] <bekks> hoogeveen: That would be helpful too, yes.
[22:54] <hoogeveen> should i paste it somewhere or splatter it here, getting bits of crap all over everyone?
[22:54]  * hoogeveen is unfamiliar with this channle.
[22:54] <hoogeveen> or channel even
[22:54] <sarnold> hoogeveen: generally pastebins are preferred if it's more than two or three lines
[22:55] <hoogeveen> ok, i thought so.   wait a bit and i'll whip it up
[22:55] <sarnold> pastebinit or wgetpaste can make it easier
[22:55] <hoogeveen> not familiar with those tools on solaris.
[22:55] <hoogeveen> i'm guessing that is either windows or linux
[22:55] <hoogeveen> 269 lines
[22:57] <sarnold> pastebinit requires python3, so it should be portable to solaris -- though if you don't already have python3 installed, it might be too much work
[22:57] <hoogeveen> i'll look for it later - thanks for the tip
[22:58] <bekks> You can just upload it to a pastebin with your browser, too.
[22:58] <cyclob|work> Hi all, can anyone point me in the direction of getting pam_mysql to use hsa512 passwords. Apparently it's supported but I can't find out where I can set it to use it
[22:58] <hoogeveen> are you ok with me eliding the post-install script?  that isn't really germane to this problem.
[22:58] <cyclob|work> sha512*
[22:58] <sarnold> bekks: yeah but copy-paste is such a pain in the ass when it doesn't fit on one terminal window :)
[22:58] <bekks> sarnold: The even have a file selection button :P
[22:59] <bekks> *They
[22:59] <sarnold> bekks: they do? hunh :)
[22:59] <bekks> :D
[23:02] <hoogeveen> http://pastebin.com/ab8kKNV2
[23:02] <hoogeveen> that is the kickstart minus the %post
[23:05] <hoogeveen> this is the pxe file   http://pastebin.com/eHm5u18J
[23:07] <hoogeveen> here they are, sarnold bekks
[23:09] <sarnold> hoogeveen: sorry, I've never done kickstart myself :/
[23:10] <sarnold> hoogeveen: nothing else stands out to me
[23:10] <hoogeveen> ok.   i suspect that it isn't supported and that i should stand up a web server, but didn't really want to do that if i already had a full NFS install structure set up.
[23:10] <sarnold> no kidding
[23:10] <sarnold> NFS is just so easy by comparison
[23:11] <hoogeveen> i could unroll the initrd and putz around in there with getting the nfs mount point set up, but that seems like a bit more work than it may be worth
[23:11]  * hoogeveen is unsure if sarnold has the sarcasm flag enabled....
[23:11] <sarnold> hoogeveen: hehe, no, I don't much like how complicated webservers are
[23:11] <bekks> I just compared your settings against mine - and the only difference is that I'm actually using a http server for serving all files, instead of a NFS server.
[23:12] <sarnold> hoogeveen: especially if you're using zfs on a dataset, you probably just get to zfs export dataset ... and the damn thing just works :)
[23:12] <bekks> And setting up a webserver just for serving that stuff is pretty easy :)
[23:12] <hoogeveen> ok, i think that you two, plus the people in #ubuntu who didn't know what i was talking about, plus the two docs, plus a couple of other people are enough of a quorum on this
[23:12] <hoogeveen> yup.
[23:12] <hoogeveen> zfs is nice
[23:13] <hoogeveen> well, i live in a big corp, so there are sometimes.... let me say, complications to things like that.
[23:13] <sarnold> oh, it'd be zfs share, not export, that's something else entirely. :) anyway, I wish there was a similarly easy way to do httpd. hehe.
[23:13] <hoogeveen> hopefully, i'll just be able to do the simple standup to share these out and be done with it.
[23:13] <hoogeveen> thanks for the eyeballs on this.
[23:13] <sarnold> good luck :)
[23:13] <hoogeveen> oh, one more question.  it looked like it was just http and not https
[23:13] <hoogeveen> is that correct?
[23:14] <hoogeveen> or was it just that the examples were http and https was implied?
[23:14] <cyclob|work> anyone know how how to get pam_mysql hashing with sha512?
[23:14] <hoogeveen> we've been moving to all https internally lately
[23:14] <sarnold> no idea, sorry; I suspect http, since the 's' part might be difficult to do correctly (trusted CAs, trusted date/time for boot, etc..)
[23:14] <hoogeveen> yeah, that can be a *big* can of worms.
[23:14] <hoogeveen> again, thanks for taking the time to indulge me and talk to you later.
[23:15] <sarnold> the installer verifies signatures with gpg, so https has never been a big priority for anyone
[23:15] <sarnold> a pleasure hoogeveen :)
[23:16] <bekks> :)
[23:49] <cyclob|work> grr. why am i getting permission denied when my user is in a webdev group with rwx permissions
[23:52] <cyclob|work> oh right have to re-log to make the group take effect.