[02:46] <Curly_Q> Hello folks.
[02:47] <Curly_Q> Ubuntu has a     sudo reboot      and a     sudo poweroff     command.
[02:47] <Curly_Q> Is there a sleep mode that would still let Ubuntu be controlled remotely while asleep?
[02:48] <mybalzitch> no
[02:48] <Curly_Q> Power off means that the machine needs to be physically turned back on.
[02:50] <Curly_Q> I suppose I could program a device that starts the machine.
[02:50] <Curly_Q> There are some devices that do this by way of telephone.
[02:52] <Curly_Q> The other thing is: can Ubuntu control BIOS settings remotely?
[02:53] <Curly_Q> I suppose it would depend upon the type of the machine.
[02:54] <Curly_Q> Which means that the BIOS settings would require the machine to reboot.
[02:54] <mybalzitch> you'll want to look at wake on lan
[02:56] <Curly_Q> I do lots of computer repair and data recovery, but never remotely.
[02:58] <Curly_Q> I was spoiled by Windows GUI and now like to use the Command Prompt. It is much more rewarding to use.
[02:59] <Curly_Q> More homework but worth the effort.
[03:00] <Curly_Q> My server is Apache2 Headless. Nice machine. i386
[03:01] <Curly_Q> I am using Wily  version 15.10  <------<   Nice.
[03:04] <Curly_Q> The one thing I don't understand is that if I purchase an expensive 64 bit machine will I have to change all of the software installed to 64 bit?
[03:05] <Curly_Q> If I install a 64 bit Ubuntu server will the same software work?
[03:05] <mybalzitch> I'm mostly sure you can run 32bit software in a 64bit userland
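mybalzitch is right: a 64-bit Ubuntu can run 32-bit binaries via multiarch. A sketch of the usual steps ("somepackage" is a placeholder, not a real package name):

```shell
# Multiarch sketch: enable the i386 architecture on a 64-bit (amd64)
# Ubuntu install, then pull in 32-bit builds of packages.
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install somepackage:i386   # "somepackage" is a placeholder
```

The 32-bit libraries install alongside the 64-bit ones (under /usr/lib/i386-linux-gnu), so most existing software keeps working.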
[03:06] <Curly_Q> I am sure there are exceptions there.
[03:07] <Curly_Q> I use VBOX on all of my 32 bit Windows machines and run Debian and Ubuntu servers.
[03:07] <Curly_Q> I know that 64 bit box will work faster with VBOX.
[03:08] <Curly_Q> If there were a 128 bit machine, I would purchase it in a heart beat.
[03:10] <Curly_Q> Solaris and UNIX are a different story.
[03:15] <Curly_Q> The issue is that most programmers program with the current compilers, and those are 16, 32, or 64 bit. Programs vary.
[03:16] <Curly_Q> Anyway, Mybalzitch, thanks for the input. You have a strange nickname.  :)
[03:18] <Curly_Q> I suggest    sudo apt-get  install Scratch Them     hehe
[03:18] <patdk-lap> ya, use wake-on-lan or ipmi for poweroff/on
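For the record, wake-on-lan works by broadcasting a "magic packet": 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times (102 bytes total). Tools like `wakeonlan` or `etherwake` do this for you; a minimal sketch of the packet itself, assuming `xxd` is available and using a placeholder MAC:

```shell
# Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times.
MAC="00:11:22:33:44:55"                     # placeholder MAC address
mac_hex=$(printf '%s' "$MAC" | tr -d ':')
{
  printf 'ffffffffffff'                     # 6-byte 0xFF header
  for i in $(seq 16); do printf '%s' "$mac_hex"; done
} | xxd -r -p > wol.pkt
wc -c < wol.pkt                             # prints 102
# Sending it is usually just:  wakeonlan 00:11:22:33:44:55
# (package "wakeonlan"), as a UDP broadcast, customarily to port 9.
```

The target side needs WOL enabled on the NIC first (e.g. `sudo ethtool -s eth0 wol g`), and works from suspend (`sudo systemctl suspend`) or soft-off, not from a mechanical power cut.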
[03:18] <Curly_Q> Don't forget to    sudo apt-get update    it.
[03:19] <patdk-lap> heh?
[03:19] <Curly_Q> Patdk-lap interesting. I will Google that. Thanks.
[03:19]  * patdk-lap wonders where you can even purchase a 32bit machine
[03:20] <patdk-lap> it's been a rather long time since they made cpu's that didn't do 64bit
[03:21] <Curly_Q> Patdk I have a large collection of computers in my home basement. They just sit there.
[03:24] <Curly_Q> The nice thing about Ubuntu is that it still accommodates the older machines. If not, you can still download older versions of Linux or Ubuntu.
[03:26] <Curly_Q> I have used Red Hat Linux years ago. It was nice.
[03:27] <Curly_Q> It came with a floppy disk to install it.
[03:28] <Curly_Q> Partitioning the disk was fun though.
[03:30] <Curly_Q> The floppy disk was a DOS Windows disk. Strange with vmlinuz    file.
[03:33] <Curly_Q> Oh well, those were the old days.
[03:34] <Curly_Q> Patdk where are you from?
[03:34] <Curly_Q> I am from Massachusetts, U.S.A.
[03:35] <Curly_Q> I guess that everyone here is asleep.
[03:36] <Curly_Q> Have a nice day folks. Sleep well.
[07:34] <Hyllegaard> Hi.
[07:34] <Hyllegaard> My problem is that, having just installed Ubuntu and the OpenStack single server, I am not able to reach any of the IPs listed in the openstack status.
[08:16] <Hyllegaard> Hi. I am having problems accessing the IP addresses listed in openstack status, on a freshly installed single server.
[13:35] <melati> Assalamaulaikum
[14:12] <pmatulis> huh?
[14:29] <rbasak> pmatulis: it's a greeting. Not sure why he joined, greeted us and then left though.
[14:32] <pmatulis> rbasak: ok, TIL
[16:39] <bonzibuddy> hello
[16:39] <bonzibuddy> my ubuntu server 14.04 keeps printing stuff on the console asynchronously, when i'm logged in over ssh
[16:40] <bonzibuddy> "fatal: Read from socket failed: Connection reset by peer [preauth]"
[16:40] <bonzibuddy> seems to be related to ssh and potentially brute force attempts
[16:40] <bonzibuddy> ultimately i don't want those printed to my ssh session... it really messes up ncurses-based things
[16:41] <bonzibuddy> how do i disable that???
[16:44] <andol> bonzibuddy: Are you seeing that in a (physical) console or in a ssh terminal session?
[16:44] <bonzibuddy> andol: ssh terminal session
[16:46] <bonzibuddy> it seems to happen to any logged in user
[16:46] <andol> bonzibuddy: Hmm, in that case I'm not sure, but it might be an option to tune, or alternatively remove, the /dev/xconsole entry in /etc/rsyslog.d/50-default.conf
[16:48] <bonzibuddy> andol thx! I will look into that.  seems to be what I'm after
[16:50]  * andol is not entirely sure to what extent /dev/xconsole connects to ssh terminal sessions.
[16:51] <bonzibuddy> this server is running syslog-ng and seems to have similar rules
[16:51] <andol> Ahh, then it might make more sense :)
[16:51] <andol> I just assumed a default 14.04 install.
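For reference, on a stock Ubuntu install the /dev/xconsole pipe andol mentioned sits at the bottom of /etc/rsyslog.d/50-default.conf and looks roughly like the stanza below; commenting it out (as shown) and restarting rsyslog (`sudo service rsyslog restart`) stops the console spam. This is a sketch of the rsyslog default only; a syslog-ng setup has its own equivalent destination/log rule to edit instead.

```
# /etc/rsyslog.d/50-default.conf -- sketch of the stock stanza, disabled:
#daemon.*;mail.*;\
#	news.err;\
#	*.=debug;*.=info;\
#	*.=notice;*.=warn	|/dev/xconsole
```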
[17:16] <tyui> hi there
[17:17] <tyui> how do I check what happened to a process at a specific time?
[17:19] <Pici> tyui: What do you mean?
[17:20] <tyui> i've got a process that went down on saturday
[17:21] <tyui> i would like to know why that process went down like that
[17:21] <Pici> tyui: look at your logs.
[17:21] <tyui> where exactly ?
[17:21] <tyui> i can't find /var/log/messages on ubuntu
[17:23] <Pici> tyui: You might find something in /var/log/syslog  but your process might not be configured to log using syslog.
[17:24] <tyui> so where i can find it ?
[17:24] <Pici> tyui: it completely depends on what you are running.
[17:25] <tyui> it's a perl program
[17:26] <tyui> ok thanks
[17:26] <tyui> bye
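Pici's advice, sketched out: Ubuntu has no /var/log/messages; /var/log/syslog is the catch-all, with older entries rotated into compressed copies. Assuming the process is named "myworker" (a placeholder):

```shell
# Search the current and rotated system logs for the process name.
grep -i 'myworker' /var/log/syslog || true
zgrep -i 'myworker' /var/log/syslog.*.gz 2>/dev/null || true
# A process that dies silently is often the kernel OOM killer's doing:
grep -iE 'out of memory|oom' /var/log/kern.log 2>/dev/null || true
```

If the program never logs to syslog at all, it needs its own log file (or to be wrapped with `logger`) before there is anything to find.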
[19:08] <Blueking> TJ- hello again :)
[19:09] <Blueking> TJ- you talked about making a script to fix my net issues related to the DHCPDISCOVER/DHCPREQUEST/DHCPOFFER/DHCPNAK problem
[19:35] <TJ-> Blueking: oh, yes, how far did you get with that? Last I recall I suggested enabling debugging mode and capturing the log when the lease expires
[19:40] <RoyK> TJ-: I think he has a wireshark dump of it
[19:41] <TJ-> RoyK: it's the dhclient debug log I asked Blueking to collect
[19:41] <TJ-> dhclient fires events into a shell script and we can hook into those events to over-ride actions
[19:42] <RoyK> he should have that as well
[19:42] <RoyK> he gave me some log yesterday, but I was a bit tired
[19:46] <TJ-> Some terrible ISP there!
[19:54] <RoyK> TJ-: the interesting part is that I have the same ISP and my internet connection has been stable for >5Y
[19:54] <TJ-> right, but are you using static IP?
[19:54] <RoyK> so it may be the problem is somewhere else
[19:54] <RoyK> I have a static IP with DHCP
[19:54] <TJ-> also, are you seeing the same short DHCP renew times
[19:54] <RoyK> so has Blueking now
[19:55] <RoyK> I'm just using the router they gave me - he put that in bridge mode, so there's the diff
[19:55] <TJ-> right, and the issue is the ISP DHCP server is NACK-ing the renewal so the lease/address gets withdrawn on the client, then it asks for a fresh lease and gets the same IP back in a new lease
[19:55] <TJ-> I was wondering about router MAC registration being an issue
[19:55] <RoyK> they don't have proper ipv6 - so you need to setup 6rd with the router in bridge to make that work
[19:57] <TJ-> as I recall we were dealing with IPv4 only
[19:58] <RoyK> yeah, but the main reason to use bridge mode on that router thing is to make ipv6 work
[19:58] <RoyK> with 6rd
[19:59] <RoyK> I haven't tried it yet
[20:50] <coreycb> jamespage, ddellav, I'm fixing up the mitaka CA build failures for ryu and cinder (i386 issues).  I think I have both fixed but cinder might take a little longer since it has some new non-i386 failures on xenial.  I'm going to see if RC2+new deps fix those issues up.
[21:51] <rbasak> hallyn: quick question, not really work related. What's the recommended way for me to get a shell running in its own cgroup? Should I be using cgm like its manpage example or something else?
[21:51] <rbasak> (on Xenial)
[21:52] <rbasak> Google seems to suggest many things. It's not clear to me what is deprecated, etc.
[21:53] <hallyn> rbasak: good question.  i think we should ask pitti if there is a proper systemd way to do it
[21:53] <hallyn> cgm imo is the easiest way still, but since we're trying to drop cgmanager...
[21:54] <rbasak> Thanks, I'll ask in #ubuntu-devel since pitti isn't here.
[21:54] <hallyn> +1
[22:05] <Blueking> TJ-  back
[22:12] <noobadmin> hi, people, I need help bringing up a new bridge interface. I'm on 16.04 and I edited /etc/network/interfaces to add 'br0' using dhcp and set 'bridge_ports em0'; when I bring it up with 'ifup br0' it comes up, but I lose connectivity
[22:13] <noobadmin> and I can see on the router a lot of 'arp who-has' and 'arp reply' but nothing else... can somebody help me a bit? I don't know what else to check
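A common cause of exactly this symptom is the physical port keeping its own DHCP stanza (or its old address) alongside the bridge: only the bridge should hold the address. A minimal /etc/network/interfaces sketch, assuming the port is em0 and the bridge-utils package is installed:

```
auto em0
iface em0 inet manual
    # no address on the port itself; the bridge owns it

auto br0
iface br0 inet dhcp
    bridge_ports em0
```

If em0 still holds an address from before, flushing it (`sudo ip addr flush dev em0`) before `ifup br0`, or a reboot, usually restores connectivity.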
[22:14] <Blueking> can I pm u or u disabled it ?
[22:14] <Blueking> TJ- ?
[22:14] <noobadmin> I'm not sure, I'm not used to using IRC... let's try
[22:16] <JRWR> What would be the best method to combine the free space of 4 servers into one filesystem over a network? They all have different drive sizes but in total it would be 15TB, and I would like to be able to add new servers in at any time. No redundancy needed since it's bulk data that can be easily replaced
[22:17] <JRWR> I have experimented with aufs/nfs mounts and it was OK, but I found the rr modes were not very robust
[22:19] <sarnold> JRWR: sounds like ceph, gluster, or maybe hdfs (less likely, depends upon what you're doing with it)
[22:20] <JRWR> mostly just bulk media storage, nothing too fancy
[22:20] <sarnold> JRWR: note that ceph appears to be insanely picky about full storage targets. do not let that happen.
[22:20] <sarnold> JRWR: alright, skip researching hdfs then.
[22:20] <JRWR> gluster looked pretty good but the auth methods are lacking, noticed it was IP only
[22:21] <JRWR> I'm fine with that but how well does it handle load balancing the files across the systems
[22:22] <sarnold> ceph allows you to define maps that say which storage targets in which drives in which servers in which racks in which datacenters on which continents get your data
[22:22] <sarnold> it probably scales up to planets too but to my knowledge no one's tried
[22:23] <JRWR> lol
[22:23] <JRWR> Im not trying to scale that high, maybe 10 servers at max
[22:23] <sarnold> no interplanetary travel? oh well
[22:23] <JRWR> gluster looks like a bitch to configure
[22:24] <sarnold> JRWR: ceph feels like a decent fit, but it'll take you two or three days to work through the docs
[22:24] <sarnold> I haven't read all the gluster docs yet, it didn't feel as 'ready' as ceph to me as far as I have researched it
[22:24] <sarnold> the filesystem layer of gluster felt weak; apparently gluster's object store layer is decent though
[22:25] <JRWR> Well this is not for enterprise at all, right now im using mhddfs but its a little CPU heavy
[22:25] <JRWR> on top of NFS exports
[22:27] <sarnold> neat, I've never heard of that. seems like a funny storage design though.
[22:27] <JRWR> Greyhole works the same way
[22:36] <JRWR> looks like gluster can stripe files over servers, thats interesting
[22:37] <sarnold> so will ceph
[22:37] <sarnold> the fact that greyhole and .. uh the other one .. don't do striping is in fact pretty strange to me :)
[22:38] <JRWR> both are more file routers than anything
[22:38] <JRWR> they overlay all the filesystems on top of one another and route based on free space
[22:38] <sarnold> heh, interesting analogy
[22:38] <JRWR> so not really a filesystem
[22:39] <sarnold> I guess the nice thing is that if you lose a server you lose those specific files and nothing else; losing enough ceph nodes to run below your replication levels means you lose pretty much everything
[22:39] <sarnold> but that's why you tune your replication levels appropriately :)
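The replication level sarnold mentions is set per pool in ceph; a command sketch, with "mypool" as a placeholder pool name:

```shell
# Keep 3 copies of each object, and stay writable as long as 2 survive.
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2
```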
[22:42] <JRWR> I was thinking of just exporting everything over NBD
[22:42] <JRWR> and zfs the bitch up
[22:42] <JRWR> put small OSes on them and do science
[22:43] <sarnold> SCIENCE!
[22:43] <JRWR> lol
[22:43] <sarnold> zfs is -not- a cluster filesystem
[22:43] <JRWR> no
[22:43] <sarnold> you probably know that but I've got to say it
[22:43] <JRWR> but you can raid shit
[22:43] <JRWR> and nbd exports block devices over the network
[22:43] <JRWR> and it does support uneven raid
[22:48] <JRWR> nbd vs iscsi
[22:48] <JRWR> now I've really gone down the rabbit hole
[22:49] <sarnold> .. and -which- iscsi targets / initiators to use..
[22:49] <JRWR> oh noes!
[22:49]  * JRWR now has over 400 tabs open in chrome
[22:51] <JRWR> uses CHAP for auth
[22:51] <JRWR> holy shit
[22:51] <sarnold> yeah, these things often assume they're running on a trusted storage network
[22:51] <sarnold> different switches than the application network
[22:55] <JRWR> nope, all these guys are pretty much on the open internet
[22:56] <JRWR> so software firewalls for me :3
[22:57] <sarnold> ipsec or openvpn the things then :)
[22:57]  * sarnold adds another dozen tabs to JRWR's poor chrome
[22:57] <JRWR> thats what I had kinda planned on
[22:57]  * JRWR knows how to setup openvpn already :p
[22:58] <ndf> wheeyy what a time to walk in
[22:58] <ndf> I literally just got my openvpn working
[22:58] <ndf> =)
[22:58] <JRWR> so ill do tiny os installs (15GB) and the rest ill export over iscsi
[22:58] <JRWR> then use ZFS to raid all those bad boys together
[22:59] <ndf> you raiding over the internet through vpn?
[22:59] <ndf> didn't even know you could do that
[22:59] <ndf> lol
[22:59] <sarnold> "can" and "should" are different things of course ;)
[22:59] <ndf> hah
[22:59] <sarnold> JRWR: ooh ooh ooh, tahoe-lafs. :)
[23:00] <JRWR> I saw that
[23:00] <sarnold> (though to be honest I don't know how many people use it.)
[23:00] <JRWR> ndf I'm taking the storage space of a few servers and combining it
[23:00] <JRWR> making a poor man's SAN
[23:00] <ndf> well I suppose it fills the gap in the market for realtime remote backup, but it's gotta cost a lot of bandwidth
[23:01] <JRWR> meh
[23:01] <JRWR> they are in the same datacenter
[23:01] <ndf> oh ok
[23:02] <ndf> wellllll wouldn't it be cost effective to shuffle the racks closer and share a hdd cage?
[23:02] <ndf> lol
[23:02] <ndf> where's the fun in that tho eh
[23:03] <JRWR> there are some nifty tools I found
[23:03] <JRWR> like mhddfs, which works like a file router based on free space
[23:03] <JRWR> thats what I'm currently using
[23:05] <JRWR> likes to get touchy when you abuse it
[23:07] <sdeziel> JRWR: with ZFS you can always use send/receive to copy your data over the VPN link. Not real-time though.
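sdeziel's send/receive suggestion looks roughly like this (pool, dataset, and host names are placeholders):

```shell
# Snapshot a dataset, then stream the snapshot to another box over ssh.
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs receive backup/data
# Later runs can send only the delta between two snapshots:
#   zfs send -i tank/data@nightly tank/data@nightly2 | \
#       ssh backuphost zfs receive backup/data
```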
[23:07] <JRWR> yeah, I'm looking for real time, that's why I was exploring layered filesystems
[23:07] <JRWR> AuFS looks pretty nice
[23:08] <sdeziel> JRWR: for real-time replication, DRBD is pretty impressive and since it runs over TCP, it's easy to tunnel over VPN
[23:08] <sarnold> aufs/overlayfs feel like a stream of issues :/
[23:09] <JRWR> looking for combined, not redundancy
[23:09] <sdeziel> like RAID0 over the network?
[23:09] <JRWR> pretty much
[23:09] <JRWR> that's why i was going to use iSCSI and do soft-raid
[23:10] <sdeziel> should work (make sure to use write-mostly for the iSCSI one)
[23:13] <sdeziel> err, write-mostly only makes sense for RAID1 ... nvm that part
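The iSCSI + soft-raid plan in shell form, with placeholder device names. A note on the "add servers later" requirement: mdadm's RAID0 handles unequal member sizes, but `--level=linear` gives plain concatenation and, unlike RAID0, can be grown with new disks afterwards.

```shell
# Stripe four iSCSI-attached disks (placeholder names) into one block
# device, then put a filesystem on it. No redundancy: lose one disk,
# lose the whole array.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
# linear alternative that can be extended later:
#   mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sd[b-e]
#   mdadm --grow /dev/md0 --add /dev/sdf
```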