[01:16] <adam_g> smoser, around?
[06:49] <jdstrand> hallyn: hrm.. both sarnold and I are now pinned on the saucy version
[06:49] <jdstrand> but I'd be happy to try a new version
[07:06] <lordievader> Good morning.
[07:08] <sarnold> jdstrand: did you notice mdeslaur posted qemu updates to the security proposed ppa? I wondered if we were tripping over something that might have been fixed in the giant block-level auditing...
[07:17] <jdstrand> sarnold: I saw the call for testing. I did not do it yet. I hadn't considered that it would fix our issues because I figured the fixes were likely in 2.0, but if mdeslaur patched trusty too, probably worth trying
[07:19] <sarnold> jdstrand: aww. I hadn't considered that they might not have affected trusty. now I'm disappointed.
[08:17] <jamespage> gnuoy, https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1322498
[08:17] <jamespage> building here - https://launchpad.net/~ceph-ubuntu/+archive/edgers/+build/6034662
[08:17] <gnuoy> jamespage, thanks
[09:26] <cocoa117> when you use ProxyCommand to jump through hops, you have to have authentication to those hop machines on your local machine (public-key wise), otherwise you can't get onto the next one. Is there any way to let the machine you connect to authenticate this, rather than the local machine that initialised the connection?
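What cocoa117 is after is usually solved with agent forwarding on top of ProxyCommand. A minimal ~/.ssh/config sketch, assuming hypothetical hostnames jump1.example.com and target.internal:

```
# ~/.ssh/config -- hostnames are placeholders.
# ForwardAgent lets each hop authenticate the next one against your
# LOCAL ssh-agent, so no private key ever lives on the hop machines.
Host jump1
    HostName jump1.example.com
    ForwardAgent yes

Host target
    HostName target.internal
    ProxyCommand ssh -W %h:%p jump1
```

With this, `ssh target` tunnels through jump1, and the intermediate host asks your local agent to sign the authentication challenge instead of needing its own copy of your key.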
[09:34] <hxm> hello
[09:36] <lordievader> o/
[09:44] <morph-> could anyone here help me with my ubuntu server? i ended up somehow installing a bunch of kernel images and headers and now i'm stuck booting into this one particular kernel. I just want to revert back to my old one but it's a remote server so I can't actually see grub when it comes up
[09:52] <mardraum> morph-: you have a remote server without any out-of-band management that can't boot?
[09:52] <morph-> i can go into rescue mode or something
[09:53] <morph-> but I can't KVM or anything to actually mess with it while it's booting
[09:53] <mardraum> how are you able to get it into "rescue" mode
[09:53] <morph-> go into OVH manager
[09:53] <morph-> select rescue
[09:53] <oro> hi all, anyone has a workaround for trusty's network device renaming bug? (https://bugs.launchpad.net/ubuntu/+source/biosdevname/+bug/1284043)
[09:54] <morph-> from netboot
[09:54] <morph-> then reboot the server
[09:54] <morph-> its a server from ovh.ie
[09:54] <oro> i am aware of udev's "predictable device naming" feature, but it's not predictable at all.
[09:54] <mardraum> do they offer documentation about what that rescue mode does?
[09:54] <morph-> yea
[09:54] <morph-> 1 sec
[09:54] <mardraum> I don't want a link, I want to know if you have read it :p
[09:55] <morph-> mardraum http://help.ovh.com/RescueMode
[09:55] <morph-> ive used it plenty of times
[09:55] <morph-> i dunno how to fix this
[09:55] <morph-> trust me, i didn't come in here and ask first
[09:56] <mardraum> if you can get access to the shell of your machine, can you remove the problem kernel packages?
[09:56] <morph-> yeah im ssh'd in my server right now
[09:56] <morph-> it works
[09:56] <morph-> its just this kernel keeps lagging me out
[09:56] <morph-> but i dunno how to undo all of this
[09:57] <oro> i have already put 80-net-name-slot.rules and 80-net-setup-link.rules in place, deleted them, had them auto-recreated, and also edited 70-persistent-net.rules, but it's still unpredictable. sometimes my devices get renamed to "rename5"
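A workaround often used for this class of bug is pinning each interface name to its MAC address, so udev never falls back to the racy temporary "renameN" names. A sketch, with a placeholder MAC:

```
# /etc/udev/rules.d/70-persistent-net.rules -- the MAC address is a placeholder
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

Booting with `biosdevname=0` on the kernel command line is the other commonly suggested workaround for the linked biosdevname bug.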
[09:57] <mardraum> remove the kernel packages
[09:57] <mardraum> leave the ones that WORKED
[09:57] <morph-> mardraum http://pastebin.com/w9yNncTr
[09:57] <morph-> thats from grub-mkconfig
[09:57] <mardraum> ok?
[09:58] <morph-> Ubuntu, with Linux 3.8.0-41-generic
[09:58] <morph-> is the one i want
[09:58] <morph-> do i just like
[09:58] <mardraum> remove the others
[09:58] <morph-> go into /boot
[09:58] <morph-> and remove all this stuff?
[09:58] <mardraum> well, I would use apt-get
[09:58] <mardraum> you have used it before right?
[09:59] <mardraum> how did you install these kernels....
[10:00] <morph-> some with apt-get
[10:00] <morph-> some with dpkg
[10:00] <mardraum> that's ok then
[10:01] <morph-> i have no idea what the file names were though
[10:02] <morph-> they're not like /boot/initrd.img-3.13.5-vanilla
[10:02] <morph-> linux-image-3.13.5-vanilla?
[10:02] <morph-> is that the format maybe?
[10:02] <mardraum> dpkg --list
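The `dpkg --list` suggestion, expanded into the usual cleanup sequence. The 3.13.5-vanilla version string comes from morph-'s own guess above and is only an example:

```shell
# List every installed kernel image/header package
dpkg --list 'linux-image-*' 'linux-headers-*'

# Purge the unwanted ones (example version string), keeping the
# known-good 3.8.0-41 and whatever kernel is currently running
sudo apt-get purge linux-image-3.13.5-vanilla linux-headers-3.13.5-vanilla

# Rebuild the grub menu so the removed kernels no longer appear
sudo update-grub
```

apt-get refuses to remove the currently running kernel package without an extra confirmation, which is the warning morph- runs into below.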
[10:03] <morph-> yay
[10:03] <morph-> okay one more thing
[10:03] <morph-> i know when it gets down to it i wont be able to remove this kernel im booted into
[10:04] <morph-> it'll throw an error telling me it can corrupt everything blah blah
[10:04] <morph-> how do i get past that?
[10:04] <mardraum> I've not seen that sorry
[10:08] <morph-> mardraum http://puu.sh/8XEQu/6920cfaa56.png
[10:17] <morph-> marcoceppi
[10:17] <morph-> mardraum
[10:17] <morph-> I got it :)
[10:21] <morph-> thanks a ton mardraum
[10:44] <jamespage> zul, https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1322568
[10:44] <jamespage> :-)
[10:44] <jamespage> zul, not quite sure why I did not see that pre-release
[10:44] <jamespage> I'm pretty sure tempest does lots of those
[10:47] <caribou> gnuoy: just following up to your answer in #juju as it pertains to the nova-cloud-controller charm
[10:48] <gnuoy> caribou, hi, I'm in an upgrade-charm debug session for nova-cloud-controller as I type
[10:51] <caribou> gnuoy: in the compute_changed() function I just printed the migration_auth that comes from "relation_get('migration_auth_type')
[10:51] <caribou> gnuoy: when running "juju upgrade-charm", it prints None as the value; I must be missing something
[10:51] <gnuoy> caribou, you need to give it the relid and unit id
[10:52] <gnuoy> hang on, I'll paste bin
[10:52] <caribou> gnuoy: that's what I thought; I got misled by the fact that compute_changed doesn't ask for one; I can change that
[10:53] <gnuoy> kk
[10:54] <caribou> gnuoy: you can still paste what you have, it'll help me
[10:56] <gnuoy> sure, one sec
[11:02] <gnuoy> caribou, sorry, I lost connectivity for a bit there. http://paste.ubuntu.com/7505148/
[11:03] <caribou> gnuoy: np
[11:10] <gnuoy> caribou, when I run that I'm only getting the private address of the compute host so I guess the compute host is not setting anything when the relation is joined
[11:10]  * gnuoy checks
[11:13] <gnuoy> caribou, do you have enable-live-migration enabled ?
[11:13] <gnuoy> 'migration_auth_type' is not set by the compute node if enable-live-migration is not True
[11:14] <caribou> gnuoy: well, my config file does set it
[11:14] <caribou> gnuoy: maybe I should explain what I try to achieve instead
[11:14] <gnuoy> sure
[11:16] <caribou> gnuoy: I've fixed nova-cloud-controller & nova-compute to transfer authorized_keys/known_hosts files in multi-lines
[11:17] <caribou> gnuoy: I'll get you the Merge Reqs:
[11:18] <caribou> gnuoy: https://bugs.launchpad.net/+branch/~louis-bouchard/charms/precise/nova-cloud-controller/lp1313602-multiline-known-hosts
[11:20] <caribou> jamespage suggested rerunning the hooks for relations in order to get that multi-line setup applied when we upgrade the charm
[11:20] <caribou> gnuoy: my mistake was to suppose that all relations were available in 'upgrade-charm'.
[11:21] <gnuoy> caribou, they are all available, what makes you think they aren't ?
[11:22] <caribou> gnuoy: well, I suppose they are if I explicitly call them by rid; in relation hooks there's no need for that apparently
[11:22] <caribou> if I understand it correctly
[11:23] <caribou> gnuoy: I mean I do not need to supply the relation-id in a relation hook when doing 'relation-get'
[11:23] <gnuoy> caribou, you don't need to explicitly set the relation id when you're in the context of that relation. But in upgrade-charm you're not in the context of any relation so it needs to be set explicitly
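gnuoy's point — outside a relation hook there is no implicit context, so rid and unit must be passed explicitly — can be sketched as follows. The `relation_ids`/`related_units`/`relation_get` names mirror the charmhelpers.core.hookenv API, but here they are stubbed against invented data so the sketch is self-contained:

```python
# Fetching relation data outside a relation hook (e.g. from upgrade-charm):
# with no implicit context, the rid and unit must be given explicitly.
# The three helpers below mirror charmhelpers.core.hookenv but are stubbed
# against fake data for illustration.

FAKE_RELATIONS = {
    'cloud-compute:0': {
        'nova-compute/0': {'migration_auth_type': 'ssh',
                           'private-address': '10.0.0.5'},
    },
}

def relation_ids(reltype):
    # hookenv.relation_ids('cloud-compute') -> ['cloud-compute:0', ...]
    return [rid for rid in FAKE_RELATIONS if rid.startswith(reltype + ':')]

def related_units(rid):
    # hookenv.related_units('cloud-compute:0') -> ['nova-compute/0', ...]
    return list(FAKE_RELATIONS[rid])

def relation_get(attribute, unit, rid):
    # hookenv.relation_get(attribute, unit=..., rid=...)
    return FAKE_RELATIONS[rid][unit].get(attribute)

def migration_auth_types():
    """Collect migration_auth_type from every compute unit by walking every
    relation id and unit explicitly, as an upgrade-charm hook would have to."""
    found = {}
    for rid in relation_ids('cloud-compute'):
        for unit in related_units(rid):
            found[unit] = relation_get('migration_auth_type', unit, rid)
    return found
```

In a real upgrade-charm hook the stubs disappear and the same double loop runs against live relation data.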
[11:24] <caribou> gnuoy: sorry for all those nOOBs assumptions, I'm just getting into those charms mechanisms
[11:24] <gnuoy> caribou, don't apologise, no problem at all
[11:25] <YamakasY> how can I make my mirror smaller ? it's 200GB!
[11:38] <YamakasY> anyone ?
[11:41] <gnuoy> YamakasY, have you asked in #ubuntu-mirrors and looked at https://wiki.ubuntu.com/Mirrors for expected mirror sizes?
[11:42] <gnuoy> " The Ubuntu archive, as of 2013-04-04, uses about: 642GB of disk space for the Ubuntu package archive. "
[11:49] <YamakasY> gnuoy: not looked there yet
[11:51] <caribou> gnuoy: maybe I misunderstood what jamespage asked for
[11:52] <jamespage> caribou, nope- the conversation above looks on the right track :-)
[11:52] <caribou> jamespage: thanks for the confirmation
[11:53] <caribou> jamespage: there is also a dependency on the order of the upgrade: nova-cloud-controller must be upgraded first to make the new relations available to nova-compute
[11:54] <jamespage> caribou, actually I don't think that will matter
[11:54] <caribou> jamespage: well, if nova-compute runs & no indexed relation is there, it will do nothing
[11:54] <jamespage> caribou, if you do it the other way around, nova-cc will set the data and the nova-compute nodes will just pickup the new relation format
[11:54] <jamespage> caribou, yes
[11:55] <caribou> hence nova-cc needs to be at the latest version first
[11:55] <caribou> jamespage: I mean it will not break anything, but just not use the multi-line format
[11:56] <caribou> so someone not knowing about the specific ordering requirement will not get a functional live-migration ssh key setup after the upgrade
[11:56] <caribou> if nova-compute is upgraded first
[12:29]  * zul shakes his fist at tempest
[12:38] <oro> anyone has a workaround for trusty's network device renaming bug? (https://bugs.launchpad.net/ubuntu/+source/biosdevname/+bug/1284043)
[12:54] <caribou> jamespage: gnuoy: running compute_changed() in a non-relation hook context is rather difficult:
[12:54] <jamespage> caribou, there should be lots of examples of how to do that already in the nova-cloud-controller charm
[12:54] <caribou> many of the functions used assume that all relations are available
[12:55] <gnuoy> caribou, note that those which need to be called outside of a relation context allow the rid to be passed in
[12:56] <gnuoy> def compute_joined(rid=None, remote_restart=False):
[12:56] <caribou> gnuoy: I fixed many of them, but the call to ssh_compute_add fails because it calls helpers that expect some environment variable to be there
[12:56] <caribou> gnuoy: yeah, i've done all of this, the whole relation get/set is fixed accordingly
[12:58] <caribou> gnuoy: here is an example : http://paste.ubuntu.com/7505499/
[13:00] <gnuoy> caribou, just to be pedantic "all relations are available" in all hook contexts. And there is no hook context where they're all available without having to set a rid for ones outside of the current context.
[13:00] <caribou> gnuoy: agreed, that why I added rid & uid to the call so I have them, that works
[13:00] <gnuoy> kk
[13:02] <caribou> gnuoy: it's the call to ssh_directory_for_unit which calls remote_unit that expects $JUJU_REMOTE_UNIT to be set; maybe I should change that to use a unit passed as argument
[13:02] <caribou> gnuoy: since I know it already
[13:02] <gnuoy> sounds good to me
[13:03] <caribou> gnuoy: just that the fix becomes more intrusive than I expected (or rather than I'm used to)
[13:03] <gnuoy> yeah, I think you've been unlucky
[13:03] <caribou> gnuoy: from the look of it, the current compute_joined was written to be called in a relation hook only
[13:04] <caribou> gnuoy: I must change it to be called in any context
[13:04] <caribou> gnuoy: I'm fine with it
[13:04] <gnuoy> caribou, no, it is called in other relation contexts already, hence the ability to pass in the rid
[13:05] <gnuoy> shared-db-relation-changed calls it for one
[13:07] <caribou> gnuoy: it calls compute_joined, not compute_changed
[13:07] <caribou> compute_changed is argumentless atm
[13:07] <gnuoy> caribou, ah, you said compute_joined in your previous comment "the current compute_joined was written to be called in a relation hook only"
[13:07] <caribou> gnuoy: oops, sorry :-/
[13:08] <gnuoy> np :)
[13:08] <caribou> ok, now that I've got the blessing of people who know, let's get that working :-)
[13:51] <Werkena> Hi, how do I make sure that whenever a service, e.g. apache, stops, it gets restarted again... apart from using cron to check on a timely basis?
[13:55] <rberg_> Werkena: upstart can restart processes that crash
[13:57] <Werkena> rberg: Can you take a scenario where the sysadmin wants to... e.g. just unplugging a wifi usb AP and replugging it makes hostapd freeze/stop, so I have to restart it manually... or if somehow the usb wifi is plugged in after the hostapd service has already started, it wouldn't restart itself... so I want it for that kind of scenario...
[14:00] <rberg_> for hardware events I would look at writing a udev rule
[14:14] <Werkena> rberg: I think I understand you now. Should I just include the word "respawn" in the init script (for hostapd) like in this post... https://forums.plex.tv/index.php/topic/109449-respawn-on-crash/
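For the record, `respawn` is an upstart stanza rather than an /etc/init.d sysvinit directive, so it belongs in a job file under /etc/init/. A sketch of such a job for hostapd, where the exec line and config path are assumptions:

```
# /etc/init/hostapd.conf -- upstart job sketch; paths are assumptions
description "hostapd access point daemon"

start on runlevel [2345]
stop on runlevel [!2345]

# restart the daemon if it dies, but give up after 10 crashes in 5 seconds
respawn
respawn limit 10 5

exec /usr/sbin/hostapd /etc/hostapd/hostapd.conf
```

For the "adapter plugged in after boot" case, a udev rule (as rberg_ suggests) can `start` this job when the device appears.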
[14:32] <klander> hey guys. what's the easiest way to port your deployed apps to apache 2.4 from apache 2.2?
[14:33] <klander> this blog mentions automated deployments but i really don't know what he's talking about: http://www.justgohome.co.uk/blog/2014/04/new-in-14-04-apache.html
[14:33] <klander> does anyone know what he's referring to?
[14:41] <TJ-> klander: virtualised instances using deployment and configuration-management tools: Chef, Puppet, Salt, Ansible, etc.
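Independent of the deployment tooling, the change that bites most 2.2 to 2.4 ports is the new authorization syntax from mod_authz_core. A sketch of the typical edit:

```
<Directory /var/www/example>
    # Apache 2.2 style:
    #   Order allow,deny
    #   Allow from all
    # Apache 2.4 equivalent:
    Require all granted
</Directory>
```

Loading mod_access_compat lets the old 2.2 directives keep working during the transition.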
[16:17] <caribou> gnuoy: jamespage: FYI, looks like I got the compute_changed() to work in non-relation hooks (i.e. upgrade-charms)
[16:17] <caribou> I'll give it more testing next Monday & fix the unit tests
[16:17] <gnuoy> caribou, excellent
[16:17] <caribou> gnuoy: jamespage: your help was very valuable, thanks for it
[16:17] <gnuoy> np
[16:18] <jamespage> caribou, yw
[16:20] <smoser> kirkland, bummer: https://bugs.launchpad.net/cloud-init/+bug/1322692
[16:53] <adam_g> smoser, eeek https://bugs.launchpad.net/cloud-init/+bug/1316475
[16:55] <smoser> adam_g, bah.
[16:56] <smoser> good debugging.
[16:57] <smoser> adam_g, you can just disable that datasource
[16:57] <smoser> thats the easiest thing to do .
[16:58] <adam_g> smoser, yeah..
[17:03] <adam_g> smoser, how do i disable it?
[17:04] <smoser> dpkg-reconfigure cloud-init will allow you to select
[17:05] <smoser> or you can just edit /etc/cloud/cloud.cfg.d/90_dpkg*
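The file smoser mentions holds the datasource list chosen by `dpkg-reconfigure cloud-init`. A sketch of what the edit looks like, assuming the remaining datasources are the ones actually in use:

```
# /etc/cloud/cloud.cfg.d/90_dpkg.cfg
# Only probe these datasources; drop the one that hangs from the list.
datasource_list: [ NoCloud, ConfigDrive, None ]
```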
[17:11] <smoser> adam_g, that just plain sucks
[17:24] <adam_g> smoser, yeah. it's still not clear to me why it's happening on a small percentage of our servers and not others. I'm happy to test whatever you come up with.
[17:29] <smoser> adam_g, well, it would not hang if either:
[17:29] <smoser> a.) there was no ttyS1
[17:29] <smoser> b.) there was data to read on the ttyS1
[17:29] <smoser> c.) the read failed.
[19:04] <kirkland> smoser: okay...suggestions?
[19:06] <smoser> no good suggestions at the moment
[19:22] <vonsyd0w> Hello! I'm trying to get my dhcp server to dynamically update my dns server with hostnames, but dhcpd keeps spitting out these errors: dhcpd: Unable to add forward map from $HOSTNAME. to $IP: connection refused - Anyone familiar with it?
[19:24] <sarnold> vonsyd0w: 'connection refused' could be generated by a firewall on the dhcp machine, a firewall on the dns machine, or the dns server not being configured to accept connections on that IP/port
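A few checks that narrow down which of sarnold's three cases applies — the nameserver hostname here is a placeholder:

```shell
# Is the DNS server answering at all?
dig @ns.example.com example.com +short

# Is TCP port 53 reachable? (dynamic DNS updates may use TCP)
nc -zv ns.example.com 53

# If iptables on either box is dropping the traffic, allow DNS through:
sudo iptables -A INPUT -p udp --dport 53 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 53 -j ACCEPT
```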
[19:29] <vonsyd0w> firewall on the dhcp! wow forgot
[19:30] <vonsyd0w> let me check now...
[19:53] <kingbeowolf> how do you guys control your raid?
[19:55] <dcosnet> telekinetics
[19:55] <dcosnet> O:-)
[19:55] <rberg_> with mdadm.. how else?
[19:55] <kingbeowolf> so no hardware?
[19:55] <kingbeowolf> im looking at a 4 disk ssd raid
[19:55] <kingbeowolf> maybe 5
[19:56] <rberg_> I have used some hardware.. I find megacli obtuse
[19:57] <kingbeowolf> what do you think of this? http://www.amazon.com/gp/product/B004JPHAF0/ref=s9_wish_co_d0_g147_i4?ie=UTF8&colid=12KKAP4FH9RM4&coliid=I3TX9VQI5ZJ53U&pf_rd_m=ATVPDKIKX0DER&pf_rd_s=typ-top-left-1&pf_rd_r=1ZG25XSSJ2F44974WDGZ&pf_rd_t=3201&pf_rd_p=1780082482&pf_rd_i=typ01
[19:59] <rberg_> personally I would stick to software raid for 4 or 5 disks
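The mdadm route rberg_ favours, sketched for a 4-SSD stripe — device names are examples, and note RAID 0 has no redundancy:

```shell
# Create a 4-disk RAID 0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Compare array vs. single-member read throughput to locate a bottleneck
sudo hdparm -t /dev/md0
sudo hdparm -t /dev/sda

# Array state and chunk size
cat /proc/mdstat
```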
[19:59] <kingbeowolf> i think i have a bottleneck somewhere with this 4-disk raid 0
[19:59] <kingbeowolf> using mdadm
[20:00] <vonsyd0w> sarnold, that was it! A firewall issue. I forgot my dhcp server (my edgemax router) had explicit deny rules set. I had to create an allow rule for port 53, its working now
[20:02] <sarnold> vonsyd0w: oh! nice :D
[20:02] <sarnold> vonsyd0w: thanks for reporting back, I love learning what fixes things :)
[20:02] <fridaynext> has anyone used this before? http://sourceforge.net/projects/automysqlbackup/?source=dlp
[20:02] <fridaynext> I'd like to set up a cron job to back up my mysql DB's, and that looks to fit the bill, but it hasn't been updated in over a year.
[20:05] <remix_tj> fridaynext: a colleague of mine implemented this, works well, dumps when requested.
[20:06] <fridaynext> remix_tj: recently?
[20:06] <remix_tj> last july
[20:07] <fridaynext> remix_tj: meh, close enough. it's not like commands to back up a db have changed in 6 months.
[20:07] <fridaynext> mysqldump -u me -pblah, etc.
[20:07] <remix_tj> oh, yeah
[20:07] <fridaynext> I'd write the scripts myself, but I'm too lazy.
[20:07] <[[lutchy]]> What's wrong with just a routine mysql dump?
[20:07] <vonsyd0w> fridaynext, Percona XtraBackup is another option: http://www.percona.com/doc/percona-xtrabackup/2.1/
[20:08] <vonsyd0w> I haven't used it, but it seems well maintained
[20:08] <vonsyd0w> and good docs
[20:08] <remix_tj> automysqlbackup also sends report, helps with a lot things for lazy sysadmins
[20:08] <[[lutchy]]> hmmmm...
[20:08] <remix_tj> and you know, you're not a senior sysadmin until you're lazy
[20:08] <fridaynext> remix_tj: have i been upgraded to senior??? nice!
[20:09] <[[lutchy]]> No offense, but it's like another piece of software to keep up to date.
[20:09] <fridaynext> really all I want to do is back up the db, zip it, and move it into the user's owncloud folder (daily).
[20:09] <remix_tj> [[lutchy]]: your code has to be tested and updated by yourself, drilling down problems when issues appear
[20:09] <fridaynext> dammmit, i'm just going to write the cron job for it.
[20:09] <fridaynext> no reason to install something else.
[20:10] <remix_tj> when the code is written by others for many people, it's well tested (hope) and problems are eventually drilled down by more people than yourself
[20:10] <[[lutchy]]> Not if I make it simple...
[20:11] <remix_tj> yeah
[20:11] <remix_tj> simple means 2 lines of code
[20:11] <[[lutchy]]> I like simplicity when it comes to sysadmin...
[20:11] <remix_tj> more than two is already on the road for "complex" :-P
[20:11] <[[lutchy]]> remix_tj, Yeah, only time it breaks, is bash or something else changed :-p
[20:12] <remix_tj> yup :-)
[20:13] <[[lutchy]]> In my experience, I've seen some folks come up with complicated solutions to solve a simple issue (not sure if they are trying to impress someone or not)
[20:15] <Joe_knock> Hello
[20:18] <[[lutchy]]> Hi
[20:24] <Joe_knock> Would it be possible to use wubi.exe to install ubuntu server alongside winXP? Using ubuntu12.04 server
[20:25] <[[lutchy]]> That 'ubuntu12.04' part confused me a bit...
[20:27] <Joe_knock> [[lutchy]]: 12.04 is the version of ubuntu server that I have.
[20:28] <[[lutchy]]> Joe_knock, If I understood correctly, you want to install Ubuntu Server under ubuntu 12.04 along side winxp ?
[20:29] <[[lutchy]]> I have about 4 Ubuntu Server VMs, but I would assume that the installer, which I can test in a VM, will automatically recognize WinXP :/
[20:30] <[[lutchy]]> wubi.exe to install ubuntu, using ubuntu12.04? wubi.exe is a Win program..
[20:32] <lordievader> Joe_knock: Wubi hurts performance, not really what you want on a server.
[20:33] <Joe_knock> lordievader: I just need it for home-testing and playing around with it. It isn't a production server.
[20:34] <Joe_knock> [[lutchy]]: That is my point. I want to use wubi.exe to install ubuntu server alongside WinXP
[20:34] <lordievader> Still, wubi is ugly and should die. But that is just my personal opinion.
[20:35] <Joe_knock> lordievader: I respect your opinion, but I am just clarifying whether it is possible or not. I am restrained in that I cannot make the system run ubuntu on its own.
[20:36] <[[lutchy]]> From what I understood, wubi was just an easier way to install Ubuntu under Windows XP?
[20:36] <[[lutchy]]> Not Windows XP, but under any Windows version that is supported
[20:36] <lordievader> Hmm, haven't done a wubi install in years. But I suppose that if it is possible to install the desktop the server part shouldn't be a problem.
[20:37] <Joe_knock> [[lutchy]],
[20:37] <Joe_knock> [[lutchy]]: Yes that is correct. It installs "within" windows. so ubuntu kind of lives on top of windows
[20:37] <Joe_knock> lordievader: Thanks
[20:38] <guntbert> Joe_knock: just why don't you use virtual machines?
[20:38] <[[lutchy]]> hehehe
[20:39] <[[lutchy]]> I have Windows Server 2012 R2, with the latest patches to Hyper V
[20:40] <[[lutchy]]> I have CentOS, and 2 Ubuntu VM Server
[20:40] <[[lutchy]]> My http://projects.lhprojects.net/ and http://scm.lhprojects.net/. both running from the same server and from VMs
[20:40] <Joe_knock> guntbert: The tower I intend using only has 512MB of memory
[20:41] <[[lutchy]]> Ok...
[20:41] <[[lutchy]]> Well, IIRC, 14.04 only supports 64-bit?
[20:42] <WACOMalt> Hey folks. I just checked munin on a largely inactive server, and I am seeing a lot of postfix activity http://i.imgur.com/x49HlKk.png
[20:42] <WACOMalt> should I be worried?
[20:42] <[[lutchy]]> Joe_knock, Let me guess, no USB, no CD-ROM ?
[20:42] <guntbert> Joe_knock: thats really low, indeed
[20:42] <lordievader> !munin
[20:42] <lordievader> Hmm, thought there was something with munin.
[20:43] <Joe_knock> [[lutchy]]: It has USB and a CD-ROM drive, but I will install using a virtual drive.
[20:43] <WACOMalt> well anyways, this server has never ever been used (intentionally) as a mail server.
[20:43] <WACOMalt> So is this a sign of someone using it against my will?
[20:43] <Joe_knock> guntbert: Which is why I need full resources to try installing a couple of tools  I want to test
[20:43] <Joe_knock> WACOMalt: if that is the case, why is postfix installed on it?
[20:44] <WACOMalt> I would venture to guess it came with it
[20:44] <[[lutchy]]> Postfix by default, is pretty secure
[20:44] <guntbert> Joe_knock: I understand but cannot contribute anything to your wubi question - I never used it, not even in a VM
[20:44] <WACOMalt> how can I check if anyone has been using it?
[20:44] <[[lutchy]]> All my Ubuntu installations were basic with just openSSH
[20:45] <Joe_knock> WACOMalt: It looks like only 6 mails have been deferred. Unless I am misinterpreting the graph, it isn't a lot compared to previous activity
[20:45] <WACOMalt> can I check what user initiated those?
[20:45] <Joe_knock> guntbert: I will test it out in a couple of minutes time.
[20:46] <guntbert> Joe_knock: Good luck!
[20:46] <Joe_knock> WACOMalt: yes, you should be able to see activity of sent/received emails. Do you have access to the server?
[20:46] <WACOMalt> I do, but have no experience with postfix to know where to look
[20:47] <Joe_knock> WACOMalt: Are you comfortable with the commandline?
[20:47] <Joe_knock> guntbert: Thanks
[20:47] <[[lutchy]]> The mail log will not tell you who initiated it in terms of a *user*... It will tell you the host
[20:47] <WACOMalt> yes, provided I can get some help as to what to type :P
[20:47] <[[lutchy]]> Who asked the Mail to be relayed
[20:47] <WACOMalt> that works [[lutchy]]
[20:47] <[[lutchy]]> Unless you have SASL auth
[20:47] <[[lutchy]]> Then you will get a *user*
[20:48] <WACOMalt> ¯\(°_o)/¯
[20:48] <WACOMalt> I never set that up, so I guess I wont get a user
[20:48] <WACOMalt> so how do I check?
[20:48] <[[lutchy]]> How do you want the E-Mail server to be setup ?
[20:48] <Joe_knock> [[lutchy]]: Wouldn't all that is needed to identify the person be the email address that tried to send the email? Based on the graph, it seems somebody, or up to 6 people, tried to send a total of 6 emails that were "deferred"
[20:49] <[[lutchy]]> Postfix is very well documented and features typically work as expected
[20:49] <WACOMalt> [[lutchy]], not at all preferably
[20:49] <lordievader> WACOMalt: Have you by any change installed logwatch?
[20:49] <WACOMalt> nope
[20:49] <[[lutchy]]> Joe_knock, no, I can telnet to your server and enter whatever MAIL FROM person I like
[20:50] <Joe_knock> [[lutchy]]: true, but it may be possible that some end-user is trying to send an email.
[20:50] <WACOMalt> so, how do I check?
[20:50] <[[lutchy]]> logwatch is really good
[20:51] <Joe_knock> WACOMalt: Do you want us to check what commands to type in for you? lol
[20:51] <[[lutchy]]> It's very useful... I'm thinking of another program to aggregate results from different servers.. I found logstash
[20:52] <Joe_knock> WACOMalt: Perhaps this might help -- http://www.postfix.org/postqueue.1.html
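To expand on the postqueue pointer — the queue listing and the mail log are the two places to look:

```shell
# List deferred mail: queue ID, size, sender, recipients, and the
# reason each message is stuck (equivalent to the old 'mailq')
postqueue -p

# Inspect one message by its queue ID (replace QUEUEID)
postcat -q QUEUEID

# Delivery history lives in the mail log
grep -i 'status=deferred' /var/log/mail.log | tail
```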
[20:52] <[[lutchy]]> Joe_knock, It's not about whether a user will or will not send mail... it's based on the policy of the mail server to reject it
[20:53] <Joe_knock> WACOMalt: I see that the -p part is where you may be looking.
[20:53] <[[lutchy]]> I have two postfix servers relaying mail..
[20:53] <[[lutchy]]> I don't know why you would suggest postqueue
[20:53] <[[lutchy]]> When postqueue is about whatever mail that's currently in the 'queue'
[20:54] <Joe_knock> [[lutchy]]: I am trying to help him as best I can. Feel free to make a better recommendation
[20:56] <[[lutchy]]> Yeah, it's about helping one another ...
[20:56] <[[lutchy]]> I had asked earlier how he wanted his E-Mail server to be setup
[20:56] <[[lutchy]]> I doubt, when you 'apt-get install postfix', it leaves you as an open RELAY
[20:57] <WACOMalt> [[lutchy]], I answered that question with "not at all"
[20:57] <[[lutchy]]> Postfix, being complex, has a very simple and well-documented configuration
[20:57] <WACOMalt> it looks like one of my users has set up an email address, and they have google set to receive that for them in their gmail
[20:58] <WACOMalt> that's all the activity I can see in postqueue
[20:58] <WACOMalt> so I guess things are fine?
[20:59] <[[lutchy]]> Read what postqueue is for ...
[20:59] <[[lutchy]]> postqueue does not produce logs
[21:00] <[[lutchy]]> Postfix is simply an MTA; if it's allowed to TRANSFER email, then it's likely a fault in security
[21:01] <WACOMalt> ok, then I'm just uninstalling it
[21:01] <[[lutchy]]> If you don't need a MTA, then yeah, but keep in mind... programs that need a sendmail program will fail
[21:02] <[[lutchy]]> Bottom line, who is accessing your computer?
[21:02] <WACOMalt> honestly, at this point I dont even know. I have 3 authorized users. And I'm about as unfit of a server admin as has ever existed
[21:02] <WACOMalt> :)
[21:02] <Joe_knock> [[lutchy]]: the emails weren't delivered, which means they're sitting in the queue, which is why postqueue can tell what emails are there.
[21:02] <[[lutchy]]> Ok
[21:03] <Joe_knock> WACOMalt: Do you recognise the email address?
[21:03] <[[lutchy]]> WACOMalt, There are a few things you can do to secure it
[21:03] <WACOMalt> yes, and no
[21:03] <WACOMalt> there is www-data@myserver.com
[21:03] <WACOMalt> and then a few that are actually users
[21:03] <WACOMalt> but all are trying to send to gmail servers
[21:04] <Joe_knock> WACOMalt: Identify with the company bureaucracy whether you are allowed to access email accounts, if yes, then maybe you can access the content of the email to see what it says.
[21:04] <WACOMalt> "the company bureaucracy" = me
[21:04] <WACOMalt> its just a guy (me) with a box
[21:04] <[[lutchy]]> What pisses you off the most
[21:04] <Joe_knock> WACOMalt: I think what is happening is that they're using mail-forwarding, and those 6 emails weren't forwarded for some reason.
[21:05] <[[lutchy]]> The fact that 3 authorized people send mail
[21:05] <[[lutchy]]> Or that the targets of the E-Mail, by one person, are wrong
[21:05] <[[lutchy]]> That information can help
[21:05] <WACOMalt> yeah I'm gonna talk to those 3 people
[21:05] <WACOMalt> they don't use the emails any more, but I know they had them set up at one point. Gonna ask them to remove them from gmail
[21:06] <[[lutchy]]> BTW, when it's local mail
[21:06] <[[lutchy]]> It's logged by user
[21:06] <[[lutchy]]> So, if they send a mail from the 'sendmail' program on the server, it will be logged with who sent it
[21:06] <Joe_knock> WACOMalt: Are they just forwarding from myemail@domain.com to myemail@gmail.com or are you using gmail as your email provider?
[21:07] <[[lutchy]]> Postfix behaves differently when it comes to local sent mail
[21:07] <WACOMalt> Joe, the former example
[21:07] <[[lutchy]]> There is a different daemon that handles local mail
[21:07] <[[lutchy]]> That should be logged; what Ubuntu server do you have?
[21:07] <WACOMalt> At current I see nothing to hint at someone misusing the server. So I'm not worried and will just ask the users to disconnect google if they aren't using the address any more
[21:08] <WACOMalt> 10.04 :/
[21:08] <Joe_knock> WACOMalt: Based on what you say, it is actually the automated email forwarder that has placed the emails in the queue (my assumption based on info).
[21:08] <WACOMalt> hmm possibly
[21:09] <Joe_knock> WACOMalt: Disconnecting google won't fix it. You need to stop mail-forwarding from: myemail@mydomain.com . What email client do you have access to?
[21:09] <WACOMalt> you mean on the server?
[21:09] <WACOMalt> or on my local desktop
[21:10] <WACOMalt> locally I think I have windows live mail which could connect to those addresses
[21:10] <Joe_knock> Yes, you need to configure mail-forwarding to be stopped. Find out how the 3 other users setup mail-forwarding previously and tell them to reverse that process.
[21:10] <WACOMalt> righto. I'll have to look into that tonight though
[21:10] <WACOMalt> I have to leave for the time being.
[21:11] <Joe_knock> WACOMalt: Perhaps a webmail client like squirrel or something else might help. Good luck
[21:11] <WACOMalt> Thank you both for the help. At least my mail wasn't being used for nefarious purposes
[21:11] <Joe_knock> WACOMalt: If you don't need postfix, best to remove it then.
[21:12] <WACOMalt> yeah I'll check if anyone else is actually using it, but it certainly looks like they arent
[21:13] <WACOMalt> ok, I'm off. thanks again!
[21:15] <fridaynext> what permissions does a mysql user need to be able to dump a DB?
[21:15] <[[lutchy]]> root
[21:15] <fridaynext> i have all my users set at SELECT, UPDATE, INSERT, DELETE
[21:15] <[[lutchy]]> Depends, the entire DB directory ?
[21:15] <fridaynext> oh, so only root can dump a db?
[21:15] <fridaynext> no just individual DB's
[21:15] <fridaynext> I want to dump DB's daily for my clients running WP sites on my server
[21:16] <serverhorror> fridaynext: usage and select (on all object) may be enough if you do a mysqldump. but it really depends on the kind of backup you want to do
[21:16] <fridaynext> serverhorror: just basic.
[21:16] <[[lutchy]]> thanks serverhorror
[21:16] <fridaynext> mysqldump -u someone -pblah summatDB > summatDB_today.sql
[21:16] <[[lutchy]]> I was thinking only 'select'
[21:17] <serverhorror> i think just about any right implies usage, but granting it makes it explicit in listing the rights also
[21:18] <fridaynext> this is what i'm getting when i try to use a user's un/pw for a dump http://pastebin.com/TrcETmff
[21:19] <serverhorror> fridaynext: actually any user can create a backup like this. if you “grant usage,select on myschema.public_table to ‘restricted_backupuser’@‘backuphost.invalid’” you’ll still get a backup just not of all the tables in the database ;)
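serverhorror's restricted-backup-user idea, sketched with placeholder names, passwords, and paths; the LOCK TABLES grant is included because mysqldump's default locking needs it for users that only have SELECT:

```shell
# One-time: create a read-only backup user (names/password are placeholders)
mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON summatDB.* \
    TO 'backup'@'localhost' IDENTIFIED BY 'secret';"

# Daily job (e.g. a script in /etc/cron.daily/): dump, compress, date-stamp
mysqldump -u backup -psecret summatDB | gzip \
    > /backups/summatDB_$(date +%F).sql.gz
```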
[21:19] <serverhorror> s/database/schema
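Putting serverhorror's point into concrete statements, a minimal dump-only account might look like the following sketch; the user, host, and schema names are placeholders, not anything from the discussion:

```sql
-- Hypothetical dump-only account: SELECT to read rows, LOCK TABLES so a
-- plain mysqldump can take its per-table read locks.
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT USAGE, SELECT, LOCK TABLES ON summatDB.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```

With InnoDB tables, `mysqldump --single-transaction` can make the LOCK TABLES privilege unnecessary, as comes up later in the discussion.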
[21:21] <Macer>            0.25    0.00    2.61   26.12    0.00   71.02
[21:21] <Macer> i am getting 26.12 in iostat for iowait :/
[21:21] <[[lutchy]]> iostat?
[21:21] <fridaynext> serverhorror: what if I grant lock tables - will that fix the error in the pastebin?
[21:22] <Macer> top too
[21:22] <Macer> but i wanted to see where it was.. .can't quite put my finger on where i'm getting the pause
[21:22] <[[lutchy]]> You know fridaynext, give users all access to the DB they OWN
[21:23] <serverhorror> it should, but it will also …. LOCK your tables (no other client can write while the backup is running)
[21:23] <fridaynext> [[lutchy]]: i'm running wordpress sites, and wordpress codex suggests only granting select, insert, update, delete to mysql users.
[21:23] <[[lutchy]]> No..
[21:23] <[[lutchy]]> You can grant access per db, even per table
[21:24] <fridaynext> http://codex.wordpress.org/Hardening_WordPress#Database_Security
[21:24] <fridaynext> I know I can do that, but this is just what the WordPress codex suggests.
[21:24] <fridaynext> I want to keep it as secure as possible.
[21:24] <serverhorror> fridaynext: don’t use wordpress :)
[21:24] <[[lutchy]]> Macer, Install 'iotop', it can show you IO in terms of IO in regards of DISK usage
[21:24] <fridaynext> serverhorror: if i had a nickel.
[21:25] <serverhorror> hehe
[21:25] <[[lutchy]]> Let me get this right...
[21:25] <fridaynext> but until I can teach myself a new language and implement a secure e-commerce platform for the 12 sites i'm currently running - i'm stuck with WP.
[21:25] <[[lutchy]]> You know you can do that but you followed instructions blindly ?
[21:25] <fridaynext> [[lutchy]]: not blindly.
[21:26] <Macer> ah ok. let me take a look at it
[21:26] <serverhorror> fridaynext: truly the permissions depend on the backup. my default solution is to run a backup only slave and go from there (either xtrabackup, mysqldump, …) that at least won’t lock the clients out during backups
[21:27] <serverhorror> fridaynext: e.g. I’m pretty sure mysqldump --opt … and mysqldump --single-transaction …  will need different sets of permissions
[21:27] <fridaynext> if I run mysqldump as root, and put that in a .sh file in the root directory, to be run with the root user's crontab - is that basically secure?
[21:27] <[[lutchy]]> Yes, but from he is telling me, he already knew beforehand, then apply permission per DB
[21:27] <[[lutchy]]> NEVER fridaynext
[21:27] <serverhorror> fridaynext: just don’t
[21:27] <fridaynext> which part - the root user's cron job?
[21:27] <fridaynext> or running mysqldump as root?
[21:28] <fridaynext> or both?
[21:28] <[[lutchy]]> This is how I do it, as opposed to serverhorror
[21:28] <Macer>            0.25    0.00    2.61   26.12    0.00   71.02
[21:28] <Macer> oops
[21:28] <Macer> 12015 be/4 root       48.19 K/s    0.00 B/s  0.00 % 98.63 % dpkg -i ./packages/zimbra-store_8.0.7.GA.6021.UBUNTU12.64_amd64.deb
[21:28] <[[lutchy]]> I use phpmyadmin, to select what permissions per user has
[21:28] <Macer> wow... wth?
[21:29] <Macer> is that because of some awkward lxc kernel translation or something? heh
[21:29] <[[lutchy]]> I just give full permission to that user to the db
[21:29] <Macer> iotop has dpkg @ 99% io?
[21:29] <[[lutchy]]> It avoids any permission issues moving forward
[21:29] <serverhorror> fridaynext: make it secure: use ssl all over the place (expect about 30% of the original performance). never use root but dedicated role accounts. get rid of the [debian|ubuntu] mysql user
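One common way to follow the "never use root, use dedicated role accounts" advice for cron-driven dumps is a credentials file, so the password never appears in the crontab or in `ps` output. Everything below (paths, account name, schedule) is illustrative only:

```
# /home/backup/.my.cnf  -- chmod 600, owned by the dedicated backup user
[mysqldump]
user = backup
password = use-a-strong-password

# backup user's crontab: nightly dump at 03:30, credentials read from
# ~/.my.cnf automatically (\% because % is special in crontab lines)
30 3 * * * mysqldump summatDB | gzip > /home/backup/summatDB_$(date +\%F).sql.gz
```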
[21:30] <fridaynext> serverhorror: i've been converting all sites to 100% ssl over the past week, so that much is being taken care of.
[21:30] <serverhorror> fridaynext: practically I’m having dedicated schema owners and use those for maintenance like backups…
[21:30] <serverhorror> fridaynext: on the webserver part or the mysql part
[21:30] <fridaynext> webserver
[21:30] <serverhorror> fridaynext: i’m talking about mysql
[21:30] <fridaynext> i didn't know using ssl was possible within the same machine
[21:30] <fridaynext> for mysql
[21:30] <fridaynext> since it's all on localhost, that is.
[21:31] <[[lutchy]]> Macer, snippets... do not work.. I can't see headers; you expect people to understand what you're pasting
[21:31] <serverhorror> fridaynext: it is. mysql just has the habit of being overly friendly and actually uses the unix socket instead of the tcp socket if you specify localhost but you can tell (at least) the mysql client to use the tcp socket
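Assuming a local mysqld, the socket-vs-TCP difference serverhorror describes is easy to observe from the client side (commands are a sketch, not verified against any particular server setup):

```
# "localhost" silently means the unix socket -- no TCP, so no SSL
mysql -h localhost -e "SHOW STATUS LIKE 'Ssl_cipher'"

# force TCP (where SSL can apply) by using the IP, or --protocol=TCP
mysql --protocol=TCP -h 127.0.0.1 -e "SHOW STATUS LIKE 'Ssl_cipher'"
```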
[21:31] <[[lutchy]]> MySQL and SSL serverhorror .. it's frustrating
[21:32] <[[lutchy]]> Not all clients support SSL with MySQL
[21:32] <[[lutchy]]> I have stunnel setup .. :)
[21:32] <Macer> hm. zfs seems to be giving me some pretty bad speeds
[21:32] <serverhorror> [[lutchy]]: as i said: 30% of the original performance (not 30% drop) 30% left of what you had
[21:32] <fridaynext> serverhorror: wow that's a huge drop
[21:32] <serverhorror> (rough estimates of course)
[21:33] <[[lutchy]]> serverhorror, You didn't say from what I am reading just in case
[21:33] <Macer> [[lutchy]]: sorry.. it's not zfs. the i/o to the zfs isn't bad. i don't know what is holding it up
[21:33] <Macer> dpkg in a container seems to be getting held up somewhere
[21:34] <serverhorror> [[lutchy]]: I was referring to the frustration. I’d drop mysql without a blink if I had the chance too
[21:34] <Macer>   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
[21:34] <Macer> 12015 be/4 root        0.00 B/s    0.00 B/s  0.00 % 99.84 % dpkg -i ./packages/zimbra-store_8.0.7.GA.6021.UBUNTU12.64_amd64.deb
[21:34] <[[lutchy]]> The socket issue is a problem
[21:34] <Macer> it's not the disk.. :/
[21:35] <fridaynext> so how do i avoid doing a mysqldump with locktables? (is that even possible?)
[21:35] <[[lutchy]]> Especially when I am using stunnel, but just specifying the IP of the localhost should resolve that
[21:35] <[[lutchy]]> fridaynext, read the error carefully
[21:35] <fridaynext> [[lutchy]]: it looks like it is definitely going to lock the tables while performing the db backup.
[21:36] <[[lutchy]]> I would have to defer to serverhorror ....
[21:36] <fridaynext> oh it's that percent sign, isn't it.
[21:36] <[[lutchy]]> How big is the DB ? Why can't you schedule a time  ?
[21:36] <fridaynext> It's probably 30MB
[21:37] <fridaynext> I can schedule a time, but I'm testing the mysqldump command as the owner of the db before setting up a cronjob that won't work when it's fired
[21:37] <[[lutchy]]> Regardless, I still have to defer to serverhorror...
[21:37] <fridaynext> [[lutchy]]: I don't get it - defer b/c I'm too daft to understand, or you don't know what to tell me next?
[21:37] <[[lutchy]]> My question to serverhorror, if you know, does mysqldump 'lock'?
[21:37] <serverhorror> 30M should be done in well under a minute with almost any hardware today. depends on your requirements whether that can still be locked or not
[21:37] <fridaynext> [[lutchy]]: oh.
[21:39] <[[lutchy]]> Yeah, I agree with serverhorror
[21:39] <[[lutchy]]> But, are you doing back per user ?
[21:39] <[[lutchy]]> s/back/backup/
[21:40] <[[lutchy]]> That's inefficient
[21:40] <serverhorror> [[lutchy]], fridaynext: look for “--add-locks” there’s also a “--no-add-locks” (or any other --no…. options for that matter) so it doesn’t necessarily lock. but then it doesn’t guarantee consistency (read: you could get a broken backup, missing posts, etc. but 30MB doesn’t sound like a high traffic blog) —
[21:40] <serverhorror> http://manpages.debian.org/cgi-bin/man.cgi?query=mysqldump&apropos=0&sektion=0&manpath=Debian+7.0+wheezy&format=html&locale=en
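A note of caution on the flag names here: in the manpage, `--add-locks` controls the LOCK TABLES statements written *into* the dump file for faster reload; the locking that blocks writers during the dump is `--lock-tables` (on by default via `--opt`). Roughly, the options under discussion compare like this:

```
# default (--opt implies --lock-tables): tables are read-locked while
# dumped, so writers block for the duration
mysqldump summatDB > dump.sql

# no locking during the dump: nothing blocks, but rows copied from
# different tables may be inconsistent with each other
mysqldump --skip-lock-tables summatDB > dump.sql

# InnoDB-only alternative: a consistent snapshot without blocking writers
mysqldump --single-transaction summatDB > dump.sql
```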
[21:40] <[[lutchy]]> Thank serverhorror :)
[21:40] <[[lutchy]]> Thank you* serverhorror
[21:41] <serverhorror> personally i do mysqldump --single-transaction --opt --triggers --add-drop-triggers ….
[21:41] <fridaynext> serverhorror: it's a super high trafficked site, but I just granted lock tables to the user, and it performed the dump in about 0.3 seconds.
[21:41] <serverhorror> but as I said: I always have a backup only slave so I have no problems with locks
[21:42] <serverhorror> fridaynext: high traffic in that case means: it’s creating multiple posts per second as the lock would keep you from saving them but not from reading them
[21:42] <fridaynext> is that a separate mysql user with permissions on all DB's?
[21:42] <[[lutchy]]> I think the issue is, since fridaynext mentioned user, I still don't get why you don't give the use full privs?
[21:42] <serverhorror> fridaynext: no it’s separate hardware that acts as a replication slave
[21:42] <fridaynext> [[lutchy]]: I've had client's come to me whose sites have been hacked, and I want to keep my sites as secure as possible.
[21:43] <[[lutchy]]> Do you trust MySQL ?
[21:43] <fridaynext> I haven't read how mysql gets injected with malware, etc, but I figure as few permissions as necessary is a good start to keeping it secure.
[21:43] <serverhorror> fridaynext: the practical solution is actually a “schema owner” that can connect from trusted sources.
[21:43] <[[lutchy]]> It doesn't matter
[21:43] <[[lutchy]]> You can't prevent SQL injection from MySQL
[21:43] <[[lutchy]]> You can limit
[21:44] <[[lutchy]]> A user having entire access to his DB
[21:44] <[[lutchy]]> The information in his DB getting corrupted is not really your issue
[21:44] <serverhorror> fridaynext: most web apps (including wordpress) don’t get hacked because of the base installation but rather because of weak passwords and plugins (which are mostly …. let’s say audited to a lesser extent)
[21:44] <serverhorror> reason no. 1 being weak passwords
[21:45] <fridaynext> serverhorror: and i'm already enforcing strong passwords, so I guess I don't even need to worry about the user having all permissions.
[21:45] <serverhorror> you can’t possibly get your users to use secure passwords if the wp-admin is on the interwebs
[21:46] <fridaynext> serverhorror: not sure if sarcastic, but i've enforced strong passwords with a security plugin.
[21:46] <serverhorror> people tend to use the same password all over the place so it’s not like you can actually help them. password leaks being everywhere and downloads of the files only a google search away
[21:46] <[[lutchy]]> No one suggested all permissions \
[21:46] <fridaynext> 16:42 [[lutchy]]: I think the issue is, since fridaynext mentioned user, I still don't get why you don't give the use full privs?
[21:46] <[[lutchy]]> I suggest all permission on per user database
[21:47] <[[lutchy]]> Right
[21:47] <fridaynext> oh
[21:47] <fridaynext> that's what I thought, and what I meant when I stated it just now.
[21:47] <fridaynext> so we're still on the same page.
[21:47] <serverhorror> fridaynext: as i said “schema owner” (grant all on customer1.* to schema_owner@secured_host) that is what I use. everything else was impractical to manage for me
[21:48] <[[lutchy]]> I have been saying the same statement for the past 30 minutes
[21:48] <fridaynext> okay thanks for the help.
[21:49] <serverhorror> ok now for my question :)
[21:50] <[[lutchy]]> I think there was some level of miscommunication
[21:50] <serverhorror> I’m looking for image deployment systems. should be able to handle about 50-100 images and scale to deploy to about 10K hosts/year
[21:50] <serverhorror> [[lutchy]]: I think so too. Practicality and security don’t always play well together :)
[21:51] <[[lutchy]]> uh... When you say Image ? Do you mean, image of a disk ?
[21:51] <serverhorror> target OS being: all things linux (debian, rhel, ubuntu, …)
[21:51] <serverhorror> [[lutchy]]: yes.
[21:51] <serverhorror> not necessarily block based but definitely not the usual pxe boot/debootstrap/seed files
[21:52] <serverhorror> rather something like pxe boot > do_magic.sh (which puts the image on the target, optionally runs post-script, or some kind of first-boot afterwards) > profit
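The "pxe boot > do_magic.sh > profit" pipeline could be sketched as a small function like this; every name here is hypothetical, a real version would fetch the image over the network (curl/tftp) and write to a block device rather than a file:

```shell
# do_magic.sh sketch (all names invented): write a frozen image onto the
# target, then run an optional post-script / first-boot hook.
deploy_image() {
    image=$1          # e.g. /images/ubuntu-14.04-r42.img.gz (frozen revision)
    target=$2         # e.g. /dev/sda; any writable path works for a dry run
    post_script=${3:-}

    # Stream the compressed image straight onto the target: no package
    # mirror is consulted, so the deployed state is byte-for-byte the
    # revision referenced by the documentation.
    gunzip -c "$image" > "$target"

    # Optional post-deploy customisation hook
    if [ -n "$post_script" ]; then
        sh "$post_script" "$target"
    fi
}
```

The "frozen state" requirement falls out naturally: unlike kickstart/preseed, nothing in this path depends on what a mirror currently serves.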
[21:53] <[[lutchy]]> I am an intermediate sysadmin... I would like to see how you solve this large scale problem
[21:53] <[[lutchy]]> With tools that Ubuntu offer
[21:53] <[[lutchy]]> BTW, I don't like Debian... That's why I pick Ubuntu
[21:53] <bekks> kickstart.
[21:54] <[[lutchy]]> I would think of backup system
[21:54] <serverhorror> bekks: I thought kickstart was just the native RHEL way of running installation scripts like debian distros have preseed files and the installer
[21:54] <bekks> It works fine on Ubuntu, too.
[21:55] <bekks> And it is much easier than preseed files.
[21:55] <serverhorror> bekks: but will it use the “current package sources” or a frozen state (image deployment). I have a requirement of “frozen state”
[21:55] <bekks> It will use whatever your local mirror provides ;)
[21:56] <bekks> For even larger use cases, you can use http://fai-project.org/ too.
[21:56] <serverhorror> bekks: that is my problem. I don’t have control over the mirrors at our site and the requirement is not to introduce “unwanted changes”
[21:57] <bekks> So set up your own mirror. Aint that hard :)
[21:57] <serverhorror> bekks: If you get my management to sign that I’ll have it in an hour :)
[21:58] <bekks> Build a test scenario, present it. It will be signed.
[21:58] <[[lutchy]]> hehehe
[21:58] <sarnold> serverhorror: surely you want all the updates installed before deployment, right? :)
[21:58] <serverhorror> sarnold: actually NO! that is the core part of the requirement
[21:58] <bekks> Get that part signed, too.
[21:59] <bekks> Otherwise you will be blamed for missing updates :)
[21:59] <serverhorror> and please don’t …. image vs. installer based is postgres vs. mysql, vim vs. emacs ….
[21:59] <sarnold> yikes, no updates? o_O
[21:59] <sarnold> you guys like heartbleed? :)
[22:00] <bekks> "No updates" is the core part of a totally broken design of how to develop things.
[22:00] <[[lutchy]]> postgres is way lighter than mysql on default install
[22:00] <[[lutchy]]> If I had a choice, I would choose postgres, but I despise the way they auth
[22:00] <serverhorror> sarnold: no just 100% verifiable system state. automated image updates are a big part of it. but I need to be able to deploy a system exactly like it was in the revision referenced by $DOCUMENTATION
[22:01] <bekks> "... including all the bugs which were fixed by updates."
[22:01] <[[lutchy]]> I am not sure if postgres is thinking of security if I have to define how host auth ?
[22:01] <sarnold> serverhorror: fair enough, updates can always be installed after the system boots
[22:01] <[[lutchy]]> Am I wrong? :s
[22:02] <[[lutchy]]> In administrative way, for me to keep editing the host file when I create a new user..
[22:02] <serverhorror> sarnold: we were done with heartbleed (software update) in about 12 hours and had all the certs revoked and reissued in 3 working days. I’d say that we were actually pretty fast
[22:02] <[[lutchy]]> ugg...
[22:03] <bekks> serverhorror: ubuntu software updates took about 3 hours. I guess you werent that fast, actually.
[22:03] <[[lutchy]]> Everyone is focused on serverhorror
[22:04] <[[lutchy]]> If I have to read, there is a few core issues I disagree with
[22:04] <serverhorror> [[lutchy]]: pg_hba defines _how_ to do the auth, and the permissions define what to auth. actually a pretty nice separation
[22:05] <[[lutchy]]> One of them is 'rapid' response
[22:06] <[[lutchy]]> There are 6 billion people on earth; to expect everyone to know the exploit within 24 hours.. uh.. then you had to wait for OpenSSL to apply a patch
[22:06] <serverhorror> bekks: but installing the packages is of no help. you need to reboot all the hosts (or at least restart the daemons). fix software, redeploy…. I’ll stick with pretty fast
[22:06] <sarnold> [[lutchy]]: sorry, I just don't know much about postgres details :)
[22:06] <[[lutchy]]> serverhorror, <[[lutchy]]> I am not sure if postgres is thinking of security if I have to define how host auth ? is that different from <serverhorror> [[lutchy]]: pg_hba defines _how_ to do the auth, and the permissions define what to auth. actually a pretty nice separation
[22:07] <bekks> serverhorror: Actually, we werent affected by heartbleed at all, since we had no systems using affected versions. :)
[22:07] <serverhorror> [[lutchy]]: permissions are grant statements within postgres (authorization), pg_hba can actually do a multitude of authentication...
[22:07] <bekks> serverhorror: Thats what I call fast. :)
[22:07] <serverhorror> bekks: cheater! ;)
[22:08] <serverhorror> bekks: or is it slow because you didn’t apply updates and had an old openssl version …. SCNR
[22:09] <bekks> serverhorror: :D - No, in fact, most of the systems werent affected at all.
[22:09] <sarnold> precise is too new for you? :) gonna give it a bit more time to mature? :)
[22:10] <serverhorror> the more I read into this image deployment the more I actually want to schedule a meeting and set up my own mirrors....
[22:10] <bekks> Nope. Solaris 10 wasnt affected at all, same as RHEL up to 6.4, etc.
[22:11] <serverhorror> I wonder how amazon/rackspace are doing it
[22:11] <[[lutchy]]> serverhorror, I still like to use Postgres even now... But not many software support it that I like to use... But to consider in a multiple user environment? How would you solve it ?
[22:13] <serverhorror> [[lutchy]]: create role general_user_role nologin; create role1 ROLE general_user_role; …; create roleN ROLE general_user_role; assign rights and be done with it?
[22:14] <serverhorror> [[lutchy]]: actually i forgot the syntax. most of the time puppet does that for me :)
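For the record, since the syntax above is admittedly from memory, the Postgres idiom is roughly the following; the role and password values are invented for illustration:

```sql
-- One NOLOGIN group role carries the rights ...
CREATE ROLE general_user_role NOLOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public
    TO general_user_role;

-- ... and each login role simply inherits them
CREATE ROLE alice LOGIN PASSWORD 'changeme' IN ROLE general_user_role;
CREATE ROLE bob   LOGIN PASSWORD 'changeme' IN ROLE general_user_role;
```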
[22:16] <[[lutchy]]> I actually didn't bother to look that up...
[22:17] <[[lutchy]]> Lucky you, you have your slave minions... :-p
[22:18] <serverhorror> [[lutchy]]: yeah lucky me … I also have compliance rules. There’s no technical problem as bad as having a compliance problem
[22:19] <bekks> My minions are called kickstart, NIM, Jumpstart and AI :)
[22:19] <serverhorror> AI?
[22:19] <serverhorror> I’ll take 10!
[22:19] <bekks> S11 automated installation. :)
[22:20] <serverhorror> darn…. artificial intelligence was my hope
[22:21] <sarnold> NIM?
[22:22] <bekks> AIX network installation management.
[22:23]  * genii sips and ponders ksplice
[22:23] <sarnold> aix hunh? wow ;)
[22:24] <sarnold> genii: also look into http://lwn.net/Articles/584016/ -- our pals at suse re-did it, kgraft, what with oracle's being all oracly
[22:25] <genii> sarnold: Oooooo nice
[22:26] <genii> And not all Oracly
[22:26] <serverhorror> I refuse to believe that there isn’t a decent system that actually lets me manage image based deployment. I get all arguments regarding updates and having current systems, but heck: that can also be done right with images, it just needs a different set of practices. </rant>
[22:27] <serverhorror> everything that comes up is essentially some linux that deploys a sysprepped windows
[22:27] <[[lutchy]]> I would like to see what tools ubuntu-server offers ?
[22:27] <serverhorror> yeah me too :)
[22:27] <[[lutchy]]> If Ubuntu is another Debian
[22:28] <bekks> kickstart.
[22:28] <[[lutchy]]> I rather use Ubuntu because I don't like Debian
[22:28] <[[lutchy]]> Personally, I like a cross between openSUSE and Ubuntu
[22:29] <serverhorror> ah what the… I’ll fiddle around with my pi some more. Monday is coming soon enough, then I’ll at least get paid for research
[22:33] <[[lutchy]]> I would like an Ncurses environment sometimes to make my life easier
[22:33] <[[lutchy]]> and basic.. ip .. etc rules
[22:34] <sarnold> [[lutchy]]: heh, like yast fifteen years ago? :)
[22:35] <[[lutchy]]> yast is so useful sometimes
[22:35] <[[lutchy]]> but it can get in the way
[22:36] <[[lutchy]]> For example, I didn't know how you setup Interfaces in Ubuntu? or the syntax being used ?
[22:36] <[[lutchy]]> Even after I made the modifications to /etc/network/interfaces, /etc/init.d/networking restarting would report error
[22:36] <[[lutchy]]> Even when I am root
[22:38] <[[lutchy]]> I had to end up ip.. add the 'Alias' with IP and assume on reboot...
[22:39] <z1haze> can someone please help me add another website to my server? someone had set it up for me in the past and i dont really understand it
[22:39] <[[lutchy]]> That really depends.. Apache ?
[22:40] <z1haze> the sites are in /home/www each have their own folder, but they arent shown as folders, they're listed as MS-DOS Applications .. how do I make another one?
[22:40] <z1haze> yes apache
[22:40] <z1haze> the stuff is in /etc/apache2 there is the /sites-available and the /sites-enabled folder i KIND of know what it does but not really.. ive got it working to basically the "It works" page
[22:40] <sarnold> [[lutchy]]: yeah, bad news there -- the 'right way' to do interfaces is 'ifup <ifname>' and 'ifdown <ifname>'. it's insanely confusing why the initscripts don't work, but it winds up completely breaking dbus or upstart or both.
[22:42] <[[lutchy]]> sarnold, I was discussing how to make my life easier as admin
[22:43] <[[lutchy]]> It's true
[22:43] <sarnold> [[lutchy]]: you can add the 'ip' commands right in /etc/network/interfaces. that appears to be the 'blessed' way to do that job.
[22:43] <z1haze> so how can i add another website?
[22:44] <[[lutchy]]> sarnold, Keep in mind I am talking about 'Alias'
[22:44] <[[lutchy]]> sarnold, You can ?
[22:44] <[[lutchy]]> Ok
[22:45] <serverhorror> [[lutchy]]: still sarnold is right. “ip addr add 192.0.2.1/24 dev eth0” in /etc/network/interfaces is the way. post-up and pre-down will take care of adding/removing the ip when you run ifup eth0 (or ifdown)
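Spelled out as a full example stanza in /etc/network/interfaces, with post-up/pre-down handling the alias address (interface name and addresses are illustrative):

```
auto eth0
iface eth0 inet static
    address 192.0.2.1
    netmask 255.255.255.0
    gateway 192.0.2.254
    # alias address: added by "ifup eth0", removed by "ifdown eth0"
    post-up  ip addr add 192.0.2.2/24 dev eth0
    pre-down ip addr del 192.0.2.2/24 dev eth0
```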
[22:46] <[[lutchy]]> That's not the issue at the end of the day when I ask about tools to make my life easier as admin
[22:46] <[[lutchy]]> If you can add ip rules.. noted..
[22:47] <serverhorror> z1haze: ‘kind of’ is really broad in that case. is another website simply a folder within a virtual host or do you want another virtual host? is it a static site, some php/python/perl application? Your problem description is way too vague to actually give you information
[22:47] <[[lutchy]]> With Yast, I can tab to network, select interface, and add the ip address
[22:47] <z1haze> i figured it out, thanks
[22:48] <z1haze> just had to run the sudo a2ensite thing
[22:48] <[[lutchy]]> CentOS has 'system-network-tui'
[22:48] <z1haze> and it makes the stuff for u
[22:48] <serverhorror> [[lutchy]]: but Yast will make your life more complicated if you just have ‘that one little setting yast doesn’t know about and keeps overwriting, but is needed so desperately’
[22:48] <sarnold> serverhorror: btdt :)
[22:50] <[[lutchy]]> serverhorror, I have used Yast for the last 3 years
[22:50] <[[lutchy]]> I know its limits
[22:50] <serverhorror> [[lutchy]]: another rather simple case where yast is not that helpful: 10 servers that are frontends for a web application. now go add 10 IP based vhosts to each of those hosts. It’s quite some fun actually running thru at least 300 config screens (assuming you can get there with just 3 screens) to the config on the hosts :)
[22:52] <[[lutchy]]> Not the case
[22:52] <serverhorror> [[lutchy]]: imho the only thing that makes life easier is extracting the defining system of the use case and automating that to the point where you could be run over by a bus and nobody would notice because all the stuff is ‘self-service’ capable…. now that would be a reason for a big raise
[22:52] <sarnold> once you get more than two or three servers, using a tool like juju, chef, puppet, ansible, etc., would be the way to go :)
[22:54] <serverhorror> automation tools are always nice. it makes stuff repeatable and deterministic. i don’t do anything anymore that isn’t somehow “managed” (not even my workstation or home directory)
[22:56] <[[lutchy]]> What's important to me is managing and changing things to a server... How would it make my life easier ...
[22:56] <sarnold> serverhorror: nice. I'm jealous. :)
[22:56] <serverhorror> sarnold: it didn’t just appear at a spawn point. it actually took me 3 years or something to get to that point
[22:57] <sarnold> serverhorror: lol
[22:58] <[[lutchy]]> So far, none of my Ubuntu servers break
[22:58] <[[lutchy]]> My VPS had BLUETOOTH software
[22:59] <[[lutchy]]> I have to talk to host about that... their template is busted
[22:59] <serverhorror> [[lutchy]]: just scan around maybe it’ll find an Airbus or something that you can control :)
[23:00] <[[lutchy]]> serverhorror, One of my Ubuntu Servers is OpenVZ which has Bluetooth software as a service and running
[23:00] <[[lutchy]]> serverhorror, I didn't know that OpenVZ had Bluetooth device
[23:01] <[[lutchy]]> The Ubuntu Servers that I run, when I installed them, came with openSSH, as I mentioned earlier
[23:05] <serverhorror> ha! I found yet another discussion about host name schemas… time for some trolling :)
[23:09] <miceiken> is there some way to filter spam for mail server?
[23:10] <fridaynext> miceiken: spamassassin and dovecot sieves.
[23:10] <fridaynext> btw serverhorror, I took your advice and created a new user with permissions on the tables I wanted said user to back up. works a charm.
[23:11] <serverhorror> fridaynext: just restrict the host part and you should be fine. if your backup server is somewhat secure you should be fine
[23:11] <fridaynext> serverhorror: it's all on the same server.
[23:11] <fridaynext> don't have the time/money to run two servers.
[23:11] <serverhorror> so if your disk goes up in flames: how do you restore?
[23:11] <miceiken> http://upload.clusterbrain.net/2014-05-24_0111.png that's a lot of dependencies :P
[23:12] <fridaynext> serverhorror: well, this is my backup plan. the backups I've been asking you about.
[23:12] <fridaynext> I backup the db and wp-content folder to each user's owncloud data folder, and they sync with their computer so they always have an up-to-date version of their site's content.
[23:13] <fridaynext> and I pay for backups at my host (Linode), so if the specific disk goes up in flames, they can just restore the whole image.
[23:13] <serverhorror> ah makes sense then. for a budget backup that seems a nice option
[23:13] <fridaynext> serverhorror: thanks!
[23:14] <fridaynext> serverhorror: it's nice to not hear 'you're doing it wrong!' for once.
[23:15] <serverhorror> fridaynext: would there be any chance of success suggesting another host with backup software, more time/money to invest and you not being able to actually provide value? — i guess no. so weighing the options: you have a remote backup (hoster) you _could_ even put the owncloud stuff as an addon feature to sell or list it for free on the ‘features’ page and gain some more customers.
[23:17] <serverhorror> fridaynext: get some marketing, some more customers and then (and only then) when the risks get too high (hopefully before any disaster strikes) set up a system that fits the changed requirements… place that in the bookable options. make profit, get more customers, ….. :)
[23:18] <fridaynext> serverhorror: i'm not totally following.  are you basically saying, 'Please set up another server for backup" ?
[23:20] <miceiken> thanks fridaynext
[23:20] <serverhorror> no I’m saying: You (like everyone else is too) are restricted by a budget. I truly think that with the options you have that is a sane choice. If, at some point later in time, you decide that the situation has changed, build a system that fits the situation. ALSO: You build something that is valuable, why not be transparent about it and let your customers hear that and make backup something that they
[23:20] <serverhorror> can choose to have (for a fee), or use it as a marketing instrument and place it on your website like “Full Backups included@
[23:21] <serverhorror> s/@/:
[23:21] <serverhorror> can’t type anymore sorry
[23:21] <fridaynext> serverhorror: that's a good idea, actually. To add backups as an extra cost.
[23:22] <serverhorror> just be sure it doesn’t backfire. lots of people expect that to just be there
[23:22] <serverhorror> don’t talk about costs :) — add ‘extra value packages’ costs are bad, value is good :)
[23:23] <serverhorror> .oO(I have the feeling I had to many meetings with management)
[23:23] <fridaynext> nah, my clients up to this point are very good about paying the bills.
[23:24] <fridaynext> oh but you're saying, don't add a fee, change the hosting cost overall and explain the added value.
[23:24] <[[lutchy]]> I play to start a small VPS company....
[23:25] <[[lutchy]]> Hmm... 'Paying on time'
[23:25] <[[lutchy]]> s/play/plan/
[23:26] <[[lutchy]]> I am thinking, why is that so important to you fridaynext  ?
[23:28] <Joe_knock> fridaynext: all hosts provide the "full-backup" thing. You need to demonstrate that you provide genuine backups that the customer has access to on his own personal data storage, which you should be charging a little for in the full-price and not add it as a fee.
[23:28] <fridaynext> [[lutchy]]: i don't follow what you're asking
[23:36] <[[lutchy]]> fridaynext, I think I broached the topic of customers paying on time... What's important to you ?
[23:37] <[[lutchy]]> Personally, I don't want erratic customers .. I don't want customer who buy and want a refund later
[23:37] <fridaynext> Overall, what's important to me is that I have automation in place so that if anything goes tits up, I can get it back online easily, whether that means uploading back to my Linode server or even setting up a sub-par Hostgator shared hosting.
[23:37] <fridaynext> [[lutchy]]: yeah, fortunately, I have no customers like that.
[23:37] <fridaynext> [[lutchy]]: they're all very respectful of the time I put in to develop their sites / fix problems / etc.
[23:38] <fridaynext> and by respectful, I mean they pay the invoices in a timely fashion when I send them out.
[23:38] <[[lutchy]]> You offering me information
[23:39] <[[lutchy]]> fridaynext, I appreciate you opinion :)
[23:39] <[[lutchy]]> However
[23:39] <fridaynext> uh oh
[23:39] <fridaynext> the dreaded however!
[23:41] <[[lutchy]]> I am still confused...Do you need them to pay to make (I think because you need to plan) because if they don't.. you can't pay your servers ?
[23:42] <fridaynext> They all pay one year in advance, so they are paid up for hosting.
[23:44] <fridaynext> [[lutchy]]:  is that what you're asking?
[23:44] <[[lutchy]]> I am thinking
[23:44] <fridaynext> [[lutchy]]: or are you asking why I don't have a super hefty backup plan in place?
[23:44] <[[lutchy]]> I love to think
[23:44] <fridaynext> [[lutchy]]: me too :)
[23:44] <[[lutchy]]> Personally, My customer will have to make up their own backup plan
[23:45] <fridaynext> [[lutchy]]: well that's a weight off of your shoulders.
[23:45] <fridaynext> see, I don't want my customers to do that, b/c that means they're installing PHP plugins on their wordpress sites to do that, and I've watched my server load spike as those PHP plugins backup huge directories.
[23:45] <fridaynext> So that's why I'm setting up backups via cronjobs, sent to their owncloud directories, so it's all native *nix code.
[23:46] <fridaynext> no intensive PHP being run where it doesn't need to be.
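A sketch of what that cron-plus-owncloud setup could look like per site; every path, database name, and schedule below is a placeholder, not fridaynext's actual configuration:

```
# site owner's crontab: nightly at 03:15, no PHP in the loop
15 3 * * * /usr/local/bin/backup-site.sh examplesite

# backup-site.sh (sketch): dump the DB and archive wp-content into the
# user's owncloud data dir, which their desktop client then syncs down
DB=$1
DEST="/var/www/owncloud/data/$DB/files/backups"
DAY=$(date +%F)
mysqldump --single-transaction "$DB" | gzip > "$DEST/${DB}_$DAY.sql.gz"
tar czf "$DEST/wp-content_$DAY.tar.gz" -C "/home/www/$DB" wp-content
```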
[23:48] <[[lutchy]]> That would be a bonus
[23:48] <fridaynext> i like those server load numbers as close to 0.0 as possible :)
[23:48] <[[lutchy]]> That's not what I even care about ...
[23:49] <[[lutchy]]> But... It's interesting to see your point of view
[23:49] <[[lutchy]]> Personally... cost.. I don't mind suffer cost for 1 year
[23:49] <fridaynext> well my story in a nutshell -
[23:50] <fridaynext> I started by hosting with HostGator shared.  Traffic / load got larger, so I upgraded to a VPS for $80/month.
[23:51] <fridaynext> it. was. horrible.
[23:51] <fridaynext> mysql crashing all the time, php crashing, apache crashing.
[23:51] <fridaynext> I asked hostgator wtf was up - and this was with only like 3 or 4 sites.
[23:52] <fridaynext> so their service was quite literally the worst customer service I've ever dealt with.
[23:52] <fridaynext> They all but told me to stop hosting sites if I wanted faster performance.  Ridiculous.
[23:53] <fridaynext> So i switched to Linode and figured, I'll just manage an Ubuntu 12.04 server all by myself.
[23:53] <fridaynext> Also built a home media server with 12.04, so figured I could learn it in tandem.
[23:53] <fridaynext> And I've been strengthening my server/backup/troubleshooting skills ever since.
[23:58] <Joe_knock> hostgator are awesome for those shared 1-click install plans. As far as a VPS goes, you paid too much from them.
[23:59] <Joe_knock> I was running a decent-sized app on a 1 gig service for $7, granted, those guys fucked me around after a while. I'll never buy from a new outfit ever again.
[23:59] <IdleOne> can we please keep the language clean in here