[04:30] <thekrynn_> hello, was wondering if anyone knew why a screen's /var/run/screen file resets all associated stat times when it's attached to or detached from
[05:49] <jelly> it's not a file, it's a named pipe
[05:49]  * jelly hides
[09:10] <rbasak> nacc: could you take a look at bug 1576734 please [triage]: is this familiar?
[09:10] <rbasak> Syntax error on line 2 of /etc/apache2/mods-enabled/php7.0.load: Cannot load /usr/lib/apache2/modules/libphp7.0.so into server: /usr/lib/apache2/modules/libphp7.0.so: cannot open shared object file: No such file or directory
[13:11] <caribou> nacc: I'm not sure I know how to proceed with the git repo that you created for kexec-tools
[13:11] <caribou> nacc: from what I understand, this git repo contains the equivalent of the git-dsc-commit on all existing source packages available
[13:12] <caribou> nacc: with the appropriate tags
[13:21] <coreycb> ddellav, jamespage: neutron-lbaas seems to have been fixed, possibly from a dependency bump
[14:32] <rickbeldin> Good morning.  A partner in Korea was looking for some historical release notes on 12.04.5 but found that the Wiki is essentially empty.  Is there someplace else that has release notes and changelogs for 12.04.5?   You can see the page that was still under development here:  https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/ChangeSummary/12.04.5
[14:43] <teward> rickbeldin: just for .5, or are they looking for 12.04, 12.04.1, 12.04.2, 12.04.3, and 12.04.4 as well?
[14:43] <teward> because there's a lot of different data there - strewn across multiple pages.
[14:44] <teward> best thing to look at are release notes rather than change summaries
[14:44] <rickbeldin> Just for .5.
[14:44] <rickbeldin> teward: just for .5.  All the others seem complete.
[14:45] <teward> looks like it's not complete, and I can't find anything - my guess is *maybe* there's nothing but package version changes there, but don't quote me
[14:45] <rickbeldin> I found an announcement page which has minimal info.  They are looking for something like this https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/ChangeSummary/12.04.4
[14:46] <rickbeldin> Aren't the .minor releases usually just for new hardware enablement within a main release?
[14:47] <teward> and maybe installer issues, but it just looks like the page wasn't completed.  Nothing we can do, and I don't think logs are kept anywhere specifically...
[14:49] <rickbeldin> I know it is old stuff, but 'encouraging' people to let go of the past can be done sometimes with documentation.  : )
[14:49] <rickbeldin> Apologies.  I can't type today.  :)
[14:50] <rickbeldin> I think the best we can do is a diff of the manifests between 12.04.4 and 12.04.5?  Does that make sense?
[15:08] <nacc> caribou: right
[15:09] <nacc> caribou: so, IMO, you'd clone it, go through the process of breaking ubuntu/yakkety into reconstruct/version then logical/version, then rebase that (new) local tag onto debian/sid
[15:10] <caribou> nacc: ok, I'll try that out & shout if I have problems :)
[15:10] <caribou> nacc: thanks!
[15:10] <nacc> caribou: np
[15:11] <nacc> caribou: and then, eventually, it'll be in a place more people can push to (possibly) -- and so it would not just be the imported versions, but also the active development repository (or could be), and so you'd push your stuff up to lp, we'd merge it in and tag it as 'upload/version' rather than 'import/version' in that case
[15:11] <nacc> caribou: that process probably needs refinement still :)
[15:12] <caribou> k
[15:15] <nacc> rbasak: found a bad case for our 'versions never go backwards' :) clamav 0.91.2-3ubuntu2.1~feitsy1 was published after 0.92~dfsg-2~feisty1 in feisty-backports. The first was deleted, technically, as a bad backport, but I don't have a way to know that algorithmically. I think this would be a still-valid case of having the 'parent override'?
[15:18] <teward> stupid question, but i've got an (ancient) mail server set up with dovecot in an "every mailbox is a folder in an on-the-system user's home directory" layout for every email address, and I'm trying to copy its data to a newer Ubuntu version; assuming I've copied over /etc/passwd, /etc/group, and /etc/shadow correctly to keep the same user authentications across both systems, would an rsync with the argument flags -o -g -A -D be enough to copy all the
[15:18] <teward> data from the old server to the new and retain permissions/ownership/etc. so Dovecot and such would still work?
[15:19] <teward> egads that's a long message
[15:19] <ikonia> in theory yes
[15:19] <ikonia> nothing stupid about that question
[15:19] <teward> ikonia: 9.04 box -> 14.04 box though
[15:19] <teward> hence the question
[15:19] <teward> stupid because E:AgeOfOriginSystem
[15:19] <ikonia> not sure why that matters,
[15:19]  * teward shrugs
[15:19] <ikonia> you may need to adjust the exim config if there are feature differences
[15:20] <nacc> i think it should be fine, as well, from a permissions perspective
[15:21] <teward> ikonia: anything I should be aware of, then, moving the dovecot configurations over from such an old version to 14.04?  I expect this to be an evil migration headache in terms of settings, but I basically copied over the permissions and origin system settings to start with; would go through and modernize after
[15:21] <teward> lovely thing about backups is that they're there in case i botch things heh
[15:21] <ikonia> teward: should be fine, I'd look at any feature differences between the two dovecot versions
[15:21] <teward> ack
[15:21] <ikonia> I wouldn't blindly copy over the config
[15:21] <ikonia> certainly the data
[15:21] <teward> ikonia: y'know the problem though - i didn't configure it initially
[15:22] <teward> so i'm walking into the config blind :/
[15:22] <teward> at least, in the config migration
[15:22] <ikonia> teward: all the more reason not to copy it across
[15:22]  * teward shrugs
[15:22] <ikonia> work through the config - understand how it works, then re-apply that same concept to the later version of dovecot
[15:22] <ikonia> most of it should be the same, maybe some silly stuff around uid/gid of system users and some auth/encryption stuff would be different/better
[15:23] <teward> ikonia: time then is the problem - learning dovecot in this case would take too long for the migration plan at the workplace.  Kind of getting things handed to me, rather than being consulted with first :/
[15:23] <ikonia> shouldn't take long, it's pretty clear english in terms of a config
[15:23] <ikonia> just visually comparing parameters would be enough
[15:24] <teward> ikonia: so, what, compare original to a default from 14.04, drop things in where necessary?
[15:24] <ikonia> you'll probably find the only real differences are the auth/encryption stuff
[15:24] <ikonia> teward: more "merge"
[15:24] <ikonia> or "port"
[15:24] <ikonia> you may find totally identical functionality, in which case, just copy the whole file
[15:24] <teward> i wonder how stock the configs are on this origin system
[15:24] <ikonia> but that wouldn't be my starting point
[15:25] <rbasak> nacc: two publications in the same pocket going backwards? Yeah, sounds like a parent override is needed to me. Apart from warning or failing, I don't see what else we could do in that case.
[15:25]  * teward goes hunting for the packages
[15:26] <nacc> rbasak: yeah, back-to-back, because the first was in error (per the publishing log on lp)
[15:26] <nacc> rbasak: ack, will add that, so i can import clamav for jgrimm :)
[15:27] <jrwren> teward: why not rsync with -a?
[15:28] <jrwren> teward: iirc dovecot config changed a bit. you will want to migrate the config.
[15:29] <teward> jrwren: i'm going through config line by line now to try and find what changed - doesn't change the fact it's a PITA to do
[15:29] <teward> jrwren: didn't see -a on the version of rsync on the origin system
[15:29] <jrwren> teward: yup. i did it once. it was a PITA
[15:29] <teward> AFAICT so far, it's pretty stock
[15:29] <jrwren> teward: rsync -a was there in 9.04, i'm pretty sure.
[15:29] <teward> but ehh
[15:30] <teward> jrwren: already ran the rsync, but i'll do that next time
[15:30] <teward> until the new server is 'up' I expect to have to rsync data again
[15:30] <teward> :p
[15:32] <teward> jrwren: imap imaps pop3 pop3s in older, I assume that the s indicates SSL-secured?
[15:33] <jrwren> teward: yes
[16:02] <teward> jrwren: funny story: dovecot comes with a 'migrate the configuration' tool :/
[16:02] <teward> lol
[16:03] <ikonia> teward: I wouldn't trust that tool
[16:03] <jrwren> teward: well, i wish I knew that a couple yrs ago
[16:03] <ikonia> it's a bit hit and miss (of course depends on your config)
[16:03] <teward> ikonia: it gives me a starting point to see what's evil
[16:03] <ikonia> very true
[16:03] <teward> i'm not using it for actually generating the config
[16:03] <teward> i'm using it as a guide to know what the heck changed :P
[16:03] <ikonia> nope, can be useful
[16:04] <teward> it *looks* to me like this is a super basic configuration...
[16:04] <teward> based on the warnings and what i'm seeing lying around in dovecot
[16:04] <teward> but... only testing will tell
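For the record, the migration tool teward found is dovecot 2.x's doveconf, which can read a 1.x config and emit its 2.x equivalent. A sketch with placeholder paths; per ikonia's caution, the output is a starting point to review, not something to apply blindly:

```shell
# Convert a dovecot 1.x config to 2.x syntax with doveconf (ships
# with dovecot 2.x). Paths are placeholders; review the result by
# hand before using it.
# doveconf -n -c /path/to/old/dovecot.conf > /etc/dovecot/dovecot.conf.new
# diff -u /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.new
```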
[16:12] <jsheeren> hi, anyone got any experience with emulex oneconnect skyhawk 10gb/s nics?
[16:13] <ikonia> why don't you just ask the question
[16:14] <ikonia> I've used emulex 10g and fibre - just not the specific skyhawk version, however based on your question my answer should be "no"
[16:14] <jsheeren> i cannot get the card to detect a link; in the bios the card shows there's a link, but in ubuntu server 16.04 .. No link
[16:14] <ikonia> jsheeren: what bios ?
[16:14] <jsheeren> it's using the be2net driver
[16:14] <jsheeren> the dell server bios
[16:14] <jsheeren> dell poweredge r620
[16:15] <ikonia> ok, so thats just a basic link loop connectivity test
[16:15] <ikonia> the bios version of "green light on the port"
[16:15] <jsheeren> yes
[16:15] <ikonia> how are you checking the link in ubuntu
[16:15] <jsheeren> using ethtool
[16:15] <ikonia> ethtool doesn't support 10G I think (I'm not sure)
[16:16] <jsheeren> i'm guessing so, 'cause it's not showing any advertised link modes
[16:16] <jsheeren> nor speed
[16:16] <jsheeren> anyway i can check the link in ubuntu besides ethtool?
[16:16] <ikonia> what is the device name on ubuntu
[16:17] <jsheeren> eno1
[16:17] <jsheeren> sorry; dmesg shows:  eno1: link is down
[16:17] <ikonia> if you run "sudo ethtool eno1" what do you get
[16:17] <ikonia> ethtool does support 10G
[16:17] <jsheeren> settings.. supported ports (fibre) supported link modes 1gb and 10 gb
[16:18] <jsheeren> cannot paste it (using the drac at the moment)
[16:18] <ikonia> understandable
[16:18] <jsheeren> link detected = no
[16:18] <ikonia> does ubuntu see the card
[16:18] <jsheeren> yes
[16:18] <jsheeren> the be2net driver initialises the card according to dmesg
[16:19] <ikonia> if you do "ip link show" against that card what do you see
[16:21] <jsheeren> ikonia: no-carrier;broadcast;multicast;up
[16:21] <jsheeren> then state down
[16:21] <ikonia> so have you tried configuring the card ?
[16:22] <rbasak> nacc: am I OK to run the importer out of your git tree to do merges? Eg. exim4.
[16:23] <nacc> rbasak: i pushed exim4 last night, iirc
[16:23] <nacc> rbasak: sorry, should have e-mailed last night
[16:23] <rbasak> Oh. I didn't expect that. No problem, I'll just use it!
[16:24] <nacc> rbasak: i'm trying to get the parent override stuff in so i can import clamav and then i'll update the importer git repository properly
[16:24] <rbasak> OK
[16:25] <rbasak> nacc: did someone else ask for exim4? Or is that just an example? Just wondering if I'll clash with anyone to merge it.
[16:25] <nacc> rbasak: jgrimm did, iirc
[16:25] <nacc> rbasak: not on the list, but in my 1x1 last week
[16:25] <nacc> sorry, totally blanked on e-mailing that to the list
[16:26] <rbasak> Was that to merge or for an importer example do you know?
[16:26] <jgrimm> rbasak, i asked for it.  knowing it needed to be merged
[16:26] <rbasak> jgrimm: ah OK. Are you fine with me taking the merge?
[16:26] <jgrimm> rbasak, yep!
[16:26] <rbasak> I'm being unproductive so thought I'd hit up some merges.
[16:26] <rbasak> OK thanks ;)
[16:26] <nacc> rbasak: btw, the new algorithm's gitk graphs are much cleaner -- esp. wrt proposed and release
[16:26] <jgrimm> :) thanks sir
[16:26] <jsheeren> ikonia: yep in the interfaces file
[16:26] <jsheeren> but it stays down
[16:27] <ikonia> jsheeren: what happens when you try to bring it up
[16:28] <jsheeren> nothing
[16:28] <jsheeren> there is no link
[16:28] <ikonia> it must do something ?
[16:29] <nacc> jsheeren: are you able to (with ip link) set the link manually up? istr there are classes of devices where the link auto-detection doesn't always work (historic, might still happen sometime)
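The checks from this thread, gathered into one snippet. A sketch: IFACE defaults to lo so it runs anywhere; substitute eno1 (or your NIC name) in practice, and the commented steps need root:

```shell
# Link troubleshooting sequence: check carrier/admin state, try
# forcing the link up, then inspect speed/duplex and driver messages.
IFACE=${IFACE:-lo}
ip -o link show "$IFACE"          # NO-CARRIER / state DOWN means no link
# sudo ip link set "$IFACE" up    # force the administrative state up
# sudo ethtool "$IFACE"           # speed, duplex, "Link detected:"
# dmesg | grep "$IFACE"           # driver (here be2net) init/link errors
```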
[16:31] <nacc> rbasak: quick question, if you have a moment
[16:32] <jsheeren> nacc: i tried that; but no joy
[16:33] <nacc> jsheeren: ah ok
[16:33] <jsheeren> i'm guessing there's a driver issue
[16:33] <rbasak> nacc: sure
[16:33] <jsheeren> i contacted our contact at dell for this
[16:33] <jsheeren> i'm hoping he has good news for us tomorrow
[16:33] <nacc> rbasak: do you mind if we do a hangout?
[16:33] <jsheeren> got to go
[16:33] <jsheeren> thank you all for the suggestions/tips!
[16:34] <rbasak> nacc: inviting...
[16:38] <teward> ikonia: jrwren: if I want to tell Dovecot the order of where to check for the mailboxes, is that done as mail_location=FIRSTLOCATION:SECONDLOCATION   ?
[16:38] <ikonia> I think so
[16:39] <ikonia> not got a config open in front of me to check
[16:39] <jrwren> teward: i don't know. i'd have to read docs. it's been a few yrs.
[16:39] <teward> was just curious if you knew offhand, I'll dig
[16:39] <ikonia> nope, not off hand
[16:39] <teward> i'm currently rsyncing the mailboxes which are *not* in user directories >.<
[16:39] <teward> (23GB+)
[17:29] <coreycb> ddellav, ceilometer 5.0.3 uploaded to the wily review queue, thanks
[17:40] <coreycb> beisner,  nova 1:2014.1.5-0ubuntu1.5~cloud0 is ready to promote to icehouse-proposed
[17:40] <teward> ikonia: I think I have this all done, now, the warnings the system triggered definitely helped, only way to know is to test later heh
[17:40] <teward> thanks to you and jrwren for your pointers/advice/suggestions
[17:42] <beisner> coreycb, ok, pushed that
[17:42] <coreycb> beisner, thanks
[17:42] <beisner> yw coreycb
[17:49] <coreycb> beisner, neutron 2:8.1.0-0ubuntu0.16.04.2~cloud0 is also ready to promote to mitaka-updates
[17:50] <beisner> coreycb, any stable charm implications re: pkg version?
[17:50] <beisner> ie. 8.0.0 --> 8.1.0
[17:51] <coreycb> beisner, I think they've all landed, but let me check
[17:53] <beisner> coreycb, yah i see         ('8.1', 'mitaka'),   in neutron-gateway @ stable/16.04
[17:54] <beisner> landed may 18
[17:55] <coreycb> beisner, yes, and neutron-api / neutron-openvswitch are good too
[17:57] <beisner> coreycb, ah yes, was looking for landscape clear signals.  they've marked fix-committed on that side.  pushing!
[17:58] <coreycb> beisner, yay :)  btw james fixed up that charm-helpers code so we shouldn't have to deal with the version bumps anymore, in the next version of the charms at least
[17:58] <beisner> coreycb, sweet!
[18:01] <beisner> coreycb, pushed re: bug 1580674
[18:11] <ikonia> teward: very nice work
[18:12] <teward> ikonia: give me a stick, i think i need to beat myself with it 'cause dovecot gave me a headache, and I should have learned this a year ago when doing my linux certification training heh
[18:13] <Yuri4_> Guys, how do I make 2 servers exactly the same? I already have 1 set up. I need the second to back up the first under a load balancer.
[18:14] <teward> Yuri4_: 'exactly' the same is not possible, there *will* be minor differences
[18:14] <teward> image the first one, put the image on the new one, adjust hostname and IP data
[18:14] <teward> that's how I'd do it
[18:15] <Yuri4_> teward, but once the data on server 1 changes, the data on server 2 stays the same
[18:15] <Yuri4_> so...
[18:16] <teward> Yuri4_: network storage between the two servers for sharing of data, and then that problem goes away; secondary issue you're always going to have though is that there's the one central datastore then
[18:16] <teward> and you don't state if loadbalancing is done at the same physical location, or between two servers not at the same location
[18:16] <Yuri4_> teward, and how do I do that
[18:16] <Yuri4_> ?
[18:23] <Yuri4_> teward?
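One common way to implement the shared-datastore idea teward describes is an NFS export from one box (or a dedicated storage host) mounted by both servers. A hedged sketch; the hostname "storage", the paths, and the client subnet are all placeholders:

```shell
# On the machine holding the data:
# sudo apt-get install nfs-kernel-server
# echo '/srv/data 192.0.2.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
# sudo exportfs -ra
# On each load-balanced server:
# sudo mount -t nfs storage:/srv/data /srv/data
```

This trades the sync problem for a single central datastore, which is exactly the caveat teward raises.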
[18:55] <pentiumone133> i have a remote ubuntu server that i need to change the IP address on, what would be the best way to do so to ensure I don't get locked out of it forever
[18:57] <pentiumone133> just change the interfaces file and bounce the nic somehow?  whats the best way to bounce it these days?  i know that you used to do init.d/networking restart but im seeing some articles that the behavior of that script is different now
[18:57] <pentiumone133> ifdown eth0 && ifup eth0 in a screen session?
[18:58] <sarnold> maybe skip the && -- if the first fails you don't want the second to be ignored
[18:58] <pentiumone133> good point
[18:58] <sarnold> tych0: that reminds me, I saw in your lxd networking blogpost that you suggested restarting the networking service -- I thought we blocked that from doing anything in recent releases?
[19:00] <pentiumone133> id really like to take the existing IP that it is using and move that to eth0:1, and give eth0 a new address
[19:00] <pentiumone133> then i can take eth0:1 up and down with the ip that i care about without losing connectivity
[19:00] <tych0> sarnold: oh, could be actually
[19:01] <tych0> sarnold: i mostly pulled that from some instructions i wrote a while ago
[19:01] <tych0> let me see.
[19:01] <pentiumone133> in my case it is an 11.04 box
[19:01] <sarnold> zounds
[19:03] <patdk-wk> man, 11.04 hasn't been supported since a lifetime ago
[19:04] <pentiumone133> exactly why im doing this.  replacement is ready to go but i need the replacement to have the same IP
[19:04] <sarnold> pentiumone133: if this is just a temporary measure maybe just use ip addr add ... and skip the /etc/network/interfaceds and so on?
[19:05] <pentiumone133> it will be permanent but because it is remote, i need to be able to get into the old machine if i have to after the new box is live
[19:05] <pentiumone133> at least for a day or two before they can overnight it to me
[19:06] <pentiumone133> basically if i change over and SHTF i need to be able to bring the old one back without buying a plane ticket
[19:07] <pentiumone133> although, it is in vegas, so maybe that is a better option
[19:07] <sarnold> hehe
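A common safety net for this kind of remote change is a dead-man's switch: schedule an automatic revert, make the change, and cancel the revert only once you've confirmed you can still get in. Demonstrated here with a temp file standing in for the network change; in practice the backgrounded command would restore the old /etc/network/interfaces and bounce the NIC:

```shell
# Dead-man's-switch pattern: the backgrounded "revert" fires unless
# it is cancelled in time. The temp file stands in for the real
# network change.
flag=$(mktemp)
(sleep 2; rm -f "$flag") &      # the scheduled revert
revert_pid=$!
# ...make the change, confirm you can still SSH in, then:
kill "$revert_pid"              # connectivity confirmed: cancel the revert
sleep 3
test -f "$flag" && echo "change kept"
```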
[19:27] <newbsie> Why is it bad to leave root login if you disable password based login?
[19:27] <newbsie> Is it because the username is known?
[19:28] <AndyWojo> because if you log directly in to root, and someone makes a change / causes an issue, you can't see who did it
[19:28] <AndyWojo> If they logged in as their user, and used sudo, that is tracked.
[19:29] <newbsie> AndyWojo: ahhhh... so root user doesn't get logged like other accounts.
[19:31] <sdeziel> newbsie: neither root nor any other user has their actions logged (unless you use auditd). On the other hand, when someone uses sudo, this gets logged
[19:31] <newbsie> sdeziel: gotcha
[19:32] <AndyWojo> well that's not true
[19:32] <AndyWojo> the actions are logged as root
[19:32] <AndyWojo> you just don't know *who* it is
[19:32] <AndyWojo> so just to show you what I mean
[19:32] <AndyWojo> log in as yourself, and sudo su -
[19:33] <AndyWojo> Then do the following two commands:    whoami      who am i
[19:33] <AndyWojo> When you do, who am i, it shows your real user, even if you are root
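AndyWojo's demonstration in snippet form: whoami reports the *effective* user, while `who am i` reads utmp and reports the login session's original user, which is why it still names you after `sudo su -`:

```shell
# After `sudo su -`, the first prints root while the second still
# shows the original login user (from utmp). In a non-interactive
# session with no utmp entry, `who am i` may print nothing.
whoami      # effective user
who am i    # original login user, from utmp
```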
[19:33] <sdeziel> AndyWojo: what do you call "actions logged" the shell history?
[19:37] <jrwren> newbsie: it's bad because nothing is gained by doing so.
[19:39] <newbsie> jrwren: I guess the short version is, just setup a new user and enable sudo on it and disable root logins over ssh.
[19:39] <jrwren> yes, especially since that is the default.
[19:40] <newbsie> jrwren: what do you mean by it is the default? My box spins up with root user.
[19:41] <jrwren> newbsie: ubuntu hasn't had a password-enabled root account in a very long time. Your box may not be ubuntu?
[19:42] <newbsie> jrwren: I'm sorry, yes you are correct. I misunderstood you. My box spins up with a key-based login
[19:42] <newbsie> jrwren: pre-set by hosting provider (digital ocean)
[19:43] <jrwren> newbsie: and they use root for that instead of the "ubuntu" user eh? that is a shame. They shouldn't. They are doing it wrong. Sorry.
[19:43] <stokachu> jrwren: i dunno i consider DO droplets as throwaway vms
[19:43] <stokachu> just having a root user to deploy an application is normally all you want
[19:43] <newbsie> jrwren: yeah, first time I login, it is as root
[19:44] <jrwren> stokachu: it doesn't matter, its still wrong. cloudimg and CPC is the right way.
[19:44] <jrwren> newbsie: that is disappointing. Oh well. TIL.
[19:45] <newbsie> jrwren: I think their focus is on easy to get going, more than setting barrier which security kind of is.
[19:45] <jrwren> that was windows focus throughout the 90s. It did not end well for internet security ;p
[19:46] <newbsie> jrwren: It didn't end well for security, but it ended well for marketshare among consumers.
[19:47] <jrwren> newbsie: indeed. Which approach is better for humanity overall?
[19:47] <newbsie> jrwren: besides, it's not like the internet is more secure today with the proliferation of *nix systems in general.
[19:47] <jrwren> newbsie: its not? can you prove that assertion?
[19:48] <jrwren> newbsie: I do not mean to suggest that the internet is secure, however, the removal of entire exploit vectors has been good for us.
[19:48] <newbsie> jrwren: Better for humanity? A company existing is pretty good result imo.
[19:49] <newbsie> jrwren: what I meant to say is that, *nix systems are still vulnerable.
[19:49] <sdeziel> newbsie: everything is vulnerable ;)
[19:50] <jrwren> newbsie: now we are getting into economics. Is company existing when quality of life is low for all better than a company not existing, but overall quality of life is better?
[19:50] <newbsie> sdeziel: of course unless you aren't connected :)
[19:50] <jrwren> newbsie: when you say *nix systems are vulnerable, are you implying all of them, or only some? which ones? what are the vectors? they are much different than they were and that is good and that is my point.
[19:51] <newbsie> jrwren: So your argument is that it is harder?
[19:51] <jrwren> newbsie: Yes that is part of it.
[19:52] <newbsie> jrwren: My point is that the approach is often viewed in a vacuum, and that is a limited view.
[19:53] <patdk-wk> are we only talking known vectors? or also unknown? quality of development? ...
[19:54] <jrwren> newbsie: I see. I really like that point. I really dislike blanket prescriptions. Still, in this case, I see no benefit to not doing what cloudimg does, but I'll admit I'm wearing blinders.
[19:54] <newbsie> jrwren: MS view was to get ease of use, so that every home can have a computer.
[19:54] <newbsie> jrwren: DO is trying to get more users, and not putting up walls. Security is at your own choice.
[19:56] <sdeziel> while I'm in favor of sudo in general, most of the time the audit trail isn't reliable because people are used to doing sudo -i/sudo su - which bypasses sudo logging
[19:56] <jrwren> newbsie: I do not see how they will get more users or how it is putting up walls to deviate from the way AWS, Azure and every Ubuntu Certified Cloud Partner does it. It only makes things harder by being different for no reason.
[19:58] <newbsie> jrwren: You login and you immediately have access to everything. If you are new, you might not know about su. Similarly, I came in asking very basic questions.
[19:59] <jrwren> newbsie: if you are new, you will be referencing ubuntu docs often all of which know the way the ubuntu cloudimg does it, all of which document using sudo.
[19:59] <jrwren> newbsie: yes, we side tracked from your original questions into a rather interesting discussion.
[19:59] <jrwren> I like DO. I am only disappointed in their deviance from the standard.
[20:00] <newbsie> jrwren: I found it easier, but I also worked with AWS, and their Amazon Linux logs you in as ec2-user.
[20:03] <patdk-wk> their amazon linux != ubuntu
[20:03] <patdk-wk> and they don't claim it is either
[20:04] <newbsie> patdk-wk: yeah, it is based on redhat I believe
[20:04] <patdk-wk> it's a redhat/centos clone
[20:05] <newbsie> patdk-wk: but in general aren't the different flavors kind of similar in the end.... I mean I get the difference in tools included, layout, and so on, but to me as an infrequent user, they all look kind of the same.
[20:05] <patdk-wk> how are they in the least the same?
[20:06] <patdk-wk> sure, bash on one, is mostly the same as bash on another
[20:06] <patdk-wk> except ubuntu doesn't use bash by default, so there goes that
[20:06] <patdk-wk> config files are totally different
[20:06] <patdk-wk> ubuntu uses apparmor and not selinux
[20:06] <patdk-wk> they are highly different
[20:06] <patdk-wk> but if you only look at the surface, sure you could mistake one for the other
[20:07] <newbsie> patdk-wk: but those are just the tools to me. Can't you just install it?
[20:07] <patdk-wk> no
[20:08] <patdk-wk> not without starting to custom compile your own kernel
[20:08] <patdk-wk> changing things
[20:08] <patdk-wk> and well, then you just end up with the other system
[20:08] <jrwren> i agree with you newbsie, they are all the same.
[20:08] <newbsie> patdk-wk: Well, I'm not knowledgeable about that. To me, I just install whatever I need, and I notice often you can just install whatever you need.
[20:09] <jrwren> they are the same until they are different.
[20:09] <patdk-wk> yes, but follow enough documentation for one, and it likely won't work for the other
[20:09] <patdk-wk> it will be close, but you will run into issues quickly
[20:09] <patdk-wk> and it might be simple to fix, and it often will run you into a fun rabbit hole :)
[20:09] <jrwren> yes, what patdk-wk said.
[20:10] <jrwren> and if you don't know the differences, you won't even know you are in the rabbit hole
[20:10] <newbsie> but aren't the differences mostly in where the configuration files are? The package is often already there.
[20:11] <patdk-wk> in the simplest of cases sure
[20:12] <newbsie> hmmm.... So why does this matter?
[20:13] <newbsie> Like what do these differences do for the user?
[20:13] <patdk-wk> everything
[20:13] <patdk-wk> it's a mindset
[20:13] <patdk-wk> it's a way of thinking
[20:13] <patdk-wk> it doesn't do anything for the user, except users will find one or the other easier for them
[20:13] <jrwren> well, this is #ubuntu-server so to me, users never see this stuff. devs and admins see this stuff. Users use the software and services the devs and admins build and deploy.
[20:13] <patdk-wk> and then they know what one they are more comfortable with
[20:14] <jrwren> the devs and admins will care about these differences when they have to make software X work on non-ubuntu distro Y and things don't work.
[20:14] <patdk-wk> for me, updating rhel system, was just always painful and prone of failure
[20:14] <newbsie> To me, all these flavors don't add anything. I use Ubuntu, because it is almost always available everywhere, and information is easily accessible.
[20:15] <jrwren> dig deeper. you'll eventually discover what they do add and have another reason to use ubuntu ;]
[20:15] <patdk-wk> I use whatever I'm given
[20:15] <patdk-wk> but since I have a huge repo of software I maintain for ubuntu/debian, I'll prefer ubuntu
[20:15] <newbsie> I frankly am becoming a devops person, but my experience is quite limited with this.
[20:15] <patdk-wk> though, I used to maintain the stuff for rhel when I used that before ubuntu existed
[20:16] <patdk-wk> and for slackware, before rhel existed
[20:16] <jrwren> you maintain your own repos?
[20:16] <patdk-wk> me? sure, what sane admin wouldn't
[20:16] <newbsie> To me these differences are actually making it harder....
[20:16] <jrwren> do you use reprepro?
[20:16] <patdk-wk> I used to
[20:16] <jrwren> what do you use now?
[20:16] <patdk-wk> oh, for that level, I gave up years ago
[20:17] <patdk-wk> I use the ppa's since they were made
[20:17] <newbsie> I don't see the value, but perhaps somebody that tweaks servers, and say need performance for something and they find this flavor suits their need better....
[20:17] <patdk-wk> doing it manually was much work back then :)
[20:17] <jrwren> how do you manage rollbacks, given that PPA doesn't support more than one version of the same package?
[20:17] <patdk-wk> I don't rollback
[20:17] <jrwren> newbsie: maybe someday you'll see the value.
[20:17] <patdk-wk> I have a testing ppa I use first
[20:17] <patdk-wk> and a production ppa
[20:17] <patdk-wk> and a few others for more customized stuff
[20:18] <newbsie> jrwren: yeah, if I dig deep enough.
[20:18] <patdk-wk> I can always just republish, I have all versions on my system
[20:18] <jrwren> patdk-wk: soo... what if something rolls out to production, is deployed, but a bug is found later and rather than push a new deb (likely takes long) you want to rollback?
[20:18] <patdk-wk> just compile it and install from .deb?
[20:18] <jrwren> republish... so... delete from PPA, wait for the delete to process, and re-upload?
[20:19] <jrwren> hahaha, yeah, compile it and install from .deb is an option.
[20:19] <patdk-wk> the only thing I really miss, that kindof sucks is
[20:19] <sarnold> apt-get install foo=version.number.goes.here
[20:19] <patdk-wk> I wish I could mark things as security updates on ppa's
[20:19] <jrwren> these are the problems I'm currently facing. Many solutions. I'm wondering what is best.
[20:19] <patdk-wk> marking as security update is much more annoying to me, than rolling back :)
[20:19] <jrwren> sarnold: that only works if you point to many PPAs and they have version.number.goes.here and version.number.goes.there
[20:20] <jrwren> sarnold: it is the option I like best.
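sarnold's one-liner deserves a little expansion; a hedged sketch with a hypothetical package name and version, assuming the older .deb is still available somewhere apt can see it (local cache, or an archive/PPA that still carries it):

```shell
# Roll back to a previous version from the pool, then pin it so the
# next apt upgrade doesn't immediately undo the rollback.
# "foo" and the version are placeholders.
# sudo apt-get install foo=1.2.3-0ubuntu1   # downgrade to the known-good version
# sudo apt-mark hold foo                    # keep it from upgrading again
# apt-cache policy foo                      # confirm which version is installed
```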
[20:21] <patdk-wk> I'm also kindof surprised at how many other people use my ppa
[20:21] <patdk-wk> get random emails and irc messages from some of them
[20:24] <apb1963> Got no response in #ubuntu and since printing is a server feature... here goes.  ubuntu 14.04 LTS; I'm using Nimbus screenshot within Firefox - trying to print.  It appears to send it to the printer OK, no errors or anything to indicate a problem.  But the printer just sits there idle.  Nothing in the print queue. "echo test | print" works.  I'm not sure where to go from here.
[20:45] <nacc> apb1963: not sure how printing is a server feature, and you should read !patience, but i'm guessing one (firefox) is using cups and possibly print (which is the mailcap helper) is using something else, not sure. You could try testing with 'lp', iiuc. Or use the GUI to print a test page?
[20:48] <apb1963> nacc: "echo test | print lpr" works.
[20:49] <apb1963> nacc yes to cups ... and the HP driver for the HP printer.
[20:51] <nacc> apb1963: are you sending to a different printer from firefox? not sure what nimbus screenshot is, but can you print anything from firefox?
[21:02] <apb1963> nacc: Yes.  File->Print works
[21:02] <apb1963> nacc: Same printer.
[21:03] <nacc> apb1963: i'd ask the nimbus folks what they are doing differently, then
[21:04] <apb1963> yeah, kinda thinking the same at this point.  You should take a look at it though, very nice screenshot utility with many features.  Other than it not printing for me of course.
[21:04] <apb1963> thanks for your help!
[21:06] <nacc> apb1963: when i need screen shots, i hit print screen :)
[21:06] <nacc> also i never need screen shots :)
[21:06] <nacc> apb1963: i would not think they are doing anything special to print, so my guess is they are printing no data, maybe
[21:09] <apb1963> dunno
[21:09] <apb1963> sounds reasonable but... strange.
[21:10] <apb1963> so yeah, I'll ask what they have to say about it.
[21:11] <apb1963> what's nice about nimbus is it lets you annotate the screenshot.  So add text, color, blur portions of the screen that are "private", etc.
[21:12] <nacc> apb1963: is that an ubuntu package?
[21:13] <apb1963> firefox add on
[21:13] <nacc> apb1963: ah, then definitely should contact the upstream/addon project first :)
[21:15] <apb1963> I like to ask on IRC first since it's often the case others here are using the same addon and have already figured out the answer.  But yes, that's my next stop now.  Thank you :)
[21:18] <apb1963> nacc: actually, I did find this in the cups error log: E [01/Jun/2016:12:57:33 -0700] [Client 16] IPP read error: Invalid media name arguments.
[21:19] <apb1963> sort of implies a configuration error
[22:52] <drab> hi, are there any recommendations for backing-up a bunch of servers?
[22:52] <drab> I'm leaning toward using rsync with hardlinks, there's a couple really good scripts out there, or maybe backupninja
[22:52] <drab> the main problem I'm facing is how to get to the data
[22:53] <drab> in a push model, I'd have to have each server hold an ssh private key and put the pub on the backup machine
[22:53] <drab> in a pull model I need to get in as root to be able to fetch /etc or other root-only pieces of the fs
[22:53] <drab> is there a known-better pattern to do this?
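A local, runnable sketch of the rsync-with-hardlinks scheme drab mentions: each snapshot is a full tree, but files unchanged since the previous snapshot are hardlinked via --link-dest, so they cost no extra space. In the pull model the source would be root@host:/path, ideally with a key restricted in authorized_keys (e.g. via the rrsync helper shipped with rsync):

```shell
# Two hardlink snapshots of a source tree; temp dirs stand in for
# the real server and backup volume.
src=$(mktemp -d); back=$(mktemp -d)
echo v1 > "$src/etc.conf"
rsync -a "$src/" "$back/snap.1/"
rsync -a --link-dest="$back/snap.1" "$src/" "$back/snap.2/"
# unchanged files share an inode across snapshots:
[ "$(stat -c %i "$back/snap.1/etc.conf")" = "$(stat -c %i "$back/snap.2/etc.conf")" ] \
  && echo "hardlinked"
```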