[02:49] <rchavik> hi, is there an equivalent in apt to 'yum history' and 'yum history undo <transaction>' ?
[02:50] <sarnold> I don't know what those do, but /var/log/dpkg.log has some information on what was installed when
[02:53] <rchavik> it lists history of package installs, and installed dependencies.   the undo is particularly helpful because it can automatically remove the dependencies too
[02:54] <rchavik> pity there's no equivalent in ubuntu
[02:56] <sarnold> oh
[02:57] <sarnold> if you uninstall a package that dragged in dependencies, apt-get autoremove can clean those up
[02:57] <dpb1> yummy
[02:57] <sarnold> the deborphan package can help if apt has lost track of what was installed strictly for dependencies..
[02:58] <rchavik> got it, thanks
[03:35] <JanC> apt-get also has the --autoremove option for the remove command, which will do that in one step
[03:35] <sarnold> oh sweet
[03:36] <JanC> or you can set APT::Get::AutomaticRemove to make that automatic
[03:36] <sarnold> "--auto-remove, --autoremove". dude. <3
[03:38] <JanC> careful if you mix several APT tools though, as I'm not sure all of them use the same database to store installed-as-dependency info
[03:38] <JanC> (I remember they didn't in the past, not sure they do now)
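The options discussed above, as a quick sketch; the package name is only an example, and the config snippet path is an assumption:

```shell
# Remove a package plus the dependencies it dragged in, in one step:
sudo apt-get remove --autoremove nginx

# Or clean up already-orphaned dependencies after the fact:
sudo apt-get autoremove

# JanC's option, set in an apt config snippet, makes this the default:
echo 'APT::Get::AutomaticRemove "true";' | sudo tee /etc/apt/apt.conf.d/90autoremove
```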
[03:39] <sarnold> twenty years of typing apt-get update && apt-get -u dist-upgrade has kinda burned that into my fingers
[03:39] <JanC> I mean apt-get vs. aptitude vs. ...
[03:39] <sarnold> you'd think I'd shorten that .. but no.
[03:39] <sarnold> so I tend to forget that aptitude even exists.
[03:42] <JanC> I guess it's trivial to make an alias for update-then-upgrade  :)
[03:43] <JanC> or a bash function if you want to make it somewhat fancier
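The alias/function JanC suggests, wrapped around sarnold's habitual pair; the names are arbitrary:

```shell
# One-shot alias for interactive shells:
alias agu='sudo apt-get update && sudo apt-get -u dist-upgrade'

# Or a function, which can pass extra options through to dist-upgrade:
up() {
    sudo apt-get update && sudo apt-get -u dist-upgrade "$@"
}
```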
[03:43] <sarnold> it's been a busy two decades
[04:39] <lyn||orian> muscle memory probably now
[07:01] <lordievader> Good morning
[07:52] <Neo4> where is located syslog file in ubuntu?
[07:52] <Neo4> I've read it should be in /etc/syslog.conf , but there is nothing
[07:52] <Neo4> Does ubuntu have a syslog.conf file?
[07:53] <Neo4> and what does this command do: ps auxwww | grep syslog ?
[07:54] <Neo4> the book I'm reading about Unix is from 2005, but I think nothing has changed since then
[08:03] <Neo4> does anyone know what a hostname lookup is?
[08:25] <lordievader> Neo4: What version of Ubuntu are you running?
[08:26] <lordievader> And what is it that you are trying to accomplish? Read the syslog or configure the syslogger?
[08:30] <Neo4> 16.03
[08:31] <Neo4> nothing, the book says this is the main log file
[08:31] <Neo4> it lists all the paths where your system stores logs
[08:32] <Neo4> I wanted to look at it but couldn't find it; the book is old and about Unix in general
[08:40] <parlos> Good Morning
[09:27] <lordievader> Neo4: Logfiles are typically stored in `/var/log`, though with 16.04 you use systemd which comes with journald. To access those logs you need to use the `journalctl` utility.
[09:28] <lordievader> Hey parlos
[09:30] <JanC> Neo4: Ubuntu uses rsyslogd instead of some older syslogd, so the syslog configuration is in /etc/rsyslog.conf
[09:30] <Neo4> ok
[09:30] <JanC> but as lordievader says, you can see logs with journalctl too
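A few commands tying the above together on a 16.04 box; the unit name is only an example:

```shell
# Classic text logs still land here via rsyslogd:
ls /var/log/syslog /var/log/auth.log

# Active (non-comment) lines of the rsyslog configuration:
grep -Ev '^\s*(#|$)' /etc/rsyslog.conf

# The systemd journal: everything since boot, or one unit since today:
journalctl -b
journalctl -u ssh.service --since today
```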
[10:10] <adac> Is there a standard way of removing all old kernels?
[10:16] <rbasak> adac: "apt autoremove". With --purge if you wish. This will remove everything that apt thinks is unused, including old kernels.
[10:17] <rbasak> At least from Xenial onwards. Not sure about Trusty.
[11:16] <adac> rbasak, thanks!
[11:16] <adac> is purge needed as well?
[11:18] <adac> I mean, so that the kernels do get removed?
[11:21] <rbasak> adac: the payload will get removed just with autoremove. purge also removes config files and knowledge of the package from the package manager.
[11:21] <rbasak> I almost never use remove on its own.
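The cleanup rbasak describes, as a short sketch (assuming Xenial or later):

```shell
sudo apt autoremove --purge        # remove unused packages, old kernels included
dpkg --list | grep linux-image     # verify which kernel packages remain
uname -r                           # the running kernel, which apt keeps
```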
[11:21] <adac> rbasak, ok yes thanks!
[11:27] <adac> rbasak, "apt autoremove" is something different than "apt-get autoremove"?
[11:28] <rbasak> apt is a friendlier front-end with some defaults changed.
[11:28] <rbasak> Since apt-get is generally locked into its interface and defaults, because scripts use it
[11:28] <adac> rbasak, ok thanks
[11:29] <adac> rbasak, so one should generally use apt now?
[11:29] <tomreyn> adac: either is fine, apt may be more user friendly
[11:29] <tomreyn> for scripting things, use apt-get
[11:30] <adac> kk
[11:32] <adac> hmm even though I did "apt-get autoremove --purge" it still shows me a lot of images left
[11:32] <adac> dpkg --list | grep image
[11:32] <adac> https://pastebin.com/XxrEeiGA
[11:34] <tomreyn> !info linux-image-generic xenial
[11:34] <tomreyn> do you have update-manager installed?
[11:35] <rbasak> adac: the mechanism depends on apt considering the kernel packages "automatically installed".
[11:35] <rbasak> The apt-mark command will tell you what is marked auto and what is marked manual.
[11:35] <adac> ok have to check thanks guys!
[11:35] <rbasak> Usually there's a metapackage like linux-image-generic marked manually installed that depends on the latest actual kernel package
[11:36] <rbasak> And the actual kernel packages remain marked automatic.
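A sketch of inspecting the auto/manual marks rbasak describes; the kernel version string below is illustrative:

```shell
apt-mark showmanual | grep linux       # metapackages such as linux-image-generic
apt-mark showauto | grep linux-image   # actual kernels, eligible for autoremove

# A stray old kernel can still be purged by name:
sudo apt purge linux-image-4.4.0-109-generic
```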
[11:36] <rbasak> If you have your kernels installed in some special way, that may break.
[11:37] <adac> ok, need to go through this, be back in some time, will surely have some more questions :)
[11:38] <adac> I think I forgot --purge on that last host where the *images* are still there
[11:38] <adac> on another host where I used --purge now definitely the images are gone
[11:46] <adac> no, that was not the issue. checking this marked stuff now
[11:47] <adac> apt-mark showauto shows me:
[11:47] <adac> https://pastebin.com/wr8QCfuw
[11:48] <adac> rbasak, can i get rid of those old images then somehow?
[11:49] <rbasak> adac: you can purge the old package manually
[11:50] <adac> rbasak, ok simply by package name
[11:52] <adac> worked
[11:52] <adac> thanks again rbasak and tomreyn
[16:00] <rbasak> nacc: dpb1: https://irclogs.ubuntu.com/2018/03/26/%23ubuntu-server.html#t13:52
[16:00] <nacc> rbasak: teward: do you have a link to the 16.04 request?
[16:03] <rbasak> nacc, dpb1: https://lists.ubuntu.com/archives/ubuntu-server/2015-June/007080.html
[16:04] <JediMaster> Hey all
[16:04] <JediMaster> Is there a tool to migrate from a basic /etc/network/interfaces to the new /etc/netplan/*.yaml file? And yes, I've tried what the ubuntu docs say, "netplan ifupdown-migrate" isn't a valid option (at least in bionic)
[16:04] <dpb1> nacc: Hey, in between waiting on reviews, could you please dive down and see if this upgrade to nginx makes sense.  We'll still try to get teward's input, but would help to have your validation
[16:05] <rbasak> nacc: dpb1: also https://lists.ubuntu.com/archives/ubuntu-release/2015-July/003310.html
[16:05] <dpb1> JediMaster: where do you see the docs saying 'ifupdown-migrate'
[16:05] <dpb1> JediMaster: that's not supposed to be there and we don't recommend an automated tool to migrate ATM
[16:05] <JediMaster> I'm trying to clone a VM in VMWare using chef's 'knife vsphere' command, and vmware sets the IP, gateway and other bits via the /etc/network/interfaces file as it looks like it's not caught up with netplan on 17.10/18.04 yet. I could easily write a script to run a command to convert the config then run netplan to bring the interfaces up
[16:06] <nacc> dpb1: ack, will do
[16:06] <nacc> rbasak: thank you
[16:06] <JediMaster> dpb1, https://wiki.ubuntu.com/Netplan under "Commands" near the top
[16:06] <nacc> dpb1: cough, that should be just a link to netplan.io, no?
[16:06] <nacc> cyphermox: --^
[16:10] <JediMaster> I'm getting to the point that I think I'll have to write a script to do the migration myself lol
[16:11] <JediMaster> it's probably not *that* hard, just didn't want to re-invent the wheel
[16:13] <JediMaster> Still not entirely sure what it is that writes the /etc/network/interfaces file when VMWare clones the machine, I'm guessing it must be the 'open-vm-tools' package, in which case that probably needs updating to work with netplan
[16:14] <JediMaster> dpb1, so I presume I'll need to write one myself then, just while the vmware tools don't support netplan?
[16:15] <JediMaster> it's a super simple config, and will always be the same other than different IPs
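For the simple static case JediMaster describes, a netplan file along these lines should be equivalent to a static /etc/network/interfaces stanza; the interface name, addresses and file path are all illustrative:

```shell
# Write a minimal static netplan config and apply it (needs root):
cat > /etc/netplan/01-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens160:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
netplan apply
```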
[16:19] <dpb1> nacc: :/
[16:19] <dpb1> JediMaster: what vmware tool is that?
[16:20] <dpb1> nacc: I'll fix that now, thanks
[16:21] <nacc> dpb1: i'm *guessing* the wiki page predates netplan.io and was never updated once the other page went live
[16:21] <dpb1> yes
[16:22] <JediMaster> dpb1: My best guess is that it's the open-vm-tools package in ubuntu that writes to the /etc/network/interfaces file when you clone a VM and specify a new IP/gateway etc
[16:22] <nacc> cpaelzer: --^ i think you were looking at that package?
[16:22] <dpb1> JediMaster: what action do you take from the outside?  *just* pick an ubuntu vm and clone it?
[16:24] <JediMaster> dpb1: I've just made an Ubuntu 18.04 (yes beta) template machine, the netplan file was made correctly from the installer and it has network access, I then use chef's 'knife vsphere' integration that talks to VMWare's Vsphere, which clones the machine and sends commands, I believe via vmware tools (open-vm-tools), to set the new IP/gateway and DNS
[16:25] <JediMaster> I highly doubt that vmware/vsphere would actually write directly to the disk, so I suspect it's the open-vm-tools package that does the network changes. It's worked perfectly on Ubuntu 14.04 and 16.04, but then gets stuck waiting for the network interface to configure on 18.04, as it's written to the wrong file
[16:26] <nacc> JediMaster: you also could install ifupdown (iirc) in 18.04
[16:26] <dpb1> JediMaster: ok
[16:27] <JediMaster> nacc.... ah, I didn't know that was an option, but that seems a more dirty hack than writing a script to create a yaml file for netplan somehow ;-)
[16:27] <dpb1> JediMaster: see https://netplan.io -> examples -> I really do need ifupdown, can I still use it?
[16:27] <dpb1> JediMaster: can you do that?
[16:28] <dpb1> JediMaster: also, please take the advice in that first sentence and file your workflow.  The detail that you give in your IRC comments here would be great in a bug.  Say exactly what didn't work.
[16:28] <dpb1> sorry, *file a bug
[16:29] <JediMaster> dpb1, Sure, I'd be happy to, thanks for your help, nacc too
[16:29] <JediMaster> Should I file one in both netplan and open-vm-tools?
[16:30] <nacc> JediMaster: +1 :)
[16:30] <dpb1> JediMaster: that same bug, just *target* it to both projects
[16:30] <JediMaster> ah yes, of course
[16:30] <dpb1> tyvm
[16:30] <JediMaster> No problem, I'll get on to it shortly, thanks
[16:31] <JediMaster> netplan is certainly more complex than ifupdown & /etc/network/interfaces syntax, but it's so much more powerful, it'll just take a bit of getting used to =)
[16:34] <bladernr> Hey gang, while I'm testing a customer config, I wanted to see ask if booting from NVMe is a viable option. System has several platter drives meant for data storage and VM hosting, and an NVMe meant for the root FS.  Is that a valid deployment scenario?
[16:35] <dpb1> bladernr: I'd think it would come down to EFI/bios support?  Unless I'm missing something?  it's just a disk to ubuntu.
[16:36] <bladernr> dpb1, ok, that's what I thought, I just wanted to validate that. (I've never had my hands on a system with NVMes before now).
[16:36] <bladernr> thanks
[16:36] <dpb1> bladernr: oh, ok
[18:34] <shibabandit> Hope I'm in the right channel and apologies if this was already asked... but attempting to figure out why certain ubuntu cloud image configurations that used to be there are missing. We typically use the endpoint 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt' (or xenial instead of trusty). Used to have the image combination of 'us-east-1', 'ebs-ssd', 'amd64', and 'hvm'. Any help is appreciated
[18:37] <sarnold> rcj,Odd_Bloke, ^^ does shibabandit's question sound familiar?
[18:37] <sarnold> shibabandit: I don't spot any hvm in that list ..
[18:39] <rcj> fginther: ^
[18:40] <sarnold> thanks rcj
[18:40] <rcj> shibabandit: it's broken, we're looking into it
[18:40] <fginther> shibabandit, yes, I'm currently working on the issue
[18:42] <shibabandit> Thank you rcj. I noticed irregularities compared to what I had seen in the past for both trusty and xenial, which are the LTS endpoints we use. Would you be able to recommend the right place to get updates on their availability? Would it be this chat or is there a web page I should be checking?
[18:50] <rcj> shibabandit: https://bugs.launchpad.net/cloud-images/+filebug is the place to file bugs and then you can track updates on fixes
[18:51] <sarnold> oh cool, I don't think I've seen this yet :)
[18:51] <sarnold> Odd_Bloke: ^^ it's been handled, feel free to ignore ;)
[18:51] <rcj> shibabandit: We'll put a link on the top page of cloud-images.u.c because they're only on the individual releases where you'll see the bug link (ex. https://cloud-images.ubuntu.com/xenial/current/)
[18:52] <shibabandit> Thank you for your help!
[18:54] <rcj> shibabandit: We're going to revert trusty's released.current.txt to the prior serial which has a full complement of images until the publication is complete.
[18:54] <zero_shane> hi all - I'm testing Bionic Beta 1 via netboot - and all of my VMs or metal installs hang on 'update-grub'.  I searched through launchpad ... but no dice.  Is this a good place to discuss, or should I take my business elsewhere?
[18:56] <sarnold> zero_shane: it's not wrong, but not exactly a high-traffic channel either .. but I don't know if #ubuntu+1 is mostly desktop folks or if there's netbooters there too
[18:56] <zero_shane> ok - will check there - they're pretty low user count - but will try there
[18:56] <ahasenack> zero_shane: which install image is that, server or desktop? And I presume it's beta2, right? Or one of the ubuntu variants?
[18:57] <zero_shane> server
[18:57] <zero_shane> http://releases.ubuntu.com/18.04/ubuntu-18.04-beta2-live-server-amd64.iso
[18:57] <zero_shane> the only server ISO I could find for Bionic
[18:57] <zero_shane> I had to download the Bionic netboot kernel and initrd which isn't bundled in this ISO
[18:58] <ahasenack> interesting, cdimage.u.c has a non-live one
[18:58] <ahasenack> maybe that's with the old installer, I'm not sure
[18:58] <ahasenack> http://cdimage.ubuntu.com/releases/18.04/beta-2/ubuntu-18.04-beta2-server-amd64.iso
[18:59] <zero_shane> ISOs everywhere .... I'll check that one out too - thx for the pointer
[19:06] <dpb1> so
[19:07] <dpb1> zero_shane: this would be the best: http://cdimage.ubuntu.com/daily-live/current/
[19:07] <dpb1> zero_shane: that is the new installer, much faster, fewer questions, etc
[19:08] <dpb1> zero_shane: you can read more about it here: http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html
[19:11] <zero_shane> hmm @dpb1 - it appears that's just a new interactive installer, right ?   I don't care about those - we deploy 10s of thousands of machines via Preseeds
[19:11] <zero_shane> it definitely looks like a nice overhaul/replacement for the old text based installer, though
[19:11] <dpb1> zero_shane: ok, then yes.  for preseed, stick with the old d-i based one
[19:12] <zero_shane> :)
[19:12] <sarnold> kirkland`: pretty screenshots on http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html  :D thanks! everything needs more screenshots..
[19:13] <dpb1> zero_shane: however, I'd use the one from here... http://cdimage.ubuntu.com/ubuntu-server/daily/current/, the beta itself isn't as interesting as the most up to date (it will be closer to what we ship)
[19:14] <dpb1> zero_shane: but, I understand you are having issues with what you are trying, so if you repeat the issue and think it's a bug, please do give more details on it, we are very interested in getting that kind of feedback here.
[19:19] <zero_shane> @dpb1 - will try the daily images - thx !
[19:26] <shibabandit> I see the AMI listings showing up now in 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt', thank you this resolves my issue.
[22:10] <ProCycle> Does anyone have experience with using systemd to run multiple instances of mariadb on ubuntu 16.04?
[22:10] <ProCycle> I'm looking at https://mariadb.com/kb/en/library/systemd/
[22:11] <roaksoax> ProCycle: why not just put them in lxc container s?
[22:11] <dpb1> +1
[22:11] <dpb1> lxd
[22:11] <ProCycle> It has a short blurb about it but I can't seem to find the mariadb@.service file anywhere to figure out how to use it
[22:11] <dpb1> you'll save yourself a huge headache
[22:11] <roaksoax> indeed
[22:11] <dpb1> lxc launch ubuntu:xenial
[22:12] <TJ-> That's a template file ProCycle
[22:12] <TJ-> ProCycle: apt-file search reports mariadb-server-10.1: /lib/systemd/system/mariadb@.service
[22:13] <ProCycle> I've never used containers before, seems like a whole can of worms
[22:13] <TJ-> container of worms? :)
[22:13] <sarnold> *groan*
[22:14] <TJ-> Hey! I was gardening all day, I had containers of worms :)
[22:14] <ProCycle> Thank you TJ-, found it
[22:14] <dpb1> ProCycle: well, running multiple instances of mysql on the same filesystem surely has its own challenges
[22:15] <ProCycle> I was just running multiple databases on one server instance but that seems to open a can of worms when dealing with mariabackup
[22:16] <ProCycle> They aren't high performance databases, just one main one and a bunch of seldom used databases
[22:16] <TJ-> systemd's templating for multiple instances is very useful
[22:17] <TJ-> and very elegantly implemented
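The template mechanism TJ- praises, in one sketch; the instance name "site2" is arbitrary, and per the unit file mentioned above each instance would read /etc/mysql/conf.d/mysite2.cnf:

```shell
# Start, enable and inspect a second MariaDB instance via the template unit:
sudo systemctl start mariadb@site2
sudo systemctl enable mariadb@site2
systemctl list-units 'mariadb@*'           # shows every running instance
journalctl -u mariadb@site2 --since today  # logs for just this instance
```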
[22:17] <ProCycle> dpb1, are you referring to the fact you need separate directories or something else more sinister?
[22:18] <ProCycle> TJ-, Okay looks like the template uses /etc/mysql/conf.d/my%I.cnf
[22:18] <TJ-> ProCycle: yes
[22:19] <ProCycle> Inside that cnf file do I still need to do the whole [mysqld1] group naming or just a plain copy with [mysqld] ?
[22:19] <dpb1> ProCycle: really a portion of the cattle vs pet argument.  single-purpose your box device with containers.
[22:20] <dpb1> then that container is focused on one thing
[22:20] <ProCycle> Yeah I debated that, typically I would just run multiple VMs for each but seemed wasteful
[22:22] <dpb1> that's the nice thing about containers
[22:22] <dpb1> density
[22:22] <dpb1> you can run 10-100x more per server than vms, really you are just paying for an init system and your application.
[22:22] <ProCycle> But having a separate server instance for each different project makes backups easier than combining all of them into one instance
[22:23] <ProCycle> One of these days I'll learn how to use containers, I just don't have the time to right now
[22:23] <ProCycle> Thanks for the input though, it's certainly something I considered
[22:59] <cocoa117> i am trying to create multiple routing tables and mark IP packets to a certain IP so they use ppp0 rather than the default route
[23:00] <cocoa117> however it appears the return packets from the remote end never reach the application, e.g. curl times out
[23:00] <cocoa117> i ran tcpdump and it showed the remote IP sending packets back, but the local app never received them
[23:01] <cocoa117> can anyone think any reason this might be?
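One classic cause of this exact symptom (replies visible in tcpdump but never delivered to the app) is reverse-path filtering dropping packets that arrive via an unexpected route. This is only a guess, and the address, interface and table numbers below are examples:

```shell
# Check and, if needed, loosen rp_filter for the ppp interface:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.ppp0.rp_filter
sudo sysctl -w net.ipv4.conf.ppp0.rp_filter=2   # 2 = loose mode

# Typical fwmark-based policy routing, for comparison with the setup above:
sudo iptables -t mangle -A OUTPUT -d 203.0.113.5 -j MARK --set-mark 1
sudo ip rule add fwmark 1 table 100
sudo ip route add default dev ppp0 table 100
```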
[23:23] <ProCycle> TJ-, I got it working (needed to create data directory and set perms, run mysql_install_db, and run the secure script)
[23:24] <TJ-> ProCycle: yes, that'd be expected
[23:25] <ProCycle> That section on multiple instances sorta says that but doesn't really spell it out
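The steps ProCycle lists, written down as a sketch; the datadir and instance name are assumptions, not taken from the MariaDB page:

```shell
# Prepare a second data directory and initialize it:
sudo mkdir -p /var/lib/mysql2
sudo chown mysql:mysql /var/lib/mysql2
sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql2

# Start the templated instance, then run the secure script against
# the new instance's socket (socket path depends on your my2.cnf):
sudo systemctl start mariadb@2
sudo mysql_secure_installation
```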
[23:26] <TJ-> yeah, they're familiar with the program so they forget to mention the hidden assumptions
[23:32] <ProCycle> Huh there seems to be problem with the mysql_install_db script
[23:33] <ProCycle> I ran it with --user=mysql but there's one directory it didn't set the group to mysql on the install
[23:33] <ProCycle> which is strange because it does set the group on the default database when mariadb installs
[23:33] <ProCycle> the mysql directory