[02:49] hi, is there an equivalent in ubuntu to 'yum history' and 'yum history undo'?
[02:50] I don't know what those do, but /var/log/dpkg.log has some information on what was installed when
[02:53] it lists the history of package installs and installed dependencies. the undo is particularly helpful because it can automatically remove the dependencies too
[02:54] pity there's no equivalent in ubuntu
[02:56] oh
[02:57] if you uninstall a package that dragged in dependencies, apt-get autoremove can clean those up
[02:57] yummy
[02:57] the deborphan package can help if apt has lost track of what was installed strictly for dependencies..
[02:58] got it, thanks
[03:35] apt-get also has the --autoremove option for the remove command, which will do that in one step
[03:35] oh sweet
[03:36] or you can set APT::Get::AutomaticRemove to make that automatic
[03:36] "--auto-remove, --autoremove". dude. <3
[03:38] careful if you mix several APT tools though, as I'm not sure all of them use the same database to store installed-as-dependency info
[03:38] (I remember they didn't in the past, not sure they do now)
[03:39] twenty years of typing apt-get update && apt-get -u dist-upgrade has kinda burned that into my fingers
[03:39] I mean apt-get vs. aptitude vs. ...
[03:39] you'd think I'd shorten that .. but no.
[03:39] so I tend to forget that aptitude even exists.
[03:42] I guess it's trivial to make an alias for update-then-upgrade :)
[03:43] or a bash function if you want to make it somewhat fancier
[03:43] it's been a busy two decades
[04:39] muscle memory probably now
=== JanC_ is now known as JanC
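A minimal sketch of the alias-or-function idea mentioned at [03:42]-[03:43]. The names "aptup" and "aptup_full", and the extra autoremove step, are invented for illustration; they are not anything from the channel.

    # ~/.bashrc or ~/.bash_aliases -- names are just examples

    # simple alias form
    alias aptup='sudo apt-get update && sudo apt-get -u dist-upgrade'

    # slightly fancier function form: stop if the update step fails,
    # pass extra arguments through to dist-upgrade, then optionally
    # clean up unused dependencies
    aptup_full() {
        sudo apt-get update || return 1
        sudo apt-get -u dist-upgrade "$@" || return 1
        sudo apt-get --purge autoremove
    }

Either form lives in the shell startup file, so a new shell (or `source ~/.bashrc`) picks it up.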
[07:01] Good morning
[07:52] where is the syslog file located in ubuntu?
[07:52] I've read it should be in /etc/syslog.conf, but there is nothing there
[07:52] does ubuntu have a syslog.conf file?
[07:53] and what does this command do: ps auxwww | grep syslog ?
[07:54] the book I'm reading about Unix is from 2005, but I think nothing has changed so far
[08:03] who knows what a hostname lookup is?
[08:25] Neo4: What version of Ubuntu are you running?
[08:26] And what is it that you are trying to accomplish? Read the syslog or configure the syslogger?
[08:30] 16.03
[08:31] nothing, the book says this is the main log file
[08:31] it lists all the paths where your system stores logs
[08:32] I wanted to look at it, but didn't find it; the book is old and about Unix
[08:40] Good Morning
[09:27] Neo4: Logfiles are typically stored in `/var/log`, though with 16.04 you use systemd which comes with journald. To access those logs you need to use the `journalctl` utility.
[09:28] Hey parlos
[09:30] Neo4: Ubuntu uses rsyslogd instead of some older syslogd, so the syslog configuration is in /etc/rsyslog.conf
[09:30] ok
[09:30] but as lordievader says, you can see logs with journalctl too
[10:10] Is there a standard way of removing all old kernels?
[10:16] adac: "apt autoremove". With --purge if you wish. This will remove everything that apt thinks is unused, including old kernels.
[10:17] At least from Xenial onwards. Not sure about Trusty.
[11:16] rbasak, thanks!
[11:16] is purge needed as well?
[11:18] I mean, so that the kernels do get removed?
[11:21] adac: the payload will get removed just with autoremove. purge also removes config files and knowledge of the package from the package manager.
[11:21] I almost never use remove on its own.
[11:21] rbasak, ok yes thanks!
[11:27] rbasak, "apt autoremove" is something different from "apt-get autoremove"?
[11:28] apt is a friendlier front-end with some defaults changed.
[11:28] apt-get's interface and defaults are generally locked in, since scripts rely on them
[11:28] rbasak, ok thanks
[11:29] rbasak, so one should generally use apt now?
[11:29] adac: either is fine, apt may be more user friendly
[11:29] for scripting things, use apt-get
[11:30] kk
[11:32] hmm, even though I did "apt-get autoremove --purge" it still shows me a lot of images left
[11:32] dpkg --list | grep image
[11:32] https://pastebin.com/XxrEeiGA
[11:34] !info linux-image-generic xenial
[11:34] linux-image-generic (source: linux-meta): Generic Linux kernel image. In component main, is optional. Version 4.4.0.119.125 (xenial), package size 2 kB, installed size 14 kB
[11:34] do you have update-manager installed?
[11:35] adac: the mechanism depends on apt considering the kernel packages "automatically installed".
[11:35] The apt-mark command will tell you what is marked auto and what is marked manual.
[11:35] ok, have to check, thanks guys!
[11:35] Usually there's a metapackage like linux-image-generic marked manually installed that depends on the latest actual kernel package
[11:36] And the actual kernel packages remain marked automatic.
[11:36] If you have your kernels installed in some special way, that may break.
[11:37] ok, need to go through this; be back in some time, surely with some more questions :)
[11:38] I think I forgot --purge on that last host where the *images* are still there
[11:38] on another host where I used --purge the images are now definitely gone
[11:46] no, that was not the issue. checking this marked stuff now
[11:47] apt-mark showauto shows me:
[11:47] https://pastebin.com/wr8QCfuw
[11:48] rbasak, can i get rid of those old images then somehow?
[11:49] adac: you can purge the old package manually
[11:50] rbasak, ok, simply by package name
[11:52] worked
[11:52] thanks again rbasak and tomreyn
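A rough outline of the cleanup flow worked through above, collected in one place. The kernel package name at the end is a hypothetical example (the pastebin contents aren't reproduced here); use whatever dpkg actually lists on your host, and never remove the kernel you are currently running.

    # see which kernel packages are installed
    dpkg --list | grep -E 'linux-(image|headers)'

    # check what apt considers automatically installed; old kernels
    # normally show up here and are then eligible for autoremove
    apt-mark showauto | grep linux-image

    # remove everything apt thinks is unused, config files included
    sudo apt autoremove --purge

    # if an old kernel stayed behind (e.g. it was marked manual),
    # purge it explicitly by name -- the name below is only an example
    uname -r                                      # the running kernel: keep this one
    sudo apt purge linux-image-4.4.0-109-generic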
[16:00] nacc: dpb1: https://irclogs.ubuntu.com/2018/03/26/%23ubuntu-server.html#t13:52
[16:00] rbasak: teward: do you have a link to the 16.04 request?
[16:03] nacc, dpb1: https://lists.ubuntu.com/archives/ubuntu-server/2015-June/007080.html
[16:04] Hey all
[16:04] Is there a tool to migrate from a basic /etc/network/interfaces to the new /etc/netplan/*.yaml file? And yes, I've tried what the ubuntu docs say, "netplan ifupdown-migrate" isn't a valid option (at least in bionic)
[16:04] nacc: Hey, in between waiting on reviews, could you please dive down and see if this upgrade to nginx makes sense? We'll still try to get teward's input, but it would help to have your validation
[16:05] nacc: dpb1: also https://lists.ubuntu.com/archives/ubuntu-release/2015-July/003310.html
[16:05] JediMaster: where do you see the docs saying 'ifupdown-migrate'?
[16:05] JediMaster: that's not supposed to be there and we don't recommend an automated tool to migrate ATM
[16:05] I'm trying to clone a VM in VMware using chef's 'knife vsphere' command, and vmware sets the IP, gateway and other bits via the /etc/network/interfaces file, as it looks like it hasn't caught up with netplan on 17.10/18.04 yet. I could easily write a script to run a command to convert the config and then run netplan to bring the interfaces up
[16:06] dpb1: ack, will do
[16:06] rbasak: thank you
[16:06] dpb1, https://wiki.ubuntu.com/Netplan under "Commands" near the top
[16:06] dpb1: cough, that should be just a link to netplan.io, no?
[16:06] cyphermox: --^
[16:10] I'm getting to the point that I think I'll have to write a script to do the migration myself lol
[16:11] it's probably not *that* hard, just didn't want to re-invent the wheel
[16:13] Still not entirely sure what it is that writes the /etc/network/interfaces file when VMware clones the machine. I'm guessing it must be the 'open-vm-tools' package, in which case that probably needs updating to work with netplan
[16:14] dpb1, so I presume I'll need to write one myself then, just while the vmware tools don't support netplan?
[16:15] it's a super simple config, and will always be the same other than different IPs
[16:19] nacc: :/
[16:19] JediMaster: what vmware tool is that?
[16:20] nacc: I'll fix that now, thanks
[16:21] dpb1: i'm *guessing* the wiki page predates netplan.io and was never updated once the other page went live
[16:21] yes
[16:22] dpb1: My best guess is that it's the open-vm-tools package in ubuntu that writes to the /etc/network/interfaces file when you clone a VM and specify a new IP/gateway etc
[16:22] cpaelzer: --^ i think you were looking at that package?
[16:22] JediMaster: what action do you take from the outside? *just* pick an ubuntu vm and clone it?
[16:24] dpb1: I've just made an Ubuntu 18.04 (yes, beta) template machine; the netplan file was created correctly by the installer and it has network access. I then use chef's 'knife vsphere' integration that talks to VMware's vSphere, which clones the machine and sends commands, I believe via vmware tools (open-vm-tools), to set the new IP/gateway and DNS
[16:25] I highly doubt that vmware/vsphere would actually write directly to the disk, so I suspect it's the open-vm-tools package that does the network changes. It's worked perfectly on Ubuntu 14.04 and 16.04, but then gets stuck waiting for the network interface to configure on 18.04, as it's written to the wrong file
[16:26] JediMaster: you also could install ifupdown (iirc) in 18.04
[16:26] JediMaster: ok
[16:27] nacc.... ah, I didn't know that was an option, but that seems a dirtier hack than writing a script to create a yaml file for netplan somehow ;-)
[16:27] JediMaster: see https://netplan.io -> examples -> I really do need ifupdown, can I still use it?
[16:27] JediMaster: can you do that?
[16:28] JediMaster: also, please take the advice in that first sentence and file your workflow. The detail that you give in your IRC comments here would be great in a bug. Say exactly what didn't work.
[16:28] sorry, *file a bug
[16:29] dpb1, Sure, I'd be happy to, thanks for your help, nacc too
[16:29] Should I file one in both netplan and open-vm-tools?
[16:30] JediMaster: +1 :)
[16:30] JediMaster: that same bug, just *target* it to both projects
[16:30] ah yes, of course
[16:30] tyvm
[16:30] No problem, I'll get on to it shortly, thanks
[16:31] netplan is certainly more complex than ifupdown & /etc/network/interfaces syntax, but it's so much more powerful, it'll just take a bit of getting used to =)
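For reference, a minimal sketch of the kind of static config being discussed, written in netplan syntax as used on 17.10/18.04. The interface name, addresses and renderer are placeholders invented for illustration, not values taken from the channel.

    # /etc/netplan/01-static.yaml -- hypothetical static configuration,
    # roughly equivalent to a simple /etc/network/interfaces stanza
    network:
      version: 2
      renderer: networkd
      ethernets:
        ens160:                         # interface name is just an example
          addresses: [192.0.2.10/24]
          gateway4: 192.0.2.1
          nameservers:
            addresses: [192.0.2.53, 192.0.2.54]

After writing the file, `sudo netplan apply` brings the configuration up.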
[16:34] Hey gang, while I'm testing a customer config, I wanted to ask if booting from NVMe is a viable option. The system has several platter drives meant for data storage and VM hosting, and an NVMe meant for the root FS. Is that a valid deployment scenario?
[16:35] bladernr: I'd think it would come down to EFI/BIOS support? Unless I'm missing something? it's just a disk to ubuntu.
[16:36] dpb1, ok, that's what I thought, I just wanted to validate that. (I've never had my hands on a system with NVMes before now.)
[16:36] thanks
[16:36] bladernr: oh, ok
=== mdeslaur_ is now known as mdeslaur
[18:34] Hope I'm in the right channel, and apologies if this was already asked... but I'm attempting to figure out why certain configurations of the ubuntu cloud image are missing that used to be there. We typically use the endpoint 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt' (or xenial instead of trusty). It used to have the image combination of 'us-east-1', 'ebs-ssd', 'amd64', and 'hvm'. Any help is appreciated
[18:37] rcj, Odd_Bloke, ^^ does shibabandit's question sound familiar?
[18:37] shibabandit: I don't spot any hvm in that list ..
[18:39] fginther: ^
[18:40] thanks rcj
[18:40] shibabandit: it's broken, we're looking into it
[18:40] shibabandit, yes, I'm currently working on the issue
[18:42] Thank you rcj. I noticed irregularities compared to what I had seen in the past for both trusty and xenial, which are the LTS endpoints we use. Would you be able to recommend the right place to get updates on their availability? Would it be this chat or is there a web page I should be checking?
[18:50] shibabandit: https://bugs.launchpad.net/cloud-images/+filebug is the place to file bugs and then you can track updates on fixes
[18:51] oh cool, I don't think I've seen this yet :)
[18:51] Odd_Bloke: ^^ it's been handled, feel free to ignore ;)
[18:51] shibabandit: We'll put a link on the top page of cloud-images.u.c because they're only on the individual releases where you'll see the bug link (ex. https://cloud-images.ubuntu.com/xenial/current/)
[18:52] Thank you for your help!
[18:54] shibabandit: We're going to revert trusty's released.current.txt to the prior serial, which has a full complement of images, until the publication is complete.
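A quick sketch of checking that query file for the image combination mentioned above. The released.current.txt listing is plain text with one image per line, so this simply greps for the fields of interest rather than assuming exact column positions; substitute trusty (or other values) as needed.

    curl -s http://cloud-images.ubuntu.com/query/xenial/server/released.current.txt \
      | grep us-east-1 \
      | grep ebs-ssd \
      | grep amd64 \
      | grep hvm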
[18:54] hi all - I'm testing Bionic Beta 1 via netboot - and all of my VMs or metal installs hang on 'update-grub'. I searched through launchpad ... but no dice. Is this a good place to discuss, or should I take my business elsewhere?
[18:56] zero_shane: it's not wrong, but not exactly a high-traffic channel either .. but I don't know if #ubuntu+1 is mostly desktop folks or if there's netbooters there too
[18:56] ok - will check there - they're pretty low user count - but will try there
[18:56] zero_shane: which install image is that, server or desktop? And I presume it's beta2, right? Or one of the ubuntu variants?
[18:57] server
[18:57] http://releases.ubuntu.com/18.04/ubuntu-18.04-beta2-live-server-amd64.iso
[18:57] the only server ISO I could find for Bionic
[18:57] I had to download the Bionic netboot kernel and initrd, which aren't bundled in this ISO
[18:58] interesting, cdimage.u.c has a non-live one
[18:58] maybe that's with the old installer, I'm not sure
[18:58] http://cdimage.ubuntu.com/releases/18.04/beta-2/ubuntu-18.04-beta2-server-amd64.iso
[18:59] ISOs everywhere .... I'll check that one out too - thx for the pointer
[19:06] so
[19:07] zero_shane: this would be the best: http://cdimage.ubuntu.com/daily-live/current/
[19:07] zero_shane: that is the new installer, much faster, fewer questions, etc
[19:08] zero_shane: you can read more about it here: http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html
[19:11] hmm @dpb1 - it appears that's just a new interactive installer, right? I don't care about those - we deploy 10s of thousands of machines via Preseeds
[19:11] it definitely looks like a nice overhaul/replacement for the old text based installer, though
[19:11] zero_shane: ok, then yes. for preseed, stick with the old d-i based one
[19:12] :)
[19:12] kirkland`: pretty screenshots on http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html :D thanks! everything needs more screenshots..
[19:13] zero_shane: however, I'd use the one from here... http://cdimage.ubuntu.com/ubuntu-server/daily/current/, the beta itself isn't as interesting as the most up to date (it will be closer to what we ship)
[19:14] zero_shane: but, I understand you are having issues with what you are trying, so if you repeat the issue and think it's a bug, please do give more details on it, we are very interested in getting that kind of feedback here.
[19:19] @dpb1 - will try the daily images - thx!
[19:26] I see the AMI listings showing up now in 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt', thank you, this resolves my issue.
=== mikal_ is now known as mikal
=== maxb_ is now known as maxb
[22:10] Does anyone have experience with using systemd to run multiple instances of mariadb on ubuntu 16.04?
[22:10] I'm looking at https://mariadb.com/kb/en/library/systemd/
[22:11] ProCycle: why not just put them in lxc containers?
[22:11] +1
[22:11] lxd
[22:11] It has a short blurb about it but I can't seem to find the mariadb@.service file anywhere to figure out how to use it
[22:11] you'll save yourself a huge headache
[22:11] indeed
[22:11] lxc launch ubuntu:xenial
[22:12] That's a template file ProCycle
[22:12] ProCycle: apt-file search reports mariadb-server-10.1: /lib/systemd/system/mariadb@.service
[22:13] I've never used containers before, seems like a whole can of worms
[22:13] container of worms? :)
[22:13] *groan*
[22:14] Hey! I was gardening all day, I had containers of worms :)
[22:14] Thank you TJ-, found it
[22:14] ProCycle: well, running multiple instances of mysql on the same filesystem surely has its own challenges
[22:15] I was just running multiple databases on one server instance but that seems to open a can of worms when dealing with mariabackup
[22:16] They aren't high performance databases, just one main one and a bunch of seldom used databases
[22:16] systemd's templating for multiple instances is very useful
[22:17] and very elegantly implemented
[22:17] dpb1, are you referring to the fact you need separate directories or something else more sinister?
[22:18] TJ-, Okay, looks like the template uses /etc/mysql/conf.d/my%I.cnf
[22:18] ProCycle: yes
[22:19] Inside that cnf file do I still need to do the whole [mysqld1] group naming, or just a plain copy with [mysqld]?
[22:19] ProCycle: really a portion of the cattle vs pet argument. single-purpose your box with containers.
[22:20] then that container is focused on one thing
[22:20] Yeah I debated that, typically I would just run multiple VMs for each but it seemed wasteful
[22:22] that's the nice thing about containers
[22:22] density
[22:22] you can run 10-100x more per server than vms, really you are just paying for an init system and your application.
[22:22] But having a separate server instance for each different project makes backups easier than combining all of them into one instance
[22:23] One of these days I'll learn how to use containers, I just don't have the time to right now
[22:23] Thanks for the input though, it's certainly something I considered
=== devil is now known as Guest20480
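A tiny sketch of the one-container-per-project approach being suggested here, using the lxc commands mentioned above plus an install step. The container names are invented for illustration.

    # one container per database project (names are examples)
    lxc launch ubuntu:xenial db-main
    lxc launch ubuntu:xenial db-reporting

    # install MariaDB inside a container and get a shell in it
    lxc exec db-main -- apt-get update
    lxc exec db-main -- apt-get install -y mariadb-server
    lxc exec db-main -- bash

Each container then has its own filesystem and init, so per-project backups and mariabackup runs don't interfere with one another.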
[22:59] i am trying to create multiple routing tables and mark IP packets to a certain IP so they use ppp0 rather than the default route
[23:00] however it appears the return packets from the remote host never reach the application, e.g. curl times out
[23:00] i ran tcpdump, which showed the remote IP sending packets back, but the local app never received them
[23:01] can anyone think of any reason why this might be?
[23:23] TJ-, I got it working (needed to create the data directory and set perms, run mysql_install_db, and run the secure script)
[23:24] ProCycle: yes, that'd be expected
[23:25] That section on multiple instances sorta says that but doesn't really spell it out
[23:26] yeah, they're familiar with the program so they forget to mention the hidden assumptions
[23:32] Huh, there seems to be a problem with the mysql_install_db script
[23:33] I ran it with --user=mysql but there's one directory it didn't set the group to mysql on during the install
[23:33] which is strange, because when mariadb installs the default database it does
[23:33] the mysql directory
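To pull the multi-instance steps from this conversation together: a rough sketch of bringing up a second MariaDB instance via the packaged mariadb@.service template on 16.04. The instance name "reporting" and the datadir path are invented; the per-instance config location comes from the template quoted above (/etc/mysql/conf.d/my%I.cnf), so check the template on your own host before relying on any of this.

    # inspect the packaged template to confirm exactly which config
    # file an instance reads and how it is started
    systemctl cat mariadb@.service

    # per-instance datadir owned by mysql, then initialise it and run
    # the secure-installation script (the steps that were needed above)
    sudo install -d -o mysql -g mysql /var/lib/mysql-reporting
    sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql-reporting

    # /etc/mysql/conf.d/myreporting.cnf would then need at least a
    # datadir, socket and port that don't clash with the main instance

    sudo systemctl start mariadb@reporting
    sudo systemctl status mariadb@reporting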