[03:43] <sarthor> Hi, I have ubuntu installed on a PC with two NICs; one NIC was configured automatically when I installed the OS and is named "ens18". I have another LAN card plugged into the same box. How do I find the name of that lan card? The names are not like eth0/eth1/eth2... HELP
[03:44] <sarnold> try 'ip link'
[03:53] <sarthor> ifconfig -a showed that.. thanks
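For the record, a couple of interface-listing commands that work regardless of the naming scheme (ens18, enp3s0, and so on):

```shell
# List every network interface the kernel knows about, up or down.
ip -br link        # iproute2: compact one-line-per-interface view
ls /sys/class/net  # raw interface names straight from sysfs
# ifconfig -a (net-tools) still works too, but iproute2 is the current tool.
```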
[04:59] <cpaelzer> good morning
[06:02] <lordievader> Good morning
[07:20] <tobasco> jamespage: coreycb https://www.cvedetails.com/cve/CVE-2017-2592 is not in main python-oslo.middleware for xenial (mitaka)
[07:20] <tobasco> https://review.openstack.org/#/c/425734/
[07:20] <tobasco> python-oslo.middleware is already the newest version (3.8.0-2).
[07:21] <tobasco> 500 http://se.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
[07:35] <tobasco> I can see it's triaged but the fix is not released
[10:27] <joelio> yep
[10:36] <rbasak> nacc: I think it's distro-info-*data* that needs updating, which is a separate source package.
[10:37] <rbasak> The distro-info snap part has a stage-packages of distro-info, which is confusing
[10:38] <rbasak> I suspect that if we build distro-info from source, we don't need it staged explicitly, but we do need distro-info-data staged, as that comes from the archive, perhaps?
[10:38] <rbasak> So a rebuild might work.
[10:41] <rbasak> nacc: do you remember why distro-info is built from the tarball?
[13:22] <ahasenack> rbasak: hey, I got a distro-info-data update today in bionic, and it has a cosmic line in the ubuntu csv
[13:31] <frickler> thedac: coreycb: updated https://bugs.launchpad.net/neutron/+bug/1750121 . I'll be mostly off for a long weekend now, will check back next week
[13:32] <coreycb> tobasco: ok that is on our security team's radar, though I think I'll just give them a hand and do the SRU myself
[13:33] <coreycb> frickler: thanks! i'll check in with david when he gets in.
[13:33] <coreycb> frickler: he is thedac btw
[13:34] <frickler> coreycb: yes, I found that out and highlighted him, too ;)
[13:35] <coreycb> frickler: ah missed that, cool :)
[14:04] <Neo4> Hi, I've written my first shell script, it installs apache2, mysql, php, phpmyadmin and node.js nvm
[14:04] <Neo4> https://gist.github.com/kselax/0b07445fba101e6b74732f64814070c0
[14:20] <joelio> Neo4: I'd imagine most people in here use a configuration management tool like ansible or puppet
[14:20] <joelio> but fair play if it's your first :)
[14:20] <Neo4> joelio: that is not modern, better to use your own
[14:20] <Neo4> joelio: your own script gives you more flexibility, you can put file edits in there, as I did in mine
[14:21] <joelio> no, it doesn't but anyway
[14:21] <joelio> ansible and puppet allow you to create hierarchies of lookups of configuration data
[14:21] <Neo4> joelio: before editing a file I print a tip and open the file in vim; the user can press Ctrl+Z to drop to the shell and read the tip, then return to the file by typing fg
[14:22] <Neo4> joelio: ok, I saved this names, ansible and puppet, and will look on youtube what they are
[14:22] <joelio> ok, if it works for you fair enough, just don't expect to get much traction in this channel... like I said, people use more specific tools
[14:22] <joelio> they were bourne out of ssh in a loop with shell scripts ;)
[14:23] <joelio> Neo4: I'd look at ansible first, it's probably easier to get into
[14:24] <Neo4> joelio: I'm going to write a few scripts of my own that will create a virtual host automatically, for example this file ~/bin/newvh (newvh means new virtual host), and the user runs: newvh site.com
[14:24] <Neo4> and he gets /var/www/site.com, and apache should be configured and reloaded automatically
[14:25] <Neo4> joelio: and one script that will install wp
[14:25] <Neo4> ~/bin/cpwp (copy wordpress) , cpwp site.com
[14:25] <Neo4> and script should go to github and download from there wp to /var/www/site.com
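A minimal sketch of what that newvh helper could look like (paths and the vhost template are assumptions, and a real version would want input validation):

```shell
#!/bin/sh
# Hypothetical "newvh" helper: create a docroot and a matching Apache
# vhost for one domain.  WWW_ROOT and SITES_DIR default to the real
# Apache locations but can be overridden for a rootless dry run.
set -eu

newvh() {
    site=$1
    www_root=${WWW_ROOT:-/var/www}
    sites_dir=${SITES_DIR:-/etc/apache2/sites-available}
    mkdir -p "$www_root/$site"
    printf '<VirtualHost *:80>\n    ServerName %s\n    DocumentRoot %s\n</VirtualHost>\n' \
        "$site" "$www_root/$site" > "$sites_dir/$site.conf"
    # On a live server, follow up with:
    #   a2ensite "$site" && systemctl reload apache2
}

# Dry run against throwaway directories:
WWW_ROOT=$(mktemp -d) SITES_DIR=$(mktemp -d)
export WWW_ROOT SITES_DIR
newvh site.com
cat "$SITES_DIR/site.com.conf"
```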
[14:25] <Neo4> joelio: is it cool? :)
[14:26] <joelio> I'd just use nginx and ansible modules, but whatever you wish :)
[14:26] <Neo4> joelio: it looks so cool, I used to do all this manually
[14:26] <joelio> if you can work it and it looks good, then fair enough, that's the most important part - if you as an admin are comfortable with the tools
[14:27] <Neo4> joelio: it will streamline your work
[14:27] <Neo4> joelio: you will do more work in less time
[14:27] <joelio> what will?
[14:27] <Neo4> joelio: shell. How much time does it take to create a virtual host and put a wordpress site there?
[14:28] <joelio> hah, well considering I never deploy wordpress (yuk) then no, it really wouldn't :)
[14:28] <Neo4> it takes a lot of time if you do it manually; sometimes you forget commands, sometimes there are other errors
[14:28] <Neo4> with shell it's one minute, type two lines
[14:28] <joelio> dude, I getcha, look at what configuration management tools do then come and speak to me again :)
[14:28] <Neo4> joelio: it can be another CMS, it doesn't matter
[14:29] <joelio> I don't deploy CMSs, mate, I write automation for cloud services
[14:29] <Neo4> joelio: I don't think they're better than your own tools?
[14:30] <joelio> trust me, they are better than shell, for my usecases
[14:30] <joelio> if you read up on them, then you will see why :)
[14:30] <Neo4> joelio: in programming people only use frameworks when they work in a team, so other people can understand what is going on; maybe in linux people use general tools for the same reason, so others understand them
[14:31] <Neo4> joelio: no, I doubt
[14:31] <Neo4> joelio: they should be worse, because they are made for common use
[14:31] <joelio> dude, I do this everyday for a living, sure
[14:32] <joelio> shell scripts have their place, but not for large-scale system automation working with heterogeneous environments
[14:32] <Neo4> joelio: ok, I'll look at them, but I'm really impressed by how easy it is to deploy apps using shell :)
[14:32] <joelio> hell, then you ain't seen nothing yet :D
[14:32] <Neo4> joelio: can't calm down...
[14:33] <Neo4> joelio: yes, I'm a newbie
[14:34] <joelio> heh, it's cool dude!
[14:34] <joelio> being keen on tech is something to be encouraged :D
[14:34] <joelio> also, http://timstaley.co.uk/posts/why-ansible/
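For flavour, the kind of thing joelio is describing -- a minimal, hypothetical Ansible task list covering the same apt installs as the gist, declaratively:

```yaml
# playbook.yml -- sketch only; the host group and package list are examples
- hosts: webservers
  become: true
  tasks:
    - name: Install the LAMP packages
      apt:
        name: [apache2, mysql-server, php, phpmyadmin]
        state: present
        update_cache: true
```

Run with `ansible-playbook playbook.yml`; rerunning it is a no-op once the packages are present, which is the idempotency the config-management tools buy you over a plain script.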
[14:36] <Neo4> joelio: I think a mail server would also be easy and fast to deploy with shell
[14:37] <Neo4> joelio: if you want to send spam, you can buy servers and fast deploy a few mailservers
[14:37] <Neo4> or VPN or others apps
[14:37] <Neo4> it's very big possibilities
[14:39] <Neo4> joelio: do you know about backup? I am thinking now. about theory
[14:40] <Neo4> joelio: I read in a book that we can write a shell script that remotely connects to our server using ssh, gets the data from there and stores it on our local computer. This is possible to do with the scp command or rsync
[14:41] <Neo4> is this called backup?
[14:42] <Neo4> I think I'll write a script that connects to the server, makes a database dump, makes an archive of /var/www/site together with that dump, and puts it on my local computer; this task should run in cron
[14:50] <joelio> Neo4: sure, sounds fine. There are other tools available for backups btw.  Depending on your site and how busy it is, you may need to lock the tables before dumping the database too, so consistency is maintained. I think mysqldump does that if you're using it
[14:51] <joelio> you can create a tar of the site and the db backup etc and use ssh to pull that (look up ssh keys too, you'll need those)
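The tar-and-pull backup joelio describes can be sketched roughly like this (the remote host, database name, and paths in the commented commands are all made up):

```shell
#!/bin/sh
# Sketch of the cron-driven backup discussed above: archive a site
# directory into a timestamped tarball.  Host names and paths in the
# remote commands below are made up.
set -eu

backup_dir() {
    src=$1 dest=$2
    tar -czf "$dest/$(basename "$src")-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# Local demonstration with a stand-in for /var/www/site.com:
site=$(mktemp -d)/site.com
mkdir -p "$site"
echo hello > "$site/index.html"
dest=$(mktemp -d)
backup_dir "$site" "$dest"
ls "$dest"
# Pull-style remote equivalents (needs non-interactive ssh keys):
#   ssh user@server 'mysqldump --single-transaction sitedb | gzip' > sitedb.sql.gz
#   rsync -a user@server:/var/www/site.com/ ./site.com/
```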
[14:51] <Neo4> joelio: yes. I have never done a backup; I saw on my shared hosting it had been set up, but I didn't know how it worked
[14:51] <joelio> there are off the shelf solutions too like duplicity/backupmypc etc
[14:52] <joelio> if your host does it, leverage that
[14:52] <joelio> just be aware of checking the update timestamp of the backups etc.. to make sure you've got a valid, recent backup
[14:52] <Neo4> joelio: I read about remote backup in 'Linux bible', this was the main solution, make remote connection
[14:52] <joelio> and *the* most important part of backups - TEST YOUR RESTORES! :D
[14:53] <joelio> yea, that's fine, via ssh
[14:53] <Neo4> joelio: no, these days the host does nothing; if you have a VPS you have to do everything yourself
[14:53] <joelio> use an ssh key and then it's non-interactive
[14:53] <joelio> there are loads of tutorials etc
[14:53] <Neo4> joelio: yes, I need to break this topic down over a few days
[15:08] <nacc> rbasak: pong
[15:09] <nacc> rbasak: you're right, we just need an updated d-i-d
[15:09] <nacc> rbasak: which you'd get on a rebuild (which appears needed for USNs anyways)
[15:13] <rbasak> nacc: thanks. Can distro-info just be used from the package now, rather than built from tarball?
[15:20] <nacc> rbasak: I *think* so -- there were times where it wasn't up to date in enough time, possibly? I'm not 100%, but I think that should be safe. You'll need to add it as a stage-package in git-ubuntu, I think
[15:21] <nacc> tbh, i would do it as two separate commits, cleanbuild each and diff the squashfs images
[15:25] <nacc> rbasak: sorry I didn't comment that better
[15:25] <nacc> rbasak: oh, we might have needed a newer python3-distro-info
[15:25] <nacc> rbasak: that's what it was
[16:02] <station> was I clumsy: an hdd rack slipped directly into the motherboard, the leg of a capacitor was damaged and I had to replace the whole cap. but since it's an eBay motherboard and my first server motherboard, apparently I'm still very slippery. I'm still not out of the woods. the MB, a supermicro A1SRi-2758F, is cycling through all kinds of error codes at System Initialising … 71 is the most stubborn one and seems to be
[16:02] <station> solved by http://www.supermicro.com/support/faqs/data_lib/FAQ_18625.pdf
[16:02] <station> with the USB in … for the AMI.BIOS flash it stops at code F2, which seems to be "Recovery process started"
[16:02] <station> as mentioned in this PDF; if I'm right, F3 means it found the AMI.BIOS file https://www.supermicro.com/manuals/other/AMI_BIOS_POST_Codes_for_Grantley_Motherboards.pdf
[16:02] <station> so on OSX I formatted the stick as Microsoft FAT16; the keyboard is on at F2 so it's working for CTRL+HOME
[16:31] <teward> mail servers are FUN!  >.>
[16:32] <teward> unrelated, server team: you've got a glowing report to the CC from me detailing how your help was instrumental in getting NGINX ready to go for Bionic.  (I was talking to the CC about something else, and gave a "by the way" note of admiration for how helpful you all have been)
[16:36] <nacc> dpb1: fyi, i'll work on getting as much of php* sync'd over the next few weeks
[16:36] <nacc> I need to send a bunch of stuff to Debian first, I think
[16:46] <dpb1> cpaelzer: ^
[16:46] <dpb1> nacc: awesome!
[16:46] <dpb1> :)
[17:05] <sdeziel> nacc: https://bugs.launchpad.net/ubuntu/+source/php7.0/+bug/1770222 (HTH)
[17:07] <nacc> sdeziel: I can probably help the team do it, but I'm no longer at Canonical (fyi)
[17:08] <sdeziel> nacc: oh, I wasn't aware
[17:25] <cpaelzer> big +1 nacc, thanks
[17:26] <nacc> sdeziel: np
[17:26] <nacc> cpaelzer: i can send MPs for PHP MREs, or just do them myself, which would you prefer?
[17:31] <cpaelzer> nacc: I'd say decide case by case - if it is trivial/no-discussion - then I'd not want us to block you
[17:32] <cpaelzer> nacc: if there is a reasonable change that needs to be considered make an MP
[17:32] <nacc> cpaelzer: yeah, typically the MREs are pretty straightforward; I may need to coordinate with mdeslaur though if there are CVEs
[17:32] <nacc> cpaelzer: +1, thanks
[18:24] <eshas> I want to use the ubuntu cloud image for 18.04 on KVM ppc64le. I have done the following:
 wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-ppc64el.img
 qemu-img convert -O raw bionic-server-cloudimg-ppc64el.img bionic-server-cloudimg-ppc64el.raw
 how do I now create a password, ssh into the image and then use it to boot?
[18:31] <sarnold> eshas: probably best is to use cloud-init to perform the first-boot initialization tasks
[18:32] <eshas> so cloud-init should already be there in the .img
[18:32] <eshas> should I just boot it?
[18:32] <eshas> what about user/login?
[18:33] <nacc> eshas: cloud-init is a program that runs at init time; it's what initializes (customizes) a cloud image, if you want it to
[18:33] <eshas> yes,
[18:33] <eshas> but cloud-init v18.2 should be there in image itself
[18:34] <eshas> or do I need to apt-get install it?
[18:34] <sarnold> http://cloudinit.readthedocs.io/en/latest/topics/modules.html#users-and-groups
[18:34] <nacc> eshas: it will be there in the cloud images
[18:34] <eshas> I was under the impression that after converting to raw I have to mount the image, chroot, set a passwd etc
[18:35] <eshas> and then boot it so i can use the username/passwd
[18:35] <eshas> but I am not sure how to mount and make changes to it
[18:35] <eshas> let me just first try to use the .img as is
[18:36] <sarnold> once upon a time that was probably the way things worked -- but the idea with cloud images these days is to have a single image that all the hosts can boot and a simple declarative way to add users, ssh keys, install packages, etc., so when you start a cloud image, it comes up with the day's security updates already applied, your users and ssh keys as needed, etc, and is ready to use without manual
[18:36] <sarnold> mucking
[18:38] <eshas> how do I add my users?
[18:38] <nacc> eshas: with cloud-init.
[18:38] <nacc> eshas: as we've said a few times now :)
[18:39] <eshas> hmm.. cloud-init will run when I boot and do initialization to pick up network info etc
[18:39] <eshas> I don't know how to specify a user/password there
[18:39] <eshas> you mean config drive data?
[18:39] <sarnold> the trouble with cloud-init is how to supply the data to the thing at boot time. that's severely underdocumented.
[18:40] <eshas> yes, thats what I am stuck at.. how to ssh or have a valid username/passwd on first boot
[18:40] <nacc> there is also #cloud-init
[18:40] <nacc> and lots of docs
[18:40] <eshas> do I mount .raw / and chroot
[18:40] <eshas> hmm
[18:40] <eshas> this is more of a cloud image usage question
[18:41] <sarnold> just set aside an hour to read all the docs and *then* try to solve problems ;)
[18:41] <eshas> is there a channel for cloud image?
[18:41] <nacc> eshas: no, i think it's just a matter of learning cloud-init
[18:42] <nacc> eshas: i don't think it has much to do with 'cloud image usage', tbh
[18:42] <eshas> ok, I do know about cloud-init and use it but mainly for network config etc
[18:43] <nacc> eshas: you can do just about everything in cloud-init
[18:43] <eshas> there is no default user/password for the ubuntu cloud-image?
[18:46] <nacc> eshas: i would doubt it
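For completeness, the usual NoCloud route for local KVM: write a small #cloud-config user-data file, build a seed image with cloud-localds (from the cloud-image-utils package), and attach it as a second disk. The user name and key below are examples, not image defaults:

```yaml
#cloud-config
# NoCloud user-data -- example values; the cloud image ships no password
users:
  - name: ubuntu
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example you@workstation
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
# Optional console login with a password:
password: changeme
chpasswd: { expire: false }
ssh_pwauth: true
```

Then roughly `cloud-localds seed.img user-data` and pass both the converted image and seed.img as disks to qemu; cloud-init reads the seed on first boot, so no mounting or chrooting is needed.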
[19:04] <ahasenack> I don't understand lp's diff in https://code.launchpad.net/~ahasenack/ubuntu/+source/apache2/+git/apache2/+merge/345312
[19:04] <ahasenack> it's too big, it has changes outside of debian/
[19:05] <ahasenack> https://pastebin.ubuntu.com/p/jfvkf4vydR/ locally it's sane and as expected
[19:05] <nacc> ahasenack: looking
[19:06] <sarnold> guessing, it's showing everything different since 2.4.29-1ubuntu4.1
[19:06] <ahasenack> my guess too, which would imply debian/sid is out-of-date in lp
[19:07] <nacc> ahasenack: the branch is at 2.4.33-3
[19:07] <ahasenack> debian/sid?
[19:08] <nacc> yeah
[19:08] <nacc> (per the web UI)
[19:08] <ahasenack> that's correct then
[19:08] <nacc> ahasenack: yeah, give me a sec
[19:09] <ahasenack> oh, wait
[19:09] <ahasenack> there was a warning about empty directories
[19:09] <ahasenack> WARNING: empty directories exist but are not tracked by git:
[19:09] <ahasenack> docs/manual/style/lang
[19:09] <ahasenack> docs/manual/style/xsl/util
[19:09] <ahasenack> could that be related?
[19:09] <nacc> yeah
[19:09] <nacc> all the changes are related to fully added files
[19:10] <ahasenack> from docs/
[19:10] <nacc> and modules/ maybe?
[19:10] <ahasenack> the warning only mentioned the two dirs above
[19:10] <nacc> hrm
[19:10] <nacc> yeah docs/ for sure is all adds
[19:10] <ahasenack> let me do a clean clone elsewhere
[19:11] <nacc> did it go from empty to not?
[19:11] <ahasenack> let me check 2.4.33
[19:12] <nacc> ahasenack: and, afaik, the importer is keeping up right now (I don't have visibility into the reports, but I periodically check and the git page is moving)
[19:17] <zave> hi all, is this the place to ask a question about systemd unit files? i'm getting an error when trying to start a service. the error is code=exited, status=200/CHDIR ... that's a permission issue?
[19:17] <ahasenack> nacc: those directories are still empty in 2.4.33: https://pastebin.ubuntu.com/p/DP8KT5Gbqb/
[19:17] <ahasenack> they are the only empty dirs in both versions
[19:17] <nacc> ahasenack: i can see in git proper that the merge is normal
[19:17] <nacc> ahasenack: i would ask in #launchpad what's going on
[19:17] <nacc> ahasenack: there are bugs here, iirc
[19:18] <dpb1> zave: likely directory not existing?
[19:18] <ahasenack> nacc: ok, thanks for checking
[19:18] <dpb1> zave: `journalctl -u <unit>` give any more?
[19:18] <sarnold> zave: check the unit file to see if it instructs systemd to change directories or run only if a directory exists or something similar
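For context: status=200/CHDIR means systemd itself failed to chdir() into the unit's WorkingDirectory= before exec'ing the service, so the usual suspects are a missing directory or a parent the service's User= cannot traverse. The relevant keys look like this (paths here are illustrative, not zave's actual unit):

```ini
[Service]
User=deploy
# Must exist and be traversable by User= at start time;
# a dangling symlink here also produces 200/CHDIR.
WorkingDirectory=/srv/app/current
```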
[19:18] <nacc> ahasenack: i didn't check the merge proper, but my diff matches your pastebin
[19:19] <ahasenack> good
[19:19] <nacc> ahasenack: tbh, i don't usually pay much attention to the web UI -- but I would still ping in #launchpad, just to see if it's something obvious
[19:20] <ahasenack> nacc: it's just that, at first, I obviously selected "ubuntu/devel" as the target, which is incorrect for a merge. And the size of the diff is what alerted me to that
[19:20] <ahasenack> nacc: so I resubmitted with the correct target, but the diff was still wrong :)
[19:20] <nacc> ah
[19:21] <nacc> did you fully repropose it?
[19:21] <ahasenack> at first yes, then I saw it was still wrong, then I deleted it
[19:21] <ahasenack> but it's still wrong
[19:21] <nacc> hrm
[19:22] <nacc> there were some known issues with libgit2 that cjwatson was aware of (I'm reviewing my IRC logs :)
[19:22] <zave> unicorn_abm-demo-01.service: Main process exited, code=exited, status=200/CHDIR
[19:22] <zave> unicorn_abm-demo-01.service: Control process exited, code=exited status=200
[19:22] <zave> systemd[1]: unicorn_abm-demo-01.service: Unit entered failed state.
[19:22] <zave> systemd[1]: unicorn_abm-demo-01.service: Failed with result 'exit-code'.
[19:23] <zave> sorry about that.
[19:23] <nacc> ahasenack: ok, i hit this with apache2 some time ago, as well. pygit2 itself was generating that weird diff, even though it correctly found the merge_base being debian/sid
[19:24] <nacc> i appear to have dropped it after that and not followed up :(
[19:24] <zave> dpb1: that was the journalctl log entry
[19:25] <ahasenack> nacc: ok, and about those two empty dirs, what care should be taken before uploading? With the tarball that git ubuntu build-source generates, I noticed during that run that it rejected one method of obtaining the tarball because of a hash mismatch, and I think it then downloaded it from lp
[19:25] <dpb1> zave: as seth said, I'd look at the unit file next
[19:25] <ahasenack> would that be the telltale sign of this empty-dir problem?
[19:25] <nacc> ahasenack: yes, i expect so, but I'm not 100%
[19:26] <ahasenack> ok, thanks for sticking around :)
[19:26] <nacc> ahasenack: i would obviously trust lp (even for debian tarballs) over our pristine-tar, if there is a mismatch
[19:26] <ahasenack> that one I can upload, so if the mp gets approved, I'll be careful about that
[19:26] <nacc> ahasenack: i'd like to see the log with the mismatch if you have it, though
[19:26] <ahasenack> oh, sure, let me run it again
[19:27] <ahasenack> the orig tarball is not a symlink in that case
[19:27] <nacc> ahasenack: what do you mean? (sorry, only half here)
[19:28] <ahasenack> like
[19:28] <ahasenack> lrwxrwxrwx  1 andreas andreas   96 mar  6 12:25 sssd_1.16.0.orig.tar.gz -> /home/andreas/git/packages/sssd/.git/git-ubuntu-cache/ubuntu/sssd/1.16.0/sssd_1.16.0.orig.tar.gz
[19:28] <ahasenack> that's what g-u usually gives us
[19:28] <nacc> right, it puts it into the cache and symlinks it
[19:28] <nacc> (when it downloads from LP)
[19:29] <ahasenack> I can delete that cache safely, right? rm ./.git/git-ubuntu-cache/*
[19:29] <nacc> yeah
[19:29] <ahasenack> k
[19:31] <ahasenack> nacc: I cleaned the cache, and the orig tarball in ../: https://pastebin.ubuntu.com/p/gPxnTbmC8T/ so far, still running
[19:31] <zave> sarnold: do you have a sec to look at my unit file, please? i don't think there's anything there that fits that description ..https://pastebin.com/3qkzzfc1
[19:32] <sarnold> zave: what's namei -l /home/deploy/apps/abm_staging/amb_demo_01/current   look like?
[19:33] <nacc> ahasenack: it's technically possible for pristine-tar to do something funky to tarballs, but we should have complained about it at import time
[19:33] <nacc> ahasenack: i'd file a bug, so we can figure out what happened
[19:34] <ahasenack> I might also file a bug because it crashed after
[19:34] <nacc> :)
[19:34] <ahasenack> https://pastebin.ubuntu.com/p/9SjvRkJbYz/ whole output
[19:34] <zave> sarnold: user was originally deploy, i just changed it a minute ago to root to see if that would affect this problem ... here's the output of what you asked ...https://pastebin.com/jTj9gpPz
[19:35] <sarnold> zave: *curious*, I wonder if the symlink upsets systemd?
[19:35] <nacc> ahasenack: ah it would appear pristine-tar is not cleaning up the generated tarball when it fails verification
[19:35] <ahasenack> yeah
[19:35] <sarnold> zave: can you try /home/deploy/apps/abm_staging/abm_demo_01/releases/20180501225458 directly and see what happens?
[19:36] <sarnold> zave: .. I've got to run, good luck with this, and please report back :)
[19:36] <zave> kthx
[20:32] <plagerism1> We have a server with bcm5709 cards in it, but I am unable to initialize the interfaces
[20:33] <tomreyn> is ubuntu readily installed, yet, or is this while running the installer?
[20:34] <plagerism1> tomreyn: it is installed
[20:34] <tomreyn> how did you install it?
[20:34] <tomreyn> you said it's "16.04", is it 16.04.0 or a different point release?
[20:34] <plagerism1> Installed via USB media.  It was 16.04.4
[20:35] <plagerism1> I will attempt to reinstall
[20:35] <tomreyn> can you show lsb_release -ds and cat /proc/version ?
[20:35] <tomreyn> should be just two lines, no need for a pastebin
[20:44] <plagerism1> tomreyn: Ubuntu 16.04.4 LTS kernel is 4.4.0-116-generic
[20:44] <plagerism1> Network wasn't initialized during the install
[20:45] <plagerism1> Would that make a difference?
[20:45] <tomreyn> well, that'd explain why it doesn't try to bring them up, and why it didn't include things in the initrd
[20:46] <tomreyn> plagerism1: still, "modinfo bnx2" should not return an error
[20:46] <tomreyn> does it?
[20:46] <plagerism1> tomreyn: for what it's worth, I did not install it
[20:47] <tomreyn> it's a bit unusual that 16.04.4 would install with the GA kernel; normally it'd prefer the HWE one, i think (although it should actually prompt)
[20:48] <tomreyn> !info linux-image-generic xenial
[20:49] <plagerism1> Which kernel should I try?
[20:49] <tomreyn> that's not the 16.04.4 GA kernel either
[20:50] <sdeziel> tomreyn: server now defaults to the GA kernel
[20:50] <sdeziel> https://wiki.ubuntu.com/Kernel/LTSEnablementStack#LTS_Enablement_Stacks: "Server installations will default to the GA kernel and provide the enablement kernel as optional."
[20:51] <tomreyn> you should be on this https://packages.ubuntu.com/xenial/linux-image-4.4.0-124-generic (a dependency of linux-image-generic) or this https://packages.ubuntu.com/xenial/linux-image-4.13.0-41-generic (a dependency of linux-image-generic-hwe-16.04) kernel.
[20:51] <plagerism1> Okay
[20:51] <tomreyn> sdeziel: thanks, good to know. the one plagerism1 has is neither GA nor HWE, though.
[20:52] <plagerism1> Weird
[20:52] <sdeziel> tomreyn: by GA, I meant the latest 4.4.0-X :)
[20:53] <tomreyn> sdeziel: are you saying https://packages.ubuntu.com/xenial/linux-image-4.4.0-124-generic is not the latest 4.4.0-X ?
[20:53] <sdeziel> tomreyn: no, I'm with you, -124 is the latest and what everyone should be running if on 4.4
[20:54] <tomreyn> thanks for clarifying
[20:54] <tomreyn> plagerism1: i'm assuming that when you said "Ubuntu 16.04.4 LTS kernel is 4.4.0-116-generic" that you were reporting the kernel version your system is running?
[20:54] <tomreyn> (since that's what i asked for, not an internet research)
[20:54] <sdeziel> my point is that 16.04.2 and subsequent point releases will still default to installing a 4.4 kernel
[20:55] <plagerism1> Yes you are correct
[20:55] <tomreyn> sdeziel: alright, that much I got
[20:56] <tomreyn> plagerism1: okay, chances are that 4.4.0-116-generic is just what the 16.04.4 installer comes with and you could not update it since for lack of network access.
[20:57] <plagerism1> Yea
[20:58] <tomreyn> plagerism1: i guess you could either reinstall while bringing the system online during installation, or you could boot from the installer image and chroot into the existing installation and just install the missing patches, this would get you an upgraded kernel, thus a new initrd, and since you'll have internet access at the time it should also include what's needed to bring up your nics this time.
[21:02] <plagerism1> I don't know what I did, but nics are working now
[21:02] <plagerism1> I installed the extras kernel and rebooted
[21:03] <plagerism1> All 7 nics
[21:04] <plagerism1> Gonna dist-upgrade and hope for the best
[21:06] <tomreyn> how did you install it without network access?
[21:56] <plagerism1> tomreyn: I mounted the USB drive.  The only thing I installed was linux-image-extra-4.4.0-116, then rebooted.
[21:56] <plagerism1> Interestingly enough, after the dist-upgrade and reboot to 4.4.0-124 the nics were gone
[22:01] <plagerism1> I suppose if I install linux-image-extra-virtual, it would keep the extras in step with kernel updates?
[22:03] <sdeziel> plagerism1: that's the theory :)
[22:04] <Blueking> I wanna build a fileserver with ubuntu.. should I go intel xeon or ryzen ?
[22:05] <plagerism1> sdeziel: thanks
[22:06] <sarnold> fileservers don't usually need much cpu power, just enough to run whatever compression, hashing, and raid algorithms your filesystem needs .. ECC memory is a good choice
[22:06] <Blueking> ecc is a requirement yes
[22:06] <Blueking> gonna use it as a mediaserver too
[22:07] <sarnold> is that involving on-the-fly transcoding?
[22:07] <sarnold> does that parallelize well?
[22:07] <Blueking> maybe, haven't really dug into it
[22:07] <sdeziel> I once made the mistake of building a smb/cifs server using an embedded type of CPU, an AMD C-60 ... that turned out to be a little too weak
[22:08] <sarnold> heh
[22:08] <Blueking> amd c-60 ?
[22:08] <sarnold> yeah I've never heard of it either
[22:08] <sarnold> I suspect there's a reason for that :)
[22:08]  * lyn||ian has not heard of that
[22:08] <sdeziel> yeah, old one. You should be fine with Xeon/Ryzen
[22:09] <Blueking> xeon or ryzen :P
[22:09] <sdeziel> https://paste.ubuntu.com/p/bgSSP28yqz/
[22:10] <Blueking> how old is that cpu ?
[22:10] <sdeziel> I went for it because of the 9W TDP
[22:10] <sarnold> nice
[22:11] <Blueking> ok I have a pc with a dual-core xeon, an e3 v3 1230L, 12.5 or 25W
[22:11] <sdeziel> 2011
[22:11] <sdeziel> makes a decent lxd server though :)
[22:11] <Blueking> lxd ?
[22:11] <sdeziel> not bad for VMs either
[22:12] <sdeziel> https://help.ubuntu.com/lts/serverguide/lxd.html
[22:12] <sarnold> sdeziel: I was thinking it'd probably make a nice little maas thingy..
[22:12] <Blueking> would a xeon E3 1230L be enough for fileserver purposes ?
[22:12] <Blueking> + some features
[22:12] <sdeziel> that's the cool thing with AMD, they don't castrate their low-end models, so it has all the smarts of that generation
[22:13] <sarnold> nice
[22:14] <powersj> Blueking: I use a E3-1225 and it works just fine for file serving and hosts various lxd systems (e.g. grafana, influx, unifi software)
[22:14] <Blueking> powersj -> on-the-fly transcoding?
[22:14] <powersj> that I don't do
[22:15] <sarnold> Blueking: yeah that looks nice. 32 gigs might be a bit tight depending upon what you want to do with it, but it certainly feels like it ought to be able to fill your NIC from your disks :)
[22:15] <Blueking> would cpu do it ?
[22:16] <Blueking> I am looking for a supermicro mobo with 10GBit nic onboard..
[22:16] <Blueking> might go dual cpu mobo
[22:17] <powersj> Blueking: a quick search and there is a thread on the freenas forums about the 1230l
[22:20] <Blueking> powersj  link ?
[22:24] <Blueking> mobos for xeon E3 with 10Gbit nic onboard don't have enough pci-e lanes..
[22:25] <sarnold> maybe xeon-d boards would be a good fit? iirc they've got more pcie lanes
[22:43] <tomreyn> just like epyc and threadripper do
[22:47] <nacc> ahasenack: rbasak: dug a bit, i think we assumed that if `gbp buildpackage` fails, it would clean up the tarballs that failed to verify, say. It does not.
[22:47] <nacc> so that's the backtrace part
[23:24] <nacc> cpaelzer: dpb1: fyi, i think i need to bootstrap phpunit again, unless slangasek or another AA wants to help untangle the builds from Debian.
[23:24] <nacc> I have started sending stuff to Debian, for universe packages, which we will hopefully be able to sync
[23:46] <kevr> what would be the best way to query whether a service is set to start at boot, regardless of sysv or systemd?
[23:47] <sdeziel> systemctl is-enabled $foo
[23:48] <sdeziel> I don't know if that would work for sysv scripts though
[23:48] <kevr> there's a systemd-sysv-install script as well
[23:48] <kevr> that helps with dealing with that
[23:48] <kevr> thanks
[23:49] <sdeziel> I can't find any sysv service to test with
[23:50] <kevr> i tested
[23:50] <kevr> works fine on bionic.
[23:52] <sdeziel> which service was that?
[23:52] <kevr> it's a custom init.d service script with no systemd port
[23:52] <kevr> old school
[23:52] <kevr> well of course systemd does do the porting for me
[23:52] <kevr> ;)
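The check kevr and sdeziel land on can be wrapped into one helper; this is a sketch (the ETC override exists only so the sysv branch can be exercised without root):

```shell
#!/bin/sh
# Will service $1 start at boot, on either systemd or sysv?
service_enabled() {
    svc=$1
    etc=${ETC:-/etc}
    if command -v systemctl >/dev/null 2>&1; then
        # Covers native units and generator-wrapped init.d scripts alike.
        systemctl is-enabled "$svc" >/dev/null 2>&1 && return 0
    fi
    # Pure-sysv fallback: an S?? start link in any default runlevel.
    ls "$etc"/rc[2-5].d/S??"$svc" >/dev/null 2>&1
}
```

Usage is just `service_enabled ssh && echo "starts at boot"`; the exit status carries the answer.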