[00:00] sarnold, https://pastebin.com/62dH02gj [00:00] Could it have to do with the fact I was previously running bind? [00:00] I'm still new to this, I apologize for any novice level questions. [00:01] heh, I've never seen dig used with just the @server parameter before.. [00:01] try dig @localhost www.google.com A [00:02] https://pastebin.com/bfsezwMW [00:02] Same with different ID it seems [00:03] This is the tutorial I followed https://webilicious.xyz/linux/complete-powerdns-setup-guide-on-ubuntu-server/ [00:03] But I previously had installed bind from another tutorial. [00:03] The tutorial for powerdns shows there should be 1 server, but mine reports 2 servers with DiG [00:06] Xase: okay, how about asking your server for a record that it should actually have? maybe my suggestion of google.com was a bad idea [00:08] sarnold I haven't set any up. I was going to set it up to work with ISPConfig. [02:03] nacc: I found a stable kernel and distro for my server === Guest34494 is now known as karstensrage === xase_ is now known as Guest45743 === mgagne is now known as Guest82592 [05:39] Xase: I see now that I forgot to ask you yesterday if you were looking for a recursive resolver or an authoritative dns server. [06:03] Hello Everybody [06:06] Is live patching available for Ubuntu 16.04.3 LTS? [06:06] raddy: first update your server, 16.04.5 is out [06:07] raddy: a lot of new security flaws came out since [06:07] !livepatch [06:07] Canonical Livepatch is a service offered by Canonical for 64 bit 16.04 installs that modifies the currently running kernel for updates without the need to restart. More information can be found at https://ubottu.com/y/livepatch and https://www.ubuntu.com/server/livepatch [06:12] Good morning === ogra_ is now known as ogra [07:43] but can they live patch from .3 to .5! [08:33] i'm hosting a mirror server for getdeb/playdeb, a now unmaintained third party software repository for ubuntu. there are people using my mirror directly through apt. 
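The resolver check suggested above can be wrapped in a small guard so it degrades gracefully on machines without dnsutils; the server (localhost) and record (www.google.com) are just the examples from the conversation.

```shell
#!/bin/sh
# Sketch: ask a specific DNS server for a record it should actually hold.
# "@server" only picks which server dig queries; without a name and type,
# dig just fetches the root NS list. Guarded so it degrades gracefully
# where dig (from the dnsutils package) is not installed.
if command -v dig >/dev/null 2>&1; then
    # +time/+tries keep the check short if the server is unreachable
    answer=$(dig @localhost www.google.com A +time=2 +tries=1 2>&1) || true
else
    answer="dig not found; install the dnsutils package first"
fi
echo "$answer"
```

An empty ANSWER section (or a SERVFAIL status) means the server does not actually hold the record, which is the situation being debugged above.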
i'd like to use this opportunity to somehow indicate that they should remove this repository and run ppa-purge against it. is there a way i could send such a message? [08:34] i've seen some kind of a redirect to a new hostname with a message (such as this-archive-is-no-longer-maintained.example.org) which then showed up on apt output in the past, but am not sure how to do this or whether it's a good idea. [08:35] this was an earlier, unrelated occasion where some apt archive did this to send a message === lifeless_ is now known as lifeless === lotuspsychje_ is now known as lotuspsychje [13:03] Ohai [13:03] For some reason the Ubuntu launchpad PPA keeps timing out on me, I'm not sure how to fix it. [13:12] Helenah: you could install mtr-tiny and check where the packet flow breaks: mtr -i 1 -c 5 -r. It's an advanced tracert tool thingy. [13:12] hmm [13:12] I'll give it a try [13:12] also check if the DNS is resolving, etc... [13:13] It is [13:16] blackflow: Could node 7 be the problem? https://paste.ubuntu.com/p/RbY2tSpbvj/ [13:16] It's never up [13:18] Helenah: no, it only means that particular node is limiting/dropping icmp packets [13:18] and loss% is only relevant if the _last_ node _upward_ shows any [13:19] hmm [13:19] Helenah: welp looks like networking on your end is fine, the trace goes deep into canonical turf. what's the PPA url? [13:19] ppa.launchpad.net [13:19] Or you mean the full URL? [13:20] It's the Greek Schools repo [13:21] I don't know it, can you post it? or better yet, check via browser if it's accessible? [13:22] blackflow: https://paste.ubuntu.com/p/X2G3zF6gWS/ [13:24] Helenah: well if you can ping or trace up to and including that ip (use -n for mtr to see IPs), then I doubt there's anything you can do. possibly some transient issue. [13:24] I really need this software, it's used for my fat clients. [13:25] see if you can pull the file directly with wget, eg. 
wget http://ppa.launchpad.net/ts.sch.gr/ppa/ubuntu/pool/main/l/ldm/ldm_2.18.06-1+t201807230407~ubuntu18.04.1_amd64.deb [13:26] blackflow: Worked [13:26] But with APT, the same packages time out, there is no getting around it... [13:26] This is a fresh install. [13:26] try shoving it in /var/cache/apt/archives/ and see if apt/dpkg will reuse it from there. Other than putting the file in the apt cache like that, I don't know if anything else needs to be done [13:57] Helenah: is apt using a proxy perhaps? [13:57] Helenah: check /etc/apt/apt.conf.d/* and related files, maybe do "grep -i proxy -r /etc/apt" [13:59] that ^ or this: apt-config shell PROXY Acquire::http::proxy [13:59] is that case insensitive? [13:59] looks like [13:59] in fact, this seems better: apt-config dump Acquire::http::proxy [13:59] $ apt-config dump|grep -i proxy [13:59] Acquire::http::Proxy "http://squid-ds216.lxd:3128/"; [14:00] yeah :) [14:00] hey sdeziel i have my zfs smb share mounted on windows and nix but i cant seem to write anything to it. [14:00] cpaelzer: what is "preparing packages" here, do you know? https://bileto.ubuntu.com/#/ticket/3392 [14:00] madLyfe: by default, zfs filesystems are owned by root so maybe you need to chown some dirs? [14:00] the packages are built in the ppa [14:01] yeah, first check the unix side: make sure the user you want can write to the dirs/files you want [14:01] then repeat over samba [14:01] there are layers of permissions here [14:03] hmm [14:03] hmm [14:04] ahasenack: I'm not remembering the prepare step [14:04] I also didn't see it mentioned in the docs [14:04] as a status [14:04] I did click "build" one more time after the packages were built in the ppa, and bileto wasn't "seeing" that [14:05] did you hit publish? 
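The apt proxy hunt above boils down to a recursive case-insensitive grep plus apt-config. Here is a self-contained sketch that runs against a temporary directory standing in for /etc/apt/apt.conf.d, seeded with the proxy line from the log, so it is safe to run anywhere.

```shell
#!/bin/sh
# Sketch of the apt proxy check discussed above. A scratch directory
# stands in for /etc/apt so nothing on the real system is touched.
aptdir=$(mktemp -d)
cat > "$aptdir/95proxy" <<'EOF'
Acquire::http::Proxy "http://squid-ds216.lxd:3128/";
EOF
# Case-insensitive and recursive, since apt config keys vary in
# capitalisation and can live in any file under apt.conf.d:
proxy_lines=$(grep -ri proxy "$aptdir")
echo "$proxy_lines"
# On a live system the authoritative view comes from apt itself:
#   apt-config dump | grep -i proxy
```

A configured proxy explains timeouts that wget (which ignores apt's config) does not see.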
[14:06] no [14:06] as usual, when creating the ticket, I forgot to select "cosmic" [14:06] it was at its default of zesty or something old like that [14:06] so I clicked build again after changing it to cosmic [14:06] oh, it moved [14:06] it seems the diff is not created for cosmic [14:06] now it's green [14:06] I re-triggered the diff [14:07] thanks [14:07] I set lander to approved [14:07] now it's starting the tests, all looks good [14:07] ahasenack: when you click on diff you'll see a log of the former diffs [14:07] there was none [14:07] ok [14:07] despite the old (zesty) diff existing [14:07] so I thought why not re-create it [14:07] and that seems to have brought it back to normal [14:13] ahasenack: It isn't [14:14] Helenah: can you pastebin the apt-get update output? [14:33] sdeziel: did you use winbind to sync system users to the samba user db? [14:37] madLyfe: I only run smbd so I manually sync the users [14:37] madLyfe: I am probably using a weird setup though [14:38] atm im the only one accessing the share (from a couple locations), can i have it just inherit the ubuntu server user/pass? [14:38] madLyfe: I prefer to decouple the Unix and samba accounts [14:38] madLyfe: all my samba users have /bin/false as their shell [14:40] sdeziel: /bin/false as their shell? [14:43] madLyfe: the samba accounts have matching Unix accounts but I set their shell to be /bin/false [14:44] madLyfe: the idea is those users can only use samba and not connect to the server using SSH for example [14:45] can you sync the unix accounts (only one in my case) and manually add samba users later? ones that wouldnt be added to the server? or would it sync those as well? [14:47] madLyfe: I am not sure I understand your question. How could you sync Unix -> samba if the samba user is only created later? [14:52] sdeziel: samba supports the ability (through another installed package?) to sync the system's users/password database? 
if i only had one user on the system i would only have one in samba. if i was to add more samba users later, would those then get synced to the system as well? or is it only a one way sync from system to samba? or do i have the whole thing wrong? [14:52] madLyfe: for every samba user, there needs to be a corresponding linux user [14:53] to sync passwords, maybe the simplest way (but also error prone?) is via "unix password sync" [14:53] you will also need "passwd chat" [14:53] I think there is a default/example in ubuntu's smb.conf [14:53] but I haven't used that in a while [14:53] so useradd also adds that user to the ubuntu server as well? [14:54] ahasenack: It's an LTSP chroot I'm trying to set up. [14:55] Hi people! [14:57] sdeziel: You can configure SSH to only allow users in certain groups to SSH in. [14:57] Helenah: yes, I know thanks :) [14:57] Have an old computer, it was dumped in the trash lol, was wondering about using it as a server with ubuntu server: It's an Intel Core 2 Duo e7500 @ 2.93GHz, 4GB of RAM, 1 160G HDD (for system, i.e.) and another disk with 1TB, x64 arch. processor. what do you think about this for data, download and a local apache server? [14:58] people really throw away anything... [14:58] franciscodelgado: In the UK, that's called robbing [14:58] Right? [14:58] Wow so I don't want to live in the UK [14:58] (Just saying) [14:58] franciscodelgado: this would make a pretty decent headless server [14:58] In spain it's called taking what others don't want anymore lol [14:58] franciscodelgado: I would've done the same thing tho [14:59] throwing computers in the trash should be criminal in the UK [14:59] Core 2 Duo is nice btw, especially for a server. [14:59] franciscodelgado, run forensics on it first... 
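Returning to the samba thread above: a minimal sketch of the samba-only account scheme being described — a matching Unix user whose shell is /bin/false, so samba works but SSH logins are refused. "alice" is a hypothetical user name; the commands are written to a reviewable script rather than executed, since they need root and a samba install.

```shell
#!/bin/sh
# Sketch (not executed here): create a samba-only user as described above.
# Writes the commands to a script for review; run it as root on the server.
cat > ./add-samba-user.sh <<'EOF'
#!/bin/sh
set -e
# Unix side: matching account, but no usable login shell
useradd --create-home --shell /bin/false alice
# samba side: corresponding samba account (prompts for the samba password)
smbpasswd -a alice
EOF
chmod +x ./add-samba-user.sh
```

This matches the decoupled setup described in the log: two user databases, with the Unix one locked down so the account is usable only over samba.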
[14:59] sdeziel: Yeah, all those toxins, and that wasted metal [14:59] Helenah: I feel like a little child on christmas now hahaha [14:59] Helenah: yup [15:00] madLyfe: /usr/sbin/useradd only cares about linux, and smbpasswd only cares about samba. There are effectively two user databases [15:00] xase: forensics? [15:00] I went past a skip on my estate, it had computers, hifi systems, fridges, freezers, so much electronics, I believed most of it worked and was just thrown because the owner was looking for an excuse to buy new. [15:00] after the users are created, then the password can be sort of kept in sync if it's changed via samba. If it's changed in linux, then maybe via a pam module to also change it in samba [15:00] madLyfe: it gets complicated the more users you have, that's why such setups normally resort to using ldap [15:00] Yeah like scrape the hard drive, make sure there isn't anything useful on it? [15:00] franciscodelgado: I run 80% of my home infra on a similar machine also with a Core 2 Duo [15:01] You never know. [15:01] xase, oh right [15:01] How about you shred the drive? I don't know about the laws in Spain, however in the UK, if there is illicit material on it, for example CP, it's enough to get you put on a criminal register. [15:01] Or just wipe the harddrive completely clean first. You don't want to be caught with someone else's data [15:01] so I will give it a try, it's incredibly silent also [15:02] Don't even check what's on it. [15:02] Yeah that Helenah [15:02] Just shred it. [15:02] ahasenack: that seems way over my head [15:02] Don't know what the hell was going on with this pc to waste it [15:02] Checking is a way of incriminating yourself [15:02] Helenah, I thought about the CP issue after I said it. [15:02] Shred the drive. [15:03] I think there is an option on the ubuntu-server installer to do womething like shred, right? like overwrite it with zeroes or similar [15:03] madLyfe: it's simpler than it sounds. 
The Linux/Unix account is used to access the files on the samba server itself. The samba account is used by clients to authenticate against the server [15:03] something* [15:04] franciscodelgado: before the installer starts its thing, you can fire up another console and do something like this: cat /dev/zero > /dev/sda [15:05] sdeziel: oh, cool, another command for my notebook :D [15:05] franciscodelgado: If you really want to use the drive, do dd if=/dev/zero of=/dev/sda several times [15:05] You wanna overwrite the data several times [15:06] Helenah, yes, I think it's the best option, clean it all and forget what the HDD could contain [15:07] franciscodelgado: The idea is to "Not know". [15:07] Will begin with this tonight [15:07] You don't wanna let yourself know by checking the drive. [15:07] Because that's where information slips if you get put under suspicion. [15:07] Helenah, sorry, maybe it's because of my english, i tried to say "forget the idea of wanting to know what is inside" [15:08] I understand [15:08] :D [15:08] xD [15:08] There are more non-natives on IRC than there are natives. [15:09] Yes, and the fact is almost everyone on IRC speaks english, so everyone has to learn some to come here [15:10] It's better to know English anyway; if you have to join a channel like #ubuntu-es, you are missing out on the majority of the community which could've supported you otherwise. 
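The zero-fill and shred commands above are destructive, so this sketch demonstrates them against an ordinary file standing in for /dev/sdX; the flags are the same ones you would point at the device (never run these against a drive you still need).

```shell
#!/bin/sh
# Sketch: overwrite-before-reuse, demonstrated on a scratch file instead
# of a real disk. Replace ./fake-disk.img with /dev/sdX on a real wipe.
disk=./fake-disk.img
echo "old secret data" > "$disk"
# Single pass of zeros, like:  dd if=/dev/zero of=/dev/sdX bs=1M
dd if=/dev/zero of="$disk" bs=512 count=4 conv=notrunc 2>/dev/null
# shred(1) then adds multiple random-data passes; -z finishes with a
# final zero pass (add -u to also unlink a regular file afterwards)
shred -n 3 -z "$disk"
```

As noted later in the log, shred works on whole devices too, though for modern drives ATA secure erase (or having used full-disk encryption from the start) is the more reliable option.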
[15:11] of course [15:14] franciscodelgado: I use this for extra safety when erasing drives: https://paste.ubuntu.com/p/rSJhqT2XkR/ [15:15] And, about here in Spain, about CP, if I find a computer or HDD or whatever containing CP the first thing to do is call the police, they will try to find the owner and you will be left in the shadows, you are supposed to be helping them [15:15] but now that I look at man shred, it seems that it supports shredding whole drives too [15:17] In data centres, drives are shredded, even if they are only a week old; if they had some data on them, they are shredded, never reused. [15:17] It's to protect the data centre [15:17] So there is no room for accusations [15:18] aham [15:18] there is shred(1) and shred (physically) [15:20] unfortunately the former can't really be relied on. [15:21] tomreyn: on files, I'd agree but on whole disks/partitions I'd be much less worried [15:22] if its whole disks, i'd rather use ata 'secure' erase, or rather combine the two, but only if i failed to use full disk encryption. [15:28] alright... so I purged bind, and reinstalled bind. but I still have my local router ip listening on port 53 AND 127.0.0.1 and :::53. I'm having trouble setting up bind for my name servers. [15:29] All the tutorials seem to be for local dns. Or they aren't quite clear on how to configure for external. [15:30] I can't use powerdns which seemed to be a lot easier, it's not supported real well by ispconfig. [15:51] Hello, I was asked to figure out how to set up a "stage" server for production machines so they will all point to our internal server (which I need to set up) for all updates. This server would be a gatekeeper between ubuntu server and our production servers. So all of our production servers would be getting updates only from our internal "stage" server. Whenever I update the stage server, the production machines will upgrade to that version as well. 
Well [15:51] in reality there would be a development stage server and a production stage server. The production stage server would point to development. I never did set this up. Where do I start? Any suggestions or links? [16:09] cryptodan_mobile: nice [17:03] kstenerud_: following your freeipa pastebin instructions now (https://pastebin.ubuntu.com/p/8pnKw3pHj4/) to see what's going on [17:13] kstenerud_: so two things missing from that pastebin so far [17:13] kstenerud_: one we talked about, the reboot. [17:13] kstenerud_: the other one I just remembered is to make the ip a static one, and not dhcp assigned, to avoid surprises [17:14] I'm doing that now over here [17:17] kstenerud_: third, I think this answer is wrong: [17:17] * Enter an IP address for a DNS forwarder, or press Enter to skip: (machine's IP address) [17:17] it's not your own address: it's your home dns [17:18] or the libvirt provided one [17:18] using yourself as a forwarder would create a loop [17:22] ahasenack: So if I'm using the libvirt provided one, what would it be? Would I find it in resolv.conf? 
[17:22] it would be x.x.x.1 [17:22] the .1 of the libvirt network [17:22] ok [17:23] or, another way, it would be the bridge ip on the host [17:23] in my case, [17:23] virbr0: flags=4163 mtu 1500 [17:23] inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255 [17:23] 192.168.122.0/24 is the libvirt "default" network [17:23] or, don't set any forwarder, but then you won't take advantage of the host's dns cache [17:24] or its knowledge about other libvirt networks [17:24] I use squid in a lxd container, in another network, so I use libvirt's .1 DNS so that I can reach the proxy by name [17:24] from the vm [17:25] kstenerud_: the dns forwarder config means, "forward the dns request to this forwarder if the name being asked is not one of my own zones" [17:25] usually that would be the root servers, but if you have a forwarder configured, the forwarder is asked instead [17:25] but if I don't configure a forwarder it should still complete installation, right? [17:26] kstenerud_: yeah, that must have been it, the config just finished for me on a brand new vm [17:26] kstenerud_: yes, but I haven't gone down that route [17:26] in my case it probably wouldn't finish because of my proxy named "squid-ds216.lxd", I would have to replace that with an IP, or not use the proxy [17:27] since the root servers don't know about squid-ds216.lxd :) [17:27] kstenerud_: I also did the other two changes: fixed ip, and reboot after that [17:27] ahasenack: OK so just to be clear, you used the x.x.x.1 address for the dns forwarder, and also to make the address static, and the reboot? [17:27] yes [17:27] ok [17:27] now, don't follow the ip tip blindly [17:27] make sure your x.x.x.1 is a dns server [17:28] try dig with it [17:28] In theory it should work fine with DHCP since I'm only going to run it for a few mins [17:28] dig @x.x.x.1 gnu.org [17:28] yeah, it's just to avoid surprises [17:38] hmm install failed again :/ [17:39] did you check /etc/hosts? 
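As a sketch of the bridge-address advice above: the forwarder candidate can be read straight off the host's bridge interface. The sample text here is the virbr0 output pasted in the log, so the parsing runs anywhere; on a real host you would feed it `ip -4 addr show virbr0` or ifconfig output instead.

```shell
#!/bin/sh
# Sketch: derive the libvirt "default" network's DNS forwarder (the .1
# of the bridge) from ifconfig-style output, as discussed above.
sample='virbr0: flags=4163 mtu 1500
        inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255'
forwarder=$(printf '%s\n' "$sample" | awk '/inet /{print $2; exit}')
echo "candidate forwarder: $forwarder"
# Before trusting it, confirm it actually answers DNS queries:
#   dig @"$forwarder" gnu.org
```

The final dig check matters: the .1 address is only a usable forwarder because libvirt's dnsmasq listens there.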
[17:39] you must have something else going on [17:40] did you use the bind9 ppa? [17:40] Do you have this in your hosts: [17:40] 127.0.1.1 cosmic-freeipa.example.com cosmic-freeipa [17:40] no, that's what I told you to remove :) [17:40] Without that it won't auto-populate fields [17:40] you have to have that entry with the real ip [17:40] not 127.x.x.x [17:40] and drop the bit without the domain [17:40] 192.168.122.40 cosmic-freeipa.example.com [17:41] just one line, like that [17:42] Hi, I am trying to install NVM on an ubuntu server, and despite bashrc being modified, it seems the added lines have no effect: https://github.com/creationix/nvm [17:42] Those are the lines added in bashrc: [17:42] export NVM_DIR="$HOME/.nvm" [17:42] [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm [17:42] [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion [17:43] But even after a new login, nvm is undefined and $NVM_DIR is empty [17:44] I had no problem installing nvm on a non-server ubuntu [17:44] sylario: what's with the \.? [17:44] idk [17:44] use just ., or replace "." with "source" (no quotes) [17:44] and no \ [17:46] It changed nothing [17:46] This code is the same on my ubuntu workstation and it works [17:47] And echo $NVM_DIR is still empty [17:47] not sure why [17:47] if you source .bashrc, does it get defined? [17:47] Yes, with the first line I posted here [17:48] do you have a ~/.profile? [17:48] Those lines were added by the nvm install script [17:48] that is what sources ~/.bashrc [17:48] yes [17:49] do you have a $BASH_VERSION variable defined? 
Try echo $BASH_VERSION [17:49] 4.3.48(1)-release [17:49] also check "getent passwd <user>" and confirm that the shell for that user is /bin/bash (it's the last field) [17:50] admin:x:1000:1001::/home/admin:/bin/bash [17:51] It seems my bashrc is full of config for color prompt, yet then you will have to trace the login path [17:52] check if ~/.bashrc could be exiting before your NVM_DIR addition [17:52] how do I do that? [17:57] sylario: that's usually because the shell on the server doesn't know your terminal can show colours [18:00] Is there something I could put in bashrc or profile to check if they have been run? [18:00] How do I debug that? [18:00] have them touch a file in /tmp [18:00] they are just shell scripts, so you can echo something or create a file or such [18:03] I added touch /tmp/profile at the start of profile, i logged out and back in, and the file is not in tmp [18:03] so your shell is probably bash [18:03] what does that mean? [18:04] head -n 5 ~/.profile [18:05] # ~/.profile: executed by the command interpreter for login shells. [18:05] # This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login [18:05] ls -l [18:05] that's just 2 of 5 lines, but yes [18:06] I have no idea what I should conclude from that [18:07] so do you have ~/.bash_profile or ~/.bash_login ? [18:07] I have a bash_profile [18:08] ok [18:08] well, as the message on top of ~/.profile you just partially quoted says, if you run bash, then ~/.bash_profile (if it exists) is executed instead of ~/.profile [18:08] so RVM tanked my shell [18:09] ~/.bash_profile can source ~/.profile [18:09] https://www.irccloud.com/pastebin/y0KQ3tPi/ [18:10] maybe I can put that in bashrc and delete bash_profile? [18:11] maybe. and maybe the scripts it sources are not compatible with other shells [18:14] Thanks a lot [18:14] it works! 
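What happened above can be reproduced in a throwaway HOME: when ~/.bash_profile exists, a bash login shell ignores ~/.profile entirely, and the fix from the conversation is to have ~/.bash_profile source ~/.profile itself.

```shell
#!/bin/sh
# Demonstrates bash's login-file precedence with a scratch HOME so the
# real dotfiles stay untouched. FROM_PROFILE/FROM_BPROF are marker vars.
fakehome=$(mktemp -d)
echo 'export FROM_PROFILE=yes' > "$fakehome/.profile"
echo 'export FROM_BPROF=yes'   > "$fakehome/.bash_profile"
# A login shell (-l) reads .bash_profile; .profile is never touched:
first=$(HOME=$fakehome bash -lc 'echo "$FROM_PROFILE-$FROM_BPROF"')
# The fix: make .bash_profile source .profile itself.
echo '. "$HOME/.profile"' >> "$fakehome/.bash_profile"
second=$(HOME=$fakehome bash -lc 'echo "$FROM_PROFILE-$FROM_BPROF"')
echo "before fix: $first / after fix: $second"
```

Before the fix only the .bash_profile marker is set; after the fix both are, which is exactly why the nvm lines (added to .bashrc, sourced from .profile) never took effect once RVM had created a .bash_profile.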
[18:14] I added source ~/.profile [18:15] Now I have coloration in ls [18:16] sylario: hmm, aliases for ls are added in the standard .bashrc on Ubuntu IIRC [18:16] sylario: weird that you had to do anything to get those [18:16] yes, and my bashrc was not run [18:16] because rvm created a bash_profile [18:16] i think ~/.profile sources ~/.bashrc if run by bash [18:19] now I can install node and npm to restart the cursed deployment tool that uses npm and bower and node and ember (and bootstrap) [18:20] howdy. getting "We are currently unable to retrieve the requested key. Please try again later." on https://auth.livepatch.canonical.com/. email is verified. [18:20] sylario: had you considered https://github.com/rvm/ubuntu_rvm [18:21] I should try to do more bash instead of doing python/ruby scripts [18:21] @tomreyn did not know it existed [18:21] thx [18:21] sylario: it's the first thing said under 'basic install' at https://rvm.io/rvm/install [18:22] I installed rvm on this server 5 years ago [18:22] maybe you followed some other instructions [18:22] i see [18:23] 5 years is when ubuntu goes EOL, hope you upgraded in the meantime [18:23] it's ubuntu 18.04 [18:23] ahasenack: I'm not sure what I'm doing wrong, but no matter what it always fails with Unable to retrieve CA chain: [Errno 111] Connection refused [18:23] :-) [18:23] kstenerud_: did you check /etc/hosts? :) [18:24] did you test the forwarder with the dig command? [18:24] is the output of the hostname command the fqdn? [18:24] It was ubuntu 12.04 at first according to the hosting interface [18:25] https://pastebin.ubuntu.com/p/g7qfmwf6P8/ [18:25] kstenerud_: what is on line 13? [18:25] Not sure. That got added by one of the apt installs I think [18:26] Line 12 is what I added [18:26] and when you added it, the other one was there already? [18:26] try removing 13 again, and reboot. 
See if it's cloud-init during boot that is adding it [18:28] yup it got added after reboot [18:29] kstenerud_: ok, so it's cloud-init [18:29] so [18:29] maybe mine isn't messing with it because I supply a custom user-data to import my ssh key, set my local proxy and local ubuntu mirror [18:29] kstenerud_: there are a few ways to sort it [18:30] hammer, and non-hammer [18:30] hammer is "apt purge cloud-init" [18:30] non-hammer is to edit /etc/cloud/cloud.cfg and remove some lines [18:30] maybe these 3: [18:30] - set_hostname [18:30] - update_hostname [18:30] - update_etc_hosts [19:00] ugh it did it again [19:00] hosts is clean. hostname returns fqdn, but I still get connection refused [19:03] This is what I'm doing: https://pastebin.ubuntu.com/p/yj35Gp8GSK/ [19:04] [13/28]: publishing the CA certificate [19:04] [error] RuntimeError: Unable to retrieve CA chain: [Errno 111] Connection refused [19:05] kstenerud_: it would be nice to see where it's trying to connect. strace/tcpdump should tell you [19:06] DNS/Cert/hosts modifications/FreeIPA, what could go wrong! [19:07] kstenerud_: do you have cosmic-proposed enabled by any chance? [19:07] grep proposed /etc/apt/sources.list returns nothing [19:07] the ca server probably failed to start, the logs could tell why, maybe it's obvious in there [19:08] but it just worked out of the box for me, in a fresh cosmic vm [19:08] and your bind9 ppa [19:08] kstenerud_: note there's also /etc/apt/sources.list.d [19:09] I'm running all of this in a uvt-kvm created vm. Everything in that pastebin is exactly what I did, in that order [19:09] can I attempt? [19:09] well, we did changes after that pastebin [19:09] do you have an updated? [19:09] ah, I see [19:09] let me check that [19:10] I literally copy-paste that line by line into a terminal [19:10] did you test the forwarder with dig? 
[19:11] yup [19:11] the one thing we still have different is that I set up a static ip [19:11] https://pastebin.ubuntu.com/p/yDqsMjSh6T/ [19:13] huh [19:13] /etc/hostname as the FQDN? [19:13] yup [19:14] installing from the PPA now [19:15] hahahaha [19:15] 402 packages [19:15] :) [19:15] dpb1: yeah, freeipa is weird [19:15] I think it's a redhat bug, and since they develop on rh... [19:18] kstenerud_: in the meantime, can you try to fetch some logs? [19:18] like the install log it suggests [19:19] The logs just reiterate the error, and a python stack trace leading to a cli call [19:19] ok [19:19] those packages are finished installing [19:19] now next [19:21] no, something must have failed to start, otherwise there wouldn't be a connection refused [19:21] check /var/log/pki [19:22] ok hang on I need to rebuild the vm. Running a static address broke things and I can't get into it anymore [19:23] you didn't copy mine bit by bit, did you? :) [19:23] I'm in the magic phase now [19:23] It's the same subnet so it should have worked [19:23] but I also had a mac address in there [19:23] my cpu is really churning [19:23] it's like I'm on hangouts [19:23] doh! [19:24] kstenerud_: careful what you copy and paste from the internet! :) [19:24] lol [19:24] the mac address isn't needed [19:24] but it was there already, so I kept it [19:24] https://netplan.io/examples has a static address config example [19:24] oh just to stop it from cycling ips? [19:25] it's what cloud-init generated for me [19:25] dpb1: stop mining bitcoins, that'll solve the CPU usage :P (just kidding xD) [19:25] it's a filter [19:25] dpb1: check your /etc/hosts, in another terminal probably [19:25] teward: my nuc has thus far mined .0000000001 bitcoins, I'm afraid [19:26] kstenerud_: I'm past the 13/28 failure you pasted earlier at least, still chugging [19:26] * dpb1 wonders why he has 3 other uvt-kvm machines [19:27] ahasenack: http://paste.ubuntu.com/p/Jv7ZWgGCbT/ [19:27] note, the magic is still running. 
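For reference, a static-address netplan config of the kind pointed to above (https://netplan.io/examples) might look like the following. This is an illustrative sketch, not the config from the session: the interface name ens3 is hypothetical, and the addresses are taken from the 192.168.122.0/24 libvirt default network mentioned earlier.

```yaml
# Hypothetical /etc/netplan/01-static.yaml for the libvirt guest above
network:
  version: 2
  ethernets:
    ens3:
      addresses: [192.168.122.40/24]
      gateway4: 192.168.122.1        # the libvirt bridge
      nameservers:
        addresses: [192.168.122.1]   # dnsmasq on the bridge
```

Apply with `sudo netplan apply`; a typo here is one way to end up locked out of the VM, as happened in the log.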
[19:27] 👍 [19:28] this thumbs up looks remarkably different from the web page I copied it from [19:28] it's even the wrong hand [19:28] Doesn't render with the default font [19:29] https://www.dropbox.com/s/kg05oz6pfqf52yu/thumbs.png?dl=0 [19:29] I get a nice square box [19:29] very solid, sturdy looking [19:29] hehe [19:30] https://imgur.com/a/ANF6PJH [19:30] OK, it's done now kstenerud_ I have a nice 'next steps' screen [19:31] using exactly what I posted? [19:31] yes [19:31] weird... [19:31] cut-and-paste [19:31] my uvt-kvm is not virgin, but it's pretty unmodified [19:31] mine is whatever the defaults are [19:32] that's the ubuntu font, no clue why the emoji doesn't render [19:32] anyway [19:32] dpb1: black magic from the system perhaps? (Emoji don't work in a lot of IRC clients heh...) [19:33] teward: ya, I have to admit, I may have done something to get it working. been a while [19:33] i keep having to ask this, is there a way to run package autopkgtests from within a 16.04 system, and if so what are the commands :P [19:33] Rerunning the install with a static address and grabbing lunch brb [19:33] (I'm on weechat) [19:33] so terminal comes into play for me [19:33] teward: there is a bunch (of commands) [19:33] kstenerud_: ok [19:33] ahasenack: i forget what they are for 16.04's commands, happen to know any of them offhand or where I can find details? [19:33] teward: you basically need to set up vms or lxds first, and then run the tests in them with an autopkgtest (or adt?) command [19:33] i know they changed names between 16.04 and 18.04 [19:34] teward: are the executables autopkgtest* or adt*? [19:34] adt* [19:34] kstenerud_: do you have that autopkgtest session noted down somewhere? 
and LXD isn't much of a problem, I already use it so I can utilize that environment (and build the LXDs for the autopkgtests) pretty well [19:35] teward: here is an irc session I had with kstenerud_ about autopkgtests: https://irclogs.ubuntu.com/2018/08/17/%23ubuntu-server.html#t16:59 [19:35] just rename the autopkgtest prefix to adt I think [19:35] or maybe check if there isn't something in xenial backports [19:36] kstenerud_: you can put that bind9 mp up I think, with these instructions you have, since they worked for dpb1 [19:36] ahasenack: +1 [19:42] ahasenack: that helped. But so did this thing I found: https://people.debian.org/~mpitt/autopkgtest/README.running-tests.html [19:42] (google helps?) [19:42] (at least to run the basic autopkgtests I need to run) [19:43] +1 === miguel is now known as Guest7814 [20:59] OK, MP is in. The fact that different uvt-kvm setups can cause app installs to succeed or fail is worrisome, though [21:00] kstenerud_: maybe we can revisit this one at the sprint. You seem to have gotten it to work yesterday [21:00] then today all is failing [21:00] yeah :/ [21:02] kstenerud_: please also mention in the MP (description I think; can't think of a DEP3 header for this now) that debian is using the same patch [21:02] we are always concerned with adding delta to debian [21:02] Oh. I got the patch from fedora. Is it in debian? 
[21:02] you can find a link to debian's patch in salsa.debian.org, bind9 project [21:03] timo pushed it to debian [21:03] kstenerud_: see https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1769440/comments/56 and https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1769440/comments/59 [21:03] Launchpad bug 1769440 in bind9 (Ubuntu) "freeipa server install fails - named-pkcs11 fails to run" [High,Confirmed] [21:10] kstenerud_: take a look at some logwatch bugs, see if perhaps many can be killed in one swoop: https://bugs.launchpad.net/ubuntu/+source/logwatch [21:10] all the "unmatched" types [21:11] kstenerud_: also, https://code.launchpad.net/~kstenerud/ubuntu/+source/bind9/+git/bind9/+merge/354002 should be against ubuntu/devel, since cosmic isn't released yet
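To close the loop on the autopkgtest question from earlier: under 18.04-era tooling the LXD flow looks roughly like the following (on 16.04 the same tools shipped under the adt- prefix, e.g. adt-run). The commands are only printed here, since they need a working LXD and network access; bind9 and the cosmic image names are examples.

```shell
#!/bin/sh
# Sketch of the autopkgtest-in-LXD flow discussed above (printed rather
# than executed, since it needs LXD). Image names follow the usual
# autopkgtest-build-lxd convention and are assumptions, not output from
# this session.
plan=$(cat <<'EOF'
autopkgtest-build-lxd ubuntu-daily:cosmic/amd64
autopkgtest bind9 -- lxd autopkgtest/ubuntu/cosmic/amd64
EOF
)
echo "$plan"
```

The first command builds a test-ready LXD image from a stock Ubuntu image; the second runs the named source package's tests inside a fresh container from that image.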