[01:02] I used the nfs export module of webmin to create an nfs4 share and it created some kind of link in the filesystem. How do I remove that?
=== Aztec03 is now known as Aztec03|afk
=== Aztec03|afk is now known as Aztec03
[06:33] I am on the 18.04 beta and I want to configure KVM with Open vSwitch. Can someone guide me? I have created the Open vSwitch bridge and configured KVM, but I cannot see/add the OVS bridge to the KVM network
[06:46] good morning everyone. I have a situation for which I need your expert help here, as digging out how to fix this has been very unproductive for the last couple of days
[06:47] I have an Ubuntu 16.04 minimal server image with LVM and xfs which boots perfectly well and was generated using OpenStack DIB (disk image builder)
[06:48] however when I try to add a package which requires an initramfs rebuild (like overlayroot or others) I end up not being able to reboot the machine
[06:49] I have nailed the issue down to LVM not being taken into account: without LVM, xfs and extX both work, while LVM plus any of extX or xfs fails
[06:49] of course I have added the modules in /etc/initramfs-tools/modules without any more success
[06:50] what is also very strange is that my initial and working initrd file is 9 MB and the failing regenerated one is 32 MB
[06:50] any clue what could be the cause, where to look and how to fix this?
[06:57] one other thing is that my first initrd file in the DIB image was generated using dracut, which is also present in my packages list, so maybe it somehow also interferes with initramfs-tools? Just a wild guess
[07:31] rbasak: I'd like a monthly, biweekly or even weekly (fast, then) server-next bug scrub
[11:11] Hello, I have created a service for autossh, http://paste.debian.net/1020536/, but when I do a systemctl daemon-reload and restart the service, I get the following status: received signal to exit (15). Do you know what could be wrong?
[11:22] mojtaba: is anything even wrong? when you restart a (simple) service it gets sent a signal. Does it not start back up after the service restart?
[11:23] blackflow: When I execute autossh in the terminal it works fine. But using systemd it does not work.
[11:24] it does not start?
[11:24] blackflow: By not working, I mean I cannot ssh back from the other system.
[11:24] but is the process active? checked with ps or top?
[11:25] blackflow: It says "Starting the service", then on the next line "service started", and gives me the ssh child PID. But the next line says received signal to exit (15)
[11:26] Well, two things. First, systemd services run as root by default unless you specify User= under [Service]. That means autossh will start as root and will look in /root/.ssh/ for config, keys, etc...
[11:26] so you should put your user's name in User= in the unit file
[11:27] the second thing was, as I don't know autossh: does it remain in the foreground when you start it? or does it fork and exit?
[11:27] blackflow: I see. So I have to define the user as my current user? Where should I put it?
[11:28] I just told you. In the unit file you wrote, under the [Service] section. See the systemd.exec(5) manpage for more info.
[11:28] blackflow: I have used the -f flag, so it is supposed to work in the background.
[11:29] mojtaba: in that case the service can't be simple, but forking.
[11:30] blackflow: what is forking and where should I put it?
[11:30] see the systemd.service(5) manpage for Type=
[11:30] mojtaba: but ideally, you'd not want that. drop -f and have systemd manage it directly.
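For reference, a unit along the lines blackflow describes might look like the sketch below. mojtaba's actual paste is no longer available, so the user name, host and port here are placeholders, and AUTOSSH_GATETIME=0 is an extra assumption that makes autossh cooperate with systemd's Restart= handling:

    [Unit]
    Description=Persistent reverse tunnel to the VPS
    After=network-online.target
    Wants=network-online.target

    [Service]
    # run as an unprivileged user so ssh reads that user's ~/.ssh, not /root/.ssh
    User=mojtaba
    # -M 0 disables autossh's monitor port; the ServerAlive options do that job instead
    Environment=AUTOSSH_GATETIME=0
    ExecStart=/usr/bin/autossh -M 0 -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -o "ExitOnForwardFailure yes" \
        -R 9999:127.0.0.1:22 tunnel@vps.example.com
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

With no -f, autossh stays in the foreground, so the default Type=simple applies; after a systemctl daemon-reload, enabling the unit with systemctl enable --now also answers the later question about making the tunnel persistent.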
[11:33] blackflow: Thank you very much. I added the user and removed -f. It is working now.
[11:35] you're welcome.
[11:53] blackflow: should I create another user for the reverse ssh? I mean, do you know how I can make it more secure?
[11:53] blackflow: Is there something that I have to consider as a precaution?
=== miguel is now known as Guest36805
[12:08] mojtaba: can't hurt to run the tunnel as another user. :)
[12:08] blackflow: autossh is making the reverse tunnel to the remote machine as root.
[12:08] Is that OK, or should I change it?
[12:08] The other machine is a VPS.
[12:17] depends on the use case. ideally you'd want to not use the root account to ssh into, unless you have to.
[12:18] of course, using pubkey authentication and blocking passwords is a must.
[12:28] blackflow: I am using a public key to log in, but I am logged in as root.
[12:41] I have created a reverse ssh from node A to node B. Do you know any command that I can use to connect to node A through node B, using a third system? I am looking for one command, instead of sshing to node B and then again to node A.
[12:42] mojtaba: look into the ProxyCommand ssh option
[12:43] mojtaba: here's an example use case for ansible that uses one host as a "trampoline" (a so-called "bastion" host) to automatically ssh through one machine into another: https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/
[12:44] blackflow: thanks. To make the reverse ssh from node A to B, I am using a pubkey. But from node B to A I prefer to use a password. (I think it is more secure, isn't it?)
[12:44] you keep calling it "reverse". Aren't you merely creating an ssh tunnel?
[12:45] "reverse ssh" would be if you initiated the connection from the server to your client...
[12:45] (reverse from the POV of the client)
[12:45] blackflow: No, I am creating reverse ssh using autossh and the -R flag.
[12:45] blackflow: computer A is behind NAT and I am creating the reverse ssh from A to a VPS, and then I use my laptop to ssh to the VPS and connect to A.
[12:45] that's forwarding. not sure why you call it "reverse"
[12:46] blackflow: from the VPS I can connect to node A using ssh -p PORTNUMBER User@localhost
[12:47] mojtaba: password auth is always less secure
[12:47] mojtaba: @localhost? that just connects to itself, no?
[12:47] unless you redefined the IP of "localhost"
[12:48] mojtaba: if you want more security, require both a public key and a password
[12:48] sdeziel: uh.... AND a password?
[12:48] then what stops someone from ignoring the pubkey and just brute forcing the password?
[12:48] sdeziel: I am connecting from a VPS, and I don't have physical access to it. I thought maybe someone could get access to the VPS and connect to that machine using the keys.
[12:48] blackflow: that is if more security is needed
[12:48] unless you meant the key passphrase?
[12:48] sdeziel: are you sure? if you allow passwords, then pubkeys can be ignored.
[12:48] blackflow: no, I meant both
[12:48] then you're wrong.
[12:49] blackflow: What if I put a passphrase on the keys?
[12:49] password auth must be completely disabled. otherwise the pubkey can be ignored and the password brute forced.
[12:49] mojtaba: a passphrase on the key is only to secure the key itself
[12:49] blackflow: No, I connect to 127.0.0.1 with the defined port on system A.
[12:49] blackflow: ever heard of two-factor authentication?
[12:50] sdeziel: yes, but ssh password auth ain't it.
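Condensed, the setup being described here is just two commands (host names and the port 9999 are placeholders):

    # on node A, behind NAT: publish A's sshd on port 9999 of the VPS
    ssh -N -R 9999:127.0.0.1:22 tunnel@vps.example.com

    # later, on the VPS: reach node A through that listener
    ssh -p 9999 userA@localhost

By default the -R listener binds only to the VPS's loopback, which is why the second command goes through "localhost".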
[12:50] 2FA is something different
[12:50] sdeziel: Ok, so I can secure the keys using a passphrase.
[12:50] blackflow: please, if you don't know something, don't call me wrong
[12:50] blackflow: I've been using TFA with OpenSSH for many years, it works well
[12:51] sdeziel: that's okay. but that's not what PasswordAuthentication for OpenSSH means.
[12:51] 2FA != PasswordAuthentication
[12:51] blackflow: AuthenticationMethods publickey,password
[12:51] blackflow: sdeziel: Can I create the public and private keys somewhere else and then scp them later?
[12:51] sdeziel: that's not 2FA
[12:51] mojtaba: yes, but why not create them on the target instead?
[12:52] blackflow: how so?
[12:52] This is 2FA: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-two-factor-authentication
[12:52] this ^ is another form of TFA
[12:52] AuthenticationMethods publickey,password is just a list of allowed methods. meaning the client could ignore pubkey and try password.
[12:52] sdeziel: Ok, so I have to send the private key to the source?
[12:53] mojtaba: I'd advise you to simply create the key pair on the destination instead. This way it has the proper perms and all
[12:53] mojtaba: otherwise, yeah, send both the key and the .pub
[12:54] only the pubkey is needed on the server you're connecting to. that's the whole point of "private".
[12:54] sdeziel: I want to connect from the VPS to node A, which is behind NAT. So I have to create the keys on node A? (just to confirm)
[12:54] or on the VPS?
[12:54] there is also forwarding of authentication via -A, so you can use one keypair for forwarding too.
[12:55] mojtaba: if you want node A to ssh to somewhere where you have inbound access, then yes
[12:55] so on node A, you'd run ssh nodeB -R9999:127.0.0.1:22
[12:55] blackflow: I can connect passwordless to both the VPS and node A. Can I use the keys on my laptop? So I don't need to generate an extra key for VPS to node A.
[12:56] then from your location you could ssh nodeB -p 9999
[12:56] mojtaba: yes
[12:56] and you'd be poking node A's SSH
[12:56] mojtaba: you generate the private-public key pair on your laptop and upload ONLY the pub key to the servers.
[12:56] mojtaba: use the authentication agent (enabled by default on Ubuntu), and you can use -A for the ssh connection to forward the authentication
[12:57] blackflow: I have done that before, and I can connect directly from my laptop to both the VPS and node A. Now I want to connect from my laptop to node A through the VPS. How can I use the keys on my laptop?
[12:57] blackflow: Thanks. I will check it.
[12:57] mojtaba: but if node A is behind NAT (without a port forward), how do you SSH in the first time?
[12:57] with -A for the ssh connection
[12:57] mojtaba: use the same PUBLIC key on both A and the VPS
[12:58] sdeziel: I have configured the router before to do port forwarding. But the machine might move somewhere else; that is why I am creating the reverse ssh tunnel.
[12:59] blackflow: Can I forward authentication for two different keys?
[12:59] mojtaba: the reverse tunnel requires node A to SSH to the VPS (which I assume is the box with stable access for you, right?)
[12:59] sdeziel: yes
[12:59] mojtaba: OK, then yes, ssh -R can do it
[13:00] sdeziel: So I have to use the -A flag to connect from my laptop to node A?
[13:00] sdeziel: Do you know the exact command?
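An aside on the disputed directive: sshd_config(5) defines a comma-separated AuthenticationMethods list as methods that must all be completed in sequence, while several space-separated lists are alternatives, so the comma form really does demand the pubkey succeed before the password is ever tried. A sketch:

    # /etc/ssh/sshd_config
    # both factors required, in order: pubkey first, then password
    AuthenticationMethods publickey,password

    # two space-separated lists would instead offer two independent routes in:
    # AuthenticationMethods publickey password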
[13:00] mojtaba: ssh -A alone
[13:00] mojtaba: but that's not related to an SSH reverse tunnel though
[13:01] mojtaba: that will simply carry your SSH agent along with your client wherever it goes
[13:01] sdeziel: on my laptop I am using a config file for ssh, so to connect to the VPS I simply type 'ssh vps'
[13:01] mojtaba: what's unclear to me though is why node A would be more easily reachable by the VPS than by your laptop?
[13:02] and to connect to node A, I type 'ssh nodeA'
[13:02] sdeziel: the VPS has a static IP, but my laptop has a dynamic IP.
[13:02] mojtaba: add "ForwardAgent yes" to the config stanza
[13:02] mojtaba: and node A?
[13:03] does it have a dynamic IP too?
[13:03] sdeziel: That one has a dynamic IP address as well.
[13:03] But when I connect to node A from the VPS, I just simply type localhost.
[13:03] sdeziel: Do I have to add ForwardAgent to the VPS settings in the config file?
[13:03] mojtaba: you mean you "ssh localhost -p SOMETHING"?
[13:04] sdeziel: from the VPS I type ssh -p PORTnumber localhost
[13:04] sdeziel: from the VPS I type ssh -p PORTnumber user@localhost
[13:04] mojtaba: OK, so you seem to have the reverse tunnel already set up, which is good
[13:04] sdeziel: yes
[13:05] blackflow: sdeziel: blackflow helped me with that. (Thanks again)
[13:05] mojtaba: instead of using SSH agent forwarding, which has some security ramifications, you may want to use something else like ProxyCommand
[13:05] mojtaba: on your laptop, you'd use something like this:
[13:05] sdeziel: Thanks. I will look into it.
[13:06] Host nodeA
[13:06] ProxyCommand ssh VPS -W localhost@PORTnumber -l user
[13:06] what security ramifications? using ProxyCommand, if that command is "ssh", requires authentication again. with -A you just forward your initial one.
[13:07] that's the whole point of keys and -A. it doesn't lessen the security in any way.
[13:07] blackflow: http://manpages.ubuntu.com/manpages/bionic/en/man5/ssh_config.5.html
[13:08] "Agent forwarding should be enabled with caution."
[13:08] okay, and why?
[13:09] blackflow: it's the paragraph right after that in the man page
[13:09] sdeziel: I know. I've read it. that also applies to not using -A
[13:10] -A merely forwards the auth through the next ssh session. the same "warning" applies regardless of whether you connect to machine A or to B through A
[13:10] and has nothing to do with -A but with forwarding X11
[13:11] blackflow: suppose you "ssh -A foo" and I also have access (with root) to foo
[13:11] blackflow: while you are connected to foo, I can abuse your agent to usurp your identity and connect to other destinations as you
[13:11] you can do that regardless of -A on the first machine as well.
[13:12] blackflow: that has nothing to do with X11 forwarding
[13:12] the warning is only for situations where you forward X11 and connect via a proxy thinking the proxy offers extra security. it doesn't.
[13:12] sdeziel: it does, it also says so in the paragraph which you quoted.
[13:12] sdeziel: Is it @ or : before PORTnumber in ProxyCommand ssh VPS -W localhost@PORTnumber -l user
[13:13] sdeziel: read the warning for "ForwardX11". If you enable it, then you expose your X11 to any machine you connect to.
[13:13] the warning is there ONLY if someone thinks that using an ssh proxy makes it more secure than connecting to the proxied machine directly. it doesn't.
[13:13] mojtaba: you are right, it's a ":"
[13:14] in this case, mojtaba is in control of both machines and uses the proxy to bypass NAT, which in itself does not make -A any less secure than connecting to the third machine directly.
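The stanza being dictated above, assembled in one place with sdeziel's later ":" correction applied (VPS, PORTnumber and user are placeholders):

    # ~/.ssh/config on the laptop
    Host nodeA
        ProxyCommand ssh VPS -W localhost:PORTnumber -l user

    # with HostName and Port set in the stanza, this generalizes to:
    #   ProxyCommand ssh VPS -W %h:%p
    # and on OpenSSH 7.3+ the whole line can be replaced by: ProxyJump VPS

-W tells the intermediate ssh to forward standard input and output to the given host and port, so here the VPS connects to its own loopback, where the reverse tunnel is listening.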
[13:16] sdeziel: I added this line to the config file:
[13:16] mojtaba: something like this: https://paste.ubuntu.com/p/YxcHkBVW32/
[13:16] Host NodeA
[13:16] ProxyCommand ssh VPS -W localhost:PortNumber -l UserName
[13:16] But it is not working as expected; the command line is asking for the password of NodeAuser@VPS_IP.
[13:17] mojtaba: in the proxy command, make sure that "ssh VPS" matches the host entry you already have for the VPS
[13:18] sdeziel: Ok, so I have to remove the -l username part?
[13:19] mojtaba: sec, I made some errors, I'll send another paste
[13:20] https://paste.ubuntu.com/p/WHGMJW28mm/
[13:20] sdeziel: Ok thanks. I removed the -l part and it is sending my laptop's username @ hostname
[13:23] mojtaba: so from your laptop, can you simply type "ssh nodeA"?
[13:24] sdeziel: It is asking for the password of the node A user, although I can ssh directly to node A using the private key.
[13:25] sdeziel: Do you know what I should do to use that authentication key instead of the password?
[13:25] mojtaba: the part I fail to understand is opening another ssh connection, but to localhost. that just.... connects it to itself, doesn't it? did you alter the IP for "localhost"?
[13:26] or am I misunderstanding what you're trying to do
[13:26] blackflow: No, I didn't change it. It is working now with the config sdeziel is suggesting. But it is asking for a password instead of using the auth key.
[13:26] mojtaba: could you share the output of "ssh -v"?
[13:28] sdeziel: It generates some output and then asks for the password. How should I grab the output?
[13:28] sdeziel: I know pastebinit
[13:29] mojtaba: yeah, paste it all (initial command included) and also maybe your ssh_config?
[13:29] sdeziel: How should I paste it? ssh | pastebinit ?
[13:29] mojtaba: did you set up pubkey authentication on the third machine? if yes, then you need -A on your client side, OR set up another key pair on node A to connect to node B. you also need to disable PasswordAuthentication if you want keys to be effective.
[13:31] blackflow: I can connect from my laptop to both the VPS and node A using the keys that I created before.
[13:31] How should I use the -A flag?
[13:31] mojtaba: run ssh on your laptop, let it fail. start pastebinit, copy the SSH output and paste it into pastebinit, then Ctrl-D
[13:31] mojtaba: yes, but if you do not *disable* PasswordAuthentication then using keys has no security benefit.
[13:31] mojtaba: the proxycommand is there to avoid needing "ssh -A"
[13:31] mojtaba: so it is kind of orthogonal
[13:31] sdeziel: It doesn't fail. It asks for the password and when I type the password it connects.
[13:32] mojtaba: then Ctrl-C it at the password prompt
[13:32] sdeziel: The problem now is how to connect using two different keys: one for the VPS and another one for node A.
[13:32] mojtaba: as blackflow said, did you add your laptop's public key to node A's authorized keys?
[13:32] mojtaba: why are you using different keys?
[13:32] mojtaba: with the proxy command that will work
[13:33] yeah.
[13:33] mojtaba: with the proxy command, your laptop will ssh to the VPS, then SSH to node A through the VPS tunnel
[13:34] sdeziel: Yes, I have defined those before. I can connect to node A by typing ssh NodeA and to the VPS by typing ssh VPS
[13:34] But they have two different keys.
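The two-key problem raised here is eventually solved by pinning a key per host, so the shape of the finished config is worth showing; the key file ~/.ssh/nodeA and the user osmc appear later in the log, while the remaining names are assumptions:

    # ~/.ssh/config on the laptop
    Host VPS
        HostName vps.example.com
        User vpsuser
        IdentityFile ~/.ssh/vps_key

    Host nodeA
        HostName 127.0.0.1
        Port 9999
        User osmc
        IdentityFile ~/.ssh/nodeA
        ProxyCommand ssh VPS -W %h:%p

With IdentityFile set on each stanza, the proxied hop offers the right key instead of falling through to password authentication, which is exactly the failure mode in the messages that follow.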
[13:34] mojtaba: https://paste.ubuntu.com/p/4F5GXky9Vb/
[13:34] This is the output of the ssh
[13:34] https://paste.ubuntu.com/p/DxmXmbZhKF/
[13:36] sdeziel: Thank you very much. It is working now.
[13:36] blackflow: thanks a lot.
[13:37] btw, did you have to use -A for the ProxyCommand'ed ssh? Or is -A implied with it?
[13:37] sdeziel: Do you know how I can make it persistent? I mean the reverse ssh.
[13:37] mojtaba: good, you offered 3 keys to homed but none was accepted. Looks like you are missing some in homed's authorized_keys
[13:37] blackflow: No, I used ProxyCommand.
[13:37] mojtaba: yes, and that just executes a command, in your case ssh. did you have to use -A for it?
[13:38] sdeziel: which line?
[13:38] blackflow: No, I didn't use -A.
[13:38] mojtaba: in your paste, the last few lines with "Trying private key"
[13:38] mojtaba: those are the keys you tried to auth with for homed
[13:38] rbasak: hi
[13:38] mojtaba: and none was accepted, so you ended up being asked for a password
[13:38] rbasak: could you educate me a bit on git tree objects?
[13:39] rbasak: in particular, I'm trying to understand methods like dsc_to_tree_hash() in git-ubuntu
[13:39] mojtaba: do you expect one of those keys to work or do you use a specially named one?
[13:39] sdeziel: that's weird, I don't have those keys in my .ssh directory.
[13:39] is that used like a simulated import, just to get what hash it would have, but without importing it?
[13:39] mojtaba: and on the laptop side? with or without -A?
[13:39] sdeziel: I haven't used -A anywhere.
[13:40] mojtaba: k, thanks.
[13:40] blackflow: thank you!
[13:40] mojtaba: then how are you using your keys?
[13:40] sdeziel: With your latest configuration, it connects using the correct key.
[13:41] sdeziel: I define them in the config file.
[13:41] mojtaba: please share that config
[13:41] or the relevant portions of it
[13:41] sdeziel: Ok. Just a sec.
[13:44] sdeziel: http://paste.ubuntu.com/p/vWnD2ktPHn/
[13:46] mojtaba: line 20 should be identical to line 7, no?
[13:47] mojtaba: you are trying to auth to node A with a user named "osmc", is that what you intended?
[13:47] mojtaba: that's from your previous paste
[13:47] sdeziel: well, yes.
[13:48] sdeziel: lines 20 and 7 are the same
[13:49] sdeziel: blackflow: Any suggestions regarding the config file?
[13:50] mojtaba: the config looks good
[13:51] mojtaba: what logs do you have on node A?
[13:51] sdeziel: how can I check?
[13:52] mojtaba: on node A: "tail -f /var/log/auth.log"
[13:52] mojtaba: then try to connect again, you should see a bunch of lines printed by sshd
[13:53] sdeziel: no such file or directory!
[13:55] mojtaba: it seems to be a debian box, so that's surprising
[13:55] mojtaba: maybe /var/log/authlog ?
[13:55] or /var/log/secure?
[13:56] ahasenack: sure. What do you want to know?
[13:56] what I asked just after? :)
[13:56] sdeziel: I have faillog and lastlog
[13:56] Oh
[13:56] Sorry
[13:57] ahasenack: it's not simulated, it's the real thing.
[13:57] git is garbage collected. So you can create objects that have hashes with no reference to them, and in the short term they will continue to exist.
[13:57] what does it mean to have a git tree? I originally thought they were branches
[13:57] so a branch is a tree with a name, sort of?
[13:57] We create the tree object first to examine it, and only after examination of the result do we create a commit that uses it.
[13:58] mojtaba: grep -sl sshd /var/log/*
[13:58] Not quite.
[13:58] A blob is a binary...blob. It's hashed to get its...hash.
[13:58] A tree is a list of entries.
[13:58] Hi
[13:58] Entries can be references to blobs or other trees.
[13:58] Entries have some other minimal metadata too.
[13:58] sdeziel: dpkg.log
[13:59] so this tree is like a temporary scratch area
[13:59] A tree is also given a hash, based on the hash of the list of its entries.
[13:59] I've installed webmin; postfix and dovecot are there, but I don't know how to configure it all
[13:59] the way we use it
[13:59] A commit contains some metadata and a reference to a tree.
[13:59] mojtaba: that's unexpected
[13:59] webmin is a really nice thing
[13:59] A branch is a reference to a commit, as is a tag.
[13:59] sdeziel: It is a Raspberry Pi
[14:00] When a commit is created, first the underlying tree is established. The metadata is added, and then the whole thing is stored and its hash retrieved.
[14:00] Usually the branch pointer is updated to point to the new commit.
[14:00] All those steps happen anyway.
[14:00] sdeziel: https://paste.ubuntu.com/p/pgvJBGXcdd/
[14:00] The importer performs the first step itself directly so that it can examine the result before it does the rest.
[14:00] mojtaba: ah, then I don't know where the authlog would be
[14:01] sdeziel: how is that one useful?
[14:01] rbasak: ok, I was thinking "simulated" as in, "if it doesn't work, let's discard it"
[14:01] mojtaba: dpkg.log is not useful, unfortunately
[14:01] sdeziel: No, I mean the auth logs.
[14:02] ahasenack: we could do that. "Discard" in this case would be just forgetting the hash, because git will garbage collect it itself later. In practice, I'm not sure if we ever do discard it.
[14:02] rbasak: a developer working with a git repo would normally not use trees like this, right?
[14:02] rbasak: right, do nothing, let it be gc'ed
[14:02] ahasenack: a developer working with a git repo normally never deals with tree objects directly.
[14:03] They get created implicitly when commits are created.
[14:03] ok, so we have just broken out one of the interim/internal steps: we create this tree to examine it before creating the commit
[14:03] (actually that gets optimised and it gets done when "git add" is called, but let's ignore that detail)
[14:03] ahasenack: right
[14:03] ok, thx :)
[14:03] mojtaba: the auth.log would have messages from sshd as to why it didn't let osmc in
[14:05] mojtaba: looking again at your ssh -v output, I now realize that your client is not proposing to auth with ~/.ssh/nodeA
[14:06] mojtaba: as a test, could you copy/move ~/.ssh/nodeA to ~/.ssh/id_rsa? Please make sure you don't have any id_rsa key in the first place
[14:07] sdeziel: Ok
[14:07] sdeziel: should I change the names of both the private and public keys?
[14:07] mojtaba: yes
[14:08] I need help troubleshooting file not found errors, to see if it's permissions causing them.
[14:08] I'm using the /var/www/html web root; files are 644 and directories are 755
[14:09] skinux: the web server's error log should give you a hint
[14:09] ^ that
[14:09] sdeziel: stop stealing my words :P
[14:09] (just kidding)
[14:09] sdeziel: It says next authentication method: password
[14:10] teward: strdup
[14:10] mojtaba: could you share another ssh -v ?
[14:12] sdeziel: https://paste.ubuntu.com/p/ZTnHZFVVVS/
[14:14] mojtaba: it's now trying to use ~/.ssh/shutterPI
[14:14] sdeziel: Yes, that is the correct key
[14:16] sdeziel: I've seen the log, check here: https://gist.github.com/skinuxgeek/4d4f86490f87805d1781782670551db9
[14:18] skinux: doesn't look like a permission error at first glance
[14:20] skinux: maybe lower error_log?
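rbasak's description above maps directly onto git's plumbing commands; a standalone illustration of "tree first, examine, commit later" (this is not the actual git-ubuntu code, just the same idea in shell):

    # write a blob, then a tree containing it, without committing anything
    blob=$(echo 'hello' | git hash-object -w --stdin)
    tree=$(printf '100644 blob %s\thello.txt\n' "$blob" | git mktree)

    # the tree now exists, unreferenced, and can be examined...
    git cat-file -p "$tree"

    # ...and only then wrapped in a commit and given a branch name
    commit=$(git commit-tree "$tree" -m 'test import')
    git branch import-test "$commit"

If the tree were discarded instead, git would eventually garbage-collect the unreferenced objects, as rbasak notes.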
[14:21] mojtaba: ls -l ~/.ssh/shutterPI
[14:22] Like what?
[14:22] sdeziel: I have that file. This is the original file, which I renamed.
[14:23] skinux: the default severity is "error" so I'd try that
[14:23] I just did, same error
[14:23] Primary script unknown. It makes no sense
[14:23] skinux: this error seems to be from PHP-FPM
[14:23] nginx uses the user www-data, which is part of the group www-data
[14:24] Should that be running as www-data too?
[14:25] skinux: by default, PHP-FPM runs as www-data:www-data too
[14:27] mojtaba: I'm still thinking about what it could be
[14:28] sdeziel: What do you mean?
[14:29] mojtaba: I have not thrown in the towel yet ;)
[14:29] sdeziel: :)
[14:29] sdeziel: It is working fine now.
[14:29] mojtaba: oh, how so?
[14:30] With your last config file.
[14:31] skinux: IIRC, PHP-FPM can log errors to syslog or a file, you might want to check there
[14:38] All the PHP log says is that the log file was re-opened
[14:39] The log doesn't say anything about any requests, which it did on the 13th. It's got to be an nginx configuration issue then
[15:35] rbasak: nacc: Final freeze is in a few days, but we're going to have a headache with nginx - it's going to be on a 'development' branch, unless we can convince the release team and the SRU team to let us jump to the 'stable' release branch directly post-release, which *could* have some blocking problems.
[15:36] teward: what blocking problems
[15:36] dpb1: no guarantee of 'no new features'
[15:36] teward: doing an MRE to a 'stable' series is usually acceptable
[15:36] as it stands, a new release of NGINX came out since the last merge
[15:36] teward: especially for LTSes
[15:36] and while I *could* do a merge there, it's still in the development branch
[15:36] teward: since the one that nacc did?
[15:37] yep
[15:37] ok
[15:37] so even if I do that merge
[15:37] teward: what's the stable target for them?
[15:37] between now and the LTS release it could be 3 more dev versions before nginx releases a stable
[15:37] (upstream, I mean)
[15:37] dpb1: pick a date between the 20th and the last day of April
[15:37] they don't set any final dates
[15:37] they just 'release when ready'
[15:37] historically it's on or around the 24th
[15:37] but after, they are going to mark it "stable"?
[15:37] in some way?
[15:38] let's say that the day they make it stable, devel was on 1.13.23 - that becomes 1.14.0
[15:38] they cut 'stable' from the then-development branch
[15:38] OK
[15:38] in 16.04 this 'worked' because between the version in xenial and post-release there were no changes except a version bump
[15:38] which the release team let in
[15:39] but I can't guarantee there won't be more features
[15:39] when that time comes, I think an SRU post-release would be an option, more painful than not having to do the SRU, but probably preferable.
[15:39] OK
[15:39] are bugfix-only things allowed past FeatureFreeze but before FinalFreeze?
[15:39] i forget ;)
[15:40] wow, there have been *two* releases
[15:40] damn
=== mgagne_ is now known as mgagne
[16:17] can one control services with systemctl on Ubuntu 16 without using sudo?
[16:17] not safely, no
[16:18] meaning, right now I have the unit set for a particular user, but it still asks for authentication
[16:18] I might need to add a NOPASSWD entry in the sudoers file, huh?
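Back on skinux's error: "Primary script unknown" is PHP-FPM reporting that the SCRIPT_FILENAME it received from nginx does not resolve to a readable file, so the usual suspects are a root directive or fastcgi_param mismatch rather than classic 644/755 permissions. A typical Ubuntu-flavoured block for comparison (the socket path varies by PHP version and is an assumption here):

    server {
        root /var/www/html;
        index index.php index.html;

        location ~ \.php$ {
            # Ubuntu's snippet sets SCRIPT_FILENAME to $document_root$fastcgi_script_name
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }
    }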
=== JanC_ is now known as JanC
[16:19] DammitJim: systemd allows having user services
[16:20] that's what I thought, sdeziel, but I don't know why it's asking for authentication
[16:20] DammitJim: might be useful if said service doesn't require more privileges than the user you want to interact with
[16:20] DammitJim: have you tried "systemctl --user"?
[16:21] let me try
[16:21] I was logging in as the user and then just doing: systemctl start
[16:21] the man page says: --user: "Talk to the service manager of the calling user, rather than the service manager of the system."
[16:21] DammitJim: by default, --system is implied
[16:22] weird... I get: Failed to connect to bus: No such file or directory
[16:24] DammitJim: is the service unit in the per-user config dir? See man systemd.unit for those paths
[16:25] let me look that up... I just have the service unit defined in /etc/systemd/system
[16:25] DammitJim: I haven't used user services to date, so all this is based on assumptions/man page reading ... in other words, potentially wrong/erroneous
[16:26] thanks
[16:27] np
[17:17] Hello all. Is there a certain kernel version required to use losetup? I am trying to create a loop disk and it does not work... basically I just need to carve out some space on my VPS to keep for SFTP, and I cannot partition the actual disk
[17:19] Or is there another way to make a 'virtual' disk without losetup?
[17:20] arrrghhh: and you never plan on rebooting your vps?
[17:20] nacc, I assume I can set up the loop disk via fstab?
[17:20] arrrghhh: i mean, it's not persistent
[17:21] arrrghhh: so every reboot, whatever was in that memory-backed disk is gone
[17:21] arrrghhh: what did you mean by 'SFTP space'?
[17:21] Oh, I didn't realize I was creating a ramdisk
[17:21] nacc: isn't losetup file-backed with -j ?
[17:22] arrrghhh: you need a device for quota, I guess?
[17:22] nacc, bad term I guess. I basically just need space for SFTP. The VPS is doing other tasks, and I need some way to 'reserve' ~30gb of space for SFTP purposes
[17:22] sdeziel, basically yes
[17:22] blackflow: i didn't see them specify -j :)
[17:22] blackflow: by default, it uses a loop device (iirc)
[17:23] nacc, it would be backed by a .img file
[17:23] this is the guide I was (attempting) to follow http://www.linuxandubuntu.com/home/creating-virtual-disks-using-linux-command-line
[17:24] arrrghhh: this guide says to create partitions, that's not needed
[17:24] ah, it's -f, not -j
[17:24] sdeziel, I was following the 1gb portion of the guide, so a single partition...
[17:25] blackflow: i think you were right on -j, -f is for find
[17:25] but yeah, either way it doesn't make a difference to me, I just need some way to 'reserve' ~30gb of space on the VPS
[17:25] nacc: no, it's the other way around: -j shows, -f associates
[17:25] had to look it up, it's been a while since I used something like that. Nowadays I just use ZFS and zvols
[17:25] blackflow: ah, confusingly written manpage :)
[17:25] indeed.
[17:25] arrrghhh: assuming you run an Ubuntu kernel, the kernel part should let you use loop devices
[17:26] I think xfs has built-in quotas as well
[17:26] sdeziel, it is Ubuntu but the kernel is ancient. This VPS is .... cheap. It's Ubuntu 16.04, but I am on some ancient 2.6.32 kernel
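Closing out the earlier user-services thread: sdeziel's hedged pointer is sound. Per systemd.unit(5), per-user units live under ~/.config/systemd/user/, not /etc/systemd/system, and DammitJim's "Failed to connect to bus" is the classic symptom of reaching the user manager from a session where XDG_RUNTIME_DIR is not set (e.g. after a bare su instead of a real login). A sketch, with the unit name and command made up:

    # ~/.config/systemd/user/myapp.service
    [Unit]
    Description=Example per-user service

    [Service]
    ExecStart=/home/dammitjim/bin/myapp

    [Install]
    WantedBy=default.target

    # then, logged in as that user (no sudo needed):
    #   systemctl --user daemon-reload
    #   systemctl --user enable --now myapp
    # and to keep it running while logged out:
    #   loginctl enable-linger dammitjim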
[17:26] eew
[17:26] arrrghhh: sounds like an OpenVZ host or something
[17:27] ^^ yep
[17:27] xen with a host-based kernel
[17:28] 2.6.32 sounds like RHEL/CentOS 6
[17:28] Description: Ubuntu 16.04.4 LTS
[17:28] Linux server 2.6.32-042stab125.5
[17:28] arrrghhh: if that's indeed an OpenVZ kernel, then I don't think you can use loop devices as is. See https://www.jamescoyle.net/how-to/2132-mount-a-loop-device-in-an-openvz-container
[17:28] stab, yeah, openvz
[17:28] CloudLinux actually
[17:29] 2.6.32-openvz-042stab128.2 is current, so your host needs maintenance :)
[17:29] sdeziel, doesn't surprise me haha. so is there any other way to achieve what I am trying to do?
[17:30] get a decent KVM-based VPS service? :)
[17:30] $$$
[17:30] How much
[17:30] how much is my current setup? dirt cheap. like stupid cheap.
[17:30] yeah, how much
[17:30] I don't really even need a VPS, but damn this was so cheap. $8/year
[17:30] omg. that really is cheap.
[17:31] yea, I just added 50gb for $5/yr... lol
[17:31] arrrghhh: I'm not even sure you are allowed to mount an ext4 FS in such containers
[17:31] hmph
[17:32] arrrghhh: maybe you could use nbd with qemu to mount a file as a block device
[17:32] arrrghhh: but it's been ages since I touched OpenVZ
[17:32] yea, it is very limited...
[17:32] no idea if that's possible under ovz, but it's a way to get a block device.
[17:32] ok, I'll look into it, thx
[17:33] arrrghhh: qemu-nbd, but since it needs to create a device under /dev I doubt it'd be possible under ovz
[17:34] but eh.... going back to your orig requirement, are there user quotas available?
[17:35] if basic quota works then yeah, no need for a blockdev
[17:35] I have an overall quota for the whole VPS, I guess. I'm not sure about user quotas
[17:36] arrrghhh: can you by any chance attach other mounts or devices to the VPS?
[17:37] sdeziel, looking at the webUI now, I do not see a way to do that. When I added the 50gb, it just showed up on /
[17:40] arrrghhh: OpenVZ supports multiple quota levels (per VPS and per user/group inside the VPS)
[17:40] arrrghhh: https://wiki.openvz.org/User_Guide/Managing_Resources#Turning_On_and_Off_Second-Level_Quotas_for_Container
[17:40] maybe you have it enabled in yours
[17:42] rbasak: you don't happen to be around?
[17:44] heh. seems to be disabled, I'm betting this is why: "The value for it should be carefully chosen; the bigger value you set, the bigger kernel memory overhead this Container creates."
[17:46] I'll open a ticket with the VPS provider and see if they have any solutions or if this can be enabled... otherwise I might just have to deal with the space getting consumed; maybe I can set a quota on nZEDb
[17:48] I don't know how they can offer both a VPS and some support for $8/year ...
[17:54] Let's just say their response time leaves something to be desired, and I haven't really attempted any 'support' yet. For example, they took my whole $5 for the additional 50gb - instead of adding it right when they took my money, I had to wait a few days and open a ticket to prod them into getting it done...
[18:12] rbasak: nacc: dpb1: the release team accepted 1.13.12 into proposed, so we're getting there. (No more issues to worry about, for now)
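For a kernel that does support loop devices (not this OpenVZ one, as sdeziel points out), the guide's whole procedure collapses to a few commands with no partitioning needed; paths and size are placeholders:

    # create a sparse 30 GB backing file and put a filesystem in it
    sudo truncate -s 30G /srv/sftp.img
    sudo mkfs.ext4 /srv/sftp.img      # mkfs asks to confirm since this is not a block device

    # mount it; mount sets up the loop device behind the scenes
    sudo mkdir -p /srv/sftp
    sudo mount -o loop /srv/sftp.img /srv/sftp

    # or, the flags debated above: -f attaches to the first free device, -j lists by backing file
    sudo losetup -f --show /srv/sftp.img
    losetup -j /srv/sftp.img

    # persistent across reboots via fstab:
    # /srv/sftp.img  /srv/sftp  ext4  loop  0  0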
[18:13] Hello everyone, maybe you did not see my message earlier today (CET time): I have a situation for which I need your expert help, as digging out how to fix this has been very unproductive for the last couple of days
[18:13] I have an Ubuntu 16.04 minimal server image using LVM partitioning and xfs which boots perfectly well and was generated using OpenStack DIB (disk image builder).
[18:13] However, when I try to add a package which requires an initramfs rebuild (like overlayroot or others) I end up not being able to reboot the machine
[18:13] I have nailed the issue down to the fact that LVM does not seem to be taken into account, because I tried without LVM on xfs+extX successfully, while LVM + any of extX, xfs fails
[18:13] Of course I have tried adding the modules in /etc/initramfs-tools/modules without any success
[18:13] What is also very strange is that my initial and working initrd file is 9 MB and the failing regenerated one is 32 MB. Trying dep instead of most in the conf file makes the size go down to 15 MB, which is still twice the initial working version.
[18:13] Any clue what could be the cause, where to look and how to fix this?
[18:13] One other thing is that my first initrd file in the DIB image was generated using dracut, which is also present in my packages list, so maybe it somehow also interferes with initramfs-tools? Just a wild guess
[18:15] olivierb-: do you have the corresponding -extra package for your kernel installed? I've seen that cause boot failures plenty..
[18:16] sarnold: let me check this
[18:18] yes, it is installed in the image too
[18:19] dang. there goes the easy solution.
[18:19] how far into the boot does it get?
[18:19] seems like it cannot mount the rootfs, which is on an LVM/xfs partition
[21:58] Lately when I reboot my server, several services fail to start with the error "cannot bind: address in use". I haven't made any configuration changes since everything worked as expected; I've only performed updates. At first it was just Dovecot that had the issue, and today after installing more updates apache and ssh failed to start as well because the address was in use.
[21:58] you can use netstat's -p flag to find out which process already has those sockets bound
[22:05] sarnold: thanks. Looks like my problem this time is that the interface isn't up for some reason. It was, just before rebooting.
[22:05] mecotri: and you got back "address in use" errors for that? o_O
[22:08] sarnold: I got that for dovecot, and got "cannot bind address" for apache and ssh. I wrongly lumped them together as part of the same issue. Any ideas on seeing what's keeping the interface from coming up? My static addresses are set using /etc/netplan/01-netcfg.yaml
[22:09] mecotri: could you file a bug against dovecot? there's a chance its systemd configuration is using the wrong "make sure networking is up" directive
[22:09] (for some reason systemd seems to have immense trouble with this. :( )
[22:11] is the interface not coming up, or is some dovecot thing not binding to it?
[22:14] sarnold: Will do.
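sarnold's netstat suggestion, concretely; 143 here is just an example port (IMAP, dovecot's usual one):

    # which process already holds the port? (-p needs root to see other users' processes)
    sudo netstat -ltnp | grep ':143'

    # the same with the newer tool:
    sudo ss -ltnp '( sport = :143 )'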