[00:31] guys, any idea how i can change boot order from maas?
[00:31] i just rebooted the servers and they all defaulted to pxe boot
[00:31] i just want them to boot the OS
[00:32] well, the boot order is a computer thing
[00:33] if you boot them and it pxe boots, maas should have nothing for it, so it should then boot from disk
=== chat is now known as Guest47969
[01:05] bradm: well, booting from disk is failing, which is strange
[01:05] and no idea why maas set the boot order for pxe first
[01:06] gunix: if it's failing to boot from disk I'd say that has nothing to do with maas, but more with what's on the disk
[01:11] gunix: trying to dual-boot?
[01:11] RoyK: I think gunix is just trying to deploy a machine with maas, nothing fancy
[07:07] Good morning
=== Aztec03 is now known as SmokinGrunts
=== SmokinGrunts is now known as Aztec03
=== chat is now known as Guest78574
[08:37] Hmm, I created the /var/log/journal directory and my server crashed again tonight. There is a file in it now that I wanted to check to see what happened
[08:37] but the file seems to be pretty much binary
[08:38] ah, i need to read it with journalctl -b
[08:38] 1
[08:39] hmm, but still there are no old messages in there
[08:39] only from the last boot if I do:
[08:39] journalctl -b 1
[08:39] /var/log/journal/35a754864d1e47469f9af2c0262f700e/system.journal
=== _ruben_ is now known as _ruben
[10:23] adac: did you check the standard syslog? journald is forwarding to it by default.
[10:25] blackflow, yes, in syslog I see the time before the machine stopped/was not reachable
[10:25] but I cannot see what happened
[10:25] only when the machine is rebooted are there entries again
[10:25] before that there is no error indicating what is happening
[10:40] adac: sounds like a hard crash reboot, and when those happen it's usually due to faulty hardware like memory
[11:47] bradm: RoyK: sarnold: The first deployment was ok but something happened after that. Now all deployments are failing.
I noticed there was an issue when I tried to reboot the servers and all failed to boot.
[11:47] this is highly confusing
[12:34] Hello, if I want to keep a process running on the remote machine, even after exiting ssh, should I run screen on my local machine and ssh in screen to the remote machine and run my command? Is there any other way?
[12:35] mojtaba: nohup
[12:35] ssh machine
[12:35] nohup
[12:35] logout
[12:35] ahasenack: Is it like screen?
[12:36] no, it will just prevent the process from getting the HUP signal when you logout
[12:36] if it's something interactive, or something that prints stuff to the console that you want to see later,
[12:36] then I recommend using screen/byobu on the remote machine
[12:36] ssh machine
[12:36] byobu
[12:36] start process
[12:36] ctrl-a d
[12:36] logout
[12:36] ahasenack: thanks
[12:48] ahasenack: do you have any idea why machines that were previously deployed by MAAS are now failing deployment?
[12:50] gunix: not without more data, no
[12:51] ahasenack: what data would you need?
[12:51] start by clicking on such a machine in the maas gui, inspect the data you get there. Notably installation logs
[12:51] there are no installation logs.
[12:51] if that's empty, try watching the machine's console while it deploys until it fails
[12:52] and the events tab in maas
[12:52] you might also want to hang out in the #maas irc channel, that's more specific than ubuntu server
[12:52] I idle there as well
[12:52] ahasenack: events: https://bpaste.net/show/811673766cfc
[12:53] nobody on #maas is answering
[12:53] i think they have no idea how the software works cause when i ask more complex questions it's silence
[12:53] but they are answering the easy questions :D
[12:54] watch the console then
[12:54] it's usually what I do when debugging such a problem with my nodes
[12:54] ahasenack: this is the console: https://ibb.co/nc9q9n
[12:55] it's stuck there.
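ahasenack's nohup recipe above (12:36) works because nohup makes the child ignore SIGHUP, the signal delivered when the ssh session and its controlling terminal go away. A minimal local sketch of that behaviour, with `sleep 30` standing in for the long-running command (nothing here beyond the nohup idea itself is from the conversation):

```shell
#!/bin/sh
# Show that a nohup-started process survives SIGHUP.
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
sleep 1                      # let nohup set the SIGHUP disposition and exec
kill -HUP "$pid"             # simulate the hangup delivered at logout
sleep 1
if kill -0 "$pid" 2>/dev/null; then
    status="survived HUP"    # still alive: nohup set SIGHUP to ignored
else
    status="died"
fi
kill "$pid" 2>/dev/null      # clean up the stand-in process
echo "$status"
```

For anything interactive, screen or byobu on the remote machine remains the better tool, as described above, since you can detach with ctrl-a d and reattach later.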
[12:55] it doesn't tell me if that's the first boot or second boot
[12:55] during deployment there is a first boot, where the installer comes up and installs the ubuntu release on the disk
[12:55] then it reboots into the newly installed system
[12:56] do you see evidence in the console, as messages scroll by, that it was able to reach the network to install packages and download stuff?
[12:58] https://bpaste.net/show/00edf81fcd8b
[12:58] aaand it failed.
[12:59] middle of that photo is a line that reads 'object XXXX from LD_PRELOAD cannot be preloaded (cannot open shared object file). ignored'
[12:59] blackflow, thanks for the hint, I will contact my hosting provider!
[13:00] TJ-: i saw that line but i have no idea how to interpret it
[13:00] rbasak: if I could pick your brain for a minute wrt a packaging question
[13:00] gunix: nor me, but it was the only anomaly visible
[13:01] TJ-: yeah, and it doesn't tell me anything
[13:01] ahasenack: from my point of view it seems like it is getting the image, since that is the only way cloud-init could be obtained.
[13:01] rbasak: package contents: https://pastebin.ubuntu.com/p/vnKrkqNczH/
[13:01] rbasak: the files under nvml_dbg are triggering an ldconfig call, flagged by lintian as undesirable
[13:01] rbasak: upstream told me this about those files:
[13:01] "Files under nvml_dbg are builds with debugging symbols, logging, asserts
[13:01] and expensive checks that we normally don't want users to run with."
[13:02] rbasak: I'm wondering if they should be shipped in another package then, and what its name could be
[13:02] libfoo-extra-debugging? Is there a precedent for this?
[13:02] gunix: hm, no, cloud-init data is passed down another way
[13:02] gunix: do you see it reboot once, after doing a bunch of stuff?
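An aside on the console line TJ- spotted at 12:59: on glibc systems that warning comes from the dynamic linker whenever LD_PRELOAD (or /etc/ld.so.preload) names an object it cannot open, and as the trailing "ignored" says, the program still runs. It can be reproduced deliberately with a bogus path (the path below is invented), which fits TJ-'s read that the message was an anomaly but likely noise rather than the cause of the failed deployment:

```shell
#!/bin/sh
# Point LD_PRELOAD at a nonexistent object: the loader warns, then carries on.
out=$(LD_PRELOAD=/no/such/lib.so /bin/true 2>&1)
echo "$out"   # glibc prints "...cannot be preloaded (cannot open shared object file): ignored."
LD_PRELOAD=/no/such/lib.so /bin/true 2>/dev/null && echo "still ran fine"
```

The command's exit status is unaffected, which is why such a line on its own rarely explains a boot failure.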
[13:03] gnuoy: anyway, in general, the debugging of such failures is either around the actual image installation on disk (that's the first boot), or networking problems
[13:03] it does run apt-get at some point, so it needs to be able to reach the internet or a mirror
[13:03] ahasenack: no, i don't see it reboot
[13:04] try to ssh to the node while it's stuck in that powered-on state
[13:04] ahasenack, I think gunix and I are different people
[13:04] if you added your key to maas, you should be able to ssh in as ubuntu
[13:04] gnuoy: hah
[13:04] :)
[13:04] ahasenack: i will recheck the network config but that is ok afaik
[13:05] try sshing in and poking around
[13:05] also make sure you selected the right disk for your root partition in the node in maas
[13:05] if there is more than one
[13:05] it also pays to check the partitioning in general, and network config, in the node's page
[13:06] ahasenack: network is ok
[13:06] ahasenack: looking
[13:08] ahasenack: this is the exact config that was working a few days ago
[13:08] that's why this is highly confusing
[13:09] the server is still stuck there btw ... at deleting temporary files
[13:09] gunix: if it was commissioned, then the hardware view maas has of the node changed to reflect the current state
[13:09] did you ssh in?
[13:09] ahasenack: for a first iteration I'd remove them from the build. I think it's incorrect to ship them in the -dev package.
[13:10] was your maas installed as a snap?
[13:10] rbasak: they are more than just debugging symbols
[13:10] ahasenack: the package can be fully functional without them, right?
[13:10] You don't have to provide every development feature upstream provides in a package build.
[13:10] rbasak: yes, but supposedly upstream would want them somewhere when debugging something. They are installed by "make install"
[13:11] Upstream wouldn't use the distribution package build to debug though, would they?
[13:11] ahasenack: i installed using "apt install maas" on ubuntu 16.04
[13:12] rbasak: it's a new package, they might ask users to do stuff
[13:12] I'll propose we remove those files, but wanted to get another opinion
[13:12] especially if there was a precedent for this in ubuntu, an extra-debugging package
[13:12] I imagine the right way to package both, if you really want to package both, would be to provide an alternate libfooX binary package with some other name that conflicts with and provides it, so they could ask users to switch to that one instead.
[13:12] there is no need to conflict
[13:13] and those extra libs are not used since they are not in the linker's path. I suppose upstream would ask users to set LD_LIBRARY_PATH before some debugging task
[13:13] I don't think that's appropriate for a distribution package.
[13:13] but their placement is enough to trigger debhelper's automatic ldconfig call
[13:13] But this is perhaps a discussion for #ubuntu-devel.
[13:20] ahasenack: I think my issue with weird things like this is that the package isn't a special snowflake and users shouldn't have to know about special per-package snowflakes before being able to do things like this. The (reasonable) desire to have a debugging version with extra assertions but lower performance is a generic one. The distribution should provide a generic solution. And the mechanism we
[13:20] have for that is drop-in replacement packages via conflicts/provides. Not weird stuff to do with getting the user to override paths.
[13:20] The point of a distribution is to unify these things.
[13:21] so there is a precedent with special builds that are drop-in replacements and have more logging/debugging/checks?
[13:22] I'm not sure about specifically for libraries.
[13:22] But there are plenty of packages that provide varying functionality depending on the user's package selection.
[13:22] exim for example.
[13:22] People in #ubuntu-devel may know more
[13:23] I'm also unsure about simultaneously providing a concrete package and a virtual package that conflicts with it.
[13:24] I'd want to read and/or test a bit more before committing to that.
[13:24] In the meantime, IMHO it's not worth blocking to figure this out.
[13:24] I asked them how they would expect end users to use these files, and if we can drop them
[13:25] in the meantime, a test build confirmed the lintian ldconfig warning is gone when I remove them
[13:25] This kind of thing is very common.
[13:25] Upstreams need to understand that the point of a distribution is to unify things, and they shouldn't expect to be able to do every weird thing that they do in a distribution package.
[13:26] also, that nvml_dbg dir mixes devel and runtime builds of the library
[13:26] I assume you could link to a debugging version of the library and get more info in your app
[13:26] so that means two extra packages per library: dev and runtime
[13:26] if we go down that route
[13:27] dev, normal runtime, normal runtime with debugging symbols, debugging runtime, debugging runtime with debugging symbols.
[13:27] It might be possible to collapse the last two down, as a debugging runtime without debugging symbols makes no sense.
[13:28] it might, if it's just about extra logging
[13:28] If you want to do any of this, I would speak to an archive admin now on how you plan to arrange everything to save any wasted effort.
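The drop-in-replacement mechanism rbasak describes is normally expressed in debian/control with Conflicts/Replaces/Provides. A hypothetical sketch only; the package names and descriptions are invented for illustration and not taken from the actual packaging under discussion:

```
Package: libpmemlog1
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: persistent memory log library (production build)

Package: libpmemlog1-dbgbuild
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Conflicts: libpmemlog1
Replaces: libpmemlog1
Provides: libpmemlog1
Description: persistent memory log library (debugging build)
 Built with asserts, logging and expensive checks enabled; a drop-in
 replacement for libpmemlog1, not intended for production use.
```

Installing the -dbgbuild package would then swap the library in place rather than asking users to juggle LD_LIBRARY_PATH; as noted in the conversation, whether a concrete package may both conflict with and provide another this way is exactly the point worth checking with an archive admin first.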
[13:28] I also asked them to confirm whether this extra logging can be enabled via env vars instead
[13:29] I'm hoping I can just drop that dir
[14:12] hello
[14:13] coreycb: I tested the changes you submitted on mistral (https://bugs.launchpad.net/cloud-archive/+bug/1757433) and it works for me on Xenial for both the pike and queens versions
[14:13] Launchpad bug 1757433 in mistral (Ubuntu Artful) "[SRU] mistral-event-engine conflicts mistral-event" [High,Fix committed]
[14:14] coreycb: however I'm not familiar with launchpad and I don't get how I am supposed to "change the tag from verification-pike-needed to verification-pike-done"
[14:51] pgaxatte: edit the tags (below the description box)
[14:54] TJ-: ah yes, on the original comment :)
[14:55] TJ-: i mean the first one, which is the description :)
[14:56] TJ-, coreycb: done!
[15:09] hm, I have this lintian warning that I'm trying to override:
[15:09] W: libpmemlog-dev: manpage-has-errors-from-man usr/share/man/man3/libpmemlog.3.gz 235: warning: macro `..,' not defined
[15:09] I created debian/source/lintian-overrides
[15:09] with
[15:09] libpmemlog-dev binary: manpage-has-errors-from-man
[15:09] rebuilt
[15:09] but the warning is still there
[15:09] when I run "lintian" from the build directory
[15:09] do I need to install the package?
[15:22] hello
[15:27] maybe you guy will have some idea. I've got two machines, ubuntu 12.04 and ubuntu 16.04. both have the exact same sudoers config file, and on both machines the users have almost identical groups (with one exception of one group, the one for the virtual machine). now when I try to run this command: /usr/bin/sudo -k -u lolek /bin/true on U16.04 I don't get a pw prompt, while on U12.04 the prompt is there
[15:28] *guys
[15:38] lolek: have you looked in /etc/sudoers.d/?
[15:38] lolek: also, remember that sudo caches the fact that you entered the password. There's some default expiry time set.
[15:39] rbasak: yes, but in both cases the sudoers file has the include for sudoers.d commented out
[15:39] rbasak: also, the cache is not a problem here, as I'm using the -k switch
[15:39] I don't know then, sorry.
[15:40] lolek: how about /etc/pam.d/sudo ?
[15:41] TJ-: checked, the same :/
[15:41] lolek: check /var/log/auth.log on both machines for differences/clues
[15:42] right ... hmm, let me find it
[15:42] I should also mention that u12.04 has got sudo 1.8.3p1 while 16.04 has got version 1.8.16
[15:43] lolek: I cannot reproduce your issue here; on 12.04 and 16.04 I see a password prompt
[15:45] TJ-: you've used your current user, right?
[15:46] lolek: I've tried several user accounts
[15:46] :/
[15:46] you've got the original sudoers file?
[15:50] TJ-: ok, that's interesting .. in auth.log somehow on 12.04 the user is represented as having id 1000 but on u16.04 the id is 0
[15:50] o.O
[15:50] lolek: well, that'd do it!
[15:51] TJ-: well yeah, but why..
[15:51] lolek: well, start off with 'getent passwd lolek'
[15:51] when I do id lolek on 16.04 and 12.04 the id is not 0
[15:51] lolek: is sudo or /bin/true setuid root?
[15:52] nope
[15:52] both machines, same rights
[15:55] TJ-: do you have 18.04 at hand?
[15:55] lolek: yes
[15:55] as I checked a clean 18.04 installation and it's the same, it doesn't ask for a pw
[15:55] and also in the auth.log the user id is... 0
[16:00] lolek: as i said earlier, it's /etc/pam.d/sudo -- on 12.04 it's 'auth' but on 16.04+ it's 'session'
[16:01] oh
[16:01] missed this one
[16:01] hmm
[16:04] TJ-: well, I've changed it to auth on 18.04 and it still doesn't ask for a pw :/
[16:07] I'm wondering where it's taking that uid = 0 from
[16:22] lolek: what does the 16.04 /etc/sudoers look like, can you pastebin it?
[16:22] TJ-: sure, a sec
[16:23] TJ-: http://pasteall.org/889746
[16:23] here, it's the default one
[16:23] tbh it's logical that it won't ask for a pw if I want to switch to the current user
[16:23] but the question is why on 12.04 it asks for it
[16:23] a bug?
[16:27] I wonder if it's been added since. I checked the Debian changelog and didn't see a mention, but there were several upstream releases in between so they may have brought it in
[16:31] mhm
[16:32] lolek: is sudo-ldap installed on either of them?
[16:33] I don't see any sudo-ldap package, so no
[18:04] Hello, I am going to write a script to make a backup when a specific computer joins the network. I want to repeat the process until the directory is fully backed up. Could you please let me know how I can improve this code? http://paste.ubuntu.com/p/TbwKWcT5rz/
[18:19] mojtaba: why don't you just ping 192.168.2.17 ?
[18:19] you don't even have to sleep; you can set, say, 30 seconds between two pings
[18:20] blackflow: ping -i 300 ?
[18:20] that's a bit too long
[18:21] one icmp packet every 30 seconds should be more than enough.
[18:21] blackflow: ping -i 30 ?
[18:23] blackflow: Do you know how I should check if rsync has been successful?
[18:23] blackflow: should it be rsync && exit or rsync && break?
[18:23] mojtaba: yes, but actually I think you'll have to put that into a loop and send one packet only, to have ping exit with success or failure
[18:24] so something like ping -c1 -i 30 in a loop, and when it exits with success, the host has replied
[18:24] blackflow: Can I grep the exit of ping?
[18:26] exit code? no, it's available in something like $? depending on your shell
[18:26] blackflow: it is bash
[18:27] you can also do things like while true; do ping -c1 192.168.2.17 > /dev/null && break; done
[18:27] that will wait until ping exits with success. oh, and I forgot -i
[18:28] So I have to replace this with an until statement in my code?
[18:29] yup
[18:29] blackflow: what about the rsync part?
should it be rsync && break?
[18:29] rsync should also return success or failure, so you can test its exit code
[18:29] blackflow: How can I test its output?
[18:30] however, note that rsync can report an error on things like files changed during transfer, but otherwise have a successful transfer
[18:30] mojtaba: $? contains the last exit code
[18:30] blackflow: Thanks. I will check it.
[18:32] mojtaba: also consider using ~/.ssh/config instead of ssh options on the command line
[18:32] blackflow: sure, thanks.
=== miguel is now known as Guest59626
[22:31] I am after a minimal ISO installer that doesn't need internet access. Is there any way to achieve that, similar to CentOS?
[22:49] boxrick, that's what the ubuntu-server ISO is. It's a CD with about 300 packages, enough to do a minimal install
[22:50] I always thought it had a fit when it didn't get any network
[22:50] Perhaps I am wron
[22:50] wrong* then
[22:51] boxrick, I admit, I never used it when completely disconnected from the internet
[22:52] boxrick, perhaps use this ISO as a debootstrap source
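Putting blackflow's suggestions from the 18:19-18:30 exchange together, mojtaba's backup script could be structured like this minimal sketch. The host 192.168.2.17 comes from the conversation, but the paths, intervals, and rsync flags are placeholders/assumptions:

```shell
#!/bin/sh
# wait_for_host: send one ping at a time until the host answers.
wait_for_host() {
    # -c1: a single packet; -W2: give each attempt a 2-second timeout
    until ping -c1 -W2 "$1" >/dev/null 2>&1; do
        sleep 30            # pause between attempts, per the discussion
    done
}

# backup: retry rsync until it exits 0, i.e. the transfer succeeded.
backup() {
    until rsync -a "$1" "$2"; do
        sleep 60
    done
}

# Usage (commented out so this sketch can be sourced without side effects):
# wait_for_host 192.168.2.17
# backup /home/me/data/ backup@192.168.2.17:/backups/data/
```

Note blackflow's caveat: rsync can exit non-zero for recoverable conditions (for example exit code 24, "files vanished during transfer") even when the copy is otherwise fine, so a production version might treat such codes as success.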