=== ideopathic_ is now known as ideopathic [07:06] Good morning [08:18] Hello, I'm having a bug on UbuntuServer 16.04-DAILY-LTS on Azure. Looks like cloud-init is broken. Does it ring a bell? [08:19] somehow the provisioning is not going well between waagent and cloud-init === chat is now known as Guest38392 [08:34] Guys my server suddenly halted in the night, but there is nothing that indicates what happened in my /var/log/syslog [08:34] is there another place I can check [08:36] adac: dmesg? [08:38] lordievader, hmm there is no date on dmesg output [08:38] adac: There is if you run `dmesg -T`. [08:40] lordievader, that did the trick! :D [08:40] Does it give some hints? [08:40] lordievader, but there is only data showing since the last boot this morning it seems [08:41] There might be more in `/var/log/dmesg*` [08:43] lordievader, hmm actually there is only one file there, namely /var/log/dmesg itself, and that one is empty [08:44] What version of Ubuntu are you running? [08:48] lordievader, https://pastebin.com/FxKqXdfE [08:49] adac: Ah, `sudo journalctl -b -1` might help you. [08:50] lordievader, hmm it says: Specifying boot ID has no effect, no persistent journal was found [08:50] Hmm. Stupid default. [08:50] I think I need to tweak some things then [08:51] Yes, you want to configure systemd-journald and logrotate. [09:00] lordievader, kk thanks [09:09] adac: if you create the directory /var/log/journal you will get persistent journalling, so you can look up messages from the previous boot with 'journalctl -b -1' [09:09] ducasse, thanks for the hint!
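ducasse's persistent-journal tip, as a short command sequence (a sketch for systemd-based releases like 16.04; run as root, and note the previous boot only becomes visible from the next boot onward):

```shell
# Create the directory journald uses for persistent storage
mkdir -p /var/log/journal
# Apply the packaged ownership/ACLs so group "adm" can read the journal
systemd-tmpfiles --create --prefix /var/log/journal
# Restart journald so it starts writing there (a reboot also works)
systemctl restart systemd-journald
# From the next boot on, the previous boot's log is one flag away:
journalctl -b -1
```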
[09:21] so ftr, cloud-init doesn't execute a custom script if there is a script stuck in the same/previous runlevel https://serverfault.com/questions/852946/aws-userdata-script-in-cloud-init-not-running === ShriHari is now known as IDK === IDK is now known as Guest88476 === Guest88476 is now known as IDKIDKIDK === ShriHari is now known as Ubuntu_admin [10:59] can anybody here help me with maas cause nobody there is answering [10:59] ? [11:02] what's maas [11:04] !maas [11:04] Metal as a Service is a dynamic server provisioning service for scalability. See more about it at https://maas.ubuntu.com. [11:49] gunix: try #maas [12:04] :) [12:04] good afternoon [12:10] cpaelzer: I saw how autopkgtest created the console on ttyS1, or tried to [12:10] cpaelzer: there should be a more modern way to do it :) [12:10] cpaelzer: reading the bug you pointed me at, now [12:10] cpaelzer: the other two issues I had were: [12:11] a) on ppc64el, for some reason it booted off vdb, not vda. So I had to change autopkgtest's assumption that the iso with the test setup script was in vdb and change that to vda [12:11] b) I had to pass -m ports.ubuntu.com/blabla, otherwise it would try to find ppc packages in archive.u.c [12:12] ahasenack: TL;DR non-x86 could need some improvement in autopkgtest buildvm (IMHO) [12:12] so true [12:12] I wonder how britney does it [12:13] cpaelzer: so, are you able to run ppc autopkgtests? I seem to recall you saying so in one or two old MPs. Or was that using bileto?
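The ppc64el pain points above (boot device, ports mirror) start from building a non-x86 test image; a rough sketch using autopkgtest's stock tooling, where the release, package name, and image filename are illustrative:

```shell
# Build a ppc64el cloud-image VM for autopkgtest; non-x86 binaries live
# on ports.ubuntu.com rather than archive.ubuntu.com, hence the mirror.
autopkgtest-buildvm-ubuntu-cloud -r bionic -a ppc64el \
    -m http://ports.ubuntu.com/ubuntu-ports
# Then point autopkgtest at the resulting image (qemu-system-ppc64
# must be installed; the image name follows the buildvm tool's output):
autopkgtest mypkg_1.0-1.dsc -- qemu autopkgtest-bionic-ppc64el.img
```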
[12:15] ahasenack: the CI infra uses openstack and custom images for it [12:15] ahasenack: most of the time you get around by (locally) just using lxd [12:15] ahasenack: I sometimes fixed up my VM images, but I'm not experienced enough with it to have a great howto or gist about it [12:16] ahasenack: would lxd on ppc be an option for your case? I think no, as you are mounting [12:16] I have to check [12:16] ahasenack: we can either try to fix up your image to work correctly [12:16] I'm trying to reproduce a bug that happens only during migration so far [12:16] the closest I have is a vm, since they use openstack as you said [12:16] ahasenack: or you run what the test would run in a normal bionic ppc vm spawned via uvtool [12:17] ahasenack: I can try to make a working bionic image (again) if it is needed (a.k.a. if reproducing in a uvt VM fails) [12:17] I thought that autopkgtest would just work, given that we use it in migrations [12:18] ahasenack: it does just work if pre-setup is done :-) [12:18] that's how I started down this road, but it's definitely not as simple [12:18] nothing ever is [12:19] at this point it sounds more like a weekend project [12:37] I have a question regarding purge and remove behavior (deb package) [12:37] there is an MOTD cache in /var/cache//bla.cache [12:37] it is removed in purge, but not with a simple "remove" [12:38] I think it should be removed with "apt remove" as well, because otherwise the motd will keep being displayed [12:38] even though the script that generated it no longer exists [12:38] thoughts? [12:38] postrm is this: [12:38] if [ "$1" = purge -a -f "$CACHE_FILE" ]; then [12:38] rm "$CACHE_FILE" [12:38] fi [12:39] that particular cache/motd is showing the state of the system regarding livepatch status [13:14] ahasenack: yes I'd agree to remove it on remove as well [13:15] ahasenack: but I'd also retrigger a creation of the cache on that [13:15] ahasenack: is that possible?
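The postrm quoted above only cleans up on purge; here is a minimal sketch of the variant being discussed, which also removes the cache on plain remove. The function wrapper and the case layout are illustrative, not the package's real code:

```shell
# Hypothetical postrm logic: drop the MOTD cache on "remove" as well as
# "purge", so a stale cache never outlives its generator script.
# CACHE_FILE is expected to hold the cache path, as in the original.
cleanup_motd_cache() {
    # $1 is the dpkg maintainer-script action
    case "$1" in
        remove|purge)
            if [ -f "$CACHE_FILE" ]; then
                rm "$CACHE_FILE"
            fi
            ;;
    esac
}
```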
[13:15] ahasenack: because IIRC otherwise the next login might have no content at all [13:15] the next login would not have *this* content [13:15] if that cache is the main content that would be displayed [13:15] it's just one motd, of many [13:16] that being said, I just checked the code again and saw that we already won't display the cache if the script that generated it is no longer installed [13:16] ok, so leaving the file on remove is not an issue then? [13:17] not a user-visible issue [13:17] the remaining issue would be if the user reinstalled it days later [13:18] for a while it would then display the old cache (while == 1 day at most) [13:18] versus displaying nothing until the cache is regenerated [13:25] ahasenack: is there a trivial way to retrigger creating all those bits that make up motd? [13:25] ahasenack: because if so IMHO any package dropping (or removing) something there should re-generate that cache [13:25] if it is a complex mess, then it might not be feasible to do so [13:26] cpaelzer: there is no global cache [13:26] cpaelzer: each script handles it in its own way [13:26] would there be a global trigger? [13:26] not that I know of [13:26] ok [13:26] I just login again [13:27] for the ua-tools bit, though [13:27] it's a daily cron job [13:27] so what I do is edit /etc/crontab and change the daily timer to be in the next minute [13:28] but that cron job (and that login) has to call something [13:28] can't we call this "something" from the postinst/postrm ? [13:29] we could, yes, but that also calls apt-cache policy to check the status of some ua features [13:29] and I was fearful of calling that in the middle of a dpkg transaction without more careful testing [13:30] I think it might also call dpkg itself [13:30] to query things [13:30] yeah all of this was the reason to do it async in the background [13:30] and not sync on login [13:30] could we just "fire and forget" it from the maintainer script [13:30] just as the login does?
[13:31] I'm not trying to convince you - I ask "do you think it would be better to do so" [13:31] the login doesn't call that anymore, it's just the cronjob [13:31] the login just parses the cache file, if it exists [13:32] calling the script that the cron job calls at postinst just has that issue I mentioned of dpkg and apt-cache being used [13:32] which I don't know how serious it is [13:32] I would fear stumbling upon lock files and whatnot from dpkg and apt [13:32] yep [13:32] keep it as is [13:33] thanks for the discussion [13:33] np [13:34] ahasenack: ppa for the motd change? [13:35] coreycb: hi again :) as i said on #openstack-infra, there's a small issue for mistral's packages on cloud archive [13:36] cpaelzer: oh, hm [13:36] cpaelzer: I didn't create one this time, sorry. I can do that quickly [13:36] coreycb: on the pike version (5.0.0, which is quite behind the latest pike version on github) it is not possible to install mistral-event-engine and mistral-engine together, since mistral-event-engine provides the mistral-engine role instead of mistral-event-engine [13:36] since builddeps is so tiny I was just building the deb on my host [13:37] ahasenack: I'm fine building locally as well [13:37] I can push into my test container from here [13:37] cpaelzer: ok, thanks and sorry [13:37] pgaxatte: ok is that fixed in a pike point release? [13:38] pgaxatte: let's get a bug opened at https://bugs.launchpad.net/cloud-archive and I can dig further [13:38] coreycb: what do you mean by point release? [13:38] coreycb: fair enough i'll file a bug ;) [13:38] pgaxatte: like a 5.0.1 version [13:38] pgaxatte: thanks [13:39] coreycb: i only see 5.0.0, no higher version [13:40] coreycb: well there is 6.0 but it is queens and i'm interested in pike [13:40] pgaxatte: i see a tag for 5.2.2 [13:41] cpaelzer: the other day you mentioned something about cpu throttling in qemu [13:41] cpaelzer: what is the trick?
I would like to slow things down a bit [13:41] coreycb: yes on github but my problem is related to the debianization [13:42] pgaxatte: oh, so yes we only have 5.0.0 atm for pike but we can do a stable release for 5.2.2 [13:43] coreycb: yes that would be good but the debian/mistral-event-engine.init.in needs fixing too [13:43] pgaxatte: ok i can fix that up too. please add details to the bug and then i'll work on it soon. [13:44] coreycb: thanks i'm preparing the bug [13:45] ahasenack: let me write a txt with a few rough steps to slow it down [13:45] thanks [13:46] finishing your MP review first :-) [13:47] of course [13:50] ahasenack: what is the cron job I might want to trigger [13:50] I'm in the "now the line is gone" state [13:51] cpaelzer: daily in /etc/crontab [13:51] I prefer to have cron do it instead of calling the script manually, because [13:51] in the past calling the script directly hid a bug (/snap/bin wasn't in cron's PATH) [13:51] cpaelzer: so I edit /etc/crontab, the daily line, and have it run in the next minute. Then save and wait, tailing /var/log/syslog to see when it ran [13:52] coreycb: https://bugs.launchpad.net/cloud-archive/+bug/1757433 [13:52] Launchpad bug 1757433 in Ubuntu Cloud Archive "mistral-event-engine conflicts mistral-event" [Undecided,New] [13:53] all good, done already andol [13:53] pgaxatte: thanks, will take a look shortly [13:53] sorry ahasenack I meant [13:53] coreycb: thanks ;) [14:47] hm. the systemd update that was just pushed to artful fails in chroot for me. [14:49] Guys why is it saying that it keeps back these packages [14:49] https://pastebin.com/9eR8sAgh [14:50] shouldn't dist-upgrade update them anyways? [14:51] adac: Dist-upgrade should. Upgrade probably doesn't because the dependencies changed. [14:51] lordievader, but I'm actually using dist-upgrade [14:51] there might be missing dependencies [14:51] hmm [14:52] how can I resolve this?
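The /etc/crontab trick ahasenack describes above, as a fragment: pull the cron.daily line forward to a minute from now, watch syslog, then revert. The 14:52 time is illustrative, and the commented-out line is the stock Ubuntu default kept for reference:

```
# /etc/crontab -- original cron.daily entry, kept for reference:
#25 6   * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
# temporarily run the daily jobs at 14:52 instead:
52 14   * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
```

Then `tail -f /var/log/syslog` to see the run, and restore the original line afterwards.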
[14:52] maybe just wait until the missing packages are available [14:52] these are meta-packages which depend on the latest kernel version [14:53] sometimes these packages are available before the new kernel version is available [14:54] adac: `apt-cache show linux-image-generic` shows the dependencies of the package. [14:54] maybe do an apt update and try again [14:55] (if it doesn't work now, try again in a couple hours) [14:56] adac: do you see them when running "apt-mark showhold" ? [14:56] lordievader, JanC I now found out what I did wrong [14:56] With ansible I had set this: [14:56] command: apt-mark hold {{ ubuntu_kernel_version }} [14:56] there you go [14:56] eh [14:56] right [14:57] linux-image-4.4.0-116-generic [14:57] was the value [14:57] yes but it didn't show up in the list of the packages that are on hold [14:57] if you block upgrades, upgrades will be blocked :) [14:57] so therefore I thought it was not on hold [14:57] JanC, yes that sounds about right^^ [14:58] I think I cannot lock a certain version [14:58] I can only lock the package name, or? [14:59] linux-image-extra-virtual linux-image-generic [15:00] If you want to lock to a certain version, you could manually install that version and remove the meta package. But you lose the automatic updates. [15:07] lordievader, is this still valid: [15:07] https://askubuntu.com/a/678633 [15:09] adac: I have never frozen a kernel, so I don't really know. [15:09] lordievader, kk :) === LaserAllan is now known as Guest791 [15:14] coreycb: cool so i think i now understand the packaging process, just two questions; when specifying the package dependencies we go by the requirements.txt for the project, right? and how does testing for packages work, should i just spin up a vm and test the package? (openstack related btw) [15:16] tobasco: yes generally test-requirements.txt and requirements.txt would go in Build-Depends-Indep and requirements.txt would go in Depends.
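The pitfall adac hit above: holding only the versioned package (linux-image-4.4.0-116-generic) does not stop the metapackages from pulling in a newer kernel. A sketch of holding the metapackages themselves instead (needs root; package names as in the log):

```shell
# Hold the metapackages so dist-upgrade stops offering new kernels
sudo apt-mark hold linux-image-generic linux-image-extra-virtual
# Verify what is actually on hold
apt-mark showhold
# Undo it later
sudo apt-mark unhold linux-image-generic linux-image-extra-virtual
```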
[15:17] tobasco: do you know how to create a PPA? you could upload to a PPA and install from that in your vm. [15:19] cool, have never created a ppa, only used them, i'll test it out thanks [15:20] tobasco: assuming you have a launchpad account you can model a bionic ppa after this https://launchpad.net/~corey.bryant/+archive/ubuntu/bionic-queens [15:22] tobasco: this script comes in useful to avoid any version conflicts in a ppa when uploading the same version multiple times: https://paste.ubuntu.com/p/P4SCnF5wTq/ [15:27] coreycb: thanks [15:27] i'll see what i can come up with === LaserAllan is now known as Guest73615 [16:02] Hey guys what's the proper way to change log rotation for nginx? I've been editing the file in /etc/logrotation/nginx, is this the correct way? [16:02] teward: --^ maybe you know? [16:04] hello all [16:04] I noticed that in the Ubuntu kernel packages there are many kernels that are cloud specific [16:04] looking at apt-cache search linux-image | egrep "gce|azure|aws" [16:05] zioproto: yes. [16:05] what is special about these kernels ? Is there some special kernel to use also in case of Openstack qemu+kvm hypervisors ? [16:06] coreycb: when uploading to a launchpad ppa with dput, does it take a while before i can see it? [16:06] nacc: ? [16:06] tobasco: it shouldn't take too long, probably 5 minutes. if it gets rejected you'll get an email. [16:06] zioproto: well, you hadn't yet asked a question, so I was agreeing they exist. [16:06] coreycb: ok thanks [16:07] nacc: what is special about these kernels ? Is there some special kernel to use also in case of Openstack qemu+kvm hypervisors ? [16:07] nacc: those kernels are meant for virtual instances, right ?? [16:08] Odd_Bloke: --^ ?
[16:12] zioproto: there is also the linux-kvm flavor [16:15] zioproto: they use a different config set so they don't come with all the generic stuff required for a given kernel to be able to run on bare metal servers, laptops, KVM guests, Xen guests, etc [16:16] zioproto: you can compare their /boot/config-$version files to see how much they differ [16:16] thanks ! [16:17] np [16:51] rbasak, so I noticed ppc64el failed to build for mongodb-server-core still [16:57] balloons: yeah I'm looking into it [16:57] rbasak, no worries. Thanks === daniel is now known as Guest77671 === Guest77671 is now known as Odd_Bloke === Guest73615 is now known as LaserAllan_ [19:24] can i install ubuntu to a device, when i already have a running ubuntu ? [19:25] I have an ubuntu with cli and i want to install to /dev/sda [19:25] the debootstrap tool may be able to help you [19:26] you'll probably have to handle booting yourself [19:26] does it work for ubuntu ? [19:26] i wouldn't be in this situation if MAAS would detect /dev/sda [19:26] but it does not ... [19:27] does it detect it under a different name? [19:28] no, it doesn't detect any storage ... [19:28] previously there was no /dev/sda even in bash, but i changed the array controller on the del gen9 to get the disks into HBA mode [19:29] raid is not needed since there is only one disk on the smartarray [19:29] now i see /dev/sda in bash, but in MAAS still not :D [19:30] gunix: did you recommission? [19:30] ahasenack: [19:30] no [19:30] will that fix it ? [19:31] i just shut it down and click on commission again ? [19:31] it will only refresh the hardware data if you recommission [19:31] and if it's an old maas, you will have to re-enlist, but recent versions should be fine [19:31] gunix: yeah, pretty much.
It will erase what you have installed there, though [19:31] "old maas" [19:31] like 1.7 [19:31] that's old [19:32] i did "apt install maas" on ubuntu 16.04 [19:32] if that got me an old maas i am disappointed :D [19:32] no, that should have given you a pretty recent one [19:32] 2.3 I think [19:32] so you should be good on that front [19:33] 2.3.0-6434-gd354690-0ubuntu1~16.04.1 [19:33] yep [19:33] ok i did commission again [19:38] that worked lol [19:38] ahasenack: i want to kiss you [19:38] ahasenack: nice :D [19:38] haha [19:38] i hope you are a dude cause my wife doesn't allow me to touch other girls === hugh_jim_bissell is now known as whaley [19:51] awkward silence ... :)) [20:04] I have this source tarball that installs a bash completion file in /etc/bash_completion.d [20:05] but that's an "old" location, it should be in /usr/share/bash-completion/completions nowadays [20:05] the new location is a bit stricter about filenames, though. The filename that is being installed is something.sh [20:05] that is not read anymore. It needs to be just "something", or "something.bash", as far as I understood [20:06] so, simple question: how do I install it with the new name without patching the source tarball? [20:06] dh_install can't rename [20:06] dh-exec? [20:06] or an override in d/rules? [20:08] ahasenack: yeah, you want dh-exec and a debian/.install [20:08] although i would think completions going to a specific directory are handled by a helper, but i might be wrong [20:08] there is a helper [20:08] ahasenack: cf. man dh_install [20:08] dh-bash-completion [20:08] dh_, rather [20:09] hm, it seems to support renames [20:09] let me try that [20:15] ahasenack: do you know why i can't recommission other nodes ? [20:15] gnuoy: no, what fails? [20:15] 1 node cannot be commissioned. To proceed, update your selection.
[20:15] i actually can't start the commission process [20:15] gunix: that can happen when you select multiple [20:15] gunix: and one or more is in a state that doesn't allow commissioning [20:16] gunix: for example, it's deployed [20:16] it needs to be ready, or new, iirc [20:16] ahasenack: oh, i have to override failed testing [20:27] it's working now [20:30] nacc: hm, the bash-completion package installs dh_bash-completion, but the latter is not mentioned in the debhelper manpage. I created the .bash-completion file, added a build-depends on bash-completion, but dh_bash-completion was not called [20:31] d/rules has the usual "dh $@" line [20:31] debian/compat is 9 [20:32] any ideas? [20:32] build log shows no attempt at calling dh_bash-completion [20:33] * ahasenack maybe needs a --with in the dh line [20:34] yep [20:34] --with it is [20:34] thanks :) [21:27] If I'm making my own apt repositories, what should I specify for the component? main, universe, multiverse, or something else? [21:29] <_KaszpiR_> cliluw it really depends on what packages you have [21:29] <_KaszpiR_> try with main, and if it fails then expand [21:31] _KaszpiR_: Isn't main only for "Canonical-supported free and open-source software"? It seems like if it's in my own repository, that would almost by definition not be Canonical-supported. [21:32] <_KaszpiR_> oh your own repo [21:32] <_KaszpiR_> sorry, misunderstood as mirror [21:32] <_KaszpiR_> well, actually do whatever you like and use apt-pin [21:49] i've got 4.1 gb of storage on /src/ for various linux headers on ubuntu 14.04 [21:49] any way to clean some of those out? [21:49] this is safe? sudo apt-get autoremove [21:57] <_KaszpiR_> it will be re-downloaded [22:14] arooni: if the related linux-image- has been removed the headers should autoremove [22:21] Hey, anyone active, i would like to talk about some samba/ip Ubuntu server issues i'm experiencing! [22:22] irc works best with specific questions [22:24] Wolf_Y_, what's the issue?
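The dh_bash-completion setup ahasenack lands on ("--with it is") can be sketched as two packaging fragments. The package name mypkg and the source path are illustrative, and the two-token rename form is the one the log says the helper seems to support:

```
# debian/rules -- the helper only runs when pulled in via --with:
#!/usr/bin/make -f
%:
	dh $@ --with bash-completion

# debian/mypkg.bash-completion -- install contrib/something.sh under
# the name "mypkg" in /usr/share/bash-completion/completions:
contrib/something.sh mypkg
```

Build-Depends must include bash-completion, which is the package that ships dh_bash-completion.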
[22:24] does anyone know how snaps work? [22:28] compdoc: basically a wrapper around an LXD container [22:30] compdoc: alright, so i installed a fresh ubuntu server 17.10 [22:31] ran ifconfig -a and my ip was something like 172.x.x.x [22:31] installed plex, samba and the good stuff [22:31] everything works like a charm, but plex on tv can not find the server [22:31] so i bridged the connection between my host adapter and the hyper-v one (im using hyper-v manager, virtual ubuntu server) [22:32] my ip on ubuntu is 192.x.x.x. [22:32] same as host, so netplan again and i made it static [22:32] now tv can see plex [22:32] but plex can't see the folder [22:32] and i can not samba share anything [22:32] what do you think the issue is [22:34] if you are not clear with the set-up, shoot, ill give my best to explain more in depth [22:35] so .. you've got a hyper-v hypervisor, and are doing bridged networking to your LAN? [22:36] you said you assigned the ubuntu VM a static address -- do you have a DHCP server on the lan? perhaps your internet router / firewall? [22:37] any way to find out where php-fpm7 logs to ? (using nginx if that matters) [22:37] arooni: lsof probably shows an open file descriptor [22:38] weird; that process is totally running but lsof shows "status error on php-fpm no such file or directory" [22:39] ahasenack: sorry, was afk [22:39] cliluw: seems like an odd question -- do whatever you want? [22:40] nacc: I'm just worried if I use component "main" instead of component "universe", maybe it could break something down the road.
[22:41] cliluw: those are for the purposes of the archives themselves, really -- apt just follows the files in the archives it's told to [22:41] I think you can even set up your own sources without having the main / universe / etc level at all [22:42] yes, it's based upon the Packages file contents, I'm pretty sure [22:42] which for the Ubuntu archives refer to the components in the file paths [22:42] sarnold: Is it possible to get rid of the distribution level too, like "xenial" or "zesty"? I'm pretty sure my packages will work across distributions so I don't see why I need that level. [22:42] cliluw: ... you don't usually do that [22:42] cliluw: as your dependencies come from those distributions too [22:42] cliluw: i mean, very few things 'just work' across releases like that :) [22:43] nacc: My package is a Go binary so it's statically linked. [22:43] cliluw: oh [22:43] cliluw: why is it a deb at all then? [22:43] heh :) [22:43] cliluw: i mean if it's a statically linked binary, why do you need a package? [22:44] the pre/post inst/rm scripts might be nice [22:44] nacc: We prefer to deploy everything through Debian packages. It gives you other niceties like systemd service registration, etc. [22:45] cliluw: so it's not *just* a static go binary? it's also a systemd unit? [22:45] cliluw: that's all you needed to say :) [22:45] arooni: sorry, was afk. is there a way in which we can connect so i can show you my set-up? i can try and explain more in depth if needed but my eng is non-native so i'm afraid i'll get lost or confuse you; the thing i had in mind for connecting is skype! [22:45] cliluw: tbh, sounds like it should be a snap, but what do i know :) [22:45] cliluw: in any case, you might be right that it doesn't need the release in the path [22:45] Wolf_Y_: i appreciate it!
but i think i have it figured out now :) [22:46] cliluw: but i'm not sure how apt handles those URLs in those cases (given the release is part of the specification in the sources.list) [22:46] cliluw: it seems easiest to just leave it, and worst-case, symlink the file around [22:48] arooni: i thought we were talking about the issues i'm experiencing [22:49] compdoc: still there ? [22:49] ah i'm a noob-ish sysadmin at best :P still learning the ropes [23:00] Wolf_Y [23:12] compdoc: im here, are you here ? [23:20] Im in and out. Im configuring a new server [23:21] compdoc: is there a way in which we can talk or something, dis, skype anything.... [23:21] compdoc: i have some questions and issues i would like to share, and maybe we could figure them out together if you have time of course [23:22] best to just list your problems here, then others can help [23:24] compdoc: i did, and im also on #ubuntu at the same time [23:24] Wolf_Y_: it's preferred not to crosspost as well [23:25] compdoc: but the thing i would like the most is to show it to someone [23:25] nacc: oh did not know...sorry [23:25] compdoc: would you be interested to talk ? [23:25] cant, busy [23:28] compdoc: alright, maybe some other time then [23:28] if anyone else is interested in listening to my strange problems, ill be here [23:31] Hello, I am using this command to sync directories when a particular computer turns on. Do you know how I can re-run this command automatically, after it ends, due to disappearance of that computer or a network error? [23:31] until nmap -sn 192.168.2.0/24 | grep 2.17; do sleep 300; done; rsync --progress --partial -avz -e "ssh -i /home/.ssh/ns" ns@192.168.2.17:"/Users/nafis/Masters/2016/" . [23:32] mojtaba: probably just move the '; done' to the end of the command [23:33] sarnold: thanks [23:33] mojtaba: i think you want to rethink it more than that, even [23:33] mojtaba: since if it's gone away, you need to redo the until as well, afaict [23:33] nacc: hmm, how?
[23:34] nacc: yes [23:34] mojtaba: so it's insufficient to just move the done [23:34] oh :( [23:34] nacc: should I add another until? [23:34] mojtaba: you really want to put a second until in the loop [23:34] maybe, at least [23:34] don't start the loop until the server is available [23:34] wait 300s in that case [23:34] try to rsync [23:34] if rsync fails (use error checking) [23:35] retry the whole shebang [23:35] if rsync succeeds, then exit [23:35] nacc: can it be a one-liner, like the one that I had? [23:35] maybe it'd be easier to just cronjob the thing with 'run-one' every half hour or something? skip the connectivity checks.. [23:35] mojtaba: anything *can* be a one-liner [23:36] mojtaba: but it's not sensible to make long one-liners and i have no idea why you would, except to make your own maintenance harder [23:36] mojtaba: are you competing in some competition? [23:36] nacc: It is just one time use. [23:36] mojtaba: nothing is ever just one time use [23:36] you've already used it at least twice, once when it worked, and now debugging it when it didn't [23:36] so do it correctly :) [23:37] nacc: thanks, for the advice. [23:37] mojtaba: in any case, you probably want ##bash, as this has little to nothing to do with ubuntu server :) [23:37] nacc: I see. Thanks a lot
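The retry recipe nacc spells out (wait for the host, rsync, retry the whole shebang on failure, exit on success) can be sketched as a small shell function. The command arguments are placeholders so the control flow stays visible; substitute the nmap and rsync invocations from the question:

```shell
# sync_until_done CHECK SYNC DELAY
#   CHECK: command that succeeds once the host is reachable
#   SYNC:  command that performs the sync (e.g. the rsync above)
#   DELAY: seconds to sleep between retries
sync_until_done() {
    while :; do
        until eval "$1"; do      # don't start until the host is up
            sleep "$3"
        done
        if eval "$2"; then       # exit only when the sync succeeded
            break
        fi
        sleep "$3"               # back off, then redo the whole loop
    done
}

# With the original commands it would be called roughly like:
# sync_until_done \
#   'nmap -sn 192.168.2.0/24 | grep -q "2\.17"' \
#   'rsync --progress --partial -avz -e "ssh -i /home/.ssh/ns" ns@192.168.2.17:/Users/nafis/Masters/2016/ .' \
#   300
```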