[07:06] <lordievader> Good morning
[08:18] <march__> Hello, I'm having a bug on UbuntuServer 16.04-DAILY-LTS on Azure. Looks like cloud-init is broken. Does it ring a bell?
[08:19] <march__> somehow the provisioning is not going well between waagent and cloud-init
[08:34] <adac> Guys my server suddenly halted in the night, but there is nothing that indicates what happened in my /var/log/syslog
[08:34] <adac> is there another place I can check?
[08:36] <lordievader> adac: dmesg?
[08:38] <adac> lordievader, hmm there is no date on dmesg output
[08:38] <lordievader> adac: There is if you run `dmesg -T`.
[08:40] <adac> lordievader, that did the trick! :D
[08:40] <lordievader> Does it give some hints?
[08:40] <adac> lordievader, but there is only data showing since the last boot this morning it seems
[08:41] <lordievader> There might be more in `/var/log/dmesg*`
[08:43] <adac> lordievader, hmm actually there is only one file, namely /var/log/dmesg itself, and that one is empty
[08:44] <lordievader> What version of Ubuntu are you running?
[08:48] <adac> lordievader, https://pastebin.com/FxKqXdfE
[08:49] <lordievader> adac: Ah, `sudo journalctl -b -1` might help you.
[08:50] <adac> lordievader, hmm it says: Specifying boot ID has no effect, no persistent journal was found
[08:50] <lordievader> Hmm. Stupid default.
[08:50] <adac> I think I need to tweak then some things
[08:51] <lordievader> Yes, you want to configure systemd-journald and logrotate.
[09:00] <adac> lordievader, kk thanks
[09:09] <ducasse> adac: if you create the directory /var/log/journal you will get persistent journalling, so you can look up messages from the previous boot with 'journalctl -b -1'
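A minimal sketch of the steps ducasse describes (run as root; the `systemd-tmpfiles` call is the documented way to fix up ownership and ACLs on the new directory):

```shell
# Enable persistent journald storage.
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald
# From the next boot on, the previous boot is reachable with:
journalctl -b -1
```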
[09:09] <adac> ducasse, thanks for the hint!
[09:21] <march__> so ftr, cloud-init doesn't execute the custom script if there is a script stuck in the same/previous runlevel https://serverfault.com/questions/852946/aws-userdata-script-in-cloud-init-not-running
[10:59] <gunix> can anybody here help me with maas cause nobody there is answering
[10:59] <gunix> ?
 whats maas
[11:04] <hateball> !maas
[11:49] <rbasak> gunix: try #maas
[12:04] <Neo4> :)
[12:04] <Neo4> good afternoon
[12:10] <ahasenack> cpaelzer: I saw how autopkgtest created the console on ttyS1, or tried to
[12:10] <ahasenack> cpaelzer: there should be a more modern way to do it :)
[12:10] <ahasenack> cpaelzer: reading the bug you pointed me at, now
[12:10] <ahasenack> cpaelzer: the other two issues I had were:
[12:11] <ahasenack> a) on ppc64el, for some reason it booted off vdb, not vda. So I had to change autopkgtest's assumption that the iso with the test setup script was in vdb, and point it at vda instead
[12:11] <ahasenack> b) I had to pass -m ports.ubuntu.com/blabla, otherwise it would try to find ppc packages in archive.u.c
[12:12] <cpaelzer> ahasenack: TL;DR non-x86 could need some improvement in autopkgtest buildvm (IMHO)
[12:12] <ahasenack> so true
[12:12] <ahasenack> I wonder how britney does it
[12:13] <ahasenack> cpaelzer: so, are you able to run ppc autopkgtests? I seem to recall you saying so in one or two old MPs. Or was that using bileto?
[12:15] <cpaelzer> ahasenack: the CI infra uses openstack and custom images for it
[12:15] <cpaelzer> ahasenack: most of the time you get around by (locally) just using lxd
[12:15] <cpaelzer> ahasenack: I sometimes fixed up my VM images, but I'm not as experienced in it to have a great howto or gist about it
[12:16] <cpaelzer> ahasenack: would lxd on ppc be an option for your case? I think not, as you are mounting
[12:16] <ahasenack> I have to check
[12:16] <cpaelzer> ahasenack: we can either try to fix up your image to work correctly
[12:16] <ahasenack> I'm trying to reproduce a bug that happens only during migration so far
[12:16] <ahasenack> the closest I have is a vm, since they use openstack as you said
[12:16] <cpaelzer> ahasenack: or you run what the test would run in a normal bionic ppc vm spawned via uvtool
[12:17] <cpaelzer> ahasenack: I can try to make a working bionic image (again) if it is needed (a.k.a if reproducing in a uvt VM fails)
[12:17] <ahasenack> I thought that autopkgtest would just work, given that we use it in migrations
[12:18] <cpaelzer> ahasenack: it does just work if pre-setup is done :-)
[12:18] <ahasenack> that's how I started down this road, but it's definitely not as simple
[12:18] <cpaelzer> nothing ever is
[12:19] <ahasenack> at this point it sounds more like a weekend project
[12:37] <ahasenack> I have a question regarding purge and remove behavior (deb package)
[12:37] <ahasenack> there is an motd cache in /var/cache/<pkg>/bla.cache
[12:37] <ahasenack> it is removed in purge, but not with a simple "remove"
[12:38] <ahasenack> I think it should be removed with "apt remove" as well, because otherwise the motd will keep being displayed
[12:38] <ahasenack> even though the script that generated it no longer exists
[12:38] <ahasenack> thoughts?
[12:38] <ahasenack> postrm is this:
[12:38] <ahasenack> if [ "$1" = purge -a -f "$CACHE_FILE" ]; then
[12:38] <ahasenack>     rm "$CACHE_FILE"
[12:38] <ahasenack> fi
[12:39] <ahasenack> that particular cache/motd is showing the state of the system regarding livepatch status
[13:14] <cpaelzer> ahasenack: yes I'd agree to remove it on remove as well
[13:15] <cpaelzer> ahasenack: but I'd also retrigger a creation of the cache on that
[13:15] <cpaelzer> ahasenack: is that possible?
[13:15] <cpaelzer> ahasenack: because IIRC otherwise the next login might have no content at all
[13:15] <ahasenack> the next login would not have *this* content
[13:15] <cpaelzer> if that cache is the main content that would be displayed
[13:15] <ahasenack> it's just one motd, of many
[13:16] <ahasenack> that being said, I just checked the code again and saw that we already won't display the cache if the script that generated it is no longer installed
[13:16] <cpaelzer> ok, so leaving the file on remove is not an issue then?
[13:17] <ahasenack> not user-visible issue
[13:17] <ahasenack> the remaining issue would be if the user reinstalled it days later
[13:18] <ahasenack> for a while it would then display the old cache (while == 1 day at most)
[13:18] <ahasenack> versus displaying nothing until the cache is regenerated
[13:25] <cpaelzer> ahasenack: is there a trivial way to retrigger creating all those bits that make up motd?
[13:25] <cpaelzer> ahasenack: because if so IMHO any package dropping (or removing) something there should re-generate that cache
[13:25] <cpaelzer> if it is a complex mess, then it might be not feasible to do so
[13:26] <ahasenack> cpaelzer: there is no global cache
[13:26] <ahasenack> cpaelzer: each script handles it in its own way
[13:26] <cpaelzer> would there be a global trigger?
[13:26] <ahasenack> not that I know of
[13:26] <cpaelzer> ok
[13:26] <ahasenack> I just login again
[13:27] <ahasenack> for the ua-tools bit, though
[13:27] <ahasenack> it's a daily cron job
[13:27] <ahasenack> so what I do is edit /etc/crontab and change the daily timer to be in the next minute
[13:28] <cpaelzer> but that cron job (and that login) has to call something
[13:28] <cpaelzer> can't we call this "something" from the postinst/postrm ?
[13:29] <ahasenack> we could, yes, but that also calls apt-cache policy to check the status of some ua features
[13:29] <ahasenack> and I was fearful of calling that in the middle of a dpkg transaction without more careful testing
[13:30] <ahasenack> I think it might also call dpkg itself
[13:30] <ahasenack> to query things
[13:30] <cpaelzer> yeah all of this was the reason to do it async in the background
[13:30] <cpaelzer> and not sync on login
[13:30] <cpaelzer> could we just "fire and forget" it from the maintainer script
[13:30] <cpaelzer> just as the login does?
[13:31] <cpaelzer> I'm not trying to convince you - I ask "do you think it would be better to do so"
[13:31] <ahasenack> the login doesn't call that anymore, it's just the cronjob
[13:31] <ahasenack> the login just parses the cache file, if it exists
[13:32] <ahasenack> calling the script that the cron job calls at postinst just has that issue I mentioned of dpkg and apt-cache being used
[13:32] <ahasenack> which I don't know how serious is
[13:32] <ahasenack> I would fear stumbling upon lock files and whatnot from dpkg and apt
[13:32] <cpaelzer> yep
[13:32] <cpaelzer> keep it as is
[13:33] <cpaelzer> thanks for the discussion
[13:33] <ahasenack> np
[13:34] <cpaelzer> ahasenack: ppa for the motd change?
[13:35] <pgaxatte> coreycb: hi again :) as i said on #openstack-infra, there's a small issue for mistral's packages on cloud archive
[13:36] <ahasenack> cpaelzer: oh, hm
[13:36] <ahasenack> cpaelzer: I didn't create one this time, sorry. I can do that quickly
[13:36] <pgaxatte> coreycb: on pike version (5.0.0 which is quite behind the latest pike version on github) it is not possible to install mistral-event-engine and mistral-engine together since the mistral-event-engine provides the mistral-engine role instead of mistral-event-engine
[13:36] <ahasenack> since the build-deps are so tiny I was just building the deb on my host
[13:37] <cpaelzer> ahasenack: I'm fine building locally as well
[13:37] <cpaelzer> I can push into my test container from here
[13:37] <ahasenack> cpaelzer: ok, thanks and sorry
[13:37] <coreycb> pgaxatte: ok is that fixed in a pike point release?
[13:38] <coreycb> pgaxatte: let's get a bug opened at https://bugs.launchpad.net/cloud-archive and I can dig further
[13:38] <pgaxatte> coreycb: what do you mean by point release?
[13:38] <pgaxatte> coreycb: fair enough i'll file a bug ;)
[13:38] <coreycb> pgaxatte: like a 5.0.1 version
[13:38] <coreycb> pgaxatte: thanks
[13:39] <pgaxatte> coreycb: i only see 5.0.0, no higher version
[13:40] <pgaxatte> coreycb: well there is 6.0 but it is queens and i'm interested in pike
[13:40] <coreycb> pgaxatte: i see a tag for 5.2.2
[13:41] <ahasenack> cpaelzer: the other day you mentioned something about cpu throttling in qemu
[13:41] <ahasenack> cpaelzer: what is the trick? I would like to slow things down a bit
[13:41] <pgaxatte> coreycb: yes on github but my problem is related to the debianization
[13:42] <coreycb> pgaxatte: oh, so yes we only have 5.0.0 atm for pike but we can do a stable release for 5.2.2
[13:43] <pgaxatte> coreycb: yes that would be good but the debian/mistral-event-engine.init.in needs fixing too
[13:43] <coreycb> pgaxatte: ok i can fix that up too. please add details to the bug and then i'll work on it soon.
[13:44] <pgaxatte> coreycb: thanks i'm preparing the bug
[13:45] <cpaelzer> ahasenack: let me write a txt with a few rough steps to slow it down
[13:45] <ahasenack> thanks
[13:46] <cpaelzer> finishing your MP review first :-)
[13:47] <ahasenack> of course
[13:50] <cpaelzer> ahasenack: what is the cron job I might want to trigger
[13:50] <cpaelzer> I'm in the "now the line is gone" state
[13:51] <ahasenack> cpaelzer: daily in /etc/crontab
[13:51] <ahasenack> I prefer to have cron do it instead of calling the script manually, because
[13:51] <ahasenack> in the past calling the script directly hid a bug (/snap/bin wasn't in cron's PATH)
[13:51] <ahasenack> cpaelzer: so I edit /etc/crontab, the daily line, and have it run in the next minute. Then save and wait, tailing /var/log/syslog to see when it ran
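For context, the line being edited is the standard cron.daily entry in the system crontab; bumping its time to a minute from now forces the daily jobs to run. The times below are illustrative, and the original line shown is the stock Ubuntu one:

```
# /etc/crontab (system crontab; note the extra "user" field)
# original:
# 25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
# temporarily changed to fire at 13:55:
55 13 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
```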
[13:52] <pgaxatte> coreycb: https://bugs.launchpad.net/cloud-archive/+bug/1757433
[13:53] <cpaelzer> all good, done already andol
[13:53] <coreycb> pgaxatte: thanks, will take a look shortly
[13:53] <cpaelzer> sorry ahasenack I meant
[13:53] <pgaxatte> coreycb: thanks ;)
[14:47] <trippeh_> hm. the systemd update that was just pushed to artful fails in chroot for me.
[14:49] <adac> Guys why is it saying that it keeps back these packages
[14:49] <adac> https://pastebin.com/9eR8sAgh
[14:50] <adac> shouldn't dit-upgrade update them anyways?
[14:50] <adac> *dist-upgrade
[14:51] <lordievader> adac: Dist-upgrade should. Upgrade probably doesn't because the dependencies changed.
[14:51] <adac> lordievader, but I'm actually using dist-upgrade
[14:51] <JanC> there might be missing dependencies
[14:51] <adac> hmm
[14:52] <adac> how can I resolve this?
[14:52] <JanC> maybe just wait until the missing packages are available
[14:52] <JanC> these are meta-packages which depend on the latest kernel version
[14:53] <JanC> sometimes these packages are available before the new kernel version is available
[14:54] <lordievader> adac: `apt-cache show linux-image-generic` shows the dependencies of the package.
[14:54] <JanC> maybe do an apt update and try again
[14:55] <JanC> (if it doesn't work now, try again in a couple hours)
[14:56] <sdeziel> adac: do you see them when running "apt-mark showhold" ?
[14:56] <adac> lordievader, JanC I now found out what  did wrong
[14:56] <adac> With ansible I had set this:
[14:56] <adac> command: apt-mark hold {{ ubuntu_kernel_version }}
[14:56] <sdeziel> there you go
[14:56] <JanC> eh
[14:56] <JanC> right
[14:57] <adac> linux-image-4.4.0-116-generic
[14:57] <adac> was the value
[14:57] <adac> yes, but it didn't show up in the list of the packages that are held
[14:57] <JanC> if you block upgrades, upgrades will be blocked  :)
[14:57] <adac> so therefore I thought it was not on hold
[14:57] <adac> JanC, yes that sounds about right^^
[14:58] <adac> I think I cannot lock a  certain version
[14:58] <adac> I can only lock the package name, or?
[14:59] <adac> linux-image-extra-virtual linux-image-generic
[15:00] <lordievader> If you want to lock to a certain version, you could manually install that version and remove the meta package. But you lose the automatic updates.
[15:07] <adac> lordievader, is this still valid:
[15:07] <adac> https://askubuntu.com/a/678633
[15:09] <lordievader> adac: I have never frozen a kernel, so I don't really know.
[15:09] <adac> lordievader, kk :)
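What lordievader describes would look roughly like the commands below. The version string is the one from adac's hold and is just an example; verify before running, since this removes the meta packages and stops automatic kernel updates:

```shell
# Pin to one specific kernel build and drop the meta packages
# so they stop pulling in newer kernels.
sudo apt-get install linux-image-4.4.0-116-generic
sudo apt-get remove linux-image-generic linux-image-extra-virtual
sudo apt-mark hold linux-image-4.4.0-116-generic
```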
[15:14] <tobasco> coreycb: cool, so i think i understand the packaging process now. just two questions: when specifying the package dependencies we go by the requirements.txt for the project, right? and how is testing done for packages, should i just spin up a vm and test the package? (openstack related btw)
[15:16] <coreycb> tobasco: yes generally test-requirements.txt and requirements.txt would go in Build-Depends-Indep and requirements.txt would go in Depends.
[15:17] <coreycb> tobasco: do you know how to create a PPA? you could upload to a PPA and install from that in your vm.
[15:19] <tobasco> cool, have never created a ppa only used them, i'll test it out thanks
[15:20] <coreycb> tobasco: assuming you have a launchpad account you can model a bionic ppa after this https://launchpad.net/~corey.bryant/+archive/ubuntu/bionic-queens
[15:22] <coreycb> tobasco: this script comes in useful to avoid any version conflicts in a ppa when uploading the same version multiple times: https://paste.ubuntu.com/p/P4SCnF5wTq/
[15:27] <tobasco> coreycb: thanks
[15:27] <tobasco> i'll see what i can come up with
[16:02] <sansay> Hey guys, what's the proper way to change log rotation for nginx? I've been editing the file in /etc/logrotation/nginx, is this the correct way?
[16:02] <nacc> teward: --^ maybe you know?
[16:04] <zioproto> hello all
[16:04] <zioproto> I noticed that in the Ubuntu Kernel packages there are many kernels that are cloud specific
[16:04] <zioproto> looking at apt-cache search linux-image | egrep "gce|azure|aws"
[16:05] <nacc> zioproto: yes.
[16:05] <zioproto> what is special about these kernels ? Is there some special kernel to use also in case of Openstack qemu+kmv hypervisors ?
[16:06] <tobasco> coreycb: when uploading to launchpad ppa with dput does it take a while before i can see it?
[16:06] <zioproto> nacc: ?
[16:06] <coreycb> tobasco: it shouldn't take too long, probably 5 minutes. if it gets rejected you'll get an email.
[16:06] <nacc> zioproto: well, you hadn't yet asked a question, so I was agreeing they exist.
[16:06] <tobasco> coreycb: ok thanks
[16:07] <zioproto> nacc: what is special about these kernels ? Is there some special kernel to use also in case of Openstack qemu+kmv hypervisors ?
[16:07] <zioproto> nacc: those kernels are meant for virtual instances, right ??
[16:08] <nacc> Odd_Bloke: --^ ?
[16:12] <sdeziel> zioproto: there is also the linux-kvm flavor
[16:15] <sdeziel> zioproto: they use a different config set so they don't come with all the generic stuff required for a given kernel to be able to run on bare metal servers, laptops, KVM guest, Xen guest, etc
[16:16] <sdeziel> zioproto: you can compare their /boot/config-$version files to see how much they differ
[16:16] <zioproto> thanks !
[16:17] <sdeziel> np
[16:51] <balloons> rbasak, so I noticed ppc64el failed to build for mongodb-server-core still
[16:57] <rbasak> balloons: yeah I'm looking in to it
[16:57] <balloons> rbasak, no worries. Thanks
[19:24] <gunix> can i install ubuntu to a device, when i already have a running ubuntu ?
[19:25] <gunix> I have an ubuntu with cli and i want to install to /dev/sda
[19:25] <sarnold> the debootstrap tool may be able to help you
[19:26] <sarnold> you'll probably have to handle booting yourself
[19:26] <gunix> does it work for ubuntu ?
[19:26] <gunix> i wouldn't be in this situation if MAAS would detect /dev/sda
[19:26] <gunix> but it does not ...
[19:27] <sarnold> does it detect it under a different name?
[19:28] <gunix> no, it doesn't detect any storage ...
[19:28] <gunix> previously there was no /dev/sda even in bash, but i changed the array controller on the del gen9 to get the disks into HBA mode
[19:29] <gunix> raid not needed since there is only one disk on the smartarray
[19:29] <gunix> now i see /dev/sda in bash, but in MAAS still not :D
[19:30] <ahasenack> gunix: did you recommission?
[19:30] <gunix> ahasenack:
[19:30] <gunix> no
[19:30] <gunix> will that fix it ?
[19:31] <gunix> i just shut it down and click on commission again "?
[19:31] <ahasenack> it will only refresh the hardware data if you recommission
[19:31] <ahasenack> and if it's an old maas, you will have to re-enlist, but recent versions should be fine
[19:31] <ahasenack> gunix: yeah, pretty much. It will erase what you have installed there, though
[19:31] <gunix> "old maas"
[19:31] <ahasenack> like 1.7
[19:31] <ahasenack> that's old
[19:32] <gunix> i did "apt install maas" on ubuntu 16.04
[19:32] <gunix> if that got me an old maas i am disappointed :D
[19:32] <ahasenack> no, that should have given you a pretty recent one
[19:32] <ahasenack> 2.3 I think
[19:32] <ahasenack> so you should be good on that front
[19:33] <gunix> 2.3.0-6434-gd354690-0ubuntu1~16.04.1
[19:33] <ahasenack> yep
[19:33] <gunix> ok i did commission again
[19:38] <gunix> that worked lol
[19:38] <gunix> ahasenack: i want to kiss you
[19:38] <sarnold> ahasenack: nice :D
[19:38] <ahasenack> haha
[19:38] <gunix> i hope you are a dude cause my wife doesn't allow me to touch other girls
[19:51] <gunix> awkward silence ... :))
[20:04] <ahasenack> I have this source tarball that installs a bash completion file in /etc/bash_completion.d
[20:05] <ahasenack> but that's an "old" location, it should be in /usr/share/bash-completion/completions nowadays
[20:05] <ahasenack> the new location is a bit stricter regarding filenames, though. The filename that is being installed is something.sh
[20:05] <ahasenack> that is not read anymore. It needs to be just "something", or "something.bash", as far as I understood
[20:06] <ahasenack> so, simple question: how to install it with the new name without patching the source tarball?
[20:06] <ahasenack> dh_install can't rename
[20:06] <ahasenack> dh-exec?
[20:06] <ahasenack> or an override in d/rules?
[20:08] <nacc> ahasenack: yeah, you want dh-exec and a debian/<package>.install
[20:08] <nacc> although i would think completions going to a specific directroy are handled by a helper, but i might be wrong
[20:08] <ahasenack> there is a helper
[20:08] <nacc> ahasenack: cf. man dh_install
[20:08] <ahasenack> dh-bash-completion
[20:08] <ahasenack> dh_, rather
[20:09] <ahasenack> hm, it seems to support renames
[20:09] <ahasenack> let me try that
[20:15] <gunix> ahasenack: do you know why i can't recommission other nodes ?
[20:15] <ahasenack> gunix: no, what fails?
[20:15] <gunix> 1 node cannot be commissioned. To proceed, update your selection.
[20:15] <gunix> i actually can't start the commission process
[20:15] <ahasenack> gunix: that can happen when you select multiple
[20:15] <ahasenack> gunix: and one or more is in a state that doesn't allow commissioning
[20:16] <ahasenack> gunix: for example, it's deployed
[20:16] <ahasenack> it needs to be ready, or new, iirc
[20:16] <gunix> ahasenack: oh, i have to override failed testing
[20:27] <gunix> it's working now
[20:30] <ahasenack> nacc: hm, the bash-completion package installs dh_bash-completion, but the latter is not mentioned in the debhelper manpage. I created the <package>.bash-completion file, added build-depends for bash-completion, but dh_bash-completion was not called
[20:31] <ahasenack> d/rules has the usual "dh $@" line
[20:31] <ahasenack> debian/compat is 9
[20:32] <ahasenack> any ideas?
[20:32] <ahasenack> build log shows no attempt at calling dh_bash-completion
[20:33]  * ahasenack maybe needs a --with in the dh line
[20:34] <ahasenack> yep
[20:34] <ahasenack> --with it is
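The resulting d/rules is just the stock dh sequencer plus the add-on ahasenack found was missing (compat 9, as in the log):

```make
#!/usr/bin/make -f
# debian/rules — enable the dh_bash-completion helper so the
# debian/<package>.bash-completion file is actually processed
%:
	dh $@ --with bash-completion
```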
[20:34] <ahasenack> thanks :)
[21:27] <cliluw> If I'm making my own apt repositories, what should I specify for the component? main, universe, multiverse, or something else?
[21:29] <_KaszpiR_> cliluw it really depends on what packages you have
[21:29] <_KaszpiR_> try with main, and if it fails then expand
[21:31] <cliluw> _KaszpiR_: Isn't main only for "Canonical-supported free and open-source software"? It seems like if it's in my own repository, that would almost by definition not be Canonical-supported.
[21:32] <_KaszpiR_> oh your own repo
[21:32] <_KaszpiR_> sorry, misunderstood as mirror
[21:32] <_KaszpiR_> well, actually do whatever you like and use apt pinning
[21:49] <arooni> i've got 4.1 gb of storage in /usr/src/ for various linux headers on ubuntu 14.04
[21:49] <arooni> any way to clean some of those out?
[21:49] <arooni> this is safe? sudo apt-get autoremove
[21:57] <_KaszpiR_> it will be re-downloaded
[22:14] <TJ-> arooni: if the related linux-image-<VERSION> has been removed the headers should autoremove
[22:21] <Wolf_Y_> Hey, anyone active? I would like to talk about some samba/IP Ubuntu server issues I'm experiencing!
[22:22] <sarnold> irc works best with specific questions
[22:24] <compdoc> Wolf_Y_, whats the issue?
[22:24] <compdoc> does anyone know how snaps works?
[22:28] <TJ-> compdoc: basically a wrapper around an LXD container
[22:30] <Wolf_Y_> compdoc: alright, so i installed a fresh ubuntu server 17.10
[22:31] <Wolf_Y_> ran ifconfig -a and my ip was something like 172.x.x.x
[22:31] <Wolf_Y_> installed plex,samba and the good stuff
[22:31] <Wolf_Y_> everything works like a charm, but plex on tv can not find the server
[22:31] <Wolf_Y_> so i bridged the connection between my host adapter and hyper-v one (im using hyper v manager,virtual ubuntu server)
[22:32] <Wolf_Y_> my ip on ubuntu is 192.x.x.x.
[22:32] <Wolf_Y_> same as host, so netplan again and i made it static
[22:32] <Wolf_Y_> now tv can see plex
[22:32] <Wolf_Y_> but plex cant see folder
[22:32] <Wolf_Y_> and i can not samba share anything
[22:32] <Wolf_Y_> what do you think is the issues
[22:34] <Wolf_Y_> if you are not clear with the set-up shoot ill give my best to explain more in depth
[22:35] <sarnold> so .. you've got a hyper-v hypervisor, and are doing bridged networking to your LAN?
[22:36] <sarnold> you said you assigned the ubuntu VM a static address -- do you have a DHCP server on the lan? perhaps your internet router / firewall?
[22:37] <arooni> anyway to find out where php-fpm7 logs to ? (using nginx if that matters)
[22:37] <sarnold> arooni: lsof probably shows an open filedescriptor
[22:38] <arooni> weird; that process is totally running but lsof shows "status error on php-fpm no such file or directory"
[22:39] <nacc> ahasenack: sorry, was afk
[22:39] <nacc> cliluw: seems like an odd question -- do whatever you want?
[22:40] <cliluw> nacc: I'm just worried if I use component "main" instead of component "universe", maybe it could break something down the road.
[22:41] <nacc> cliluw: those are for the purposes of the archives themselves, really -- apt just follows the files in the archives it's told to
[22:41] <sarnold> I think you can even set up your own sources without having the main / universe / etc level at all
[22:42] <nacc> yes, it's based upon the Packages file contents, I'm pretty sure
[22:42] <nacc> which for the Ubuntu archives refer to the components in the file paths
[22:42] <cliluw> sarnold: Is it possible to get rid of the distribution level too, like "xenial" or "zesty"? I'm pretty sure my packages will work across distributions so I don't see why I need that level.
[22:42] <nacc> cliluw: ... you don't usually do that
[22:42] <nacc> cliluw: as your dependencies come from those distributions too
[22:42] <nacc> cliluw: i mean, very few things 'just work' across releases like that :)
[22:43] <cliluw> nacc: My package is a Go binary so it's statically linked.
[22:43] <nacc> cliluw: oh
[22:43] <nacc> cliluw: why is it a deb at all then?
[22:43] <sarnold> heh :)
[22:43] <nacc> cliluw: i mean if it's a statically linked binary, why do you need a package?
[22:44] <sarnold> the pre/post inst/rm scripts might be nice
[22:44] <cliluw> nacc: We prefer to deploy everything through Debian packages. It gives you other niceties like systemd service registration, etc.
[22:45] <nacc> cliluw: so it's not *just* a static go binary? it's also a systemd unit?
[22:45] <nacc> cliluw: that's all you needed to say :)
[22:45] <Wolf_Y_> arooni: sorry, was afk. is there a way we can connect so i can show you my set-up? i can try to explain more in depth if needed, but my english is non-native so i'm afraid i'll get lost or confuse you. the thing i had in mind for connecting is skype!
[22:45] <nacc> cliluw: tbh, sounds like it should be a snap, but what do i know :)
[22:45] <nacc> cliluw: in any case, you might be right that it doesn't need the release in the path
[22:45] <arooni> Wolf_Y_: i appreciate it!  but i think i have it figured out now :)
[22:46] <nacc> cliluw: but i'm not sure how apt handles those URLs in those cases (given the <release-pocket> is part of the specification in the sources.list
[22:46] <nacc> cliluw: it seems easiest to just leave it, and worst-case, symlink the file around
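As a side note, apt does support suite-less layouts via its flat repository format, where the sources.list entry ends with a relative path instead of a suite and components. A hypothetical entry (the URL and keyring path are examples, not from the log):

```
deb [signed-by=/usr/share/keyrings/example.gpg] https://repo.example.com/apt ./
```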
[22:48] <Wolf_Y_> arooni: i thought we were talking about the issues i'm experiencing
[22:49] <Wolf_Y_> compdoc: still there ?
[22:49] <arooni> ah i'm a noob-ish sysadmin at best :P  still learning the ropes
[23:00] <compdoc> Wolf_Y
[23:12] <Wolf_Y_> compdoc: im here, are you here ?
[23:20] <compdoc> Im in and out. Im configuring a new server
[23:21] <Wolf_Y_> compdoc:  is there a way in which we can talk or something, dis, skype anything....
[23:21] <Wolf_Y_> compdoc: i have some questions and issues i would like to share, and maybe we could figure them out together if you have time of course
[23:22] <compdoc> best to just list your problems here, then others can help
[23:24] <Wolf_Y_> compdoc: i did, and im also on #ubuntu at the same time
[23:24] <nacc> Wolf_Y_: it's preferred not to crosspost as well
[23:25] <Wolf_Y_> compdoc:  but the thing i would like the most is to show it to someone
[23:25] <Wolf_Y_> nacc:  oh did not know...sorry
[23:25] <Wolf_Y_> compdoc:  would you be interested to talk ?
[23:25] <compdoc> cant, busy
[23:28] <Wolf_Y_> compdoc:  alright, maybe some other time then
[23:28] <Wolf_Y_> if anyone else is interested in listening to my strange problems, ill be here
[23:31] <mojtaba> Hello, I am using this command to sync directories when a particular computer turns on. Do you know how can I re-run this command automatically, after it ends, due to disappearance of that computer or network error?
[23:31] <mojtaba> until nmap -sn 192.168.2.0/24 | grep 2.17; do sleep 300; done; rsync --progress --partial -avz -e "ssh -i /home/.ssh/ns" ns@192.168.2.17:"/Users/nafis/Masters/2016/" .
[23:32] <sarnold> mojtaba: probably just move the '; done' to the end of the command
[23:33] <mojtaba> sarnold: thanks
[23:33] <nacc> mojtaba: i think you want to rethink it more than that, even
[23:33] <nacc> mojtaba: since if it's gone away, you need to redo the until as well, afaict
[23:33] <mojtaba> nacc: hmm, how?
[23:34] <mojtaba> nacc: yes
[23:34] <nacc> mojtaba: so it's insufficient to just move the done
[23:34] <sarnold> oh :(
[23:34] <mojtaba> nacc: should I add another until?
[23:34] <nacc> mojtaba: you really want to put a second until in the loop
[23:34] <nacc> maybe, at least
[23:34] <nacc> don't start the loop until the server is available
[23:34] <nacc> wait 300s in that case
[23:34] <nacc> try to rsync
[23:34] <nacc> if rsync fails (use error checking)
[23:35] <nacc> retry the whole shebang
[23:35] <nacc> if rsync succeeds, then exit
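nacc's outline as a hedged shell sketch, reusing the host, subnet, and paths from mojtaba's one-liner. The loop body is wrapped in functions so nothing runs until `sync_loop` is actually invoked:

```shell
#!/bin/sh
# Helpers taken from the one-liner: a host liveness check and the rsync.
host_up() { nmap -sn 192.168.2.0/24 | grep -q 2.17; }
do_sync() {
    rsync --progress --partial -avz -e "ssh -i /home/.ssh/ns" \
        ns@192.168.2.17:"/Users/nafis/Masters/2016/" .
}

sync_loop() {
    while true; do
        until host_up; do sleep 300; done   # wait for the machine to appear
        if do_sync; then break; fi          # stop only when rsync succeeds
        sleep 300                           # it vanished mid-transfer; retry
    done
}

# Uncomment to actually run it:
# sync_loop
```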
[23:35] <mojtaba> nacc: can it be a one liner, like the one that I had?
[23:35] <sarnold> maybe it'd be easier to just cronjob the thing with 'run-one' every half hour or something? skip the connectivity checks..
[23:35] <nacc> mojtaba: anything *can* be a one-liner
[23:36] <nacc> mojtaba: but it's not sensible to make long one-liners and i have no idea why you would except to make your own maintenance harder
[23:36] <nacc> mojtaba: are you competing in some competition?
[23:36] <mojtaba> nacc: It is just one time use.
[23:36] <nacc> mojtaba: nothing is ever just one-time use
[23:36] <nacc> you've already used it at least twice, once when it worked, and now debugging it when it didn't
[23:36] <nacc> so do it correctly :)
[23:37] <mojtaba> nacc: thanks, for the advice.
[23:37] <nacc> mojtaba: in any case, you probably want ##bash, as this has little to nothing to do with ubuntu server :)
[23:37] <mojtaba> nacc: I see. Thanks a lot