[00:00] <hehehe> so nice
[00:00] <hehehe> crystal clear text
[00:00] <hehehe> everything is logical minimal effective
[00:03] <sarnold> hehehe: you shouldn't use tcp sockets; that again allows all processes on the local machine to run arbitrary php in the context of the FPM process
[00:04] <hehehe> i use file socket
[00:04] <sarnold> good keep it that way :D
[00:04] <hehehe> sarnold:   but where is the mistake in the solution! :D
[00:04] <hehehe> haha
[00:05] <hehehe> and why would tcp sockets allow any process to run arb php?
[00:05] <hehehe> is there a diagram for it?
[00:05] <hehehe> to visualise
[00:05] <sarnold> there's no access controls on tcp sockets
[00:05] <sarnold> unix domain sockets do have access controls
[00:06] <hehehe> ok such as file permissions
[00:06] <sarnold> so if you wanted to constrain access to the tcp sockets you'd need to add that yourself via iptables
[00:06] <hehehe> so user is www-data and nginx user is www-data
[00:06] <hehehe> but say if someone hijacks a local process via a bug
[00:07] <hehehe> can you explain more
[00:07] <hehehe> what happens then?
[00:08] <hehehe> https://dt-cdn.net/wp-content/uploads/2014/10/FirstFastCGIrequest.png
[00:08] <hehehe> niceee
[00:08] <sarnold> if a local process is hijacked then the hijacker can perform all operations that the process is allowed to do: read/write to open file descriptors, filesystem access, all syscalls with capabilities of the process, etc..
[00:08] <sarnold> and if that allows connect(localhost, 9000) kinds of operations, then it can send essentially arbitrary php to the fpm system
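The socket-ownership controls sarnold is describing live in the php-fpm pool config. A minimal sketch, assuming a Debian-style php7.0-fpm pool file (the path and user names are illustrative, not taken from hehehe's actual setup):

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (illustrative path)
; Listen on a unix domain socket instead of TCP port 9000, so ordinary
; filesystem permissions decide who may submit FastCGI requests.
listen = /run/php/php7.0-fpm.sock

; Restrict the socket to the web server user: other local processes
; cannot connect() to it, unlike a TCP socket bound on localhost.
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
```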
[00:09] <hehehe> emm
[00:09] <hehehe> what kind of local process can do all that
[00:09] <hehehe> it's kinda tricky to hijack such a process
[00:10] <hehehe> ll /run/php/ | grep php
[00:10] <hehehe> -rw-r--r--  1 root     root       5 Jun 13 01:40 php7.0-fpm.pid
[00:11] <hehehe> maybe socket died?
[00:14] <hehehe> sarnold: issue seems mostly with that dude's idea to give file ownership to root:www-data
[00:14] <hehehe> it then tries to serve html
[00:15] <hehehe> which indicates it can't communicate with php
[00:15] <hehehe> or let's say nginx www-data sends a request to php-fpm and then on the way back something happens
[00:41] <mwhudson> coreycb: seeing as you Touched It Last: https://launchpad.net/ubuntu/+source/python-pika-pool/0.1.3-1ubuntu2
[05:57] <jamespage> nacc: thanks for your work on this - much appreciated
[05:57] <jamespage> nacc: I'd go with the upstream test suites; the earlier this gets landed into artful, the more general testing it will get
[07:41] <patsToms> can I concat two repositories when using debmirror to make local mirror?
[10:10] <Aison> is there cacti  1.1.10 available for ubuntu xenial?
[12:16] <frickler> jamespage: do you have a PPA with horizon 10.0.4 somewhere (uca newton)? I tried building locally, but seeing issues with the compress jobs when installing
[12:25] <jamespage> frickler: lemme see - I think I deleted my testing ppa once I uploaded for SRU team review
[12:26] <jamespage> hmm yeah I did tidy that one up
[12:26] <jamespage> frickler: I can shove it somewhere for you if you need it
[12:28] <jamespage> frickler: ppa:james-page/newton
[12:28] <jamespage> frickler: its a trickier one to build from source due to the multiple orig tarballs thingmy
[12:29] <frickler> jamespage: yeah, I know, its the only package I'm needing sbuild for, but the result still doesn't work for me
[12:34] <hehehe> hello server gangsters
[12:34] <hehehe> :)
[12:34] <coreycb> mwhudson: +1 thanks for letting me know
[12:36] <hehehe> I implemented HSTS
[12:36] <hehehe> however nginx also allows to use  return 301 https://$host$request_uri;
[12:36] <hehehe> to send all requests to https
[12:37] <hehehe> so whats the advantage of hsts in such case?
[12:37] <hehehe> hi zhhuabj  :)
[12:38] <lordievader> hehehe: The advantage is that clients ask themselves for https, instead of the server telling them they should go there.
[12:39] <hehehe> lordievader:  hmm the dude who helped me with nginx told me to use both
[12:39] <hehehe> does it make sense?
[12:39] <lordievader> hehehe: If you have a man-in-the-middle pretending to be your website, hsts helps, your approach does not help in that case.
[12:39] <lordievader> Yes, it makes sense to use both.
[12:40] <hehehe> ok I see - first request is http and then it goes to https, unless strict transport makes the browser reject all http, right?
[12:40] <hehehe> and also what are OWASP Secure Headers Project for? :)
[12:42] <lordievader> How it goes in this scenario, a browser reaches your website, the server tells the client to go to https. Then hsts tells the browser to only reach this website over https in the future.
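The combination lordievader and hehehe land on (redirect plus HSTS) fits in two nginx server blocks. A hedged sketch; the server name and certificate paths are placeholders, not from the actual config being discussed:

```nginx
# Port 80: plain-HTTP requests get a permanent redirect to https.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Port 443: once the client is on https, the HSTS header tells the
# browser to go straight to https on future visits, skipping the
# redirect a man-in-the-middle could otherwise intercept.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```

Note this is exactly why lordievader warns that https must keep working: once clients have cached the header, they will refuse plain http for the whole max-age period.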
[12:44] <hehehe> oki
[12:44] <hehehe> so if they're using the site for the first time, max security comes when the site is on the distributed preload list
[12:45] <hehehe> lordievader: any drawbacks with using hsts?
[12:45] <hehehe> potential issues? :D
[12:46] <lordievader> You have to make sure your https works. If it is broken you cannot simply switch back to http.
[12:46] <lordievader> Ssllabs has some nice tests for this sort of stuff.
[12:46] <lordievader> https://www.ssllabs.com/
[12:48] <hehehe> cool
[12:48] <hehehe> yes https works here
[12:50] <hehehe> I read some nginx howto and its like blank for me :D
[12:50] <hehehe> even after re reading
[12:52] <jamespage> frickler: did you see
[12:52] <jamespage> CommandError: An error occurred during rendering /usr/share/openstack-dashboard/openstack_dashboard/templates/horizon/_scripts.html: '\"../bower_components/respond/dest/respond.min.js\"' isn't accessible via COMPRESS_URL ('/horizon/static/') and can't be compressed
[12:55] <hehehe> $fastcgi_script_name
[12:55] <hehehe>     This variable is equal to the URI request or, if the URI concludes with a forward slash, then the URI request plus the name of the index file given by fastcgi_index - makes sense but what if a user won't use a concluding slash?
[12:55] <hehehe> then the site won't serve say index.php?
[12:55] <frickler> jamespage: exactly
[12:56] <frickler> while upgrading from 10.0.3 or on a fresh install with 10.0.4 directly
[12:58] <lordievader> hehehe: I don't understand the question.
[13:00] <hehehe> ok
[13:01] <hehehe> lordievader: if u read the info I pasted, it seems the fastcgi param fastcgi_script_name serves the index file in a dir only if the directory is /dir/ and not /dir?
[13:02] <lordievader> It does read that way, yes.
[13:03] <jamespage> frickler: looking now but I suspect its some sort of transient dep issue with the way the xstatic bundle is created
[13:04] <hehehe> lordievader: but that is an issue as many people will type www.xexy.com and not www.xexy.com/
[13:04] <hehehe> how to solve it? :)
[13:05] <lordievader> hehehe: Have you verified if this is actually the case? Wouldn't be surprised if the wording is just a tad confusing and no problem is actually occurring.
[13:06] <hehehe> lordievader: well somehow it works without the trailing / but then it kinda contradicts the wording
[13:06] <lordievader> hehehe: Submit a patch ;)
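One common explanation for "it works without the trailing slash" is that the index handling happens before fastcgi_script_name is evaluated. A sketch of a typical setup (a guess at the usual pattern, not hehehe's actual config; the socket path is illustrative):

```nginx
# For a request to /dir, try_files falls through to the "$uri/" form,
# and the index directive then triggers an internal redirect to
# /dir/index.php -- so fastcgi_script_name already ends in index.php
# by the time the PHP location runs.
location / {
    index index.php;
    try_files $uri $uri/ =404;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
```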
[13:06] <hehehe> also I discovered why I failed to generate lets encrypt cert before on 1 box
[13:06] <hehehe> I had an nginx setting to disallow access to all . files
[13:06] <hehehe> :D
[13:07] <hehehe> location ~ /\. {
[13:07] <hehehe> deny all;
[13:07] <hehehe> and I did not add location ~ /\.well-known\/acme-challenge {
[13:07] <hehehe> allow all;
[13:07] <hehehe> }
[13:07] <hehehe> :)))
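Put together, the two location blocks hehehe pasted look roughly like this; a sketch assuming a separate ACME webroot (the /var/www/letsencrypt path is an assumption):

```nginx
# Allow Let's Encrypt HTTP-01 challenges. The ^~ prefix modifier makes
# this win over regex locations like the dot-file deny below, so block
# ordering stops mattering.
location ^~ /.well-known/acme-challenge/ {
    allow all;
    root /var/www/letsencrypt;
}

# Deny every other dotfile/dot-directory (.git, .env, .htaccess, ...).
location ~ /\. {
    deny all;
}
```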
[13:11] <jamespage> frickler: building a revised version in ppa:james-page/newton2
[13:13] <frickler> jamespage: so you did see that error with your package, too? that would imply that my building foo isn't quite as bad as I'm thinking ;)
[13:19] <jamespage> frickler: I did
[13:19] <jamespage> frickler: the refresh-xstatic helper does not limit upper bounds when creating the xstatic tarball.
[13:20] <jamespage> frickler: so I suspect something broke in the xstatic depends versions/deps
[13:24] <Aison> why are my lvm volumes not activated after reboot? this is new here since systemd
[13:25] <Aison> i'm not sure how to do it right
[13:25] <Aison> is there some special systemd service I have to enable to use lvm2?
[13:30] <ronator> Aison: did you upgrade?
[13:33] <Aison> ronator, I just try to upgrade
[13:33] <ronator> usually, an upgrade should convert start scripts to systemd - let me check here ...
[13:35] <ronator> Aison: this won't really help but you are not alone: https://serverfault.com/questions/199185/logical-volumes-are-inactive-at-boot-time#200580
[13:40] <jamespage> frickler: the one in newton2 works OK - I basically copied forward the 10.0.3 orig-static.tar.gz
[13:40] <jamespage> 10.0.4-0ubuntu1 was rejected from the UNAPPROVED queue - will upload with the older renamed tarball to avoid the break
[13:41] <jamespage> frickler: need to update the refresh process to use upper-constraints
[13:42] <ronator> Aison: Do you have a "lvm2-monitor.service" anywhere on your system?  I have on ubuntu server 16.04.2 these in /lib/systemd/system: lvm2-lvmetad.service,  lvm2-lvmetad.socket,  lvm2-lvmpolld.service, lvm2-lvmpolld.socket, lvm2-monitor.service, lvm2-pvscan@.service, lvm2.service
[13:43] <ronator> Aison: if you dont, this could be a reason but I do not know your whole system history :D
[13:43] <ronator> Aison: maybe look also for help in chan #systemd?
[13:47] <frickler> jamespage: cool, thx
[14:01] <Aison> ronator, just checking...
[14:03] <Aison> ronator, lvm2-monitor.service was already enabled
[14:03] <frickler> jamespage: confirmed the newton2 build works fine for me, thx again
[14:04] <ronator> "systemd-analyze" is a great command, maybe read about it and see if you can find the problem ?!?
[14:04] <ronator> @ Aison
[14:04] <Aison> maybe the problem is, that /var is on lvm device?
[14:05] <ronator> well, if thats the case, there should be logs about it, like in "dmesg" or syslog
[14:05] <ronator> is the system booting fast?
[14:06] <ronator> considerable?
[14:06] <ronator> 'systemd-analyze blame' can show you where systemd spends most time on while booting; may be of help ...
[14:08] <Aison> ronator, no, it hangs at mounting the lvm devices for 90seconds
[14:08] <Aison> then I can enter the admin password
[14:10] <Aison> ronator, in #systemd they tell me to mount /var in initrd
[14:10] <Aison> how to do that with ubuntu?!? never changed my initrd
[14:11] <ronator> I read it ...
[14:12] <Aison> I thought in initrd it is also done by systemd
[14:13] <ronator> I am not sure how he/she meant it, that's why I kept silent to see if I can also learn sth. :D
[14:15] <ronator> Aison: let's see if he gives an example. should be possible to apply that to ubuntu similarly
[14:18] <fallentree> initrd is required only to host tools required to mount root. why would you want to mount var in it?
[14:26] <macskay> hi guys, im investigating an issue on my server. i have the dovecot service running and it kept telling me that a user i created tries to connect every minute at around the same second-value. i therefore did a "netstat -nputw | grep :25" which shows me a TIME_WAIT: "tcp        0      0 127.0.0.1:49190         127.0.0.1:25            TIME_WAIT   -" is there a way to determine what process belonged to 49190 prior to the
[14:26] <macskay> TIME_WAIT?
[14:29] <Aison> I also think my ssd is broken....
[14:29] <ronator> fallentree: that was a suggestion from #systemd due to the /var LVM device not being mounted after reboot
[14:29] <ronator> Aison: you should check that first :)
[14:29] <hehehe> hehe
[14:30] <fallentree> it's a stupid suggestion (and no wonder it comes from systemd).
[14:31] <fallentree> macskay: probably not, btw dovecot has nothing to do with port 25
[14:39] <hehehe> hi fallentree  :)
[14:39] <hehehe> fallentree:  I nearly made your suggestion work, but something is yet to work :) I think it may work soon
[14:42] <hehehe> fallentree: if I add root to www-data group I also need to change socket ownership right?
[14:42] <hehehe> or not
[14:42] <hehehe> nah
[14:42] <hehehe> I am just figuring out why its yet to work
[14:45] <fallentree> why would you add root to www-data group?
[14:45] <fallentree> root is omnipotent you don't need to add it to a group
[14:45] <hehehe> fallentree: thats what u said yesterday
[14:45] <hehehe> I was also wondering wtf is that
[14:45] <hehehe> :D
[14:46] <fallentree> I never said add root to www-data
[14:48] <fallentree> I may have said you chown root:www-data <dirs>/<files>, so that 750 on dirs and 640 on files can work, assuming nginx runs as www-data.
[14:48] <hehehe>  the better setup is where the files are owned by root, in group www-data
[14:48] <fallentree> I also may've said if you needed sftp access, you add nginx user and php-fpm user into the sftp user's group
[14:48] <hehehe> yes I misunderstood
[14:49] <hehehe> anyway I done chown
[14:49] <hehehe> and
[14:49] <hehehe> https://paste.ngx.cc/9d
[14:50] <fallentree> hehehe: so, is EVERY component in the path /home/op/gd.com/*   readable to the nginx user?
[14:51] <hehehe> component means file?
[14:51] <fallentree> every element of the path
[14:51] <fallentree> home, op, gd.com, anything under gd.com
[14:52] <hehehe> hmm
[14:52] <hehehe> I dont know
[14:52] <hehehe> since after chown root:www-data, www-data doesn't own the files
[14:53] <hehehe> according to new permissions it would have to be a group member
[14:53] <hehehe> :)
[14:53] <hehehe> going to add it to a group
[14:54] <fallentree> right, and that's why g+r is required, so the group (in this case www-data, for root:www-data owned paths) can read them
[14:54] <fallentree> adding WHAT to WHICH group?
[14:54] <hehehe>  g +r means?
[14:54] <fallentree> readable to group
[14:54] <fallentree> (check the chmod manpage)
[14:54] <hehehe> ok 1 moment
[14:55] <fallentree> btw if "op" is some user and /home/op is its home dir, then you have a problem there
[14:56] <fallentree> first, having root:www-data owned files in op's home makes zero sense
[14:56] <hehehe> its not a user
[14:56] <hehehe> its a simple directory
[14:56] <fallentree> so what's inside /home/op except gd.com ?
[14:56] <hehehe> nothing
[14:56] <hehehe> just gd.com
[14:57] <hehehe> in fact I will double check now
[14:57] <hehehe> yes thats it
[14:57] <fallentree> right, so, chmod 755 /home,    chmod 755 /home/op,     chown -R root:www-data /home/op/gd.com
[14:58] <fallentree> and use whatever method you're comfortable with to set dirs to 750 and files to 640, under (and including) /home/op/gd.com
[14:58] <fallentree> like,    find /home/op/gd.com/ -type f -exec chmod 640 {} \;
[14:58] <hehehe> yes I done find stuff
[14:58] <fallentree> and find /home/op/gd.com/ -type d -exec chmod 750 {} \;
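fallentree's recipe can be bundled into one script. A sketch, run here against a scratch directory so it works as an ordinary user; for the real /home/op/gd.com you would run it as root and additionally `chown -R root:www-data` the tree, which is omitted to keep the sketch runnable:

```shell
#!/bin/sh
# Demonstrate the 750-dirs / 640-files layout on a throwaway tree.
set -eu

SITE=$(mktemp -d)/gd.com          # stand-in for /home/op/gd.com
mkdir -p "$SITE/system/storage"
touch "$SITE/index.php" "$SITE/system/config.php"

# dirs: owner rwx, group rx -- the www-data group can traverse them
find "$SITE" -type d -exec chmod 750 {} \;
# files: owner rw, group r -- group-readable, world gets nothing
find "$SITE" -type f -exec chmod 640 {} \;

stat -c '%a %n' "$SITE" "$SITE/index.php"
```

The `stat` call at the end prints `750` for the directory and `640` for the file, confirming the modes took effect.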
[14:59] <hehehe> however chmod 755 /home  why?
[14:59] <hehehe> there may be other users' stuff, it's not a problem?
[14:59] <fallentree> because it's the default directory for user accounts and in itself should be accessible to all users
[14:59] <hehehe> oki
[15:00] <fallentree>  /home should be world accessible, but individual paths in home, assuming user home dirs, should not
[15:00] <fallentree> but since you said op is not a user... well... you're going against standards. better put root owned sites under /var/www
[15:00] <hehehe> yes later I can do that
[15:01] <hehehe> my friend said if I put in home it can fool some crackers
[15:01] <hehehe> making it harder to hack lol
[15:01] <fallentree> that's stupid
[15:01] <hehehe> yes by now it seems stupid
[15:03] <hehehe> ok nearly there
[15:03] <hehehe> chown -R root:www-data /home/op/gd.com - this command permits what?
[15:04] <hehehe> it simply changes ownership
[15:04] <hehehe> ok
[15:04] <fallentree> it recursively sets ownership to root:www-data to all files and folders under (and including) gd.com
[15:04] <fallentree> check the manpage
[15:04] <fallentree> `man chown`
[15:04] <fallentree> the manuals are your best friends.
[15:05] <hehehe> ok so only thing I did not do before was to set 755 to op dir
[15:05] <hehehe> I set it to 750
[15:05] <hehehe> that caused issue right?
[15:05] <fallentree> depends on who owned it
[15:06] <hehehe> it was owned by root:www-data
[15:06] <fallentree> that's accessible to www-data group
[15:06] <hehehe> fallentree: that is clear, but how is the nginx www-data user accessing it? is it a member of the www-data group by default?
[15:07] <hehehe> I am getting some hackers guide to servers soon :D
[15:08] <hehehe> also system/storage/modification/ is not writable. opencart wants some directories writable but by whom?
[15:08] <hehehe> group?
[15:08] <fallentree> use `id www-data` to check that.   `man id` for more info on the command.
[15:08] <hehehe> so simple
[15:09] <hehehe> awesome
[15:09] <hehehe> :)
[15:14] <hehehe> fallentree: also for extra security set config.php to 440?
[15:14] <hehehe> or no need since if root is hacked it wont do anything anyway
[15:14] <hehehe> :)
[15:14] <hehehe> so 640 is as secure
[15:16] <Ussat> if root is hacked, all bets are off
[15:16] <hehehe> ye
[15:17] <hehehe> dirs that the app wants to be writable have to be set to 770?
[15:17] <hehehe> read write execute
[15:17] <fallentree> it's another layer of security.
[15:17] <hehehe> fallentree: what is?
[15:17] <fallentree> chmod 440 instead of 640
[15:17] <fallentree> that is, u-w
[15:17] <hehehe> fallentree: what makes it extra layer?
[15:18] <fallentree> that root can't write it without chmodding it first
[15:18] <fallentree> there are classes of RCE which can try to append/modify a file or mmap it, but can't execute a chmod
[15:18] <hehehe> RCE?
[15:19] <fallentree> so every protection counts, every little detail is important. if you can 440, then do it.
[15:19] <fallentree> Remote code Execution
[15:19] <hehehe> yes I can do it
[15:19] <hehehe> fallentree: also opencart wants some dirs writable by group, is that normal safe practice?
[15:20] <hehehe> I think yes its for cache and images etc
[15:20] <fallentree> sure, file uploads for example
[15:20] <fallentree> yeah cache and other stuff generated by php
[15:20] <fallentree> but those paths are most frequently abused to upload and execute PHP code
[15:21] <hehehe> well what can be done to null such attempts?
[15:21] <fallentree> best thing would be to be extra sure that the web server won't call the PHP handler from those paths
[15:23] <hehehe> that can be done in php config file right?
[15:23] <fallentree> no, in nginx
[15:23] <hehehe> do you know how to do it?
[15:24] <fallentree> it depends on the directory structure and many other factors
[15:24] <fallentree> I have no idea what opencart has
[15:25] <hehehe> cool
[15:25] <hehehe> I am also installing metasploit
[15:25] <hehehe> to check site for common holes
[15:25] <hehehe> if any
[15:25] <fallentree> nice. I have to go now, bbl
[15:26] <hehehe> cool
[15:26] <hehehe> overall folks its better to hire sysadmin from same country and log all stuff on server?
[15:27] <hehehe> cause some hire remote sysadmins from say bangladesh - if there are arguments etc he can simply screw the server
[15:27] <hehehe> cause who is going to go there to locate him etc :D
[15:49] <nacc> jamespage: ack, i'll just check the manpages and stuff and then upload today, probably
[15:51] <ChmEarl> any advice to upgrade to pbuilder 0.228.7 on Xenial?
[15:54] <ChmEarl> maybe, backport from Zesty?
[15:57] <smoser> nacc, ping.
[15:57] <smoser> http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[15:57] <smoser> can you explain to me why open-iscsi would be stuck in proposed ?
[15:57] <nacc> smoser: pong
[15:57] <smoser> oh... no -udebs. hmmm.
[15:57] <smoser> libisns0 is available
[15:57] <smoser> at needed version, but no -udeb i guess ?
[15:57] <nacc> smoser: there's a MIR filed
[15:57] <nacc> smoser: it's c-m
[15:58] <nacc> smoser: LP: #1689963
[16:00] <smoser> nacc, thanks.
[16:00] <nacc> smoser: np
[16:37] <jamespage> nacc: great thankyou!
[16:37] <nacc> jamespage: np
[16:53] <hehehe> seems nice
[16:53] <hehehe> who here used it?
[16:53] <hehehe> https://book.serversforhackers.com/ :)
[17:10] <ChmEarl> pbuilder backport for Xenial: https://paste.debian.net/plain/971347
[17:11] <nacc> ChmEarl: wrong channel? ...
[17:25] <zerocool443> hi
[17:49] <nacc> jamespage: the biggest thing from the updated celery that will probably get hit is that many of the commands (celeryd, celerybeat, celeryd-multi) are gone. Replaced by `celery` subcommands (worker, beat and multi respectively) -- not sure if that matters for openstack itself or not
[18:11] <The_Tick> I'm trying to figure out how in the world to change both the hostname and fqdn on my ubuntu server box
[18:12] <The_Tick> I'm using 14.04.5 LTS, /etc/hosts modification doesn't seem to do a thing, hostnamectl set-hostname doesn't seem to have a way to set the fqdn
[18:13] <The_Tick> I'm finding a lot of random on google but nothing else, any help is appreciated
[18:13] <nacc> The_Tick: /etc/hosts is used for name resolution, not setting the hostname. (see `man hosts`)
[18:14] <nacc> The_Tick: `hostnamectl` (I thought) is a systemd thing
[18:14] <dpb1> nacc: having your host wrong there is problematic though. (/etc/hosts)
[18:14] <nacc> dpb1: absolutely
[18:14] <nacc> dpb1: but changing values there won't change your hostname
[18:14] <dpb1> +1
[18:14] <nacc> The_Tick: the underlying file is /etc/hostname, iirc
[18:14] <nacc> The_Tick: `man 1 hostname` may help
[18:15] <The_Tick> oof just got it
[18:15] <The_Tick> hostnamectl and /etc/cloud/templates/hosts.debian.tmpl
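For reference, the pieces The_Tick ended up touching look roughly like this (the host and domain names are placeholders): /etc/hostname holds the short name, /etc/hosts maps the FQDN, and on cloud images the cloud-init template must be edited too, or it regenerates /etc/hosts at boot:

```text
# /etc/hostname
myhost

# /etc/hosts -- FQDN first, then the short alias
127.0.0.1   localhost
127.0.1.1   myhost.example.com myhost

# /etc/cloud/templates/hosts.debian.tmpl follows the same layout,
# using cloud-init's template variables for the fqdn and hostname.
```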
[18:23] <Aison> i'm still stuck at initramfs that should activate lvm volumes
[18:24] <Aison> I dont get it ;(
[18:24] <Aison> since zesty, no lvm is activated on my machine
[18:24] <Aison> I always have to do it manually
[18:24] <Aison> is that a problem of my lvm.conf or initramfs?
[18:25] <nacc> Aison: when you get dropped the shell, are you able to debug why it failed?
[18:25] <nacc> e.g., systemctl status lvm2 or whatever
[18:25] <Aison> lvm2.service is masked
[18:25] <Aison> ;)
[18:25] <ChmEarl> Aison, check for a hook: /usr/share/initramfs-tools/hooks/LVM
[18:26] <nacc> Aison: and how do you activate it?
[18:26] <jamespage> nacc: celery is not actually used by openstack; they just share a common dependency in kombu and one blocks the other with proposed migrations
[18:26] <Aison> lvchange -ay alv0
[18:26] <nacc> jamespage: ah ok
[18:26] <Aison> this way all logical volumes of logical group alv0 are activated
[18:27] <db`> Hi nPeople!
[18:27] <db`> How do I verify DMARC record for a subdomain?
[18:27] <db`> It always fails when I mail from a subdomain. SPF passes, since I added the IP to SPF record already.
[18:28] <db`> I also added a dmarc record for the subdomain, still it fails.
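db`'s subdomain case usually comes down to DNS record placement: receivers look up _dmarc.<subdomain> and, if it is absent, fall back to the organizational domain's record, whose sp= tag (if present) governs subdomains. A sketch of the two zone-file records (domain, policies, and mailbox are placeholders):

```text
; DMARC record for the organizational domain; sp= sets the policy
; applied to subdomains that lack their own record.
_dmarc.example.com.      IN TXT "v=DMARC1; p=quarantine; sp=none; rua=mailto:dmarc@example.com"

; Explicit record for one subdomain, overriding the fallback.
_dmarc.mail.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
```

Also worth checking: DMARC needs identifier alignment, not just a raw SPF pass; the SPF-validated envelope-from domain must align with the From: header domain, or the message still fails DMARC.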
[18:31] <nacc> jamespage: just getting the autopkgtests to pass and i should be able to upload
[18:42] <IShavedForThis_> I can't seem to get my vpn tunneled transmission to work anymore and I'm not sure what broke it, could anybody help?
[18:42] <IShavedForThis_> https://www.htpcguides.com/force-torrent-traffic-vpn-split-tunnel-debian-8-ubuntu-16-04/
[18:42] <IShavedForThis_> that was the guide I used and it worked for a few months up until about last week
[18:44] <Aison> when I use auto_activation_volume_list = [ "alv0" ]
[18:44] <Aison> then it is auto activated
[18:44] <Aison> (though an empty auto_activation_volume_list should auto activate all volumes...)
[18:45] <Aison> but mounting still doesn't work, since the activation is too late ;)
[18:50] <ChmEarl> Aison, sudo udevadm info --name=<PV> | grep SYSTEMD_WANTS  <-- I think this ENV var is missing on Zesty
[18:50] <ChmEarl> ^^ same thing on Stretch
[18:51] <Aison> ChmEarl, systemd_wants is not defined
[18:52] <ChmEarl> this is an old bug filed in Sid 2 years ago
[18:52] <ChmEarl> the lvm2-pvscan@.service is broken as a result
[18:53] <Aison> an is there a workaround?
[18:54] <ChmEarl> yes, you copy the 69*rules to /etc/udev/rules.d/69-lvm-metad.rules and patch it
[18:55] <ahasenack> nacc: hi, question
[18:55] <ahasenack> nacc: if https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1668940 is sru'ed, it will introduce a libcephs1 dependency into samba-vfs-modules. That's generally frowned upon?
[18:56] <ahasenack> it's a new feature per se. The "bug" is that we included the manpage of the ceph module, just not the module itself. Another way to fix it would be to remove the manpage ;)
[18:56] <Aison> ChmEarl, where do I get these files?
[18:56] <Aison> and the patch? :P
[18:57] <ChmEarl> Aison basic idea is to add in the 3 ENV vars: https://paste.debian.net/plain/971356
[18:57] <ChmEarl> Aison that patch is quite old so the context might be changed
[18:58] <Aison> ok
[18:58] <ChmEarl> Aison test with:  sudo udevadm info --name=<PV> | grep SYSTEMD_WANTS
[18:58] <ChmEarl> PV is the physical volume with your VG's
[18:58] <Aison> yes, I already tested
[18:59] <Aison> there is only SYSTEMD_READY=1
[19:00] <ChmEarl> Aison,  the patch is invisible
[19:00] <ChmEarl> patch it, test again
[19:02] <ChmEarl> find the 69*rules under /usr/lib, copy it to /etc/udev*, patch it
[19:02] <ChmEarl> Aison I did this in Stretch & Zesty
[19:03] <ChmEarl> Aison, original bug was found by M Biebl in #debian-systemd on OFTC
[19:05] <ChmEarl> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=791869#209
[19:10] <Aison> I guess I have to do update-initramfs -u ?
[19:11] <Aison> bug report with some spam at the end
[19:12] <ChmEarl> Aison, I hesitate to post that BTS since its 2 years ago
[19:13] <Aison> hmm, still not working
[19:13] <Aison> looks like this udev is simply ignored
[19:18] <ChmEarl> Aison, lvmetad.socket  enabled,  lvm2-lvmpolld.socket    enabled,  lvm2-monitor enabled
[19:19] <ChmEarl> ^^ only these are enabled
[19:19] <Aison> well, I can't even see the env vars with udevadm info
[19:22] <ChmEarl> Aison, if you add the major/minor for your PV here:  lvm2-pvscan@.service what happens
[19:24] <ChmEarl> Aison, this patch fixed it for me in 2 very different contexts, so I think it should work
[19:25] <Aison> lvmetad.socket and lvmpolld.socket is present
[19:25] <Aison> when I type systemctl, is the order of the services the order they were executed?
[19:27] <Aison> lvm2-monitor is also enabled
[19:29] <Aison> ChmEarl, the funny thing is, as soon as I type lvchange -ay alv0, everything works and is mounted
[19:29] <Aison> so strange...
[19:33] <dpb1> nacc: ok, so where autopkgtest runs, what is the network like.  is there an http_proxy etc?
[19:33] <Aison> ChmEarl, how do you mount the lvm then? by uuid?
[19:34] <Aison> or by name?
[19:34] <dpb1> nacc: I'm assuming it's fine for boto to have network-dependent tests.
[19:37] <db`> If I'm wanting to copy all files/folders inside a directory to remote server using rsync, do I need to use option -r ?
[19:37] <db`> I just see rsync -avz in tutorials.
[19:37] <Aison> ChmEarl, and do you use auto_activation_volume_list in lvm.conf?
[19:37] <dpb1> db`: read the manpage and look at -a: -a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
[19:38] <dpb1> db`: go look up what each of those flags means for -a, it's a fun read. :)
[19:39] <db`> sure
[19:39] <ChmEarl> Aison, all defaults
[19:39] <Aison> ok
[19:40] <Aison> one thing is very very strange. even when the volumes are activated and visible in /dev/mapper/
[19:40] <Aison> they are not mounted by systemd
[19:41] <db`> dpb1: so if I use rsync -avz, I hope the files in remote which are NOT present in localhost, will NOT get deleted.
[19:42] <dpb1> db`: right, --delete is specifically not bundled in the '-a' option
[19:42] <dpb1> for just that reason
[19:42] <db`> but I would be using -e
[19:43] <db`> it shows several 'deletes' in the man
[19:43] <db`> I'm sorry if its a really noobish query.
[19:43] <ChmEarl> Aison, lvm2-pvscan@.service can activate only as its sequenced by systemd
[19:43] <ChmEarl> not mount
[19:44] <dpb1> db`: it's ok, have to start somewhere.  not following you about several deletes in the man.
[19:44] <Aison> ChmEarl, do you use .mount files? or fstab?
[19:45] <ChmEarl> Aison I use the lvm2-pvscan@.service to sequence activation before Xen starts so my VM can start from LVM2
[19:45] <Aison> but this service is executed automatically, I guess
[19:46] <db`> dpb1: http://prntscr.com/fje9ag
[19:46] <db`> hows that supposed to be read?
[19:47] <dpb1> db`: the '--delete' options, you mean?
[19:47] <db`> yes, if you see it says "-e,
[19:47] <db`> and then all the delete types
[19:47] <dpb1> db`: ah I see your confusion
[19:47] <dpb1> db`: '-e, --rsh' are one entry
[19:47] <dpb1> --rsync-path the next entry
[19:48] <dpb1> basically, each line is separate.
[19:48] <db`> oh
[19:48] <dpb1> ya, confusing layout.
[19:48] <db`> so what if I just use -e and not anything after that?
[19:48] <dpb1> yup, -e will just change the remote shell to use, that's it
[19:49] <db`> so -option would by default do the first ones, from the list?
[19:49] <dpb1> db`: also the most important option to remember to append '-n'  -- that will do a dry-run and just print out what would be done.
[19:49] <db`> sure.
[19:49] <db`> thanks
[19:49] <dpb1> ok
[19:50] <db`> so I can start with rsync -nazv ?
[19:50] <dpb1> db`: notice also the difference between --longoption and -avz
[19:51] <dpb1> two dashes at the front means a long spelled out option, one dash is like specifying -a -v -z
[19:51] <dpb1> just shorthand.
[19:51] <db`> right, since -n is short for --dry-run, can I use rsync -navz .. ?
[19:51] <dpb1> db`: that is a very sensible starting point, yes.
[19:51] <dpb1> and correct on --dry-run being equal to -n
[19:51] <db`> sure, thanks.
[20:17] <dpb1> nacc: have you ever needed to modify the whitelist for squid.internal?
[20:17] <nacc> dpb1: no :)
[20:17] <nacc> dpb1: has the test ever succeeded?
[20:17] <dpb1> so I'm guessing that's not it
[20:17] <dpb1> from the output it looks like it's getting validish data back from AWS
[20:17] <dpb1> lmc
[20:18] <dpb1> nacc: ... how do I tell?
[20:18] <dpb1> :)
[20:18] <dpb1> yes
[20:18] <dpb1> I think it did
[20:18] <dpb1> marked 'regression' on the proposed migration page
[20:19] <nacc> dpb1: http://autopkgtest.ubuntu.com/
[20:20] <nacc> dpb1: heh: http://autopkgtest.ubuntu.com/packages/python-boto
[20:20] <dpb1> nacc: so 'regression' is more like 'massive fail'
[20:20] <nacc> dpb1: last succeed in ... 2015?
[20:21] <dpb1> rbasak: it's actually the "unit" tests in python-boto that reach out to the network
[20:21]  * dpb1 looks if there is a disable_network_tests env var or something
[20:22] <rbasak> """In general, tests are also allowed to access the internet. As this
[20:22] <rbasak> usually makes tests less reliable, this should be kept to a minimum; but
[20:22] <rbasak> for many packages their main purpose is to interact with remote web
[20:22] <rbasak> services and thus their testing should actually cover those too, to
[20:22] <rbasak> ensure that the distribution package keeps working with their
[20:22] <rbasak> corresponding web service."""
[20:22] <rbasak> https://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/plain/doc/README.package-tests.rst
[20:23] <dpb1> that seems to fit python-boto
[20:23] <dpb1> :)
[20:23] <rbasak> Yeah.
[20:23] <rbasak> I guess that's the official answer.
[20:23] <dpb1> ok thx
[20:23] <nacc> and it looks like, at least, the version that passed on xenial at some point, did get out to the network
[20:23] <dpb1> I'll keep digging on it then.  they seem reliable enough run locally
[20:24] <dpb1> nacc: 'nother quick q: since this might require some inline debugging, how do I trigger a hand-rolled test *in that environment*
[20:25] <dpb1> nacc: or can I *gasp* get access to the host with an interactive shell?
[20:25] <ahasenack> this debian bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=820965
[20:26] <ahasenack> mentions that the fix is in http://git.debian.org/?p=pkg-samba/samba.git;a=commitdiff;h=d29a694
[20:26] <ahasenack> but that is a 404 essentially
[20:26] <ahasenack> even with the full hash
[20:26] <ahasenack> I also cloned it with git, and can't find references to the bug in debian/changelog, git grep, or git log
[20:26] <ahasenack> it's not the first time I've seen this. Any clue what is going on?
[20:27] <nacc> dpb1: i think this is where jgrimm got stuck :)
[20:30] <nacc> ahasenack: looks like buggy botting, or something, but it's this version, right? https://anonscm.debian.org/cgit/pkg-samba/samba.git/commit/?h=debian/4.2.14%2bdfsg-0%2bdeb8u1&id=2bbf380759b4a03b86ca3b26c8375024924dc2c7
[20:31] <ahasenack> nacc: I think so, how can I find the code change based on that?
[20:32] <nacc> ahasenack: ideally you can deduce it from: https://anonscm.debian.org/cgit/pkg-samba/samba.git/log/?h=debian/4.2.14%2bdfsg-0%2bdeb8u1
[20:32] <nacc> ahasenack: but the 'fix' was grabbing a new upstream
[20:32] <ahasenack> I was expecting the "log msg" search by bug number to find it
[20:33] <nacc> ahasenack: only if they committed it with such a log message :)
[20:33] <ahasenack> since the diff in the bug shows the changelog change
[20:33] <nacc> ahasenack: the d/changelong entry is from: https://anonscm.debian.org/cgit/pkg-samba/samba.git/commit/?h=debian/4.2.14%2bdfsg-0%2bdeb8u1&id=d4092f0849e2ec1c92214da90d052c7947913d19
[20:33] <ahasenack> correct
[20:34] <ahasenack> and "UNRELEASED" at that time
[20:34] <nacc> ahasenack: and since the *git* log doesn't contain any bug #s, it won't show up in the 'log msg' search
[20:34] <nacc> afaict
[20:36] <ahasenack> powersj: I need some context about http://iso.qa.ubuntu.com/qatracker/milestones/359/builds/117343/testcases/1409/results, can you help a bit?
[20:36] <powersj> looking
[20:36] <ahasenack> powersj: that's a manual test case, right?
[20:36] <ahasenack> that someone once upon a time decided to run?
[20:37] <powersj> ahasenack: these are manual tests cases placed on the ISO tracker that we ask people to run when we publish alpha/beta/release ISOs
[20:38] <powersj> we (server team) run those tests in an automated fashion as well
[20:38] <ahasenack> powersj: where can I find the last time someone ran it?
[20:38] <ahasenack> and, the last automated run for that particular one?
[20:39] <powersj> ahasenack: so that looks like that failure was reported on Xenial final initial release ISO
[20:40] <powersj> the "latest" by my definition would be on the Xenial .2 point release (16.04.2)
[20:40] <ahasenack> I'm thinking maybe the installer does something extra, because I can't see how that test would work by just installing the samba and winbind packages
[20:40] <powersj> so I would look at http://iso.qa.ubuntu.com/qatracker/milestones/372/builds/142896/testcases/1409/results which you will see a response from me responding
[20:41] <ahasenack> powersj: was that you going over it manually, or one of the automated runs?
[20:41] <powersj> automated
[20:41] <powersj> At Software selection, choose "Samba server"
[20:41] <powersj> that is choosing the tasksel for samba server
[20:41] <ahasenack> where can I see the output of that run?
[20:42] <dpb1> nacc: btw, any response on my questions? (more rapid debugging, etc)
[20:42] <nacc> dpb1: sorry, i meant that jgrimm was looking into that because I don't know :)
[20:43] <nacc> dpb1: i don't believe you can login to the runners, but presumably it's reproducible somewhere
[20:45] <powersj> ahasenack: if you are on the VPN you can see all the test runs for ISOs here: https://platform-qa-jenkins.ubuntu.com/view/server/
[20:45] <powersj> for the purpose of showing the results I've pastebinned that run's syslog here (5MB):
[20:45] <dpb1> nacc: and to 'retry' this with debugging?
[20:46] <dpb1> nacc: can I commit somewhere and it just picks it up?  can I schedule adhoc jobs?
[20:46] <ahasenack> powersj: checking
[20:46] <nacc> dpb1: i'm not sure if i follow -- the autopkgtest is following what's in the archive. So if you want to retry it there, you'd need to upload a new version. But uploads aren't typically used for debugging :)
[20:46] <powersj> ahasenack:  https://paste.ubuntu.com/24851571/
[20:46] <powersj> that's a big paste
[20:47] <powersj> that's the installer output
[20:47] <nacc> dpb1: I'd probably start with asking the release folks how best to reproduce that env (slangasek, infinity)
[20:47] <ahasenack> better there than here :)
[20:47] <dpb1> nacc: But uploads aren't typically used for debugging -- yes, this is what I was assuming. :)
[20:47] <nacc> dpb1: that's also why we haven't made much (any) progress on it
[20:47] <dpb1> done that already
[20:47] <dpb1> ok
[20:47] <dpb1> email time
[20:48] <powersj> ahasenack: https://paste.ubuntu.com/24851587/ that's the yaml of the test cases result
[20:49] <powersj> which says "    /bin/sh: 1: tsetup/setup.sh: Permission denied" *sigh*
[20:49] <ahasenack> powersj: I'm trying to find in the output where "net usersidlist" is run, according to the test case
[20:50] <ahasenack> hm, it didn't run then?
[20:52]  * ahasenack branches lp:ubuntu-test-cases/server/testsuites/samba-server/
[20:52] <ahasenack> ops, enoperm
[20:53] <hehehe> :)))
[20:53] <hehehe> what is sticky bit?
[20:54] <hehehe> if you have write + execute permissions on a directory, you can {delete,rename} items living within even if you don't have write permission on those items. (use sticky bit to prevent this)
[20:55] <ahasenack> hehehe: /tmp is an example of a directory that has the sticky bit set
[20:55] <ahasenack> hehehe: everybody can write to it, but only the owner (and root) of a file/directory can remove it from inside /tmp
[20:55] <powersj> well now I get to find out when this test stopped working and why it isn't marked as failed :\
[20:57] <hehehe> ahasenack: and how do you set sticky bit
[20:58] <ahasenack> hehehe: with chmod(1)
[20:58] <ahasenack> hehehe: chmod +t <directory> sets it, for example (there is also an octal syntax)
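The sticky-bit behaviour ahasenack describes can be checked in a few shell commands (a minimal sketch; the directory is a throwaway temp dir, not a real path from the discussion):

```shell
# Create a world-writable shared directory like /tmp and set the sticky bit.
dir=$(mktemp -d)
chmod 1777 "$dir"    # leading 1 = sticky bit, 777 = rwx for everyone
ls -ld "$dir"        # mode ends in 't', e.g. drwxrwxrwt

# The symbolic form does the same thing:
chmod +t "$dir"
test -k "$dir" && echo "sticky bit is set"
```

With the sticky bit set, anyone can still create files in the directory, but only a file's owner (or root) can delete or rename it.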
[20:59] <hehehe> cool
[20:59] <hehehe> and I also noticed while I use say chmod 755
[21:00] <hehehe> there is sometimes 0 before?
[21:00] <hehehe> 0755
[21:00] <hehehe> so whats that very first digit for?
[21:00] <ahasenack> that indicates it's a number in the octal base (base 8)
[21:00] <hehehe> cool
[21:00] <ahasenack> like when you see 0x0A meaning hexadecimal
[21:00] <ahasenack> the 0x prefix means hexadecimal
[21:00] <hehehe> but it makes no difference if I use chmod 755 or 0755?
[21:00] <hehehe> or it does?
[21:01] <nacc> ahasenack: are you sure? I thought leading 0 just means no sticky, setuid or setgid?
[21:01] <ahasenack> right, it's a relaxed rule for chmod
[21:01] <nacc> ahasenack: ah ok
[21:01] <ahasenack> "Omitted digits are assumed to be leading zeros"
[21:01] <nacc> ahasenack: right, and numeric parameters to chmod are assumed octal anyways (afaict)
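The equivalence nacc and ahasenack settle on is easy to verify (a quick sketch; GNU `stat` assumed, as shipped on Ubuntu):

```shell
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%a' "$d"    # 755
chmod 0755 "$d"      # identical: omitted digits are leading zeros
stat -c '%a' "$d"    # 755
chmod 2755 "$d"      # the fourth (leading) digit is setuid/setgid/sticky; 2 = setgid
stat -c '%a' "$d"    # 2755
```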
[21:02] <ahasenack> yeah
[21:02] <ahasenack> it's relaxed
[21:02] <ahasenack> leading us to surprises elsewhere where it's not relaxed :)
[21:02] <ahasenack> like yaml
[21:02] <nacc> heh
[21:02] <ahasenack> I banged my head against the table a few times with a yaml file that had something like key: 09
[21:02] <ahasenack> and 09 was treated as a string instead of a number
[21:03] <hehehe> what is yaml?
[21:03] <ahasenack> it's because it's invalid octal, therefore it must be a string (!)
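The surprise ahasenack hit comes from the same leading-zero rule that chmod relaxes: in strict parsers (YAML 1.1 among them), `0755` is octal 493, while `09` cannot be octal because 9 is not an octal digit, so a YAML parser falls back to treating it as a string. Shell arithmetic applies the same rule:

```shell
echo $((0755))    # 493 - a leading 0 means base 8
# 09 is rejected as octal; run in a subshell so the error doesn't abort the script:
( echo $((09)) ) 2>/dev/null || echo "09 rejected: 9 is not an octal digit"
```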
[21:03] <dpb1> nacc: do we ever file bugs for packages stuck in proposed?
[21:03] <dpb1> or, is there specifically a bug for this python-boto thing is really what I'm after
[21:03] <powersj> ahasenack: guess I'm glad you brought this up. Looks like that test has been failing to even run for some time :\ other tests appear operational, so I'll dig into why samba hasn't
[21:04] <ahasenack> hehehe: loosely, a file format that is both readable by people (meaning it's visually simple) and computers at the same time
[21:04] <hehehe> ok anyways I run opencart app and it wants to access /cache /images  folders - I am thinking of safest permissions i can get away with
[21:04] <hehehe> :)
[21:04] <ahasenack> powersj: ok, just one more question
[21:04] <powersj> ok
[21:04] <ahasenack> powersj: https://platform-qa-jenkins.ubuntu.com/job/ubuntu-xenial-server-amd64-smoke-samba-server/303/console is this also defined by that yaml?
[21:04] <ahasenack> "smoke" tests
[21:04] <hehehe> I tried 770 and it wont display images inside admin bit sometimes
[21:04] <ahasenack> or something different
[21:05] <ahasenack> and if it's also a false success
[21:05] <nacc> dpb1: we have -- let me look
[21:05] <ahasenack> because I see errors, but RETCODE=0
[21:05] <powersj> ahasenack: those errors are red herrings from utah
[21:06] <dpb1> mmmmmm, herrings.
[21:06] <ahasenack> hehehe: I'm not familiar with that app, sorry. In general, you start with the error, then figure out what it tried to access (which you did), and as which user, then come up with the right permissions
[21:06] <ahasenack> powersj: ok
[21:06] <hehehe> ahasenack: yes
[21:06] <powersj> the YAML I linked to you are the results of test cases after an install which runs the tests themselves
[21:06] <ahasenack> powersj: and it's not the same as that manual test with which we started this conversation, right?
[21:06] <powersj> ahasenack: it is supposed to be the same
[21:06] <nacc> dpb1: not finding any bug filed
[21:06] <nacc> dpb1: this is interesting, though: LP: #519567
[21:07] <ahasenack> powersj: the yaml would have the output of that command I'm looking for, had the test run?
[21:07] <powersj> ahasenack: yes
[21:07] <ahasenack> "net usersidlist", "step 28"?
[21:07] <ahasenack> ok
[21:07] <powersj> last fall none of these tests were working really at all
[21:07] <powersj> I spent a number of weeks going through them, updating them, and getting them all running for the yakkety release
[21:07] <dpb1> nacc: yes, I was suspecting something like that.  after I followed up with #is, my next stop was reproing a general squid proxy and see if it can pass when wide open
[21:07] <powersj> they are great when they work ;)
[21:08] <ahasenack> :)
[21:09] <nacc> dpb1: the eventual conclusion may be that we will need the release team to mark this a 'badtest'
[21:09] <nacc> dpb1: however, iirc, this test passes on debian -- so it'd be good to be sure about that
[21:10]  * ahasenack -> EOD
[21:10] <ahasenack> cya tomorrow
[21:10] <dpb1> nacc: how can I check that fact
[21:11] <dpb1> (passes on debian)
[21:11] <nacc> dpb1: https://ci.debian.net/packages/p/python-boto/
[21:11] <nacc> dpb1: says "OK (SKIP=8)" like the last pass on ubuntu
[21:17] <dpb1> and not surprisingly, I see no 'http_proxy' in the captured output anywhere
[21:24] <jgrimm> nacc, dpb1: sorry, I wasn't actively watching IRC today.. but yes, I think you are on the right path of what's going on with python-boto; i didn't get a chance to track it down to root cause but given I could run tests locally fine, i was assuming it was an issue with the test environment.
[21:25] <nacc> jgrimm: thanks! :)
[21:25] <dpb1> jgrimm: hey there
[21:25] <jgrimm> nacc, dpb1: an added fun bit was that I was told (i think by steve?) that the firewall rules are potentially different depending on where it ends up getting run, and magically only IS knows what they are
[21:25] <dpb1> yup
[21:25] <dpb1> that matches
[21:25] <jgrimm> that's as far as i got. :)
[21:26] <dpb1> good, glad I'm stuck where you were!
[21:26] <jgrimm> \o/ cool. documenting the FW rules would be nice to get done if discovered.
[21:27] <dpb1> jgrimm: indeed!!  there is a good place for them: https://wiki.ubuntu.com/ProposedMigration/AutopkgtestInfrastructure
[21:27] <dpb1> it just lacks mentioning them
[21:27] <dpb1> :)
[21:27] <jgrimm> :) have fun!
[21:27] <dpb1> always
[22:01] <hehehe> :)))
[22:01] <hehehe> fun is food
[22:01] <hehehe> good :D
[23:10] <nacc> jamespage: celery uploaded -- i think it should even pass its dep8 tests now :)
[23:10] <nacc> or at least it does locally
[23:11] <mwhudson> nacc: hooray
[23:11] <mwhudson> i guess i should unblock the kombu migration
[23:12] <mwhudson> not that it's going to migrate by itself for weeks anyway
[23:12] <nacc> mwhudson: right, i can do that once it propagates into proposed (it = celery)
[23:29] <nacc> mwhudson: jamespage: and excellent, all tests passed in the build on python2.7, python3.5 and python3.6: https://launchpadlibrarian.net/323872413/buildlog_ubuntu-artful-amd64.celery_4.0.2-0ubuntu1_BUILDING.txt.gz
[23:29] <mwhudson> nacc: \o/
[23:29] <nacc> mwhudson: once it migrates to proposed and the dep8 tests pass there, i'll unblock the bug
[23:30] <nacc> mwhudson: if that's ok by you
[23:30] <mwhudson> nacc: +1
[23:30] <nacc> mwhudson: thanks