=== Jalen_ is now known as Jalen [00:49] How do I load a command with high priority on ubuntu server? [00:49] Could someone help me please? [02:26] supercool: look at the 'nice' command. 'man nice' for more info. [02:26] dpb1: I got high sd from top [02:27] guess it is not a inside issue but a server restriction of usage [02:27] I use renice -n -20 -p # but didn't solve nothing [03:08] also look at schedutil [03:09] *schedtool === JanC_ is now known as JanC [05:18] can someone talk to me about snaps on ubuntu server ... am i seriously going to need to manage packages from 2 separate sources now? [05:22] DirtyCajun, you do not really need to need snaps if you do not want to you can still use all .deb [05:23] lynorian, filebot (A wonderful program) has apparently moved completely to snaps. [05:23] I have not heard of filebot [05:23] lynorian, its a great file/folder automation tool for media [05:25] DirtyCajun, I cannot find it in the repos [05:25] in trusty even [05:25] lynorian, sudo snap find filebot [05:25] im on 16.04.2 [05:26] well if you used it without snaps you were already getting them from a seperate place [05:26] lynorian, it was originally directly a .deb file from their site. [05:27] DirtyCajun, yes that is another source so I do not understand your question [08:19] didnt subtitles get labeled eligal some court in EU a few months back? [08:19] by some* [08:51] jushur: Fan made sub-titles according to a Dutch court. So that is a court within the EU but not an EU level court. For those of you playing in the US think like a county (I do not think this was a big Dutch court yet) making a rulling. There are probably bigger national courts for the Dutch (so like a State level court) that could weigh in and then after that someone might take it to an EU (federal) level court. [08:52] Looks like that was going on at the end of April this year. === hehehe is now known as Guest50630 === Guest50630 is now known as hehehe [14:53] Does anyone here work with Dell or HP servers a lot? I remember dell or hp used to have a tool that you could install on a massive amount of servers and it would collect all the stats for those servers. So when migrating to new servers you know how much resources you need etc [14:53] I just can't remember the name of the utility. [15:05] Dell is Open Manage iirc [15:05] This is just a standalone application you can install on any server (virtual / etc) [15:06] Just collects stats / resource usages / etc for 7 days then emails you [15:49] jamespage, coreycb: did you guys ever get the fix for sqlalchemy issues pushed to updates? http://logs.openstack.org/68/473268/1/check/gate-puppet-magnum-puppet-beaker-rspec-ubuntu-xenial/a1745a6/logs/magnum/magnum-conductor.txt.gz#_2017-06-12_08_07_37_626 [15:49] mwhahaha: lemme check - I've had alot of plates spinning in the last week or so [15:50] mwhahaha: ah right - we pushed through updates to make magnum install; but that would appear to be an incompatibility with sqla 1.1.x [15:51] jamespage: ok, not a huge pressing issue but the magnum beaker jobs are blocked [15:51] jamespage, mwhahaha: i uploaded a new version of python-oslo.db in an attempt to fix that. i wasn't positive that was the right fix but seemed relevant. === hehehe is now known as hehehe_offline [16:50] hi guys trying to setup snort on my remote server running xenial. my ip ends with 111 and has a netmask of /27, so i set the home_net to 97/27 but when trying a port scan on my server the ids is not sending an alert. what could that be? 
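For the priority question at the top of this log, a minimal sketch of the commands suggested there (nice, renice, schedtool). The command name and PID are placeholders, negative nice values need root, and the schedtool flags are from memory rather than from this conversation, so check man schedtool before relying on them. As the exchange itself concludes, if the slowdown is a usage restriction on the provider's side rather than competition from other local processes, renicing inside the machine will not help.

# Start a command at a higher CPU priority (lower nice value = higher priority);
# values below 0 require root. "mycommand" and the PID are placeholders.
sudo nice -n -10 mycommand

# Raise the priority of something already running:
sudo renice -n -10 -p 1234

# schedtool can adjust the scheduling policy as well as the nice level:
sudo schedtool -n -10 -e mycommand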
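For the snort HOME_NET question just above, a quick way to work out the network address for a /27 with the ipcalc tool that comes up in the next reply. The 176.9.103.x addresses are the ones quoted later in the log; the arithmetic is the point, not the specific host.

# A /27 spans 32 addresses, so a host ending in .111 sits in the .96-.127 block:
# network 176.9.103.96, broadcast 176.9.103.127, usable hosts .97-.126.
ipcalc 176.9.103.111/27

# HOME_NET normally wants the network address (here 176.9.103.96/27) rather than
# a host address with /27 appended, which matches the .96 and .127 figures
# pointed out further down.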
=== Ussat-1 is now known as Ussat [16:54] macskay: I'm not sure you've provided enough for a diagnosis, but you may find the "ipcalc" tool useful if you don't know about it. [16:55] Do you have broadcast ip set to .127 ? [17:12] RHEL offers a couple packages to manage virtualization tuning called tuned and tuned-adm. Is there an equivalent for ubuntu? === hehehe_offline is now known as hehehe [17:42] hi [17:43] I am running web app file permissions set to 660 and dirs to 770, now I moved from 14.4 to 16.4 appamor disabled, 403 yet to go [17:43] what else can i check? [17:47] btc 2400 [17:47] thats still above 1900 [17:47] why btc is overloaded? [17:49] lol wrong channel [17:50] hehehe: you were confusing me to no end [17:51] dont mind last lines [17:51] the question is about file permissions [17:51] I run a web app on 14,04 and 16.04 [17:52] using 660 and 770 as permissions [17:52] but on 16,04 its yet to work [17:52] make sure www-data is either the owner and/or group [17:52] that is done [17:53] is it owned by www-data:www-data or something else? [17:54] nr1 [17:54] www-data [17:54] ok it sounds like you may have a path issue, can you pastebin the relevant configuration files? [17:55] path issue? [17:55] you mean nginx home path? [17:55] yes, either the path to the files is incorrect or the www-data user cannot access it [17:56] well if I change permissions it does work [17:56] change to what? [17:57] btw the 'namei -l /path/to/file' tool is superb. It saves a bunch of repetitive ls -l [18:00] just a moment [18:00] going to check something [18:09] Poster: I dont know [18:10] Poster: I guess permissions were inherited from 14.4 tar archive [18:10] cant be sure [18:14] something went wrong [18:14] genii: Yes [18:15] sarnold: til, thx [18:15] rbasak: Well basically this: https://unix.stackexchange.com/questions/370709/snort-not-firing-alerts?s=1|2.6134 [18:16] dpb1: yeah isn't that nice? :) I'm surprised it's not more widely used [18:17] Poster: 755 644 works [18:17] macskay: I don't know snort, but what cutrightjm said. 176.9.103.97/27 is unusual. I'd expect .96 unless snort is special somehow. [18:25] hehehe: that means the web server is not running as www-data or the dirs/files that have g+r (regardless of o+r) are not in the group www-data [18:26] r the dirs/files that have g+r (regardless of o+r) are not in the group www-data how I can check if they are in a group [18:26] or not? [18:27] hehehe: namei -l is wonderful. [18:27] cool [18:28] sarnold: but whats it for? I use ls all [18:28] to see who owns files and dirs [18:29] hehehe: ls -l is nice but it doesn't show you parent directories, only the specific thing you ask for. but the permission denied messages may be coming from directories higher up. [18:30] hehehe: you need to know the user:group and permissions of all directories and the target file in a pathname when a program reports 'permission denied'. [18:30] sarnold: fair point I did issue chown -r from the top dir, one above html root [18:30] i see [18:31] handy tool [18:32] www-data www-data index.php [18:32] and above same [18:32] its some kinda of small thing but I am yet to recall what is it [18:35] brb I may fix it now [18:35] hehehe: how are you running php? unless apache with php DSO, it's not the webserver that reads index.php [18:35] i use nginx and php fpm 7 [18:35] if it's fastcgi, then it's the fastcgi daemon (eg. 
php-fpm) and user it runs under, not www-data (unless you configured it to run as www-data) [18:35] :) [18:36] fallentree: yes could be that also [18:36] going to recheck [18:36] with fastcgi, the web server sends a fastcgi request to php process, it doesn't check or touch the php files [18:38] i see [18:38] thanks for explaining [18:38] kinda common sense [18:43] once you understand how simple the unix access controls are you'll have trouble remembering that you used to find them difficult :) [18:43] :)))))))))))) [18:43] lol [18:43] well so yes fallentree u were right [18:44] I checked box1 setup -where friend helped me [18:44] and box n2 [18:44] listen.owner = www-data [18:44] listen.group = www-data [18:44] ;listen.mode = 0660 [18:44] in box nr 1 listen mode is uncommented and set to 0666 [18:49] I have changed listen mode to 0666 yet to work [18:49] 666 is not good, why world rw? [18:49] set up proper groups and permissions instead [18:50] fallentree: what is listen mode for anyway? [18:50] it's the owner of the socket file [18:50] it sets the permissions on the listener socket on the system. You should probably *not* be messing with it. [18:50] example setup: you have multiple pools each running under different user, so you set the socket ownership to thatuser:www-data and 0660 mode [18:51] so nginx can rw to the socket [18:51] but unless you have such a setup, you should leave it alone. [18:51] teward: it was designed exactly to be messed with [18:51] correct [18:51] messing is good, and you learn :D [18:51] no, the proper answer is: learn what it does and decide how to set it up [18:51] fallentree: you're right, but i mean for a basic setup :p [18:51] all else is black magick [18:51] like a 'bare minimum' [18:51] no [18:51] (the rest is blackmagicks) [18:51] * teward yawns [18:52] servers are not for users who don't understand how it works [18:52] its very easy to understand [18:52] of course. [18:52] onc explained [18:52] once [18:52] hehehe: if you set that mode 666 then you allow all users on the system to execute code with the privileges of the fpm service [18:53] thats not good [18:53] it's no big deal if it's a single-user machine and you don't care what happens; it's terrible if you've got multiple untrusted services or users on the system [18:54] so to sum up so far - I got 1 socket running owner is www:data group www:data, I want to use 660 and 770 permissions [18:55] hehehe: the socket must reflect ownership/mode so that BOTH nginx and php-fpm user can read and write to it. if both run as www-data, then yes, that's okay [18:55] yes they both run as such [18:55] idea is that dirs and files can be accessed only by owner and or group [18:55] which seems secure :) [18:55] well I meant modified [18:56] hehehe: if you want secure, also don't have the files owned and writable by the user running the php process. [18:56] only readable, but not writeable [18:57] that's why owning files to www-data is a bit insecure. the better setup is where the files are owned by root, in group www-data. 750 on dirs and 640 on files. fpm socket www-data:www-data, 0660. [18:58] however, only root can change those files (which is why it's secure). if you want sftp access, then it requires a different, a bit more complex setup. [18:59] fallentree: why would sftp nessesiate a bit more complex setup if I sftp as root? [18:59] I can then change files via chown [18:59] because you shouldn't sftp as root [19:00] its stfp so password cant be stolen [19:00] so whats the risks? [19:00] or maybe use pem? 
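A sketch of the layout fallentree recommends above: files owned by root, group www-data, 750 on directories, 640 on files, and a php-fpm socket at 0660 rather than 0666. The web root and pool file path are assumptions for a stock Ubuntu 16.04 php7.0-fpm install, not taken from the poster's setup.

# Hand the tree to root:www-data -- writable only by root, readable by the group.
sudo chown -R root:www-data /var/www/app
sudo find /var/www/app -type d -exec chmod 750 {} +
sudo find /var/www/app -type f -exec chmod 640 {} +

# In the pool config (assumed path: /etc/php/7.0/fpm/pool.d/www.conf), keep the
# socket restricted to the web server's user instead of world-writable:
#   listen.owner = www-data
#   listen.group = www-data
#   listen.mode  = 0660
sudo systemctl restart php7.0-fpm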
[19:00] sftp requries ssh access as root and that should be avoided [19:00] (sftp as root requires....) [19:00] fallentree: but I use 70+ random char passwd [19:00] :) [19:01] so yes ok some can try and guess it and get tired [19:01] hehehe: history lesson: few years ago a debian maintainer fskced up and weakened ssh keys security, reducing the possible combinations to only 65k [19:02] oooo [19:02] that's why you should never allow root to log in [19:02] oki I can create some other user to login [19:02] in such a case, an attacker breaking through 65k combinations would still have to sudo things so there's additional layer of security [19:02] 65K is alot [19:02] but not really [19:03] if they ssh from say 50,000 ips [19:03] it's a few minutes to try all on a system that doesn't ban failed attempts [19:03] its fast [19:03] fallentree: but since then it was fixed right? [19:03] if they try from 65k ips, it'd be broken through in a fraction of a second :) [19:03] it was fixed. the lesson here is to NEVER trust things. [19:03] lol [19:04] the principle of least privilege should be your guide, if you want secure. [19:04] you don't need to log in as root, so reduce that privilege. [19:06] I do need sftp access [19:06] so setup some ordinary user and login as him? [19:09] yes [19:09] ok [19:10] but you can't chown/chmod php files to www-dat, those would have to be owned by the sftp user (if you want to manipulate the files over sftp), which is insecure as php can write own files. [19:10] that's where you use apparmor to fine tune what php-fpm can read or write. [19:11] OR [19:11] run php-fpm as another unprivileged user, and put that user into the sftp user group. [19:12] that way you can have files 640 (and dirs 750). sftp user can read/write, php process can only read. also put nginx (user www-data) into that sftp user group so it can read static files. [19:12] if php needs to write (uploads), have a specific directory for that, owned by the user running php-fpm, but then the sftp user won't be able to change those. [19:13] it's a trade-off any way you look at it. either it's easy but insecure, or secure but inconvenient. [19:13] convenient (sftp can rw, php+nginx can read) but secure requires complex (apparmor) [19:15] ok changing conf [19:16] first i will implement . the better setup is where the files are owned by root, in group www-data. 750 on dirs and 640 on files. fpm socket www-data:www-data, 0660. [19:16] to see how that works :) [19:18] drwxr-x--- 8 root www-data added root to group www-data changed permissions [19:18] yet to work [19:25] now for some reason it gives nginx error index.html is foiden [19:25] forbiden [19:25] but its index.php ... [19:25] I am going to to shop to buy food [19:31] hehehe: do you have the "index" directive for the server{} ? if you want index.php to respond to example.com/ (without index.php explicitly stated), you need to set the "index" directive to index.php [20:32] home again [20:45] and yes I have index directive think [20:46] index index.html index.htm index.php; [20:46] it does work with less rescrtictive permissoions [20:47] hehehe: are you mixing up 'index' and 'DirectoryIndex'? [20:48] ignore this remark if this is nginx ratehr than apache httpd [20:49] it is nginx [20:56] hehehe: if it says 'access forbidden' for index.html when you requested / then it means the web server thinks that the /index.html location exists and it should handle it somehow. 
this could be, for example, because you pass all requests (not just those for paths ending in .php) to php-fpm [21:06] tomreyn: I am planing to run open cart app on more secure permissions [21:06] its nearly ready [21:06] tomreyn: well nginx setup passed only php to php fpm [21:07] maybe its something to do with app code? [21:21] I'd there some way to install server packages from an ISO on a desktop system looking at virtual machine host group. [21:22] zxliu: can you rephrase your question? you are on a desktop system and want to install server packages? [21:22] zxliu: apt-get install whatever [21:22] zxliu: just install them, server and desktop use the same packages [21:22] skip the iso, the packages are liable to be out of date anyway [21:22] sarnold: +1 [21:22] in the past apt hasn't allowed adding ISO sources for installing [21:23] eh? apt-cdrom has been there for ever, and it's always been confusing to me why anyone would bother with it :) [21:23] nacc that is about right [21:23] sarnold why should it be confusing? [21:23] zxliu: are you in an offline mode? [21:24] yes for building the base layer [21:24] zxliu: because in the time it takes to spin up a cd-rom you can often have downloaded the package entirely over the network.. [21:24] ahem [21:24] we have reasons [21:25] the question does specify "from an iso [21:25] zxliu: have you tried to use apt-cdrom? -- or you mean you are inthe installer and want to add more ISOs from there? [21:26] the desktop is installing now the server is laid down and U want to lift it into the desktop on a virtual machine [21:27] nacc so in the past yes apt-cdrom was tried [21:27] zxliu: i'm unable to follow that sentence. desktop is installing *then* server is laid down? "want to lift it"? [21:27] and I expect the same thing to happen when this is installed the solution was to run a local web server to serve the apt packages [21:28] but the package database needs rebuilt is that so? [21:28] that's not a bad option, apt-ftparchive, aptly, among other tools, can make that process reasonable enough [21:28] laid down the n the disk [21:28] then it can be copied into a VM "lifted [21:29] ftp? [21:29] I rsync the entire archive to a local machine and used NFS mounts for a while; I stopped doing that because NFS mounts with a portable laptop were more annoying than they could have been.. [21:30] yeah, don't worry about the ftp too much, we use the output of apt-ftparchive with apache or nginx as part of the workflow on the security team [21:30] so specify ftp::localhost/packagedir in the a apt config [21:32] so what needs be done then an extra script package for building an apt repo? [21:32] the server has an httpd installed [21:33] or 'deb http://192.168.122.14/ubuntu main' or whatever.. [21:35] this can't be done until the server is up and running for the are installed on the same disk [21:36] so what command can be found for checking the deps of package group virtual machine host looks like the quickest route is to issue dpkg install commands singly [21:37] can you rephrase that question? [21:37] zxliu: do you mean the virt-host task? [21:37] isn't it something like [21:37] apt install virt-host^ [21:38] how can the packages and package dependencies for package group virtual machine host be resolved to a list for manual install with dpkg [21:39] zxliu: well, you'd need all the packages in the tasks, all their dependencies, all their dependencies, ... until it stops growing, right? [21:39] zxliu: why not just set up a repo? 
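For the offline-install thread above, a rough sketch of turning a directory of .deb files into something apt can consume, using apt-ftparchive (shipped in the apt-utils package) as suggested. The paths and IP address are placeholders or copied from the messages above, and the flat-repository layout shown here is only one of several ways to do it.

# Collect the packages where a web server (or apt itself) can reach them, then
# generate a flat Packages index:
sudo mkdir -p /var/www/ubuntu
sudo cp /path/to/debs/*.deb /var/www/ubuntu/
cd /var/www/ubuntu
sudo sh -c 'apt-ftparchive packages . | gzip > Packages.gz'

# Point apt at it, over HTTP for guests or straight off the filesystem
# ([trusted=yes] skips signature checks for this unsigned local repo):
#   deb [trusted=yes] http://192.168.122.14/ubuntu ./
#   deb [trusted=yes] file:///var/www/ubuntu ./
sudo apt update

# The installer's "Virtual Machine host" selection is a task, so once a source is
# reachable it and its dependencies can be pulled in one go:
sudo apt install virt-host^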
[21:39] repo requires a repo [21:40] I went through the possible routes in this chat [21:40] I set up server as following now - php fpm user and group www data , files owned by root who is in a www data group and I get following error - 2017/06/12 [error] 269#269: *4 FastCGI sent in stderr: "Unable to open primary script: /home/op/gd.com/index.php (No such file or directory)" while reading response header from upstream, client: xx.xxx.xxx.xxx, server: www.gd.com, request: "GET /index.php HTTP/2.0", upstream: [21:40] "fastcgi://unix:/run/php/op.sock:", host: "www.gd.com" [21:41] I can download a small script package if needed over cellular data. [21:41] I don't want to be download packages ges located on the install ISO. [21:41] What package is needed from the repo to setup a repo? [21:42] I can run the httpd in a chroot. [21:43] from the other part while on the desktop then do apt http://127.0.0.1/Ubuntu main [21:44] so I copy the packages over too var/www/ubuntu [21:44] is there something which scans and builds the package database for apt [21:44] jamespage: mwhudson: do you happen to know if celery 4.0.2 is compatible woth python3.6? i'm getting pretty close, but the tests seem to be pegging my cpu and not making any progress with 3.6 :) [21:45] ..well there's worse things to lose [21:46] although wadya know looks like desktop doesn't boot after install [21:46] zxliu: if all the files are local just read them off the filesystem; I've got a line like this in my apt.sources on my archive mirror: deb file:///srv/mirror/ubuntu/ xenial main restricted [21:47] so it accepts file:// [21:47] fine [21:47] great answer [21:48] yeah way better than running a web server just for apt for local use :) [21:48] :)) [21:48] sarnold not way better but the right start [21:48] sarnold: any idea what is my mistake [21:49] :) [21:49] so the servers in the VM need to access it o er http [21:49] 'servers in the VM'? [21:49] overheating again , possibly why it didn't boot [21:50] hehehe: sorry, no, I'm not very familiar with php [21:50] if all files owned by root can www data user who owns php fpm sock send them via nginx? based on same group ownership [21:50] a laptop with a couple about as powerful as towers with radiators [21:51] hehehe: the error you pasted was "no such file or directory" -- no amount of permissions fiddling will fix that :) figure out why the file isn't there: is fastcgi looking in the wrong place? looking for the wrong thing? etc [21:51] file is there [21:51] nginx root dir is correct [21:52] hrm maybe that means the socket doesn't exist? [21:52] socket exist [21:52] it was all working 100% but with new more secure conf yet to work [21:52] maybe problem is - socket is owned by www-data and files by root? although they are in same group [21:52] why not play? [21:53] zxliu: what do u want to do? :) [21:53] have some private property [21:53] ... [21:54] nacc: no idea sorry [21:54] maybe a fingernail clipping that the public can't touch [21:54] mwhudson: np, just figured i'd ping to see :) [21:54] nacc: i had to backport a patch for kombu to get the tests to pass [21:54] zxliu: at this point, you're spamming the channel, please stop [21:54] getent group www-data - www-data:x:33:root [21:54] root is da group [21:55] so it might be worth checking celery upstream too? [21:55] mwhudson: ack, will look on celery's github. They say it's supposedly working, but possibly only on master. 
[21:55] a crescent fingernail clipping and then from there security can expand possibly too a wife [21:55] sarnold: all I did - I changed file owner to root [21:55] I will change it back to www data and see whats up [21:55] nacc: https://github.com/celery/celery/issues/4000 <- implies it works, i guess you've seen that too? [21:55] celery is down [21:55] mwhudson: yeah that's where i started, not much progress from that :) [21:56] where are youns that you think your working on my hardware which is disassembled [21:58] the only thing up is an overheating laptop [21:58] sarnold: now it does not give cant open index.php error just 403 [21:59] sarnold: could it be that open cart code does not make it easy to make it work with most secure settings? [21:59] hehehe: it's possible, most shopping carts are terrible rubbish [21:59] hehehe: but I'd hope you could make this work [21:59] I put some foam earplugs in a plastic tube and sealed it with wax. sure enough home was raided and the earplugs touched [22:00] sarnold: where do u think potential issue would b? [22:00] I think I just have to identify area of conflict and fix it [22:00] hehehe: i'm not sure. when it doubt follow the log files .. [22:00] When angels deserve to diiiiiiiiiiiiiiiiiiiiie [22:00] born of electeicity [22:01] while I born in the flesh [22:01] when angels deserve to diiiiiiiiiiiiiiiiiiiiie [22:02] the virtual machine can bridge me into the ram [22:02] zxliu: please stop. [22:02] where the egos of angels go [22:03] what do you want to do lay my brain down on an arctic icecap [22:03] talk about health problems [22:04] this little CPU overheats [22:05] and your running ram frogs that say "werk" "werk" [22:06] while the entire GOD damned town takes turns on every aspect of your soul [22:06] not foresaken but earned [22:06] of course in the end foresaken is seen that way [22:08] how bout a fingernail clipping? [22:08] can me own a fingernail clipping [22:08] or da police come and strip all posessions [22:09] hold the door open for the town to continue to pilliage almost the lowest class home on earth [22:11] waiting for the CPU to cool down [22:11] hello [22:12] hello randymarsh9 can you go pay exorbitant prices for some fake plant food gmo and bring it over for tricking the body into thinking itbis not hungry [22:13] while DNA degenerates [22:13] hi [22:13] light purple need kidney beans [22:13] "red" [22:14] zxliu, just say NO! to drugs plz. tyvm [22:14] if it were that easy [22:15] haven't you seen the population dropping dead from illicit drugs? [22:15] growing and hunting food requires a community and I don't mean of drug users [22:16] genii: thanks [22:16] genii: <3 [22:16] np [22:17] @comment 77064 Spam [22:17] Comment added. [22:17] sarnold: I think biggest mistake listen to someone advice and implementing it asap [22:17] as then stuff just hangs in da air half way :D [22:18] hehehe: aye that can be an issue. in the end we're all responsible for our own systems.. it's on us to know as much as we need to run the systems.. 
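In the spirit of "follow the log files", a short generic checklist for the permission errors being chased above. The path is the one from the error pasted earlier; none of this is a diagnosis of the specific 403.

# What exactly is nginx unhappy about?
sudo tail -f /var/log/nginx/error.log

# Does every directory on the way down grant the workers' group read (and, for
# directories, execute)?
namei -l /home/op/gd.com/index.php

# Which users are nginx and php-fpm actually running as, and are they really in
# the group that owns the files?
ps aux | grep -E '[n]ginx|[p]hp-fpm'
getent group www-data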
[22:18] ys [22:20] I say main reason many people dont code other people dont have time desire to explain [22:20] if say 99% of people were to become good at coding we need social coding clubs offlines enmasse [22:20] but that will bring existing people salaries to the ground [22:20] :) [22:20] so maybe thats also a demotivator for soe [22:21] some [22:21] and security can be never ending hole [22:21] lol [22:22] the better developers will always have more opportunities and more interesting problems to solve; doubled incentives to keep progressing onwards and upwards :) [22:22] dude most coders are $$%^& and some are cool :D [22:23] I do agree with you [22:23] its better to share what you know [22:23] so all can progress and you will also enjoy more [22:38] mwhudson: found it, buried in a semi-unrelated AWS change :) [22:39] nacc: haha [22:39] top-level commit message: "AWS DynamoDB result backend (#3736)" [22:39] relevant line: "* Fix endless loop in logger_isa (Python 3.6)" [22:40] nice [23:02] is it a security risk if file own by a root? [23:02] I dont think so [23:02] like web app files owned by root [23:13] everything is owned by root anyway [23:13] i.e., root can chown root:root on any file [23:15] having a file user permission as root is just saying that it's a "default" owner, or a system file. something like that. [23:17] the downside is that only root can modify files owned by root. that means your process deploying/updating those files, or any process that needs to write to them, has to run as root, which _could_ be a massive security hole if the code isn't extremely trustworthy [23:18] for files deployed from a deb package, owned and updated by the package manager, never written to by anything else - root ownership makes sense [23:18] for web app files deployed by an automated script or something, I'd prefer a non-root deploy user that the script can run under [23:23] jamespage: re: celery, upstream (4.0+) has removed celeryd, celerybeat, celeryd-multi. Does it make sense for our package to still be called celeryd? Or should we switch to binpkg called 'celery'? [23:27] :) [23:27] true [23:28] dpb1: do u know nginx and php? [23:28] I seems to be experiencing some simple issue but yet to nail it [23:28] :D [23:28] hehehe: teward is not around, but maintains nginx in ubuntu -- i'd just wait til he's around for help, he's quite fast to fix/explain :) [23:31] hehe o well I may as well read a bit [23:31] nacc: is there some cool video that explains all nginx and php fpm? [23:33] hehehe: i'm not sure [23:33] so far I understood - when visitor comes to site 1) nginx serves html 2) php-fpm serves php via nginx [23:33] right? [23:34] just to understand entire server mechanics [23:35] jamespage: finally, do you have testcases or otherwise that would help verify/vet my changes to celery are good? beyond the upstream test suite itself [23:36] https://serversforhackers.com/video/php-fpm-configuration-the-listen-directive [23:36] this one is pretty good for php :D [23:52] jamespage: woot, celery 4.0.2 built :) [23:54] what is celery!!! [23:54] " [23:55] hehehe: http://www.celeryproject.org/ [23:55] sarnold: thanks :) [23:56] hehehe: i'm just trying to unblock the new openstack in 17.10 [23:56] I just hope there's no follow-up questions :) "uh distributed job runner hey lookit the time!" 
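A hedged sketch of the flow summarised above: nginx answers static files itself and hands only .php requests to php-fpm over the unix socket. The server name, document root and socket path are copied from the error message pasted earlier; the file name and the remaining directives are assumptions about a typical layout, not the poster's actual configuration.

sudo tee /etc/nginx/sites-available/gd.com >/dev/null <<'EOF'
server {
    listen 80;
    server_name www.gd.com;
    root /home/op/gd.com;

    # Try index.php first so a stray index.html is never preferred.
    index index.php;

    # Static files are served by nginx directly.
    location / {
        try_files $uri $uri/ =404;
    }

    # Only requests ending in .php are handed to php-fpm.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/op.sock;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx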
[23:56] nacc: sheesh good luck [23:56] mwhudson: jamespage: i've added my debdiffs to the bug, i would like to spend some time testing it in practice, but both build and pass their tests [23:56] nacc: every round another two dozen dependencies [23:56] follow up questions are good [23:56] where both = celery + billiard [23:56] to archieve 100% clarity [23:56] sarnold: yeah, I'm just helping with this bit :) [23:57] sarnold: dont love it when all is crystal clear [23:57] mmmm [23:57] sarnold: kombu needs a newer celery, which pulls in some new upstream versions of deps [23:57] dont you ) [23:57] nacc: do I want to know what kombu is? :) [23:57] nacc: I have tried open stack a bit heat and ceilometer [23:58] but I dont know how to scale apps with it yet [23:58] sarnold: nah, and tbh, i barely do, but i know how to deal with uscan/uupdate and package interdeps/rebuilds/etc [23:58] nacc: :) [23:59] sarnold: lol php bitch wants to load index html for some reason [23:59] I triple checked all configs [23:59] nowhere its said to load html :D [23:59] check this out https://www.dynatrace.com/blog/proper-configuration-running-php-nginx/
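One closing note on the "why does it load index.html" puzzle: the index directive quoted earlier in the log, "index index.html index.htm index.php;", does tell nginx to try index.html before index.php, because index files are tried left to right. Whether that explains this particular 403 is not settled above, but "nginx -T" at least shows every configuration file nginx actually loads, which makes triple-checking the configs easier.

# Dump the full merged configuration nginx is really using and pick out the
# directives that decide which file answers "/":
sudo nginx -T | grep -nE 'index|root|fastcgi_pass'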