[02:13] I have a huge file (~400GB) which I need to compress daily...this file is appended every day. compressing takes a lot of time, so I'd like to ask if there is a way to do something similar to rsync in a compressed file? [02:14] m_tadeu: what is this file? what are the rules for working with it? [02:14] how is it generated and what happens to it? [02:16] sarnold: it's a binary database and new data is appended at the end...so changes are always added at the end [02:17] m_tadeu: can you get just those changes all on their own? [02:19] sarnold: not easily...I mean, I'm rsync'ing the database file from the production system to a backup system, where I can take cpu/disk that I need without messing with the production one [02:19] only then I'll be able to compress it [02:20] m_tadeu: if you could get a hold of those changes -- like, if they are *strictly* appended, you could take advantage of this neat little trick in gzip: [02:20] http://paste.ubuntu.com/p/ntPXYnqCVx/ [02:21] that would be cool...do you know a way to do it? I do have the "old" file and the "new" file...but how to make a diff on that? [02:23] m_tadeu: oh nice [02:24] m_tadeu: hmm; if you've got both an old and new file in one place, even better, ignore everything I just said :D take a look at xdelta3 [02:40] sarnold: this tool seems pretty cool...how to use it to output the diff only? it seems that it wants to write on the second file [02:41] ah -c [02:41] hmm.. let me give it another look :) [02:41] aha let me skip that ;) [02:46] sarnold: doesn't seem to be doing what I want...the diff size is ~90MB...but the tool is outputting a lot more [02:48] does cmp file1 file2 report an EOF on the shorter file? or does it report a different byte? [02:50] checking... [02:53] I think this will take forever :P [02:53] oh no :( [02:54] I mean reading 400 gigs is going to be a while.. [02:54] if you're reading at 100MB/s probably about an hour. dang.
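[Editor's note: the paste link above isn't preserved, but the gzip trick sarnold is most likely referring to is documented behavior: a gzip file may consist of several concatenated members, so you can compress just the appended bytes and tack them onto the existing archive. A minimal sketch with stand-in files:]

```shell
# Sketch of the append-friendly gzip trick: concatenated gzip members
# form a valid gzip stream, so only the new tail ever needs compressing.
set -e
printf 'day one data\n' > db.bin      # stand-in for the big database file
gzip -c db.bin > db.bin.gz            # initial full compression (done once)
printf 'day two data\n' > new-tail    # the bytes appended since yesterday
gzip -c new-tail >> db.bin.gz         # append a second gzip member
gunzip -c db.bin.gz                   # decompresses both members in order
```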
I shoulda done the math first ;) [02:55] well rsync works just fine, so it should get the EOF [02:55] er, that'd be an hour for one file. if you're getting 100MB/s total... two hours to read them both from start to finish [02:56] maybe using dd, let me see if it can be done [03:02] heh, that was going to be my suggestion before xdelta3 .. if you're confident that the data is being appended, you can use dd's skip_bytes to start reading at a specific byte offset [03:02] but now that I think through the fact that you've got 400 gigs of stuff, using dd to read just the end of the file, then compress that, and send that blob over, is probably the better approach [03:03] that'd save reading 800 gigs of data just to find the difference at the end. but that depends 100% on it being a real append [03:22] sarnold: ok...I think I managed to create the diff....now I'll compress the first time [03:23] tomorrow I'll try to append to the gzip'ed file [03:23] hope it works :) [03:23] sarnold: thanks a bunch for the tips [03:24] m_tadeu: cool :) time for me to bail too [03:24] m_tadeu: have fun :) [06:09] sarnold: that's the same person === akaWolf1 is now known as akaWolf [13:13] coreycb & jamespage: Good day! I am testing Octavia in my lab as it's something we want to eventually roll out in production. We are running Ubuntu 18.04 with Rocky ubuntu packages. One thing I am struggling with is getting the octavia-dashboard-plugin to work (show up!) in Horizon. I noticed that it is based on python3 and I also noticed that our heat and trove dashboard plugins are python2 and that [13:13] our horizon is also running under python2. Could this mix of python versions be my problem with the load balancers UI showing up on horizon? I hastily uninstalled the py2 heat and trove plugins in lieu of the py3 packages and then broke our lab horizon terribly.. lol. Just wondering if I am barking up the right tree here. [13:14] *with the load balancers UI NOT showing up in horizon, rather... 
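[Editor's note: a sketch of the dd approach being discussed, assuming GNU dd and a strictly append-only file whose previous size was recorded; file names are stand-ins:]

```shell
# Extract only the appended tail with GNU dd. Record the file's size
# before the day's appends, then skip exactly that many bytes.
set -e
printf 'old old old ' > big.db            # yesterday's contents (12 bytes)
old_size=$(stat -c%s big.db)              # record yesterday's size
printf 'NEW BYTES' >> big.db              # today's appended data
# iflag=skip_bytes makes skip= count bytes instead of bs-sized blocks
dd if=big.db of=tail.bin bs=1M skip="$old_size" iflag=skip_bytes 2>/dev/null
cat tail.bin                              # only the new bytes
```

This avoids reading the full 400GB at all: only the tail is read, compressed, and shipped, which is why it depends 100% on the file being a true append.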
[13:16] shubjero: o/ yes you'll want them all to be the same python version. what release are you running? [13:16] I'm having a problem with installing server. I can install desktop. I've done that. Here's the error. I see other people with this error and no resolution. https://imgur.com/jO3SCIC [13:17] shubjero: rocky+ should be good to go with py3 [13:17] lordcirth, loading from USB, right at the start of the installation, just after the disk-settings screen. It starts running the installation and asks you for information like Your Name [13:17] Yeah, we're running rocky on 18.04.. I am just figuring out how octavia works in the lab first.. so that I am better prepared to tackle prod obviously [13:18] on that screen I have about 5 seconds before the error [13:18] coreycb: thanks for that. Do you know how to make the switch for the openstack-dashboard to run under py3? Am I just pointing to a different python binary somewhere? [13:19] coreycb: i did find your post to openstack-discuss on Sep 7, 2018 about the rocky release useful, so thanks for that [13:21] shubjero: you'll need libapache2-mod-wsgi-py3 and python3-django-horizon installed and if you're upgrading you'll need to remove unused python2 packages after upgrading [13:28] coreycb: ok, that's my goal for today :) thanks again [13:28] shubjero: np, good luck. are you upgrading? [13:37] coreycb: no, we've been on 1804 & rocky for a few months now but we are looking to add new features to our cloud such as octavia, barbican, and magnum [13:43] shubjero: ok well if upgrading from py2->py3, after installing py3 packages you'll want to apt purge && apt autoremove --purge [13:43] coreycb: ok [13:50] coreycb: btw, I don't see a post on the openstack-discuss mailing list for stein like I do for Rocky. I find those posts have important information about the release. Any plans for that? [13:52] shubjero: we may have missed it. I was out of office at the time. this may help.
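[Editor's note: putting coreycb's steps together, a sketch of the py2-to-py3 Horizon switch. The py3 package names come from the conversation; the py2 package names in the purge line are illustrative and should be checked against what is actually installed before running anything:]

```shell
# Install the python3 WSGI module and Horizon (names from the chat):
sudo apt install libapache2-mod-wsgi-py3 python3-django-horizon

# Once the py3 packages are in place, remove the now-unused py2 ones.
# The exact py2 package list varies per deployment; verify it first
# with something like: apt list --installed | grep -i horizon
sudo apt purge libapache2-mod-wsgi python-django-horizon
sudo apt autoremove --purge

# Reload Apache so the dashboard is served by the py3 WSGI module:
sudo systemctl restart apache2
```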
https://javacruft.wordpress.com/ [13:53] coreycb: nice, that works [15:13] coreycb: everything works now :thumbsup: [15:13] shubjero: nice \o/ [17:57] I'm trying to set up nginx on an Ubuntu 18.04 LTS server. I removed /etc/nginx/sites-{available,enabled}/default and added my own config snippet, however, it's still showing the default welcome page. Am I missing something? [17:58] Fulgen: did you reload nginx, and what was the config snippet you added? [17:59] tds: yes, using both nginx -s reload and systemctl reload nginx [17:59] https://termbin.com/2fr6 (saved as sites-enabled/parry) [18:02] Fulgen: what are you expecting to happen for requests to anything at / other than /parry? [18:02] it's probably falling back to the default docroot which contains the example page, you could change that to another directory if you like [18:03] tds: I'd expect it to give me a 502 (the same it currently does for /parry (which tells me my config is somehow borked too, but I don't get why I still get welcomed)) [18:04] 502 just means that it can't connect to another webserver on 8080 [18:04] if you want that to apply to the root, you need location / rather than location /parry [18:05] I want it to apply just to /parry as / will be populated later on, but I misinterpreted that 502 as a 500 (I'm still new to server stuff...). thank you! [18:05] if you only want /parry reverse proxied, you might want config to return a 403 for / or something [19:08] Fulgen: FYI if you don't add a / handler it'll autoattempt whatever location you specify. So you need to add a separate { } block for it, such that you end up with this: https://p.ngx.cc/abcc189d93476683 [19:09] a separate location block* [19:09] ah, thank you both [19:09] Fulgen: (FYI I'm the semi-official/semi-unofficial NGINX maintainer in Ubuntu so i'm fairly fluent in nginx xD) [19:09] oi, nice :D [19:12] hm, for some reason, proxy_pass works with one URL, but gives me a blank window with another app on another port (a Quasar app).
it works with return 301 though [19:12] `return 301` is a redirect, you are telling it where to actually look to get your stuff [19:12] not everything works with proxy_pass [19:12] because proxy_pass passes the request URI as well [19:13] ah, thanks! [19:13] and if the requested URI is *not* present at the backend webserver where it interprets that URI and returns the corresponding data, then it will fail [19:13] so if you are requesting /parry but the backend will look in /var/appdata/ as its root and /var/appdata/parry$request_uri isn't present in the backend it'll fail [19:13] and that's a backend issue [19:14] oh [19:14] whenever implementing proxy_pass ***ALWAYS*** remember the requested URI is passed to the backend as well [19:14] why does it work for :9621 and not for :8080 though? [19:14] so you'd have to rewrite the request first. [19:14] that'd be dependent on other factors [19:15] check what `netstat -tulpn | grep :8080` shows for output of what's listening where. [19:15] if it's only listening on 127.0.0.1:8080 that's your problem [19:15] if it's listening on 0.0.0.0:8080 then you need to look at the backend app and determine why it's not functioning as expected (which means you'd have to debug the Quasar app/environment) [19:16] it's listening on :8080 which is what I had specified for proxy_pass === MassDebates_ is now known as MassDebates [19:26] Fulgen: actually you specified 127.0.0.1 in your proxy_pass [19:26] not your actual server IP [19:26] if it's listening on the actual IP on port 8080 and is reachable but not functioning right then you need to focus on that app and figure out why that application doesn't like serving the content [19:26] (which I can't actually help with sorry!) [19:30] teward: oh...fail :x [19:30] no problem, you've helped me more than enough, thanks! [19:30] yep. (I don't know enough about Quasar to debug sorry!) 
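[Editor's note: pulling the nginx advice from this thread together, a sketch of the two-location layout being described. The server name and backend address are illustrative, not the actual config from the conversation; the paste links' contents aren't preserved:]

```nginx
server {
    listen 80;
    server_name example.com;          # hypothetical name

    # Only /parry is reverse-proxied. Note that proxy_pass forwards the
    # request URI too, so the backend must know how to serve /parry/...
    location /parry {
        proxy_pass http://127.0.0.1:8080;
    }

    # Separate block for everything else, so requests to / don't fall
    # back to the default docroot's welcome page.
    location / {
        return 403;                   # or: root /var/www/mysite;
    }
}
```

As discussed above, if the backend only listens on 127.0.0.1:8080 (check with `netstat -tulpn | grep :8080`), proxy_pass must point at that loopback address; if it listens on 0.0.0.0 or a specific IP, point proxy_pass there instead.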
[23:15] I don't know why, but after a server reboot the services such as apache are down [23:15] How can I start apache, mysql, mongo and the rest [23:18] robertparkerx: what's in the logs? [23:19] SSH isn't even working either [23:19] sarnold, what logs [23:20] robertparkerx: dmesg, journalctl, /var/log/, etc [23:24] nothing in apache error log [23:30] I cannot even access the internet
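[Editor's note: the symptoms here (no SSH, no internet) point at a problem below the service layer, but for reference, a sketch of how the mentioned services would normally be started and inspected on Ubuntu. Unit names assume stock packages and may differ on a given system (e.g. mongodb vs mongod):]

```shell
# Start the services mentioned (unit names assume stock Ubuntu packages):
sudo systemctl start apache2 mysql mongod

# Inspect why a unit failed and what happened around boot:
systemctl status apache2
journalctl -u apache2 -b      # logs for this unit since the last boot
dmesg | tail                  # kernel messages, e.g. disk or OOM trouble
```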