[00:55] sarnold: in front of a bunch of older spinning drives, about 8TB of WD Reds
[00:55] drab: yeah that sounds like quite a step up :)
[00:55] sarnold: you may be right tho, and I will give up shortly because it's taking up a lot of my time and I may not need it right now
[00:55] and maybe it's a firmware thing that will get fixed
[00:56] so I just need to hold tight for a while, emailed the vendor with some deets after a call so we'll see what they have to say
[00:56] drab: oh! cool
[00:56] let's hope you can reach a human :)
[00:57] I got to one, let's see how good this human is, they have honestly already done more than most by not just saying "fio? oh sorry, we just run CDM on windows, your problem"
[00:57] which is in many ways an understandable business strategy when your market is primarily ms desktops/laptops
[00:58] understandable but disappointing
[00:58] of course I can relate, I've got near-zero ability to help with windows things. so. :)
[01:01] the one thing that does worry me is that testing with a higher numjobs seems to pose quite a challenge :(, which is actually closer to the workload given that server is supposed to run a bunch of VMs
[01:05] the other thing I have not been able to put my finger on is how to map a workload in terms of block size reads and writes
[01:05] I wonder if I can do something with iostat or whatnot
[01:05] but I don't think that tells you what kind of block size reads/writes happen from say mysql or apache etc
[01:06] drab: hopefully useful https://github.com/iovisor/bcc/blob/master/tools/bitesize.py
[01:06] drab: fwiw I've never used the bcc version of that tool, only the perf version: https://github.com/brendangregg/perf-tools/blob/master/disk/bitesize -- but the bcc version may have lower overhead
[01:09] sarnold: thanks, will take a look
[01:24] sarnold: thanks, that's really nice, I ended up installing bcc and it has a biosnoop which already does what I needed (prints a SIZE column with the size of the IO request, per process)
[01:25] in fact that's a great toolset I didn't know about, thank you for sharing it
[01:28] drab: here have a weekend's worth of reading or videos, whichever you prefer :) http://brendangregg.com/
[01:31] I'm also going to try and give kernel 4.9 a go just in case, I'm seeing a bunch of updates to nvme
[01:31] although I still feel like it's firmware at this point
[01:32] various tests with higher numjobs peg at 700MB/s consistently, so somehow that seems to be a cap, and it's not the PCIe bus cap, nor the interconnect with the CPU
=== Guest43 is now known as thirty3
[03:10] Good evening, what server solution is better, 16.04 or 16.10?
[03:10] i think i chose 16.04
[03:12] and swap 1024 MB, i mean is that enough?
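(An aside on the fio/bcc testing discussed above: a minimal sketch of a multi-job direct-I/O run plus the bcc tools mentioned. The device path, job parameters, and tool paths are assumptions, not from the conversation.)

    # Sketch only -- /dev/nvme0n1 and all job parameters are hypothetical.
    # WARNING: writing to the raw device destroys any data/filesystem on it.
    fio --name=vmlike --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=8 \
        --runtime=60 --time_based --group_reporting

    # Observe per-request I/O sizes per process while the real workload runs;
    # the path depends on how bcc was installed (often under /usr/share/bcc/tools)
    sudo /usr/share/bcc/tools/biosnoop   # one line per I/O, including its size
    sudo /usr/share/bcc/tools/bitesize   # histogram of I/O sizes per process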
[03:16] no idea, dunno what you're doing on it
[03:16] Village: I strongly recommend picking LTS releases (16.04 LTS) over non-LTS releases unless you happen to like filing bug reports and so on :)
[03:16] personally, if you need any swap at all, I would consider that you are using too much
[03:17] unless you plan to use hibernate
[03:17] the kernel can make some better use of memory if you have -some- swap space
[03:17] but if you wind up using it often then probably you didn't buy enough hardware
[03:17] the kernel always does bad things using swap for me
[03:17] one gigabyte is probably a good idea
[03:17] patdk-lap, it's three partitions: swap last at 1024 MB, boot - ext4 200 MB, and data
[03:18] it's the default, i left it
[03:18] my servers run with 300megs swap
[03:18] last configuration was not bad, ok thanks Guys
[03:19] 1gig boot
[03:19] that sounds like a small /boot
[03:19] one gig /boot is better
[03:19] 200mb boot? that isn't going to get you anywhere
[03:19] hm, that's a lot for boot
[03:19] if it's efi, then that is different
[03:19] boot / data / three swap
[03:19] right
[03:19] if using efi boot, no need for a separate /boot
[03:20] I had gotten so used to newer systems
[03:20] that on many of my servers, I stopped using /boot
[03:20] and hit a stupid raid card bios that can only boot from the first 4gigs of the disk :(
[03:21] ewwww
[03:21] I found it had an option to change it to 8gigs, if I reformatted the raid
=== G_ is now known as G
[04:50] while attempting to install radeon drivers I encountered configure problems preventing the install
[04:50] starting with udev
[04:52] dpkg --configure udev doesn't work because it can't find update-initramfs
[04:55] how can update-initramfs be found in the packages, with the end result of installing radeon drivers in mind?
[08:14] I have a vm exporting /var/www, which is mounted rw by my desktop. guest and host have the same username. I've done chmod -R g+w on the mounted /var/www, and verified that apache is so far unaffected. I've added my host user to the www-data group, so that I can modify files in the mounted system, and verified group membership with the id command. But I still can't modify or create files.
[08:14] what filesystem type?
[08:15] both ext4
[08:15] and how about the network filesystem between the two?
[08:16] nfs?
[08:16] mount command shows it's mounted RW
[08:16] nfs4
[08:18] nfs usually works with userids, not usernames; does id show the same -numbers- everywhere?
[08:18] oh, in trying to describe my issue, and your question, I may have found my solution.
[08:18] thanks
[08:18] oh? :)
[08:18] what was it?
[08:19] * sarnold <-- always happy to be a rubber ducky
[08:19] oh I don't know yet, I'm assuming my first search result was related, as it's about nfs version differences
[08:20] but I haven't got that far yet. They do have the same user id, actually
[08:20] and group ids?
[08:20] for www-data?
[08:21] they're the same. both ubuntuish
[08:23] no kidding, i never noticed that it was the same everywhere; 33? my three easy hosts show 33
[08:24] user and group id for my users also match (all 1000)
[08:25] I was wrong, also. I haven't found my answer yet.
=== IdleOne is now known as Guest59465
[08:30] aww :(
[08:32] ok I might have found an issue. when I do "id myuser" I see www-data as a group, but when I just run "id" I don't see it. But I'm logged in as myuser
[08:32] AH
[08:33] hrm this description might take some time
[08:33] processes inherit their ids from their parents
[08:33] and when you logged in via sshd or lightdm or whatever, PAM told the process which groups to use for your process
[08:34] you fiddled with the group database, but that only takes effect when processes go through PAM again -- existing processes are unchanged
[08:34] so you can either use 'newgrp' or 'sg' to start a new shell with the new group
[08:34] or you can log in again via ssh or lightdm or whatever
[08:35] (note that /usr/bin/newgrp is setuid root -- it will essentially take the place of sshd or lightdm, and run through the full PAM stack again)
[08:36] hm
[08:37] odd. I ran newgrp and it changed my bash history. I'll log in again. Thanks!
[08:38] well that's the thing
[08:38] it started an entirely new shell :)
[08:38] that's the only way to get the group into a process -- to be a child of a process that had it, or to go through the setuid newgrp.
[08:50] i am using openvswitch on ubuntu 16.04.1, on this host machine i have installed one guest (centos) using kvm, please help me correct my understanding, or is this expected?
[08:50] when i run arp on ubuntu i am getting an incomplete entry for the ethernet address, but the bridge shows the ip and mac correct at both ends (ubuntu is the dhcp client)
[08:50] on the guest machine / centos: ? (192.168.80.125) at xxxx mac id of host [ether] on eth0
=== AndrewAlexMac_ is now known as AndrewAlexMac
=== mmcc_ is now known as mmcc
=== fr0st- is now known as fr0st
[09:46] teward: how's nginx looking? FF next week.
=== RoyK^ is now known as RoyK_Heime
[11:15] rbasak: would you have the time to review the second nut merge branch?
[11:15] rbasak: I cannot see any tagging issue with it, wrt https://code.launchpad.net/~louis-bouchard/ubuntu/+source/nut/+git/nut/+merge/311471
[11:16] rbasak: maybe I should just drop this MR & create a new one with the _v2 branch?
[11:17] caribou: sure, though it's my SRU day today. Can I look tomorrow?
[11:17] rbasak: oh, sure np
[11:18] rbasak: or point me to the bits you use to verify it and I can start to have a look
[11:18] in case i see something obvious
[11:19] caribou: you can run wip/review from the usd-importer repo. But you have to edit the top, and read stdout/stderr carefully IIRC.
[11:19] rbasak: ok, I'll have a look
[11:20] sudo vconfig add eth0 100 <- is this tagged or untagged?
=== ikonia_ is now known as ikonia
[12:50] coreycb, jamespage: did you guys push new UCA packages? watcher is broken http://logs.openstack.org/28/430828/1/check/gate-puppet-watcher-puppet-beaker-rspec-ubuntu-xenial/de2a6c1/logs/watcher/watcher.txt.gz#_2017-02-08_12_19_50_416
[12:52] mwhahaha, are you using ocata-proposed?
[12:52] jamespage: yea
[12:52] not changed since 2017-01-17
[12:53] jamespage: i guess we didn't bother testing that one since the last update, well it's broken
[12:54] mwhahaha, I see an update in staging, coreycb is working through some challenges re cells stuff for b3/rc1 which I think is blocking our testing atm
[12:54] coreycb, ^^ is that correct?
[12:55] oh cells v2, how i know that pain
[12:55] jamespage, yes cells is blocking
[12:56] that's fine, i just wanted to make sure you're aware. i'll just skip it for now.
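(Circling back to the group-membership point explained above: a minimal sketch of picking up a freshly added group without logging out. The username 'myuser' and the touched path are placeholders.)

    # assumes a hypothetical user 'myuser' that was just added to www-data with:
    #   sudo adduser myuser www-data
    id           # current session: www-data not listed yet
    id myuser    # group database: www-data is there
    sg www-data -c 'touch /var/www/test-file'   # run one command with the new group
    newgrp www-data                             # or: start a new shell with www-data as primary group
    id -nG                                      # the new shell now lists www-data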
=== pleia2_ is now known as pleia2
[12:57] coreycb, zul: ok this is our old favorite don't-pull-from-github.com problem
[12:58] coreycb, zul: can we phase in a policy to update watch files to use release tarballs from tarballs.openstack.org please
[12:58] tarballs based on github tags are no better than using git + tags
[12:58] jamespage: slowly changing the watch files as I go ;)
[12:59] zul, awesome - looks like watcher needs a recut/re-upload to fix missing alembic migrations
[12:59] jamespage: fudge
[13:07] jamespage, zul: ok. i see an alembic.ini file in the package source but it doesn't get installed, so it may need a manual install in d/rules too.
[13:08] thanks mwhahaha we'll get that fixed up
[13:08] np
[13:08] i think we should be adding basic dep8 tests to more stuff as well
[13:27] coreycb, zul: this problem will go away if we switch to using the release tarball
[13:27] coreycb, zul: adding things to d/rules is death by a thousand cuts for this sort of thing
[13:27] jamespage, ok. why is that though, is the manifest missing in the git tarball?
[13:28] coreycb, yes - because the git tarball is just a snapshot of the tree, whereas the release tarball is generated from the tree using pbr via sdist
[13:29] jamespage, ok that makes sense
[13:29] coreycb, this is what we do in the ci system - we grab git, generate the tarball using python setup.py sdist and then package against that
[13:29] pbr does some automagic stuff and generates a better MANIFEST
[13:29] coreycb, relying less on human brains to get things right
[13:30] jamespage, right. they've been dropping manifest files because pbr does manifest magic now.
=== stokachu_ is now known as stokachu
=== dosaboy_ is now known as dosaboy
[15:02] xnox: when you ran vmtest on s390x in the past do you recall if you were customizing the images? still trying to figure out why all of a sudden s390-tools was required.
[15:05] powersj: which should be installed by default anyway - I don't see many cases to run without it
[15:13] zul: coreycb: I had no luck in reproducing the qemu-triggered nova test issue either
[15:13] zul: coreycb: I went for the more reduced proposed set first, "--apt-pocket=proposed=src:qemu"
[15:14] zul: coreycb: and then with some hackery to get it working to run in a KVM guest
[15:14] zul: coreycb: ?unfortunately? all are working so far
[15:14] zul: coreycb: there is one thing I found though
[15:14] zul: coreycb: maybe that helps with your thoughts on debug
[15:14] zul: coreycb: look at https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/ppc64el/n/nova/20170208_114508_9825b@/log.gz and https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/ppc64el/n/nova/20170203_151321_9825b@/log.gz
[15:15] zul: coreycb: do note that they fail at DIFFERENT spots in the nova-compute-daemon tests
[15:16] cpaelzer, what do you mean by different spots
[15:16] cpaelzer/coreycb: i'm in the middle of fixing the dep8 tests and will be trying to add more debug stuff if there is a failure
[15:17] zul: coreycb: search for "NOVA-COMPUTE FOR nova-compute-" in there - one stops at kvm, one at qemu
[15:17] zul: coreycb: but the other works in each case
[15:17] zul: coreycb: so it might be racy as well
[15:18] zul: coreycb: furthermore - I requested the config of the autopkgtest flavor and got it from Laney
[15:18] cpaelzer, oh i thought it was just nova-compute-qemu that's been failing
[15:18] zul: coreycb: with the same sizes I still get all PASS, but when attaching to the main serial console I see plenty of OOMs
[15:18] zul: coreycb: OOMs could be the factor that makes this somewhat transient
[15:18] cpaelzer, ok interesting
[15:19] coreycb: zul: like "Out of memory: Kill process 12028 (nova-conductor) score 70 or sacrifice child"
[15:19] hmmm...interesting
[15:20] all lxc based tests will never get this until the host is exhausted (I don't think we have mem limits, and even if so it is more efficient)
[15:20] but with KVM as I set it up I see them
[15:20] I need to do another rerun once the current one is over, trying to get the log out to a file or so
[15:20] coreycb: zul: FYI 1536 M is the limit
[15:34] coreycb: zul: I have a local repro - on the third re-run
[15:35] coreycb: zul: yet it is somewhat dead, so I suspect even more that the OOM is involved
[15:36] cpaelzer: how much memory does the instance have...since i just ran a regular test and i get a "failed to fork" on the tests
[15:38] the ppc64el autopkgtest flavour has 1536M
[15:38] it just cleaned up and should get me to shell-fail soon - I can then check if there were OOMs in this case as well
[15:42] zul: coreycb: here are the ooms in the failing case http://paste.ubuntu.com/23954935/
[15:42] zul: coreycb: maybe it is transient depending on what the OOM killer hits
[15:43] cpaelzer: yeah, if nova-conductor goes away then the nova-compute tests fail
[15:45] I'm analyzing on my system (what's left of it) where the memory goes
[15:51] it seems to be a fight between openstack respawning things and the OOM killer reaping them - already 60 kills and counting
[15:51] I'm restarting it with way more memory to see if it would work then
[15:52] to note - the test is waiting for me due to shell-fail - but the spawning and killing goes on
[15:52] zul: are you ok with me restarting this - or do you think you need any debug data from this case?
[15:53] cpaelzer: try it with more memory please
[16:24] powersj, i have not.
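(A sketch of the OOM checks and the roomier re-run discussed above. The image name, the RAM size, and the exact flag spelling are assumptions -- check autopkgtest-virt-qemu(1) before relying on them.)

    # After a failed run, look for OOM kills in the test VM / on its console log
    dmesg | grep -iE 'out of memory|killed process'
    journalctl -k | grep -i oom

    # Re-run the nova dep8 tests in a qemu guest with more memory; the image name
    # and --ram-size value are placeholders
    autopkgtest --apt-pocket=proposed=src:qemu nova -- \
        qemu --ram-size=8192 adt-zesty-ppc64el-cloud.img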
[16:24] powersj, but note, that previously s390-tools used to be declared as important and installed everywhere (including chroots/containers)
[16:25] powersj, very late in the cycle (can't remember which) we made s390-tools unimportant (thus not debootstrapped by default)
[16:25] and instead made it get installed as part of cloud-image generation, and inside d-i's zipl-installer
[16:25] such that it is only installed on systems that are going to boot.
[16:26] powersj, i can't remember what vmtest does, but it does make sense if one now needs to add "apt install s390-tools" somewhere.
[16:29] xnox: powersj: will read backtrace, but sounds like maybe an arch dependency in tools/vmtest-system-setup?
[16:30] probably. before it was automagic =)
[16:30] nacc, and/or maybe whatever generates maas images should be including s390-tools
[16:31] if it starts off from stock container tarballs
[16:31] xnox: ah good point
[16:33] xnox: thanks for the info, the priority change at least explains why it is no longer there
[16:36] zul: coreycb: with 8G all passing and no OOM
[16:36] cpaelzer: thought so
[16:37] cpaelzer, ok thanks for digging into that
[16:37] coreycb: zul: I'm at EOD - would you let me know what the conclusion on this will be once you settle on the new tests you said you are working on?
[16:37] cpaelzer: ack
=== Agent is now known as Guest9364
[18:01] I need to transition a server over to a colleague. This server lives on AWS. What do I need to do to get them primary access? I create an ssh key, and create a user, but how do I promote that user to the level of my old user?
[18:03] You could add your key to authorized_keys. Handing over control of the instance (for example permission to terminate it) is really a question for Amazon.
[18:04] It is more for the scripts and jobs the server runs, not the "ownership" on the AWS side. If I give them ssh access, that's the end of it?
[18:06] By default users belong to the admin and/or sudo groups. You may want to add those.
[18:07] Beyond that, it depends on what you set up.
[18:07] There's also stuff like libvirtd, which on install copies the admin group (IIRC). Or something like that. libvirtd probably doesn't apply to an AWS instance though.
[18:08] does anybody know enough about block devices to help me understand why, with the O_DIRECT flag, I'm still seeing much faster speeds writing to a filesystem than writing to the raw device?
[18:08] I thought once O_DIRECT was enabled FS caching would be out of the picture and at most I'd be using the same OS cache that dd would
[18:09] drab: I'm under the impression that O_DIRECT is best effort, and it depends on your filesystem.
[18:10] mmmk, I'm testing on ext4 and the difference between writing directly to /dev/nvme or to a file on an ext4 fs on it is huge
[18:12] I don't know that it does, but I wouldn't be surprised to hear that ext4 completely ignores O_DIRECT.
[18:13] rbasak: I think drab's test is to open the raw block device with O_DIRECT and write to it that way
[18:13] I assumed he meant he was opening a file on the filesystem O_DIRECT.
[18:14] both, well fio is doing that for me (and dd), not coding anything up
[18:14] it's both opening the raw device with O_DIRECT and, in another test, a file on an ext4 fs on the same device
[18:21] ls -l
=== miczac\away is now known as miczac
=== JanC_ is now known as JanC
[20:19] nacc: hi! I'm currently preparing security updates for php5...are you planning an SRU to php 7.0.15 soon?
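(On the AWS hand-over question earlier: a minimal sketch of giving a colleague an equivalent admin login on the instance itself. The username and public key are placeholders; the admin group is 'sudo' on recent Ubuntu images, 'admin' on older ones. Anything tied to the old user -- cron jobs, file ownership, service accounts -- still needs a separate hand-over.)

    # 'colleague' and the key below are placeholders
    sudo adduser colleague
    sudo usermod -aG sudo colleague           # grant admin rights via sudo
    sudo install -d -m 700 -o colleague -g colleague /home/colleague/.ssh
    echo 'ssh-ed25519 AAAA...their-key... colleague@laptop' | \
        sudo tee /home/colleague/.ssh/authorized_keys
    sudo chown colleague:colleague /home/colleague/.ssh/authorized_keys
    sudo chmod 600 /home/colleague/.ssh/authorized_keys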
[20:20] mdeslaur: yeah, i think i'm going to need to for a bug in the last SRU
[20:20] mdeslaur: was hoping to do that this week
[20:21] nacc: oh, interesting
[20:21] mdeslaur: an upstream bug, that is
[20:23] ok, so I'll fix php5 with backports, and I'll wait for your 7.0.15 SRU to go through, and once it does, I'll rebuild it and re-release it as a security update, how's that sound?
[20:23] unless you think it's safe to just update to 7.0.15 directly as a security update
[20:23] but I really hate doing that as I can't back out patches if there's a regression
[20:25] nacc: ^
[20:53] mdeslaur: yeah, that makes sense to me
[21:41] beisner, hi can you promote qemu 1:2.3+dfsg-5ubuntu9.4~cloud3 to liberty-proposed and qemu 1:2.2+dfsg-5expubuntu9.7~cloud8 to kilo-proposed please?
[21:46] hi coreycb, both synced to -proposed.
[21:46] beisner, thanks
[21:46] yw coreycb
[23:07] I have an ubuntu vm exporting /var/www as an NFS share. host and guest have the same username, each with the same user ID and group ID. the host user belongs to the www-data group, and folders/files in the mounted folder are g+rwX, but I still can't create or modify files.
[23:08] none of that matters, unless you're using nfs3
[23:08] nfs4 is a totally different beast
[23:08] hey Seven_Six_Two, still no luck? :(
[23:08] patdk-lap, I am using 4. I got a hint at that last night, but didn't find a solution. No sarnold, but I haven't worked on it since yesterday
[23:09] I'm not sure what to search for ... I have a situation where I need to install multiple php applications (one currently installed and one lined up to install soon). I don't know enough about what scheme/system/type of solution I need to employ so that the urls used to access each application are unique. I have heard different terms related to configuring a web server or something like that, but I don't know what to choose or where to begin. Can someone steer me in the right direction?
[23:09] What is in your /etc/exports
[23:10] dino82, /var/www *(rw,sync,no_subtree_check,no_root_squash)
[23:10] that is definitely an nfs3 export
[23:11] countingdaisies: depends upon your webserver of choice; nginx for example has multiple location blocks: https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
[23:11] but my mount on the host: webdev1.local:/var/www on /home/username/Workspace/vm/Webdev1 type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.168.3,local_lock=none,addr=192.168.168.81)
[23:11] countingdaisies: apache may also call their things "location"..
[23:14] I'm also unsure of whether my guest user or host user needs to be a member of www-data. or both? but like you mentioned patdk-lap, maybe it's irrelevant?
[23:15] sarnold: Thanks. I have a package called eramba installed (the installed php package) but I don't know in what way it was configured since I just followed instructions (from more than one source). Now I also want to install MediaWiki, which is a php application. Right now I access eramba by entering "http://localhost/login", or if I type "http://localhost/" it redirects to the first. Also, eramba is not installed properly (barely installed, I'd call it). That's my current sitch. I'm using apache2 (ubuntu's version/default). Wish i had a clearly laid out plan to achieve my goals but feeling a bit overwhelmed at the prospect.
[23:15] Sorry, I'm wordy (Lord knows I try)
[23:16] countingdaisies, I use apache for multiple sites.
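(An aside on the NFS permission thread above: with sec=sys, as in the quoted mount options, the server checks the numeric uids/gids of the client process, so the sanity checks below are worth running on both ends. 'youruser' is a placeholder.)

    # run on BOTH the NFS server (the VM) and the client (the desktop); numbers must match
    id -u youruser; id -g youruser      # 'youruser' is a placeholder
    id www-data
    # in the client shell that actually does the writing:
    id -nG                              # must really list www-data for this session
    # on the server, confirm what is exported; on the client, confirm the mount options
    sudo exportfs -v
    mount | grep /var/www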
[23:17] this may be a useful high-level overview of ubuntu-style apache config https://help.ubuntu.com/lts/serverguide/httpd.html
[23:17] Seven_Six_Two: I'm sure that's what I need to do, but how? Where do I start if what I want is merely to solve a problem, not learn everything there is to know?
[23:17] countingdaisies, you just need a config file in /etc/apache2/sites-available for each website that you have. The config file specifies the url that the site will respond to, and until you have a real domain name, you can just make one up and use /etc/hosts to point it to 127.0.0.1
[23:17] sarnold: I'll read that, ty
[23:18] then you enable it using sudo a2ensite siteconfigfile.conf
[23:18] or whatever the file name is in /etc/apache2/sites-available (and it must end in .conf)
[23:19] heh, good point about 'ending in .conf'. it's easy to lose -hours- trying to track that down.
[23:19] yeah, the switch messed me up, as I wasn't really reading the release notes like I should have been
[23:19] Seven_Six_Two: Are you suggesting that it's permissible to change the address in an apache config file to extend beyond the primary domain? eg: localhost/some_additional_stuff_I_completely_make_up ??
[23:19] Seven_Six_Two: you weren't the only one
[23:20] Seven_Six_Two: That super super helps a lot. ty
[23:20] sounds simple
[23:21] countingdaisies, your "ServerName" directive doesn't have to be a real domain. But it has to be something that your browser can resolve, hence the hosts entry
[23:21] I use business.local or something like that
[23:21] .local is trouble since apple bonjour assumes it owns it
[23:21] (very annoying)
[23:22] just don't use a .com or .org..oh really? I guess anything not-official would be best then? is .dev a real tld?
[23:22] oh no, they are now...
[23:23] looks like .dev is owned by google https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains
[23:23] s/owned/operated/
[23:24] I guess I should switch!
[23:24] afaik there's no "safe" tld choice set aside for folks to use internally :(
[23:24] really it can be anything though. If I block a website called something.dev, I guess it's a risk I take.
[23:25] if you haven't hit it yet.. :)
[23:25] or, the only one set aside is the one you own
[23:25] good point
[23:32] Seven_Six_Two: Like death, the day I'd have to learn this had to come. Glad you made it a little easier to swallow.
[23:33] np! apache isn't so bad, once you give it a chance
[23:34] but it's definitely not meant for your average user, since misconfiguration of a public server can be tragic
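(To make the per-site setup described above concrete, a minimal sketch. The site name wiki.example.test, the DocumentRoot, and the log names are made up; .test is reserved for exactly this kind of local use, unlike .dev or .local.)

    # /etc/apache2/sites-available/wiki.example.test.conf  (file and site name are made up)
    <VirtualHost *:80>
        ServerName wiki.example.test
        DocumentRoot /var/www/wiki
        ErrorLog ${APACHE_LOG_DIR}/wiki_error.log
        CustomLog ${APACHE_LOG_DIR}/wiki_access.log combined
    </VirtualHost>

    # point the made-up name at this machine, enable the site, reload apache
    echo '127.0.0.1 wiki.example.test' | sudo tee -a /etc/hosts
    sudo a2ensite wiki.example.test.conf
    sudo systemctl reload apache2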