[07:18] Hi all. What would be the best way to have a server continuously attempt to mount an NFS file system until it’s available, but without holding up boot?
[07:19] you could use autofs
[07:28] hateball: That looks like it may be perfect - thanks!
[07:55] Hello, has anyone here tried running a DC++ server on Ubuntu..?
[07:56] Good morning.
[07:57] Good morning, lordievader
[07:57] Hey Village, how are you doing?
[07:58] Not bad, thanks. Looking into how to run a DC++ hub server on Ubuntu - have you tried it?
[07:59] Nope, never done anything with that.
[07:59] Maybe someone has, but it looks like the Americans are asleep right now, and most people here are American
[08:33] coreycb, ddellav: aodh and ceilometer are still foobar
[08:34] they both now listen on port 8000 rather than the configured port in /etc//.conf
[08:40] hateball: Took some fiddling - but works beautifully, thank you!
[08:47] coreycb, ddellav: https://bugs.launchpad.net/aodh/+bug/1629796
[08:47] Launchpad bug 1629796 in ceilometer (Ubuntu) "wsgi_script generated binaries listen on (incorrect) default port 8000" [Undecided,New]
=== _degorenko|afk is now known as degorenko
=== a3pq51_ is now known as a3qp51
[09:02] Using Apache2 on AWS on Ubuntu 14.04. I need to include an environment variable in my log file. I tried export VARIABLE="something" and then added %{VARIABLE}e to my log formats. The variable is empty in the logs, though
[10:01] percona-galera 50%: Checks: 2, Failures: 0, Errors: 1
[10:01] galera/tests/write_set_ng_check.cpp:246:E:WriteSet:ver3_basic:0: (after this point) Received signal 7 (Bus error)
[10:01] on armhf =(
[10:48] Hi
[10:49] hello.
[11:03] I'm kind of a noob when it comes to DNS
[11:04] So my question is, if I have a zone file, how do I redirect it (CNAME this domain) to another domain?
[11:04] When I try to use a CNAME, I'm getting: loading from master file XXX failed: CNAME and other data
[11:04] you can only CNAME individual records, not the entire domain
[11:08] Any idea how to do it?
[11:09] In the best possible way?
[11:09] LostSoul: you cannot do that, you can only redirect individual records.
[11:13] <_ruben> cnames are probably the most misunderstood records within dns :)
[11:14] bekks: So I have domain X and I want it to redirect to domain Y
[11:15] <_ruben> define "redirect"
[11:30] Ok, so there is no easy way to point a whole zone at the IP/redirect/CNAME of another address?
[11:32] Not at the DNS protocol level. It may be possible to configure a nameserver to do it dynamically or something like that, but I don't know of a specific example.
[11:32] <_ruben> one example would be: https://doc.powerdns.com/md/authoritative/howtos/#using-alias-records
[11:50] jamespage, urgh, thanks. hopefully we're good now on aodh/ceilometer.
[11:51] coreycb, aodh tested OK - have a charm change up for that
[11:51] doing the same with ceilometer - package is OK now
[11:51] coreycb, next cycle we switch to apache2+mod_wsgi (<< EmilienM you'll probably be interested in that switch)
[11:52] like you did for keystone?
[11:52] creating default enabled vhosts, etc
[12:00] EmilienM, yup
[12:00] same model
[12:00] ok
[12:40] Maybe someone has, but it looks like the Americans are asleep right now, and most people here are American
[12:40] has anyone tried running a DC++ server on Ubuntu..?
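For the on-demand NFS question at [07:18], a minimal autofs sketch along the lines of the advice above; the mount point /nfs, the server name nfs-server and the export /export/data are placeholders, not taken from the log:

    # /etc/auto.master -- anything under /nfs is handled by the map below
    /nfs  /etc/auto.nfs  --timeout=60

    # /etc/auto.nfs -- "data" appears as /nfs/data and is mounted on first access
    data  -fstype=nfs4,rw  nfs-server:/export/data

    # apply the new maps ("service autofs restart" on pre-systemd releases)
    sudo systemctl restart autofs

Boot never blocks on the share because it is only mounted when something actually walks into /nfs/data, and autofs simply retries if the server is unreachable at that moment.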
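On the [09:02] question about %{VARIABLE}e coming up empty: an export in a login shell never reaches the running Apache workers, so the variable has to be defined somewhere Apache itself can see it. A hedged sketch of two common ways, reusing the placeholder name VARIABLE:

    # Option 1: set it in the vhost/server config (mod_env)
    SetEnv VARIABLE "something"
    LogFormat "%h %l %u %t \"%r\" %>s %b %{VARIABLE}e" withvar
    CustomLog ${APACHE_LOG_DIR}/access.log withvar

    # Option 2: put export VARIABLE="something" in /etc/apache2/envvars
    # (which Ubuntu's apache2 init sources) and pass it through to requests:
    PassEnv VARIABLE

    # either way: sudo service apache2 restart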
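On the [11:04] "CNAME and other data" error: the zone apex always carries SOA and NS records, and a CNAME may not coexist with any other record type at the same name, which is why the zone refuses to load. A hedged zone-file illustration (example.com serves the zone, example.net is the target; both are placeholders), with the PowerDNS ALIAS record linked above as one server-side workaround:

    ; broken: an apex CNAME clashes with the zone's own SOA/NS records
    @      IN  CNAME  example.net.

    ; fine: CNAME individual names, keep A/AAAA at the apex
    www    IN  CNAME  www.example.net.
    @      IN  A      192.0.2.10

    ; the PowerDNS authoritative server can approximate an apex "CNAME" with ALIAS:
    @      IN  ALIAS  example.net.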
=== degorenko is now known as _degorenko|afk
=== fr0st- is now known as fr0st
=== _degorenko|afk is now known as degorenko
[14:46] ddellav, I synced magnum, gnocchi, and sahara
[15:00] coreycb ack
=== mfisch` is now known as mfisch
=== mfisch is now known as Guest7223
=== Guest7223 is now known as mfisch
=== lynxman is now known as lynxman_
=== lynxman_ is now known as lynxman
=== php_ is now known as php
[17:23] coreycb: nova-lxd rc1 is out. https://pypi.python.org/pypi?:action=display&name=nova-lxd&version=14.0.0.0rc1
=== degorenko is now known as _degorenko|afk
[17:43] rockstar, nova-lxd uploaded
[17:43] hey
[17:43] coreycb: ta
[17:43] I'm trying to set up an apt mirror of Ubuntu
[17:43] with the PXE stuff
[17:43] for 14.04 I used debian-installer
[17:43] what is it for 16.04?
=== markthom- is now known as markthomas
[17:56] Trying to make a separate log file for status code 200 with Apache 2. Is it possible?
=== ndboost is now known as Guest469
[21:30] Hey guys, where can I find the docs to set up OVS 2.6 with DPDK 16.07 from the Newton Cloud Archive?
[21:31] I managed to make it work with plain Xenial (OVS 2.5 / DPDK 2.2) but it was super unstable, trying it again this week...
[21:31] ThiagoCMC: hah, your name was the first thing that came to mind.. "sounds like something thiago would have done" :)
[21:32] LOL
[21:32] heh
[21:37] I'm fooling with juju lxd here, and following a reboot, none of my lxc containers will start properly. With no lxc processes running, I can run lxc list without issue, but "lxc start " hangs. If I CTRL-C and check ps, there's something called "forkstart" still running, and two more processes of [lxc monitor] on the specific container..
[21:38] Things were running okay before I rebooted; following the reboot I had to kill all my LXC processes before I could get basic functionality back.
[21:50] hey all :)
[21:50] I have some questions about openstack; to keep from spamming this IRC channel, I have put them in an askubuntu question
[21:51] http://askubuntu.com/questions/832736/openstack-with-autopilot-some-networking-clear-up
[21:51] I hope the questions make sense, and IMO this could really help other people too
[21:51] (some upvotes would help too ;) )
[21:53] PCdude: oy that's a huge series of questions :)
[21:55] sarnold: haha sorry, I even picked out the important ones, I could add more if you like? haha
[21:55] :)
[21:57] sarnold: I did some serious digging around before posting those questions. Maybe the answers could be added to the documentation for others
[22:05] PCdude: I believe there's an #openstack channel - might be more appropriate
[22:06] RoyK: some questions are specific to Ubuntu, therefore I asked here
[22:08] PCdude: I can guess that the two NICs are required for the MAAS layer vs the openstack layer
[22:08] PCdude: and I suspect the two hard drives are just to have RAID on the things, but maybe one of them -is- devoted to the clients or something. seems strange.
[22:09] sarnold: yeah, I thought the same thing too about the RAID setup at first, but after installing it seems in MAAS that no RAID is applied.
[22:10] there's also a #maas, they may be able to answer the more ubuntu-specific portions of your questions
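On the [17:43] PXE question: 16.04 still uses debian-installer for netboot, and the archive layout under dists/xenial mirrors what trusty had. A hedged example of fetching the amd64 netboot image for a PXE setup; the TFTP root is a placeholder and the URL assumes the standard archive layout:

    wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/netboot.tar.gz
    sudo tar -xzf netboot.tar.gz -C /var/lib/tftpboot/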
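For the [17:56] status-code question: with Apache 2.4 a CustomLog can be made conditional on an expression, so 200 responses can go to their own file. A hedged sketch (the log file name is illustrative; on a build without expr= support on CustomLog you would fall back to post-processing the combined log):

    # only requests that finished with status 200 land in this file
    CustomLog ${APACHE_LOG_DIR}/access_200.log combined "expr=%{REQUEST_STATUS} -eq 200"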
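A few hedged first steps for the [21:37] hang - none of this comes from the log, just the usual places to look when an LXD container refuses to start after a reboot (the container name c1 is a placeholder):

    lxc info c1 --show-log      # tail of the container's own lxc log
    sudo systemctl status lxd   # is the daemon itself healthy after the reboot?
    sudo journalctl -u lxd -b   # daemon messages since boot
    lxc monitor                 # in a second terminal: stream LXD events while retrying
    lxc start c1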
[22:10] It would be strange for the MAAS layer and the openstack layer to be separated, why not all the machines? They are all controlled by MAAS
[22:11] PCdude: the canonical team took some liberty with how openstack is installed and configured
[22:11] PCdude: the 2nd disk is for a ceph cluster used by cinder
[22:11] PCdude: the 2nd nic is for neutron to segregate data and mgmt traffic (I think)
[22:17] holocron: ah ok, the 2nd disk kind of makes sense, but what will happen if I add 15 disks - will it only add the second disk? Can I add the others manually?
[22:17] Is there a known issue with the 14.04.5 server ISO not being able to autoconfigure (DHCP) a network interface?
[22:17] This has been working for a lot of machines deployed with the 14.04 ISO, and without any network change the .5 ISO isn't able to detect a DHCP config.
[22:17] Where can I get a stock 14.04 server ISO?
[22:18] holocron: the 2nd NIC for data and mgmt separation looks weird to me, simply because there are more than 2 networks used by openstack, what about the others?
[22:18] PCdude: sorry, I don't know.. out of the box it won't do anything I think
[22:18] PCdude: you're going to have to ask someone else more knowledgeable.. i've only just started looking at this myself.
[22:20] bekks: releases.ubuntu.com seems to still host 14.04.4 images. for older than that, http://old-releases.ubuntu.com/releases/trusty/
[22:20] I suspect "N computers with five NICs" would be a non-starter for most places even if it would make sense to have maas vs block layer vs openstack management vs application ...
[22:20] tarpman: 14.04.4 has the same issues for me, I'll try an older release, thank you
[22:20] bekks: http://cdimages.ubuntu.com/releases/
[22:21] tarpman: ah, crazy, I wondered where the 14.04 LTS releases were stored, funny that they're not on cdimages..
[22:21] holocron: will do
[22:21] Most of the networks are segregated via VLAN, openvswitch, linux bridges, etc etc etc depending on what layer of the stack you're at
[22:22] sarnold: good point, of course. I think I would like some more freedom here and there, but the whole thing is pretty complicated to do it all yourself
[22:22] older point releases: http://old-releases.ubuntu.com/releases/
[22:22] oh, i'm late
[22:22] tomreyn: thx, downloading a 14.04 now.
[22:23] PCdude: amen -- do you know if autopilot uses the juju openstack-base charm? I have a suspicion that it does
[22:23] openstack-base charm bundle*?
[22:26] holocron: yeah, I am 95% sure that it does. I am still learning Juju, maybe I can tweak the standard openstack bundle in Juju and edit it to how I like it
[22:27] PCdude: that's what I was thinking. I know that the various charms that make up that bundle have lots of configuration options exposed, but you cannot edit the openstack config files directly as they will get changed back (ala chef)
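On the exposed charm options just mentioned: they are set through juju itself rather than by editing the rendered OpenStack config files, which the charms will rewrite. A hedged example of reading and changing one (the option name is illustrative; juju 1.x used `juju get`/`juju set` instead of `juju config`):

    juju config nova-compute                  # list the options the charm exposes
    juju config nova-compute virt-type=kvm    # change one; the charm re-renders its config files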
[22:28] PCdude: if I've understood the autopilot thing correctly, you should be able to change some of the charm settings from e.g. https://jujucharms.com/nova-compute/xenial/3 and be able to configure things more as you wish
[22:28] PCdude: but the autopilot tool itself may make some assumptions about configurations that are available in the charms
[22:33] holocron: sarnold good point about Juju, maybe it's even better to keep it in Juju and leave autopilot out of it
[22:49] PCdude: I'm actually running juju with MAAS but all my MAAS machines are KVM VMs ^^ It's not super performant but I get to see how some of it's plumbed out
[22:49] PCdude: i put that project on the back burner and just started messing around with the pure LXD openstack bundle
[22:50] holocron: I have it running in some VMs in ESXi right now, not very performant either. I really wanna put it on real hardware, but mostly the cost of it all holds me back
[22:52] LXC on bare metal is performant
[22:53] PCDude: i had a 16G machine with it running, but it was swapping heavily.. i'm working on another install with 32G
[22:53] Honestly we avoid bare metal here as much as possible. Unless you need special hardware, VM is the way to go
[22:53] Ussat++
[22:54] Ussat: >< I'm on s390x and will have more special hardware
[22:54] With VMware I get redundancy, backups, HA...
[22:54] everything I have is HA'd between two datacenters..................
[22:54] congrats?
[22:55] holocron: that's nice - what are the specs of such a machine?
[22:55] holocron: I tried deploying on a machine with 8GB, did not work out very well haha
[22:55] I now have a machine with 24GB and it all works pretty ok-ish
[22:56] not good, but workable
[22:56] PCdude: monitor it with something useful, like munin or zabbix, to see where the bottleneck is
[22:56] PCdude: better monitor the hosts too
[22:56] RoyK hmm, perhaps you've seen it.. http://www-03.ibm.com/systems/linuxone/enterprise-linux-systems/emperor-product-details.html
[22:57] what's the point of openstack overhead when you've got one machine though? wouldn't libvirt get you 60-70% of the way there and be less overhead?
[22:57] holocron: which model?
[22:57] sarnold: libvirt gets me 100% of the way there, it's the interop that's missing
[22:57] RoyK hmm, i'm not sure, we've got 3 on the floor and they get swapped often
[22:57] RoyK: well, I have a mid-range CPU with 4 cores. That is seriously too little for this
[22:57] I like that current POWER CPUs allow for sub-allocation of CPU cores
[22:58] nice systems holocron
[22:58] RoyK: I have done some monitoring, I was mainly focusing on getting openstack more in the way I want it. The tweaking part
[22:59] RoyK, we have about 20 LPARs of RHEL on P8 at the moment, about 100 other VMs on VMware
[22:59] POWER rocks
[22:59] We are mostly AIX on POWER except for those RHEL systems
[23:01] Ussat: thanks, yeah s390x is always a bit strange, but it's fun
[23:02] Plus, i can be a middling linux admin and look like a hero because 90% of mainframers know diddly about linux ;)
[23:02] :)
[23:02] Ussat: ok, how much does that hardware cost?
[23:03] Ussat: and btw, what sort of storage?
[23:04] Well, I am not involved in that aspect, but we have 4 870's. And all our storage for prod is IBM V9000 in a stretched cluster with encryption, and V840 for non-prod
[23:05] as for the price... I have no clue, I don't even see that part of the deals
[23:05] that's all director-level stuff
[23:05] Our Windows storage is all on Isilon
[23:06] IIRC we have around 200 VMs on ESXi with 10 or 12 hosts from Dell with some Dell EqualLogic crap for storage (around 150TiB)
[23:06] we have multiple SVCs in front of the V9000 and V840
[23:07] Our shit is WAY over-engineered though, multiple datacenters, fully redundant, can run from either. We are a hospital so....
[23:07] some of it is rather old (3+ years) and I guess a pricetag of around USD 300k for the lot (or a bit more)
[23:07] perhaps 400
[23:12] I guess that s390x costs a wee bit more :D
[23:12] heh
[23:15] s390x is definitely only for certain use cases...
[23:15] you know, Walmart-scale
[23:16] just fyi though, if you wanted to play around on one, you can check out the linuxone community cloud
[23:31] had tons of POWER at $oldjob
[23:32] I was always underwhelmed, but it was very robust at least.
[23:33] power + nvidia looks like a nifty solution... i always figured power was fit for scientific computing and couldn't really understand why anybody'd run business on it
[23:34] single thread perf and latency wasn't very good, but overall throughput was not too shabby.
[23:35] not great, but
[23:35] now, i/o and single thread performance is where s390x beats all hands down
[23:35] this was a few gens ago anyhow.
[23:37] we had some s390(x?), but I never really touched it other than some light integration work.
[23:37] so no idea
[23:38] trippeh: how's it compare to your home rigs? :)
[23:39] POWER, s390x, Itaniums, SPARCs, MIPS (SGI), we had most things that could run "UNIX"
[23:39] ;)
[23:41] sarnold: spinning rust SAN sure didn't help
[23:41] trippeh: heh, not great for latency but depending upon how many of them you've got maybe good for throughput despite spinning... :)
[23:42] was always fighting with the san people for iops ;-)
[23:42] hah :)
[23:43] home rig totally crushes them, I'm sure, but age difference helps
[23:43] :)
[23:43] most of them got canned after the Big Merger(tm)
[23:43] non-windows systems that is ;)
[23:46] man, so much $$$ saved just not having to fight the SAN people for iops with modern SSD SANs ;)
[23:48] hah
[23:48] and here I'm slightly disappointed that my PCIe NVMe card can only do ~4k iops for my use rather than the 400k iops that I was expecting
[23:49] hah, yeah, current nvme likes parallelism
[23:50] I thought that something like ag --workers 300 or something would be able to generate enough parallelism in the filesystem to actually -use- all those iops. no such luck :(
[23:50] fio had no problems for me :-)
[23:50] if your workload matches fio, well... :)
[23:51] but ag -is- the workload I wanted to scream, haha
[23:51] heheh
[23:51] sarnold: wha block sizes did you test it with?
[23:52] s/wha/what/
[23:52] RoyK: I'm using it as an l2arc for zfs; afaik there's no way to set an explicit block size for l2arc devices, only for vdevs
[23:53] so it is a caching drive?
[23:53] sarnold: guess it just uses the block size from the pool
[23:54] trippeh: yeah
[23:54] sarnold: did you do a zdb check on how large the records were?
[23:54] RoyK: I -assume- that the blocks are whatever sizes the data elements are when they're read..
[23:55] sarnold: possibly - I don't know the code
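On the fio vs. ag exchange around [23:50]: fio drives an explicit queue depth, which is why it can saturate an NVMe device while a filesystem-walking tool like ag rarely will. A hedged example of the kind of job being discussed; the device path is a placeholder, and randread is non-destructive, but double-check the target before running it:

    fio --name=randread --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting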
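For the [23:54] zdb question, a hedged sketch of inspecting what block sizes the pool actually holds and what the L2ARC device is doing; the pool name tank is a placeholder, and the zdb walk can take a long time on a large pool:

    sudo zdb -Lbbbs tank                      # block statistics, including a block size histogram
    zpool iostat -v tank 5                    # per-vdev and cache-device I/O every 5 seconds
    grep ^l2_ /proc/spl/kstat/zfs/arcstats    # L2ARC hit/miss/size counters on ZFS on Linux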