[07:18] <tjbiddle> Hi all. What would be the best way to have a server continuously attempt to mount a NFS file system, until it’s available, but without holding up boot?
[07:19] <hateball> you could use autofs
[07:28] <tjbiddle> hateball: That looks like it may be perfect - thanks!
[07:55] <Village> Hello, has anyone tried running a DC++ server on Ubuntu?
[07:56] <lordievader> Good morning.
[07:57] <Village> Good morning, lordievader
[07:57] <lordievader> Hey Village, how are you doing?
[07:58] <Village> Not bad, thanks. I'm looking into how to run a DC++ hub server on Ubuntu. Have you tried it?
[07:59] <lordievader> Nope, never done anything with that.
[07:59] <Village> Maybe someone has tried, but it looks like the Americans are asleep right now, and most people here are American.
[08:33] <jamespage> coreycb, ddellav: aodh and ceilometer are still foobar
[08:34] <jamespage> they both now listen on port 8000 rather than the configured port in /etc/<pkg>/<pkg>.conf
[08:40] <tjbiddle> hateball: Took some fiddling - but works beautifully, thank you!
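For anyone landing here later, a minimal autofs sketch of the setup hateball suggested. The mount point, map file, server name, and timeout below are illustrative placeholders, not details from tjbiddle's actual setup:

```text
# /etc/auto.master -- register an indirect map for /mnt/nfs
# (paths and timeout are examples only)
/mnt/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs -- one line per share; "share" becomes /mnt/nfs/share
share  -fstype=nfs4,rw,soft  nfs-server.example.com:/export/share
```

Nothing is mounted at boot; the first access to /mnt/nfs/share triggers the mount, and autofs keeps retrying on demand, so a slow or absent NFS server never blocks startup.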
[08:47] <jamespage> coreycb, ddellav: https://bugs.launchpad.net/aodh/+bug/1629796
[09:02] <gargsms> Using Apache2 on AWS on Ubuntu 14.04. I need to include an environment variable in my log file. I tried export VARIABLE="something" and then added %{VARIABLE}e to my log formats, but the variable is empty in the logs.
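A likely cause, sketched below: `export` in an interactive shell only affects that shell, not the Apache processes started by init, and `%{VAR}e` reads the request's environment rather than the OS environment. One hedged way to wire it up on Ubuntu's Apache (the variable name comes from the question; the log format line is illustrative):

```apache
# /etc/apache2/envvars -- set it in the environment Apache is started with
export VARIABLE="something"

# In the server/vhost config: copy the OS variable into each request's
# environment so %{VARIABLE}e resolves in log formats
PassEnv VARIABLE
LogFormat "%h %l %u %t \"%r\" %>s %b %{VARIABLE}e" withvar
CustomLog ${APACHE_LOG_DIR}/access.log withvar
```

`SetEnv VARIABLE something` directly in the config works too and avoids touching envvars; either way Apache needs a restart, not just a new shell.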
[10:01] <xnox> percona-galera 50%: Checks: 2, Failures: 0, Errors: 1
[10:01] <xnox> galera/tests/write_set_ng_check.cpp:246:E:WriteSet:ver3_basic:0: (after this point) Received signal 7 (Bus error)
[10:01] <xnox> on armhf =(
[10:48] <LostSoul> Hi
[10:49] <leangjia> hello.
[11:03] <LostSoul> I'm kinda noob if it comes to DNS
[11:04] <LostSoul> So my question is: if I have a zone file, how do I redirect it (CNAME the whole domain) to another domain?
[11:04] <LostSoul> When I try to use a CNAME, I get: loading from master file XXX failed: CNAME and other data
[11:04] <maswan> you can only cname individual records, not the entire domain
[11:08] <LostSoul> Any idea how to do it?
[11:09] <LostSoul> In best possible way?
[11:09] <bekks> LostSoul: you cannot do that, you can only redirect individual records.
[11:13] <_ruben> cnames are probably the most misunderstood records within dns :)
[11:14] <LostSoul> bekks: So I have domain X and I want it to redirect to domain Y
[11:15] <_ruben> define "redirect"
[11:30] <LostSoul> Ok, so there is no easy way to point an entire zone at another IP/redirect/CNAME address?
[11:32] <rbasak> Not at the DNS protocol level. It may be possible to configure a nameserver to do it dynamically or something like that, but I don't know of a specific example.
[11:32] <_ruben> one example would be: https://doc.powerdns.com/md/authoritative/howtos/#using-alias-records
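To make the error concrete: "CNAME and other data" means a CNAME was placed at a name that already holds other records, typically the zone apex, which must carry SOA and NS records. A sketch zone fragment (all names and addresses here are placeholders):

```text
; Broken: a CNAME at the apex collides with the mandatory SOA/NS records
;example.com.       IN  CNAME  other.example.net.

; Fine: CNAME individual names inside the zone
www.example.com.    IN  CNAME  www.other.example.net.
mail.example.com.   IN  CNAME  mail.other.example.net.

; Apex workaround at the protocol level: plain address records
example.com.        IN  A      192.0.2.10
```

The PowerDNS ALIAS record _ruben linked is a nameserver-side workaround: the server resolves the target at query time and answers with A/AAAA records, giving CNAME-like behavior at the apex without violating the protocol rule.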
[11:50] <coreycb> jamespage, urgh, thanks. hopefully we're good now on aodh/ceilometer.
[11:51] <jamespage> coreycb, aodh tested OK - have a charm change up for that
[11:51] <jamespage> doing the same with ceilometer - package is OK now
[11:51] <jamespage> coreycb, next cycle we switch to apache2+mod_wsgi (<< EmilienM you'll probably be interested in that switch)
[11:52] <EmilienM> like you did for keystone?
[11:52] <EmilienM> creating default enabled vhosts, etc
[12:00] <jamespage> EmilienM, yup
[12:00] <jamespage> same model
[12:00] <EmilienM> ok
[12:40] <Village> Maybe someone has tried, but it looks like the Americans are asleep right now, and most people here are American.
[12:40] <Village> Has anyone tried running a DC++ server on Ubuntu?
[14:46] <coreycb> ddellav, I synced magnum, gnocchi, and sahara
[15:00] <ddellav> coreycb ack
[17:23] <rockstar> coreycb: nova-lxd rc1 is out. https://pypi.python.org/pypi?:action=display&name=nova-lxd&version=14.0.0.0rc1
[17:43] <coreycb> rockstar, nova-lxd uploaded
[17:43] <ndboost> hey
[17:43] <rockstar> coreycb: ta
[17:43] <ndboost> im trying to setup an apt-mirror of ubuntu
[17:43] <ndboost> with the pxe stuff
[17:43] <ndboost> for 14.04 i used debian-installer
[17:43] <ndboost> what is it for 16.04?
[17:56] <gargsms> Trying to make a separate log file for status code 200 with Apache 2. Is it possible?
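One hedged way to do this with Apache 2.4 (the version on 14.04): CustomLog accepts an `expr=` condition, so responses can be routed to different files by status. File names below are illustrative:

```apache
# Log only 200 responses to a separate file (Apache 2.4 expr= syntax)
CustomLog ${APACHE_LOG_DIR}/ok.log combined "expr=%{REQUEST_STATUS} -eq 200"

# Optionally keep everything else in the main log only
CustomLog ${APACHE_LOG_DIR}/access.log combined "expr=!(%{REQUEST_STATUS} -eq 200)"
```

REQUEST_STATUS reflects the final status of the request; check the mod_log_config and ap_expr documentation for the exact version in use.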
[21:30] <ThiagoCMC> Hey guys, where can I found the docs to setup OVS-2.6 with DPDK-16.07 from Newton Cloud Archive?
[21:31] <ThiagoCMC> I managed to make it work with plain Xenial (OVS-2.5 / DPDK 2.2), but it was super unstable. Trying it again this week...
[21:31] <sarnold> ThiagoCMC: hah, your name was the first thing that came to mind.. "sounds like something thiago would have done" :)
[21:32] <ThiagoCMC> LOL
[21:32] <nacc> heh
[21:37] <holocron> I'm fooling with juju lxd here, and following a reboot, none of my lxc containers will start properly. With no lxc processes running, I can run lxc list without issue, but "lxc start <container>" hangs. If I CTRL-C and check ps, there's something called "forkstart" still running, and two more processes of [lxc monitor] on the specific container..
[21:38] <holocron> Things were running okay before I rebooted; after the reboot I had to kill all my LXC processes before I could get basic functionality back.
[21:50] <PCdude> hey all :)
[21:50] <PCdude> I have some questions about openstack, to prevent myself from spamming this IRC channel, I have put it in an askubuntu question
[21:51] <PCdude> http://askubuntu.com/questions/832736/openstack-with-autopilot-some-networking-clear-up
[21:51] <PCdude> I hope the questions make sense, and IMO this could really help other people too
[21:51] <PCdude> (some upvotes would help too ;) )
[21:53] <sarnold> PCdude: oy that's a huge series of questions :)
[21:55] <PCdude> sarnold: haha sorry, I even picked out the important ones, I could add more if u like? haha
[21:55] <sarnold> :)
[21:57] <PCdude> sarnold: I did some serious digging around, before posting those questions. Therefore maybe the answers could be added to the documentation for others
[22:05] <RoyK> PCdude: I beleive there's an #openstack channel - might be more appropriate
[22:06] <PCdude> RoyK: some questions are specific to Ubuntu, therefore I went here
[22:08] <sarnold> PCdude: I can guess that the two nics are required for the maas layer vs the openstack layer
[22:08] <sarnold> PCdude: and I suspect the two hard drives is just to have raid on the things, but maybe one of them -is- devoted to the clients or something. seems strange.
[22:09] <PCdude> sarnold: yeah, I thought the same thing too about the RAID setup at first, but after installing it seems in MAAS that no RAID is applied.
[22:10] <sarnold> there's also a #maas, they may be able to answer the more ubuntu-specific portions of your questions
[22:10] <PCdude> It would be strange for the MAAS layer and the OpenStack layer to be separated; why not all the machines? They are all controlled by MAAS
[22:11] <holocron> PCdude: the canonical team took some liberty with how openstack is installed and configured
[22:11] <holocron> PCdude: the 2nd disk is for a ceph cluster used by cinder
[22:11] <holocron> PCdude: the 2nd nic is for neutron to segregate data and mgmt traffic (I think)
[22:17] <PCdude> holocron: ah ok, the 2nd disk kind of makes sense, but what happens if I add 15 disks? Will it still only use the second disk? Can I add the rest manually?
[22:17] <bekks> Is there a known issue with the 14.04.5 server ISO not being able to autoconfigure (DHCP) a network interface?
[22:17] <bekks> This has been working for a lot of machines deployed with the 14.04 ISO, and without any network change the .5 ISO isn't able to detect a DHCP config.
[22:17] <bekks> Where can I get a stock 14.04 server iso?
[22:18] <PCdude> holocron: the 2nd NIC for data and mgmt separation looks weird to me, simply because there are more than 2 networks used by OpenStack; what about the others?
[22:18] <holocron> PCdude: sorry, I don't know.. out of the box it won't do anything I think
[22:18] <holocron> PCdude: you're going to have to ask someone else more knowledgeable.. i've only just started looking at this myself.
[22:20] <tarpman> bekks: releases.ubuntu.com seems to still host 14.04.4 images. for older than that, http://old-releases.ubuntu.com/releases/trusty/
[22:20] <sarnold> I suspect "N computers with five NICs" would be a non-starter for most places even if it would make sense to have maas vs block layer vs openstack management vs application ...
[22:20] <bekks> tarpman: 14.04.4 has the same issues for me, I'll try an older release, thank you
[22:20] <tomreyn> bekks: http://cdimages.ubuntu.com/releases/
[22:21] <sarnold> tarpman: ah, crazy, I wondered where the 14.04 LTS releases were stored, funny that they're not on cdimages..
[22:21] <PCdude> holocron: will do
[22:21] <holocron> Most of the networks are segregated via VLAN, openvswitch, linux bridges, etc etc etc depending on what layer of the stack you're at
[22:22] <PCdude> sarnold: good point, of course. I think I would like some more freedom here and there, but the whole thing is pretty complicated to do all yourself
[22:22] <tomreyn> older point releases: http://old-releases.ubuntu.com/releases/
[22:22] <tomreyn> oh, i'm late
[22:22] <bekks> tomreyn: thx, downloading a 14.04 now.
[22:23] <holocron> PCdude: amen -- do you know if autopilot uses juju openstack-base charm? I have a suspicion that it does
[22:23] <holocron> openstack-base charm bundle*?
[22:26] <PCdude> holocron: yeah, I am 95% sure that it does. I am still learning Juju; maybe I can tweak the standard OpenStack bundle in Juju and edit it to how I like it
[22:27] <holocron> PCdude: that's what I was thinking. I know that the various charms that make up that bundle have lots of configuration options exposed, but you cannot edit the openstack config files directly as they will get changed back (ala chef)
[22:28] <sarnold> PCdude: if I've understood the autopilot thing correctly, you should be able to change some of the charm settings from e.g. https://jujucharms.com/nova-compute/xenial/3 and be able to configure things more as you wish
[22:28] <sarnold> PCdude: but the autopilot tool itself may make some assumptions about configurations that are available in the charms
[22:33] <PCdude> holocron: sarnold good point about Juju, maybe it's even better to keep it in Juju and leave autopilot out of it
[22:49] <holocron> PCdude: I'm actually running juju with MAAS but all my MAAS machines are KVM VMs ^^ It's not super performant but I get to see how some of it is plumbed out
[22:49] <holocron> PCdude: i put that project on the back burner and just started messing around with the pure LXD openstack bundle
[22:50] <PCdude> holocron: I have it running in some VMs in ESXi right now, not very performant either. I really wanna put it on real hardware, but mostly the cost of it all holds me back
[22:52] <holocron> LXC on bare metal is <supposed to be> performant
[22:53] <holocron> PCDude: i had a 16G machine with it running, but it was swapping heavily.. i'm working on another install with 32G
[22:53] <Ussat> Honestly we avoid bare metal here as much as possible. Unless you need special hardware, VM is the way to go
[22:53] <RoyK> Ussat++
[22:54] <holocron> Ussat: >< I'm on s390x and will have more special hardware
[22:54] <Ussat> With VMware I get redundancy, backups, HA...
[22:54] <Ussat> everything I have is HA'd between two datacenters..................
[22:54] <holocron> congrats?
[22:55] <RoyK> holocron: that's nice - what are the specs of such a machine?
[22:55] <PCdude> holocron: I tried deploying on a machine with 8gb, did not work out very well haha
[22:55] <PCdude> I have now a machine with 24gb and it all works pretty ok-ish
[22:56] <PCdude> not good, but workable
[22:56] <RoyK> PCdude: monitor it with something useful, like munin or zabbix, to see where the bottleneck is
[22:56] <RoyK> PCdude: better monitor the hosts too
[22:56] <holocron> RoyK hmm, perhaps you've seen it..  http://www-03.ibm.com/systems/linuxone/enterprise-linux-systems/emperor-product-details.html
[22:56] <sarnold> what's the point of openstack overhead when you've got one machine though? wouldn't libvirt get you 60-70% of the way there and be less overhead?
[22:57] <RoyK> holocron: which model?
[22:57] <holocron> sarnold: libvirt gets me 100% of the way there, it's the interop that's missing
[22:57] <holocron> RoyK hmm, i'm not sure, we've got 3 on the floor and they get swapped often
[22:57] <PCdude> RoyK: well, I have a mid-range CPU with 4 cores. That is seriously too little for this
[22:57] <RoyK> I like that current POWER CPUs allow for sub-allocation of CPU cores
[22:58] <Ussat> nice systems holocron
[22:58] <PCdude> RoyK: I have done some monitoring, I was mainly focusing on getting openstack more in the way I want it. The tweaking part
[22:59] <Ussat> RoyK, we have about 20 LPARs of RHEL on P8 at the moment, about 100 other VMs on VMware
[22:59] <Ussat> POWER rocks
[22:59] <Ussat> We are mostly AIX on POWER except for those RHEL systems
[23:01] <holocron> Ussat: thanks, yeah s390x is always a bit strange, but it's fun
[23:02] <holocron> Plus, i can be a middling linux admin and look like a hero because 90% of mainframers know diddly about linux ;)
[23:02] <sarnold> :)
[23:02] <RoyK> Ussat: ok, how much does that hardware cost?
[23:03] <RoyK> Ussat: and btw, what sort of storage?
[23:04] <Ussat> Well, I am not involved in that aspect, but we have 4 870's. All our storage for prod is IBM V9000 in a stretched cluster with encryption, and V840 for non-prod
[23:05] <Ussat> as for the price... I have no clue, I don't even see that part of the deals
[23:05] <Ussat> that's all director level stuff
[23:05] <Ussat> Our windows storage is all on Isilon
[23:06] <RoyK> IIRC we have around 200 VMs on ESXi with 10 or 12 hosts from dell with some Dell Equallogic crap for storage (around 150TiB)
[23:06] <Ussat> we have multiple SVC's in front of the V9000 and V840
[23:07] <Ussat> Our shit is WAY over-engineered though: multiple datacenters, fully redundant, can run from either. We are a hospital so....
[23:07] <RoyK> some of it is rather old (3+ years) and I guess a pricetag of around USD 300k for the lot (or a bit more)
[23:07] <RoyK> perhaps 400
[23:12] <RoyK> I guess that s390x costs a wee bit more :D
[23:12] <Ussat> heh
[23:15] <holocron> s390x is definitely only for certain use cases...
[23:15] <holocron> you know, Walmart-scale
[23:16] <holocron> just fyi though, if you wanted to play around on one, you can check out the linuxone community cloud
[23:31] <trippeh> had tons of POWER at $oldjob
[23:32] <trippeh> I was always underwhelmed, but it was very robust at least.
[23:33] <holocron> power + nvidia looks like a nifty solution... i always figured power was fit for scientific computing and couldn't really understand why anybody'd run business on it
[23:34] <trippeh> single thread perf and latency wasn't very good, but overall throughput was not too shabby.
[23:35] <trippeh> not great, but
[23:35] <holocron> now, i/o and single thread performance is where s390x beats all hands down
[23:35] <trippeh> this was a few gens ago anyhow.
[23:37] <trippeh> we had some s390(x?), but I never really touched it other than some light integration work.
[23:37] <trippeh> so no idea
[23:38] <sarnold> trippeh: how's it compare to your home rigs? :)
[23:39] <trippeh> POWER, s390x, Itaniums, SPARCs, MIPS (SGI), we had most things that could run "UNIX"
[23:39] <trippeh> ;)
[23:41] <trippeh> sarnold: spinning rust SAN sure didn't help
[23:41] <sarnold> trippeh: heh, not great for latency but depending upon how many of them you've got maybe good for throughput despite spinning... :)
[23:42] <trippeh> was always fighting with the san people for iops ;-)
[23:42] <sarnold> hah :)
[23:43] <trippeh> home rig totally crushes them, I'm sure, but age difference helps
[23:43] <sarnold> :)
[23:43] <trippeh> most of them got canned after the Big Merger(tm)
[23:43] <trippeh> non-windows systems that is ;)
[23:46] <trippeh> man, so much $$$ saved just by not having to fight the SAN people for iops with modern SSD SANs ;)
[23:48] <sarnold> hah
[23:48] <sarnold> and here I'm slightly disappointed that my pcie nvme card can only do ~4k iops for my use rather than the 400k iops that I was expecting
[23:49] <trippeh> hah, yeah, current nvme likes parallelism
[23:50] <sarnold> I thought that something like ag --workers 300 or something would be able to generate enough parallelism in the filesystem to actually -use- all those iops. no such luck :(
[23:50] <trippeh> fio had no problems for me :-)
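For context, a sketch of the kind of fio job that keeps an NVMe device busy: unlike a single-threaded file scan, it puts many I/Os in flight via iodepth and numjobs. The device path and numbers below are placeholders; the job only reads, but double-check the filename before running anything against a raw device:

```ini
; randread.fio -- illustrative job file, not from the discussion above
[global]
ioengine=libaio   ; async engine so iodepth actually queues requests
direct=1          ; bypass the page cache
rw=randread
bs=4k
runtime=30
time_based=1

[nvme-rand]
filename=/dev/nvme0n1   ; placeholder device; read-only job, but verify it
iodepth=32              ; outstanding I/Os per job
numjobs=4               ; parallel workers, so up to 128 I/Os in flight
```

Run with `fio randread.fio` and watch how IOPS scale as iodepth and numjobs grow; that gap is usually why a tool like ag never reaches the drive's rated IOPS.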
[23:50] <sarnold> if your workload matches fio, well.. .:)
[23:51] <sarnold> but ag -is- the workload I wanted to scream, haha
[23:51] <trippeh> heheh
[23:51] <RoyK> sarnold: what block sizes did you test it with?
[23:52] <sarnold> RoyK: I'm using it as an l2arc for zfs; afaik there's no way to set an explicit block size for l2arc devices, only for vdevs
[23:53] <trippeh> so it is a caching drive?
[23:53] <RoyK> sarnold: guess it just uses the block size from the pool
[23:54] <sarnold> trippeh: yeah
[23:54] <RoyK> sarnold: did you do a zdb check on how large the records were?
[23:54] <sarnold> RoyK: I -assume- that the blocks are whatever sizes the data elements are when they're read..
[23:55] <RoyK> sarnold: possibly - I don't know the code