/srv/irclogs.ubuntu.com/2016/10/03/#ubuntu-server.txt

tjbiddleHi all. What would be the best way to have a server continuously attempt to mount a NFS file system, until it’s available, but without holding up boot?07:18
hateballyou could use autofs07:19
tjbiddlehateball: That looks like it may be perfect - thanks!07:28
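(For reference, a minimal autofs sketch for that kind of on-demand NFS mount; the mount point, map file name and nfsserver:/export/data below are placeholder assumptions, not what tjbiddle actually used:)
    # /etc/auto.master -- add an indirect map for the NFS mounts
    /mnt/nfs  /etc/auto.nfs  --timeout=60
    # /etc/auto.nfs -- mounts nfsserver:/export/data at /mnt/nfs/data on first access
    data  -fstype=nfs,rw,soft  nfsserver:/export/data
    # reload autofs after editing the maps
    sudo service autofs restart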
VillageHello, has anyone here tried running a DC++ server on Ubuntu..?07:55
lordievaderGood morning.07:56
VillageGood morning, lordievader07:57
lordievaderHey Village, how are you doing?07:57
VillageNot bad, thanks. I'm looking into how to run a DC++ hub server on Ubuntu; have you tried it?07:58
lordievaderNope, never done anything with that.07:59
VillageMaybe someone here has tried, but it looks like the Americans are asleep now, and most people here are American07:59
jamespagecoreycb, ddellav: aodh and ceilometer are still foobar08:33
jamespagethey both now listen on port 8000 rather than the configured port in /etc/<pkg>/<pkg>.conf08:34
tjbiddlehateball: Took some fiddling - but works beautifully, thank you!08:40
jamespagecoreycb, ddellav: https://bugs.launchpad.net/aodh/+bug/162979608:47
ubottuLaunchpad bug 1629796 in ceilometer (Ubuntu) "wsgi_script generated binaries listen on (incorrect) default port 8000" [Undecided,New]08:47
=== _degorenko|afk is now known as degorenko
=== a3pq51_ is now known as a3qp51
gargsmsUsing Apache2 on AWS on Ubuntu 14.04. I need to include an environment variable in my log file. I tried export VARIABLE="something" and then added %{VARIABLE}e to my log formats. The variable is empty in the logs, though09:02
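(A shell export in your own session never reaches the Apache worker processes, which is why %{VARIABLE}e logs empty; a sketch of one way to wire it through on a stock Ubuntu layout -- VARIABLE and the format name are placeholders:)
    # /etc/apache2/envvars -- sourced when apache2 is started, unlike your login shell
    export VARIABLE="something"
    # in the vhost/conf, with mod_env enabled: copy it into the request environment
    PassEnv VARIABLE
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{VARIABLE}e\"" withvar
    CustomLog ${APACHE_LOG_DIR}/access.log withvar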
xnoxpercona-galera 50%: Checks: 2, Failures: 0, Errors: 110:01
xnoxgalera/tests/write_set_ng_check.cpp:246:E:WriteSet:ver3_basic:0: (after this point) Received signal 7 (Bus error)10:01
xnoxon armhf =(10:01
LostSoulHi10:48
leangjiahello.10:49
LostSoulI'm kind of a noob when it comes to DNS11:03
LostSoulSo my question is: if I have a zone file, how do I redirect it (CNAME this domain) to another domain?11:04
LostSoulWhen I try to use a CNAME, I get: loading from master file XXX failed: CNAME and other data11:04
maswanyou can only cname individual records, not the entire domain11:04
LostSoulAny idea how to do it?11:08
LostSoulIn best possible way?11:09
bekksLostSoul: you cannot do that, you can only redirect individual records.11:09
_rubencnames are probably the most misunderstood records within dns :)11:13
LostSoulbekks: So I have domain X and I want it to redirect to domain Y11:14
_rubendefine "redirect"11:15
LostSoulOK, so there is no easy way to point an entire domain/zone at the IP/redirect/CNAME of another address?11:30
rbasakNot at the DNS protocol level. It may be possible to configure a nameserver to do it dynamically or something like that, but I don't know of a specific example.11:32
_rubenone example would be: https://doc.powerdns.com/md/authoritative/howtos/#using-alias-records11:32
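(A zone-file sketch of the distinction above; example.com/example.net are placeholders. A CNAME on an individual name is fine, but at the zone apex it collides with the SOA/NS records, which is exactly the "CNAME and other data" error; the PowerDNS ALIAS record linked above is a server-side workaround, not standard DNS:)
    ; works: one name pointing at another domain
    www   IN  CNAME  www.example.net.
    ; fails to load: the apex already holds SOA and NS data
    ; @   IN  CNAME  example.net.
    ; PowerDNS-style alternative, resolved server-side to A/AAAA records:
    ; @   IN  ALIAS  example.net.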
coreycbjamespage, urgh, thanks. hopefully we're good now on aodh/ceilometer.11:50
jamespagecoreycb, aodh tested OK - have a charm change up for that11:51
jamespagedoing the same with ceilometer - package is OK now11:51
jamespagecoreycb, next cycle we switch to apache2+mod_wsgi (<< EmilienM you'll probably be interested in that switch)11:51
EmilienMlike you did for keystone?11:52
EmilienMcreating default enabled vhosts, etc11:52
jamespageEmilienM, yup12:00
jamespagesame model12:00
EmilienMok12:00
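(Roughly what a keystone-style mod_wsgi vhost looks like, as a sketch; the port, user/group, process counts and the .wsgi path below are assumptions, not the actual aodh packaging:)
    Listen 8042
    <VirtualHost *:8042>
        WSGIDaemonProcess aodh-api processes=2 threads=10 user=aodh group=aodh
        WSGIProcessGroup aodh-api
        WSGIScriptAlias / /path/to/aodh-api.wsgi
        WSGIApplicationGroup %{GLOBAL}
        ErrorLog ${APACHE_LOG_DIR}/aodh_error.log
        CustomLog ${APACHE_LOG_DIR}/aodh_access.log combined
    </VirtualHost>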
VillageMaybe someone here has tried, but it looks like the Americans are asleep now, and most people here are American12:40
Villagehas anyone tried running a DC++ server on Ubuntu..?12:40
=== degorenko is now known as _degorenko|afk
=== fr0st- is now known as fr0st
=== _degorenko|afk is now known as degorenko
coreycbddellav, I synced magnum, gnocchi, and sahara14:46
ddellavcoreycb ack15:00
=== mfisch` is now known as mfisch
=== mfisch is now known as Guest7223
=== Guest7223 is now known as mfisch
=== lynxman is now known as lynxman_
=== lynxman_ is now known as lynxman
=== php_ is now known as php
rockstarcoreycb: nova-lxd rc1 is out. https://pypi.python.org/pypi?:action=display&name=nova-lxd&version=14.0.0.0rc117:23
=== degorenko is now known as _degorenko|afk
coreycbrockstar, nova-lxd uploaded17:43
ndboosthey17:43
rockstarcoreycb: ta17:43
ndboostim trying to setup an apt-mirror of ubuntu17:43
ndboostwith the pxe stuff17:43
ndboostfor 14.04 i used debian-installer17:43
ndboostwhat is it for 16.04?17:43
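(16.04 still uses debian-installer for netboot; a sketch assuming the standard archive layout and a /var/lib/tftpboot tftp root -- adjust the URL for a local mirror:)
    wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/netboot.tar.gz
    sudo tar -xzf netboot.tar.gz -C /var/lib/tftpboot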
=== markthom- is now known as markthomas
gargsmsTrying to make different log file for status code 200 with Apache.2 Is it possible?17:56
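(Apache 2.4 can do this with a conditional CustomLog; a sketch, with the log file names as placeholders:)
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    CustomLog ${APACHE_LOG_DIR}/ok.log    common "expr=%{REQUEST_STATUS} -eq 200"
    CustomLog ${APACHE_LOG_DIR}/other.log common "expr=%{REQUEST_STATUS} -ne 200"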
=== ndboost is now known as Guest469
ThiagoCMCHey guys, where can I find the docs to set up OVS-2.6 with DPDK-16.07 from the Newton Cloud Archive?21:30
ThiagoCMCI managed to make it work with plain Xenial (OVS-2.5 / DPDK 2.2), but it was super unstable; trying it again this week...21:31
sarnoldThiagoCMC: hah, your name was the first thing that came to mind.. "sounds like something thiago would have done" :)21:31
ThiagoCMCLOL21:32
naccheh21:32
holocronI'm fooling with juju lxd here, and following a reboot, none of my lxc containers will start properly. With no lxc processes running, I can run lxc list without issue, but "lxc start <container>" hangs. If I CTRL-C and check ps, there's something called "forkstart" still running, and two more processes of [lxc monitor] on the specific container..21:37
holocronThings were running okay before I rebooted; after the reboot I had to kill all my LXC processes before I could get basic functionality back.21:38
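(Typical first debugging steps for a hang like that, sketched with <container> as a placeholder:)
    lxc info <container> --show-log        # the container's last liblxc log
    sudo systemctl status lxd              # is the daemon itself wedged?
    sudo journalctl -u lxd --since today   # daemon-side errors around the hang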
PCdudehey all :)21:50
PCdudeI have some questions about openstack; to avoid spamming this IRC channel, I have put them in an askubuntu question21:50
PCdudehttp://askubuntu.com/questions/832736/openstack-with-autopilot-some-networking-clear-up21:51
PCdudeI hope the questions make sense, and imo this could really help other people too21:51
PCdude(some upvotes would help too ;) )21:51
sarnoldPCdude: oy that's a huge series of questions :)21:53
PCdudesarnold: haha sorry, I even picked out the important ones, I could add more if u like? haha21:55
sarnold:)21:55
PCdudesarnold: I did some serious digging around, before posting those questions. Therefore maybe the answers could be added to the documentation for others21:57
RoyKPCdude: I beleive there's an #openstack channel - might be more appropriate22:05
PCdudeRoyK: some questions are specific to ubuntu, therefore I went here22:06
sarnoldPCdude: I can guess that the two nics are required for the maas layer vs the openstack layer22:08
sarnoldPCdude: and I suspect the two hard drives are just to have RAID on the things, but maybe one of them -is- devoted to the clients or something. seems strange.22:08
PCdudesarnold: yeah, I thought the same thing too about the RAID setup at first, but after installing it seems in MAAS that no RAID is applied.22:09
sarnoldthere's also a #maas, they may be able to answer the more ubuntu-specific portions of your questions22:10
PCdudeIt would be strange for the MAAS layer and the openstack layer to be separated, why not all the machines? They all are controlled by MAAS22:10
holocronPCdude: the canonical team took some liberty with how openstack is installed and configured22:11
holocronPCdude: the 2nd disk is for a ceph cluster used by cinder22:11
holocronPCdude: the 2nd nic is for neutron to segregate data and mgmt traffic (I think)22:11
PCdudeholocron: ah ok, the 2nd disk kind of makes sense, but what will happen if I add 15 disks? Will it only add the second disk? Can I add them manually?22:17
bekksIs there some issue known with the 14.04.5 server ISO, for not being able to autoconfigure (dhcp) a network interface?22:17
bekksThis has been working for a lot of machines deployed with the 14.04 ISO, and without any network change the .5 ISO isn't able to detect a DHCP config.22:17
bekksWhere can I get a stock 14.04 server iso?22:17
PCdudeholocron: the 2nd NIC for data and mgmt separation looks weird to me, simply coz there are more than 2 networks that are used by openstack, what about the others?22:18
holocronPCdude: sorry, I don't know.. out of the box it won't do anything I think22:18
holocronPCdude: you're going to have to ask someone else more knowledgeable.. i've only just started looking at this myself.22:18
tarpmanbekks: releases.ubuntu.com seems to still host 14.04.4 images. for older than that, http://old-releases.ubuntu.com/releases/trusty/22:20
sarnoldI suspect "N computers with five NICs" would be a non-starter for most places even if it would make sense to have maas vs block layer vs openstack management vs application ...22:20
bekkstarpman: 14.04.4 has the same issues for me, I'll try an older release, thank you22:20
tomreynbekks: http://cdimages.ubuntu.com/releases/22:20
sarnoldtarpman: ah, crazy, I wondered where the 14.04 LTS releases were stored, funny that they're not on cdimages..22:21
PCdudeholocron: will do22:21
holocronMost of the networks are segregated via VLAN, openvswitch, linux bridges, etc etc etc depending on what layer of the stack you're at22:21
PCdudesarnold: good point, of course. I think I would like some more freedom here and there, but the whole thing is pretty complicated to do it all urself22:22
tomreynolder point releases: http://old-releases.ubuntu.com/releases/22:22
tomreynoh, i'm late22:22
bekkstomreyn: thx, downloading a 14.04 now.22:22
holocronPCdude: amen -- do you know if autopilot uses juju openstack-base charm? I have a suspicion that it does22:23
holocronopenstack-base charm bundle*?22:23
PCdudeholocron: yeah, I am 95% sure that it does. I am still in the learning process of JUJU, maybe I can tweak the standard openstack version in JUJU and edit it to how I like it22:26
holocronPCdude: that's what I was thinking. I know that the various charms that make up that bundle have lots of configuration options exposed, but you cannot edit the openstack config files directly as they will get changed back (ala chef)22:27
sarnoldPCdude: if I've understood the autopilot thing correctly, you should be able to change some of the charm settings from e.g. https://jujucharms.com/nova-compute/xenial/3 and be able to configure things more as you wish22:28
sarnoldPCdude: but the autopilot tool itself may make some assumptions about configurations that are available in the charms22:28
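(A sketch of tweaking one such charm option; virt-type is an option the nova-compute charm exposes, the value is only an example, and the command differs between Juju versions:)
    juju config nova-compute virt-type=kvm    # Juju 2.x
    # juju set nova-compute virt-type=kvm     # Juju 1.x equivalent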
PCdudeholocron: sarnold good point about JUJU, maybe its even better keep it in JUJU and leave autopilot out of it22:33
holocronPCdude: I'm actually running juju with MAAS but all my MAAS machines are KVM VMs ^^ It's not super performant but I get to see how some of it's plumbed out22:49
holocronPCdude: i put that project on the back burner and just started messing around with the pure LXD openstack bundle22:49
PCdudeholocron: I have it running in some VMs in ESXi right now, not very performant either. I really wanna put it on real hardware, but mostly the cost of it all holds me back22:50
holocronLXC on bare metal is <supposed to be> performant22:52
holocronPCDude: i had a 16G machine with it running, but it was swapping heavily.. i'm working on another install with 32G22:53
UssatHonestly we avoid bare metal here as much as possible. Unless you need special hardware, VM is the way to go22:53
RoyKUssat++22:53
holocronUssat: >< I'm on s390x and will have more special hardware22:54
UssatWith VMware I get redundancy, backups, HA...22:54
Ussateverything I have is HA'd between two datacenters..................22:54
holocroncongrats?22:54
RoyKholocron: that's nice - what are the specs of such a machine?22:55
PCdudeholocron: I tried deploying on a machine with 8gb, did not work out very well haha22:55
PCdudeI now have a machine with 24GB and it all works pretty ok-ish22:55
PCdudenot good, but workable22:56
RoyKPCdude: monitor it with something useful, like munin or zabbix, to see where the bottleneck is22:56
RoyKPCdude: better monitor the hosts too22:56
holocronRoyK hmm, perhaps you've seen it..  http://www-03.ibm.com/systems/linuxone/enterprise-linux-systems/emperor-product-details.html22:56
sarnoldwhat's the point of openstack overhead when you've got one machine though? wouldn't libvirt get you 60-70% of the way there and be less overhead?22:56
RoyKholocron: which model?22:57
holocronsarnold: libvirt gets me 100% of the way there, it's the interop that's missing22:57
holocronRoyK hmm, i'm not sure, we've got 3 on the floor and they get swapped often22:57
PCdudeRoyK: well, I have a mid-range CPU with 4 cores. That is seriously too little for this22:57
RoyKI like that current POWER CPUs allow for sub-allocation of CPU cores22:57
Ussatnice systems holocron22:58
PCdudeRoyK: I have done some monitoring, I was mainly focusing on getting openstack more in the way I want it. The tweaking part22:58
UssatRoyK, we have about 20 LPARs of RHEL on P8 at the moment, about 100 other VMs on VMware22:59
UssatPOWER rocks22:59
UssatWe are mostly AIX on POWER except for those RHEL systems22:59
holocronUssat: thanks, yeah s390x is always a bit strange, but it's fun23:01
holocronPlus, i can be a middling linux admin and look like a hero because 90% of mainframers know diddly about linux ;)23:02
sarnold:)23:02
RoyKUssat: ok, how much does that hardware cost?23:02
RoyKUssat: and btw, what sort of storage?23:03
UssatWell, I am not involved in that aspect, but we have 4 870's. And all our storage for prod is IBM V9000 in a stretched cluster with encryption, and V840 for non-prod23:04
Ussatas for the price...I have no clue, I dont even see that part of the deals23:05
Ussatthats all director level stuff23:05
UssatOur Windows storage is all on Isilon23:05
RoyKIIRC we have around 200 VMs on ESXi with 10 or 12 hosts from Dell with some Dell EqualLogic crap for storage (around 150TiB)23:06
Ussatwe have multiple SVC's in front of the V9000 and V84023:06
UssatOur shit is WAY over-engineered though, multiple datacenters, fully redundant, can run from either. We are a hospital so....23:07
RoyKsome of it is rather old (3+ years) and I guess a pricetag of around USD 300k for the lot (or a bit more)23:07
RoyKperhaps 40023:07
RoyKI guess that s390x costs a wee bit more :D23:12
Ussatheh23:12
holocrons390x is definitely only for certain use cases...23:15
holocronyou know, Walmart-scale23:15
holocronjust fyi though, if you wanted to play around on one, you can check out the linuxone community cloud23:16
trippehhad tons of POWER at $oldjob23:31
trippehI was always underwhelmed, but it was very robust at least.23:32
holocronpower + nvidia looks like a nifty solution... i always figured power was fit for scientific computing and couldn't really understand why anybody'd run business on it23:33
trippehsingle thread perf and latency wasnt very good, but overall throughput was not too shabby.23:34
trippehnot great, but23:35
holocronnow, i/o and single thread performance is where s390x beats all hands down23:35
trippehthis was a few gens ago anyhow.23:35
trippehwe had some s390(x?), but I never really touched it other than some light integration work.23:37
trippehso no idea23:37
sarnoldtrippeh: how's it compare to your home rigs? :)23:38
trippehPOWER, s390x, Itaniums, SPARCs, MIPS (SGI), we had most things that could run "UNIX"23:39
trippeh;)23:39
trippehsarnold: spinning rust SAN sure didnt help23:41
sarnoldtrippeh: heh, not great for latency but depending upon how many of them you've got maybe good for throughput despite spinning... :)23:41
trippehwas always fighting with the san people for iops ;-)23:42
sarnoldhah :)23:42
trippehhome rig totally crushes them, I'm sure, but age difference helps23:43
sarnold:)23:43
trippehmost of them got canned after the Big Merger(tm)23:43
trippehnon-windows systems that is ;)23:43
trippehman, so much $$$ saved just not having to fight the SAN people for iops with modern SSD SANs ;)23:46
sarnoldhah23:48
sarnoldand here I'm slightly disappointed that my PCIe NVMe card can only do ~4k iops for my use rather than the 400k iops that I was expecting23:48
trippehhah, yeah, current nvme likes parallelism23:49
sarnoldI thought that something like ag --workers 300 or something would be able to generate enough parallelism in the filesystem to actually -use- all those iops. no such luck :(23:50
trippehfio had no problems for me :-)23:50
sarnoldif your workload matches fio, well.. .:)23:50
sarnoldbut ag -is- the workload I wanted to scream, haha23:51
trippehheheh23:51
RoyKsarnold: wha block sizes did you test it with?23:51
RoyKs/wha/what/23:52
sarnoldRoyK: I'm using it as an l2arc for zfs; afaik there's no way to set an explicit block size for l2arc devices, only for vdevs23:52
trippehso it is a caching drive?23:53
RoyKsarnold: guess it just uses the block size from the pool23:53
sarnoldtrippeh: yeah23:54
RoyKsarnold: did you do a zdb check on how large the records were?23:54
sarnoldRoyK: I -assume- that the blocks are whatever sizes the data elements are when they're read..23:54
RoyKsarnold: possibly - I don't know the code23:55
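(For anyone following along, a sketch of attaching and inspecting an L2ARC device; pool and device names are placeholders:)
    sudo zpool add tank cache /dev/nvme0n1   # attach the NVMe card as cache (L2ARC)
    zpool iostat -v tank 5                   # per-device, including cache, IOPS/bandwidth
    zfs get recordsize tank                  # the max record size datasets in the pool use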
