=== mpjetta_ is now known as mpjetta
=== ivanht_ is now known as ivanht
[10:52] hi guys.
[10:52] jam: the new release fixed the juju status problems. just to let you know
[10:53] question: can i deploy ceph-osd and nova-compute on one machine, or doesn't it make sense?
[11:14] ybaumy: sure it does, sure you can
[11:15] Dmitrii-Sh: will high load affect storage performance too much?
[11:15] ybaumy: depends
[11:15] Dmitrii-Sh: that's what i'm worried about
[11:15] ybaumy: it is a sizing exercise in general
[11:16] ybaumy: if you have an all-flash array, you have to be careful
[11:16] ybaumy: as CPU load will increase due to a larger number of IOPS (or interrupts)
[11:16] ybaumy: + you need to process replication-related network interrupts, which is also additional CPU load
[11:17] Dmitrii-Sh: i probably have to test it in my lab first
[11:18] ybaumy: there is no general advice in terms of what's best. There are many things involved: the number of disks, their speed, CPU characteristics, NICs, jumbo frames, PCIe standard version (max speed on a bus), number of FileStore and osd threads
[11:18] ybaumy: and the workload, of course
[11:19] if you stick a block-level cache in there (e.g. bcache), it will affect CPU performance even more
[11:19] Dmitrii-Sh: hmm, there are a lot of parameters to look at.
[11:21] ybaumy: yes, exactly. That's why it is worthwhile to know what the desired result is and to have some fio-based test cases (https://github.com/axboe/fio)
[11:24] ybaumy: I haven't mentioned everything though; there are also things like the instance CPU overcommit ratio and the number of instances on a compute node, RAID controller caches (if you use those), network switches' performance (whether they have a good backplane and can do line-rate switching on all interfaces at the same time), storage driver quality and other things
[11:27] Dmitrii-Sh: you are a lot of help for my thinking process.
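[The fio-based test cases mentioned at 11:21 could start from a job file along these lines. This is a minimal sketch: the mount point, block size, queue depth and runtime are placeholder assumptions, not values from the discussion.]

```ini
; random-write job approximating small-block VM I/O against an OSD disk.
; /mnt/testdisk is a hypothetical mount point on the machine under test.
[global]
ioengine=libaio
direct=1           ; bypass the page cache to measure the device itself
time_based=1
runtime=60
group_reporting=1

[randwrite-4k]
directory=/mnt/testdisk
rw=randwrite
bs=4k
iodepth=32
numjobs=4
size=1G
```

[Running `fio jobfile.fio` while nova-compute workloads are active on the same machine would show how CPU contention affects IOPS and latency, which is the colocation question being discussed.]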
i will test this in a test cloud and then i will see how everything plays together
[11:28] ybaumy: yw, it's a lot of parameters, but it's good to try it out first and then optimize
[11:48] Dmitrii-Sh: when i add-unit nova-compute it adds a second ntp, apparently, though port 123 is already used. can i just remove-unit ntp on the machine?
[11:49] ybaumy: There's a bug in some versions of the ceph charms which installs NTP unconditionally
[11:49] blahdeblah: it seems that ceph-osd already has ntp installed
[11:49] https://bugs.launchpad.net/charms/+source/ceph/+bug/1690513 <-- that probably explains why you're seeing port 123 already used
[11:49] Bug #1690513: ceph-mon charm incorrectly embeds ntp package installation <ceph-radosgw charm: Fix Committed by thedac>
[11:50] ok thanks
[11:52] might be worth a backport to stable/17.02
[14:25] how do i juju deploy hacluster mysql-cluster --to machine ?
[14:25] it's not supported
[14:26] is there any other way, or how do i proceed?
[14:58] got it
[14:58] it's a meta package
[14:58] or something
[15:19] how do i retry status when it's blocked .. resolved doesn't work
[15:37] would it be possible to display the vip in status?
[16:31] how do i resolve a blocked status
[16:32] or retry
[16:32] or whatever
[16:41] haproxy is up
[16:42] but status says it's blocked because haproxy is missing
[16:42] i don't get it
[16:45] https://bugs.launchpad.net/charms/+source/keystone/+bug/1599636
[16:45] Bug #1599636: pause failing on HA deployment: haproxy is running
[16:46] jamespage: what is the workaround here
[16:49] i did pause and resume
[16:49] it's resolved, but why is this not a bug?
[17:03] i had to pause/resume this on the neutron-api hacluster and on every instance of neutron-openvswitch
[17:04] in order to get rid of the blocked status
[17:04] this seems to be a bug
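[The pause/resume workaround described at 16:49 and 17:03 corresponds to the charms' pause and resume actions, roughly as below. This is a sketch for Juju 2.x; the unit names and numbers are assumptions and should be replaced with the ones shown by your own `juju status`.]

```shell
# Pause and resume the hacluster subordinate on the blocked neutron-api unit
# (unit names/numbers here are placeholders):
juju run-action neutron-api-hacluster/0 pause
juju run-action neutron-api-hacluster/0 resume

# Repeat for each blocked neutron-openvswitch unit:
juju run-action neutron-openvswitch/0 pause
juju run-action neutron-openvswitch/0 resume

# run-action is asynchronous; check the result of a queued action with:
juju show-action-status

# If a unit is in an error state (not just blocked), retry the failed hook:
juju resolved neutron-api/0
```

[As noted in the chat, `juju resolved` only clears hook errors; a "blocked" workload status is set by the charm itself, which is why the pause/resume cycle was needed here.]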