[11:17] <frickler> is there some glibc maintainer around here? I'd like to get the fix for https://sourceware.org/bugzilla/show_bug.cgi?id=23844 into bionic, seems to affect multiple consumers, in our case openvswitch
[11:21] <rbasak> frickler: packages in Ubuntu are team maintained
[11:22] <rbasak> frickler: we'd be grateful if you could test/QA/land any bugfix in Ubuntu. See https://wiki.ubuntu.com/StableReleaseUpdates for details on how to do that, and please ask if you have any questions.
[11:39] <frickler> rbasak: thanks, I guess I would rewrite my question then as: Is there someone with enough interest in fixing that glibc bug such that I don't have to do the SRU procedure myself? ;) otherwise I'll try to get something started once I can confirm that the fix works for me locally
[11:40] <rbasak> frickler: please start by filing a bug - then coordination with others who are affected can begin.
[11:41] <rbasak> frickler: what proportion of Ubuntu users are likely to be affected? If that number is high, then the Canonical Server Team will prioritise it. If low, then it'll be up to volunteers only.
[11:41] <frickler> rbasak: I would re-target https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1839592 once I confirm the correlation
[11:42] <rbasak> frickler: sounds good. Thank you for coordinating!
[11:43] <rbasak> frickler: the 'sts' tag suggests to me that there's interest from Canonical customers, so you may be in luck with "someone with enough interest" :)
[12:15] <m_tadeu> hi...I have a VM that is mounting an ext4 image that lives on the host's tmpfs...seems like it's caching, which seems unnecessary....how can I enable direct access (no caching for this device in the VM)?
[12:35] <tomreyn> this sounds like something you'd manage on the virtualization layer you're using
[12:41] <m_tadeu> tomreyn: sorry didn't get it...I'm creating img files in the host tmpfs and using them as disks in the VM....problem is that the VM is caching data from a disk that is already in memory
[12:44] <tomreyn> m_tadeu: i don't think i'm getting this scenario where "the VM is caching data from a disk that is already in memory". maybe it'll help if you say which ubuntu server versions are involved, what's running on the host and guest, which virtualization you're using, and how the virtual storage translates into physical storage.
[12:47] <tomreyn> maybe you're saying that you would like the guest system to use the host's I/O cache. this is not something you can set up on the guest, though; you need to configure it on the virtualization layer.
[12:48] <m_tadeu> host is ubuntu-server16.04 and the vm is ubuntu-server18.04. I'm using libvirt+kvm+qemu. I have a tmpfs in /ramdisks, where I'm creating a disk1.qcow2 file. This file (/ramdisks/disk1.qcow2) will be used and mounted in the VM.
[12:50] <m_tadeu> now this disk1.qcow2 is already in the host memory (since it's in a tmpfs), but when the VM is using it, it's caching data (from disk1.qcow2) that is already in memory
[12:51] <cpaelzer> m_tadeu: cache=none in the libvirt xml would do what you need
[12:52] <cpaelzer> => https://libvirt.org/formatdomain.html#elementsDisks
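For reference, caching is configured per disk via the driver element in the libvirt domain XML. A sketch of the relevant fragment, reusing the /ramdisks/disk1.qcow2 path from above (the target device name is illustrative):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' makes QEMU open the image with O_DIRECT, bypassing the host page cache -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/ramdisks/disk1.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```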
[12:57] <m_tadeu> cpaelzer: I'm getting an 'Invalid argument' error when I use that. any ideas?
[12:57] <cpaelzer> m_tadeu: it will have to open the disk with O_DIRECT; not sure if that works on tmpfs
[12:57] <cpaelzer> but also I think on tmpfs there will be no page cache
[12:58] <cpaelzer> I'd not assume the kernel is that stupid
[12:58] <cpaelzer> can't prove it, just a gut feeling
[12:59] <cpaelzer> yep, as I assumed https://lkml.org/lkml/2007/1/4/55
[12:59] <cpaelzer> nice to see known names so far back in time :-)
[13:03] <m_tadeu> so it fails because cache=none uses O_DIRECT, and O_DIRECT doesn't work on tmpfs?
[13:03] <cpaelzer> yes
[13:04] <m_tadeu> crap
[13:04] <cpaelzer> you can't tell it to not-cache where there isn't a concept of cache+backingstore to begin with
[13:06] <m_tadeu> well not on the tmpfs...but the vm seems to see it as a regular partition....so it's filling all the memory as it uses that partition....so it seems to be caching in the vm
[13:19] <tomreyn> !info nocache
[13:20] <tomreyn> ^ in case you can't edit source code.
[13:21] <rafaeldtinoco> !info eatmydata
[13:21] <rafaeldtinoco> as well =)
[14:53] <m_tadeu> :) gonna check that
[18:38] <m_tadeu> is there a way for systemctl status <service> not to print special chars, like the initial ball?
[18:43] <sdeziel> m_tadeu: if all you care about is whether a unit is running or not, you might try 'systemctl is-active <service>' instead. Not sure that would achieve what you want
[18:44] <m_tadeu> sdeziel: woaa...that's what I really need....thx
[18:44] <sdeziel> cool
[18:45] <sdeziel> m_tadeu: it accepts --quiet if you only care about the return code
[18:48] <m_tadeu> sweet