[01:38] <bdx> magicaltrout: the storage entry in metadata.yaml "hdfs-devices" will be similar to "osd-devices" in the ceph-osd charm see https://github.com/openstack/charm-ceph-osd/blob/master/metadata.yaml#L32,L37
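For context, a sketch of what such a storage entry could look like in the new charm's metadata.yaml, modeled on the osd-devices entry linked above. The "hdfs-devices" name comes from the conversation; the description and range are assumptions:

```yaml
# Hypothetical storage section for the HDFS charm's metadata.yaml,
# following the pattern of osd-devices in charm-ceph-osd.
storage:
  hdfs-devices:
    type: block
    multiple:
      range: 0-
```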
[06:49] <seyeongkim> is deploying juju with lxd working correctly? I could deploy it yesterday, but today lxd is started on the host while juju status shows it as down or pending. ( I have seen this issue on the artful release for several weeks )
[06:50] <seyeongkim> and it's stuck in that status
[09:40] <srihas> hi guys, we are trying to install openstack with maas and juju. Currently we are facing an error that blocks the neutron gateway service. The message is: "Services not running that should be: nova-api-metadata". When we tried to restart the service manually, there was no unit for nova-api-metadata in systemd at all
[09:40] <srihas> which charm could be responsible for installing it? This has been a long-standing error for 3 weeks now, and we could not find the necessary data. Can someone please help?
[10:07] <TheAbsentOne> srihas: I'm not an expert, but maybe consider asking on the openstack channel as well; I assume they are more experienced ^^ Otherwise you will eventually get an answer here, but I think most experts aren't around right now
[10:47] <manadart> stickupkid: Can you rubber-stamp this? https://github.com/juju/juju/pull/8820 It is the same 5 commits that you and John approved for the develop branch.
[10:48] <TheAbsentOne> kwmonroe: https://github.com/Ciberth/gdb-use-case/tree/master/mininimalexamples#issues if you have time, could you tell me what stupid mistake I made now? I seem to have missed something with my redis request; for mysql/mariadb everything seemingly works fine, but my second page does not get rendered o.O
[10:48] <TheAbsentOne> or rick_h_ if you are interested ^^
[10:48] <stickupkid> manadart: done
[10:49] <manadart> Ta.
[11:04] <srihas> TheAbsentOne: thank you, is there another channel for openstack-juju?
[11:05] <TheAbsentOne> I would try #openstack srihas
[11:05] <srihas> TheAbsentOne: thank you
[12:44] <elox> What would be the best way for a charm to retrieve node information like the number of CPUs, available RAM, etc. on a running unit?
[12:45] <rick_h_> elox: just calling out to system details like /proc/cpuinfo and running stuff like free or the like?
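The approach rick_h_ describes could be sketched like this from inside a charm hook; the function name and returned keys are my own, not part of any juju API:

```python
import os


def node_resources():
    """Collect basic machine facts from the kernel's /proc interface."""
    ncpu = os.cpu_count()  # logical CPU count
    mem_kb = None
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                mem_kb = int(line.split()[1])  # value is reported in kB
                break
    return {"cpus": ncpu, "mem_mb": mem_kb // 1024 if mem_kb else None}
```

This reads the same data that tools like free(1) parse, so it avoids shelling out at all.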
[12:49] <elox> rick_h: Yeah, I was just curious if there was anything available for one charm to access other units' info. I have a situation where I need to provide info to a "slurm-controller" (master) node from "slurm-node" (worker) units, where their number of CPUs gets written to a configuration file.
[12:49] <rick_h_> elox: yea, so that could either be built into the relation details sent back and forth, which would require updating the interfaces used
[12:49] <rick_h_> elox: are you using bdx's slurm stuff?
[12:50] <elox> rick_h: Yeah!
[12:50] <rick_h_> elox: very cool
[12:50] <rick_h_> elox: so I'd chat with him, he'll be online in a couple of hours (he's US west coast) and see if it makes sense for the interfaces to be updated to add it
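If the slurm interfaces were extended as suggested, the worker side could publish its CPU count in relation data using juju's relation-set hook tool. A sketch, assuming it runs inside a hook context (the relation id and the cpu-count key are hypothetical):

```python
import os


def publish_cpu_count(relation_id=None):
    """Build the relation-set invocation that would publish this
    unit's CPU count to the slurm-controller side of the relation."""
    cmd = ["relation-set"]
    if relation_id:
        cmd += ["-r", relation_id]
    cmd.append("cpu-count=%d" % os.cpu_count())
    # Inside an actual hook: subprocess.check_call(cmd)
    return cmd
```

The controller charm would then read the value with relation-get in its relation-changed hook and write it into the slurm configuration.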
[12:51] <elox> We are applying the slurm work to build a large HPC cluster at Scania (erik_lonroth <-> elox )
[12:51] <rick_h_> elox: the other way to go is to use a middle-man like telegraf on those
[12:51] <rick_h_> elox: and have the master get whatever details telegraf exports over their APIs instead
[12:52] <elox> rick_h: Sure, I think we'll try our best to do it in some way we can, to learn more about juju as well.
[12:54] <rick_h_> elox: cool, I don't know slurm that well, but if you want to do a lot of this and the data you need varies a lot, then I'd look at something like telegraf as it provides more data. However, if you just need the basics and it's common for all slurm use cases, updating the charm interface and the relation data sent is probably the smallest change you can do
[13:42] <elox> rick_h: thanx. Where can I find information about telegraf ?
[13:48] <rick_h_> elox: https://jujucharms.com/telegraf/ it's a subordinate charm that surfaces machine level metrics and info that you can use to wire up to prometheus for data gathering and such
[14:00] <elox> rick_h: Thanx a lot! It's a lot to take in here. =D
[14:03] <rick_h_> elox: <3 let me know how it goes
[14:21] <hml> stickupkid: do we know if the locking issue is linux specific?  or could it happen with windows?
[14:21] <stickupkid> hml: that i don't know
[14:21] <rick_h_> hml: can you sudo with windows though?
[14:22] <rick_h_> hml: I assumed it was ubuntu centric due to the sudo nature
[14:22] <hml> rick_h_: stickupkid: not sure; there is some sort of permission thing though
[14:24] <hml> stickupkid: so the reproducer is to bootstrap with sudo?  then try to run juju status without it?
[14:25] <hml> or any command with sudo, then the rest won’t work
[14:26] <stickupkid> to reproduce, run `sudo juju list-controllers` followed by `juju status`
[14:27] <stickupkid> hml: if you `sudo juju status`, because of the caching involved you can end up causing all sorts of permission issues in `~/.local/share`
[14:27] <stickupkid> hml: the idea is to just warn on the lock file and, if you can't gain access, at least let people know with better messaging
[14:28] <stickupkid> hml: it's a slippery road, allowing sudo, as you have to set the permissions on every file we ever save to the filesystem and that's a lot of work
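The "warn instead of fail" idea stickupkid describes could look roughly like this. Note this is a Python illustration of the logic, not the actual juju client code (which is Go), and the path and message wording are assumptions:

```python
import os


def check_lock_access(lock_path):
    """Return a warning string if the current user cannot read/write the
    client's lock file (e.g. it was created root-owned by `sudo juju`),
    or None if there is nothing to warn about."""
    if not os.path.exists(lock_path):
        return None  # no lock file yet, nothing to warn about
    if os.access(lock_path, os.R_OK | os.W_OK):
        return None
    st = os.stat(lock_path)
    return ("cannot access %s (owned by uid %d); it may have been "
            "created by running juju under sudo" % (lock_path, st.st_uid))
```

The caller would log the returned message and continue, rather than silently corrupting permissions under ~/.local/share.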
[14:45] <stickupkid> hml: rick_h_: you can run a windows command using `runas` and set it to run as administrator, I'm looking at how that changes the current windows code
[14:45] <rick_h_> stickupkid: cool, good to know.
[14:46] <manadart> stickupkid: Approved #8818 with a comment
[14:47] <stickupkid> manadart: perfect, thanks