[04:29] does anyone have a juju 2.x workaround for centos not supporting python3
[04:30] it seems some of the juju libs require python3. which part of the repos for centos7
=== thumper is now known as thumper-afk
[07:04] Good morning Juju World!
[07:47] Hello! Is there a way to configure the size of the 'logs' database? Currently it is 4GB, but I'd like to shrink that, if possible?
[09:47] Is there a way to include non-PyPI packages in wheelhouse.txt?
[09:47] I mean that I created a python package and I would like to include that in a charm, but not (yet) publish it on PyPI
=== thumper-afk is now known as thumper
[10:57] anrah_: the wheelhouse.txt is very similar to requirements.txt, you can add your own local paths or paths to a git repo
[10:58] Thanks, noticed that also :)
[10:58] anrah_: for example, i have this in a charm's wheelhouse.txt: "-e git://github.com/simonklb/aiosmtpd.git@merged#egg=aiosmtpd"
[10:58] great :)
[11:55] icey: ping
[11:55] morning lazyPower
[11:55] hey there, can you refresh my memory on the ceph-osd osd-devices config string?
[11:55] does that *have* to point to a /dev device or can it just be a filepath?
[11:57] lazyPower: it can be a directory, although there are some practical caveats
[12:02] basically around using the same disk for multiple purposes, so the directory use case better serves development, but can be used for example with bcache devices
[12:05] yep
[12:05] we're working on some demoware and i forgot how the osd-devices config string worked, that unblocked me :) thanks
[12:05] glad to help lazyPower
=== bdxbdx is now known as bdx
[16:16] hello all, can I apply constraints to juju-deployed lxd instances via `--constraints`?
[16:17] I'm trying it out a few different ways and not seeing the lxd profile modified to any degree
[16:18] for example `juju deploy ubuntu --to lxd:0 --constraints "mem=1G"`
[16:18] I thought ^ was a thing
[16:18] possibly I'm using it incorrectly
[16:40] bdx: so it was something getting worked on.
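The wheelhouse.txt tip above can be sketched as a file fragment. Entries follow pip's requirements-file syntax, per the discussion; the git example is the one quoted in the log, while the local path line is hypothetical:

```
# wheelhouse.txt -- a sketch; entries use pip requirements syntax
# unpublished package built from a git branch (example from the log above):
-e git://github.com/simonklb/aiosmtpd.git@merged#egg=aiosmtpd
# or an unpublished package by local path (hypothetical path):
/home/ubuntu/src/my-internal-package
```

Assuming wheelhouse processing mirrors pip's behavior, these sources are fetched and bundled into the charm at build time instead of being resolved from PyPI.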
lxd now supports enforcing those and kvm was updated to respect them
[16:40] bdx: I don't think the lxd side has landed
[16:40] bdx: the 2.2 release notes call out kvm supporting them but no mention of lxd, so it looks like it didn't get in there
[16:54] Hey guys, quick question. I'm deploying a bundle of ceph-mon and ceph node charms from the store. On their CIDR network definitions, do they need me to provide the actual static IPs or are they capable of DHCP?
[17:46] 14min juju show warning: https://www.youtube.com/watch?v=7Tqg3Hnkq2U
[17:47] lazyPower: marcoceppi bdx kwmonroe jrwren and more ^
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[17:50] rick_h: Hey, you got a second to chat?
[17:50] vlad_: a sec, doing a live stream in 10
[17:51] rick_h: Is that the youtube link? If so I'd be down to watch
[17:51] vlad_: I'm not sure on the CIDR/DHCP bit on the ceph-mon/node charms. Maybe cholcombe has the details there.
[17:51] vlad_: yes, that link should go live once we hit go on the stream
[17:52] vlad_: we don't do anything different than this: http://docs.ceph.com/docs/hammer/rados/configuration/network-config-ref/. We just pass the settings along
[17:53] vlad_: take a look at the ceph networks section
[17:53] cholcombe: Thank you
[17:54] rick_h: Thanks to you as well! Good luck on the stream, I'll watch
[17:54] https://hangouts.google.com/hangouts/_/y5xturh7gffpzedw7ob6hzpijme for those joining in (lazyPower bdx kwmonroe jrwren etc)
[17:54] marcoceppi: you around for it today?
[17:55] rick_h: skipping this week, will catch you on the next one
[17:55] sure, i think I can join
[17:55] lazyPower: k
[17:55] lazyPower: you get to be the star next one?
[17:58] cholcombe: Hey ok cool, that document totally cleared it up. If I'm deploying a set of them in a bundle I need to provide an array of addresses then, correct?
[18:13] vlad_: are you deploying to metal?
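On the CIDR question above: the charms pass subnet definitions straight through to Ceph's network settings, so you supply networks in CIDR form rather than per-host static IPs. A sketch of a bundle fragment, assuming the `ceph-public-network`/`ceph-cluster-network` option names and with example subnets:

```
# hypothetical bundle fragment -- option names and subnets are assumptions
ceph-mon:
  charm: cs:ceph-mon
  num_units: 3
  options:
    ceph-public-network: 10.0.10.0/24    # client/monitor traffic
    ceph-cluster-network: 10.0.20.0/24   # OSD replication traffic
```

These map onto the `public network` and `cluster network` settings described in the Ceph network configuration reference linked above; host addressing within those subnets can still come from DHCP or MaaS.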
[18:15] rick_h: uhhh
[18:18] cholcombe: Yeah I'm deploying to metal. I don't think the array idea I had makes any sense
[18:18] cholcombe: I've got all my networks set up on MaaS. When the deploy kicks off it spits out errors trying to get addresses for my services immediately, but keeps deploying. It gets IPs for all the physical machines and ends up failing around the ceph area
[18:24] marcoceppi: https://hangouts.google.com/hangouts/_/hjkwxg4fvjb3rhvfwoqswejggue
[18:25] for folks watching, heading to https://www.youtube.com/watch?v=Ii1Ax4HgAP0
[18:25] I'll have to see if I can download/stitch them together later
[18:30] vlad_: you can have ceph-mon/osd gather their ip info from the juju network spaces if you want
[18:30] it's in the docs: https://jujucharms.com/ceph-mon/ under network support
[18:33] cholcombe: Is there any way to define spaces in a bundle?
[18:38] vlad_: i don't think so. i think you define them on maas?
[18:38] i'm not entirely sure
[18:39] cholcombe: Yeah, at first I didn't even try to use spaces outside of MaaS, and I just realized that juju has picked up my space from MaaS, and I guess I've architected this a bit wrong from that standpoint
[18:39] Is there any way to remove a space once it has been added? I saw no mention of this in the docs
[18:40] cholcombe: vlad_ yes, the spaces must exist in the underlying provider first
[18:40] cholcombe: vlad_ and then you tell the bundle to leverage them and request that the right unit is on the right machine with access to the right spaces
[18:40] rick_h: thanks
[18:46] rick_h: Is there a delete space command for when I've created a space in juju that doesn't exist on the provider?
[18:47] vlad_: hmm, checking "juju help commands" there's a reload-spaces but not seeing a remove-space :(
[20:40] are we having fun yet?
[20:48] Budgie^Smore: with 2.2 on the horizon this week according to rick, you bet :)
[20:49] lazyPower so he is being particularly bossy this week then?
;-)
[20:49] not even close (not to me anyway)
[20:49] he gave me leeway
[20:49] hehehe, nice :)
[20:51] so, will it hit jaas is the real question ;-)
[20:52] Should hit really tight on release
[20:53] coolio :)
[20:56] Finally started building a preseed file for my "cluster in a box" idea
=== cargonza_ is now known as cargonza
=== diddledan_ is now known as diddledan
=== zerick_ is now known as zerick
=== blahdeblah_ is now known as blahdeblah
=== JoseeAntonioR is now known as jose
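Tying off the spaces discussion from earlier: spaces are defined in the provider (MaaS here) and picked up by Juju, but a bundle can then bind application endpoints onto them, which is the "tell the bundle to leverage them" step rick_h described. A sketch, assuming Juju 2.x bundle syntax; the space names are hypothetical:

```
# hypothetical bundle fragment -- space names are made up for illustration
applications:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
    bindings:
      "": admin-space        # default space for all endpoints
      public: ceph-public    # bind the 'public' endpoint to a specific space
```

Under this scheme Juju places units on machines with access to the bound spaces, which would address cholcombe's question about defining space usage in a bundle even though the spaces themselves still live in MaaS.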