[01:41] <stokachu> anyone seen an error like this http://paste.ubuntu.com/12266891/, running on wily inside a wily container (as in juju bootstrap while inside the container)
[01:42] <stokachu> and a custom juju_home /home/ubuntu/.cloud-install/juju/local
[06:08] <stub> aisrael: With the new charm, you will get the same database if you recreate the service with the same service name. With the old charm, it deliberately won't clean up (because that would be data loss).
[06:09] <stub> aisrael: We could provide actions for some of these operations, for people uncomfortable with reaching in and dealing with PG directly.
[10:29] <jamespage> coreycb, does https://bugs.launchpad.net/charms/+source/neutron-api/+bug/1456291 ring bells for you? I remember a fix for keystone token handling that you did but I can't find it
[10:29] <mup> Bug #1456291: HA deploy: one neutron-api unit had wrong credentials in memory <landscape> <neutron-api (Juju Charms Collection):New> <https://launchpad.net/bugs/1456291>
[13:05] <aisrael> stub: The problem I see with the new behavior is that the schema (tables and such) are owned by the previous user. Would it make sense to change ownership of those things when new credentials are handed out?
[13:18] <ParsectiX> Hi guys. I find the idea of juju very interesting. I'm wondering about the H/W I need to launch my own private juju service.
[13:18] <ParsectiX> Can someone advise about physical deployments?
[13:27] <coreycb> jamespage, yes there was an sru to python-keystonemiddleware in (at least) utopic for that
[13:29] <jamespage> coreycb, oh - right so for juno only  gotcha
[13:48] <aisrael> stub: cleanup from the -broken relation doesn't work because the credentials are already removed from the database. I don't see a way for the charm author to deal with this.
[14:10] <jamespage> beisner, coreycb, zul, gnuoy: liberty is deployable from trusty-liberty-proposed with next charms
[14:10] <jamespage> basic boot and access worked OK
[14:10] <beisner>  \o/
[14:10] <gnuoy> tip top
[14:10] <jamespage> I'm tempted to slam-dunk that straight into updates as well and bank it now
[14:14] <beisner> jamespage, i queued up a deploy test w/ tempest etc in uosci
[14:14] <jamespage> beisner, yeah - that's not working so well right now
[14:15] <beisner> jamespage, ah but a point of reference "the beginning"  ;-)
[14:15] <jamespage> indeed
[14:34] <jamespage> beisner, lol - I just screwed up my tempest.conf that's all
[14:40] <apuimedo> jamespage: ping
[14:43] <jamespage> apuimedo, hey
[14:43] <jamespage> coreycb, zul: b3 next week then :-)
[14:43] <apuimedo> jamespage: just a question about the rendering
[14:44] <apuimedo> of the templates in charmhelpers/openstack/templating.py
[14:44] <apuimedo> what is creating the target directories for the rendering?
[14:45] <jamespage> apuimedo, directories are not created by the templating framework - normally they are overwriting configuration files provided in packaging
[14:45] <jamespage> apuimedo, if not then charm-helpers provides a mkdir function that you can call before trying to render anything.
[14:46] <jamespage> allowing you to secure things as required...
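The mkdir-before-render pattern jamespage describes can be sketched as follows. This is a stdlib-only illustration with a made-up function name; charm-helpers ships its own `mkdir` helper that additionally takes owner/group, which (as apuimedo found) only works once the package creating that user is installed.

```python
import os

def ensure_config_dir(path, perms=0o750):
    """Create a template render target directory if it does not exist.

    A stdlib stand-in for the mkdir helper charm-helpers provides; a
    real charm would also set owner/group (e.g. the 'neutron' user),
    but only after the package that creates that user is installed.
    """
    if not os.path.isdir(path):
        os.makedirs(path)
    # tighten permissions so rendered secrets are not world-readable
    os.chmod(path, perms)
    return path

# e.g. ensure_config_dir('/etc/neutron/plugins/demo') before rendering
```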
[14:46] <apuimedo> jamespage: I see, thanks ;-)
[14:46] <apuimedo> I did put the mkdir workaround but stupidly did it before neutron got installed so the user that I was using for ownership of the dir did not exist back then
[14:47] <apuimedo> :P
[14:49] <apuimedo> I was checking how nuage and calico deal with the lack of creation of the plugin dir
[14:49] <apuimedo> but I could not find any mkdir that they issue for /etc/neutron/plugin/{nuage,calico} so I guess that their packages create those directories
[14:49] <jamespage> apuimedo, normally yes
[14:50] <apuimedo> jamespage: I'll file a bug against midonet plugin packaging then to create the empty dir or maybe even with an example config :-)
[14:51] <apuimedo> jamespage: thanks for the info ;-)
[14:51] <jamespage> apuimedo, http://paste.ubuntu.com/12273645/
[14:51] <jamespage> ok in distro
[14:52] <apuimedo> jamespage: unfortunately those are outdated from back when the vendor plugins were in the neutron tree
[14:55] <apuimedo> jamespage: we are moving to have a ppa
[14:55] <apuimedo> and we want to move it inside cloud-archive
[14:58] <lazyPower> apuimedo: I assume the service you're writing files for is not a subordinate service?
[15:00] <apuimedo> no
[15:00] <apuimedo> lazyPower: it is neutron-api
[15:00] <lazyPower> ah hokay
[15:01] <apuimedo> lazyPower: why, would that complicate matters?
[15:01] <apuimedo> (well, ordering could)
[15:01] <lazyPower> apuimedo: i was going to suggest that it could be easier :)
[15:02] <apuimedo> lazyPower: well, I'm really looking forward to the neutron plugin configuration being based on subordinates
[15:02] <apuimedo> I think it was jamespage who showed me the plans and they looked really nice
[15:03] <lazyPower> apuimedo: http://bazaar.launchpad.net/~charmers/charms/trusty/neutron-api-plumgrid/trunk/view/head:/metadata.yaml - it can be today unless i'm missing something
[15:03] <lazyPower> i just reviewed this stack earlier in the week
[15:04] <lazyPower> yeah jamespage is a bit of a local folk hero 'round these parts :)
[15:06] <lazyPower> apuimedo: is this ready for re-review and hasn't gotten any attention or are we just monitoring for progress at the moment? https://bugs.launchpad.net/charms/+bug/1453678
[15:06] <mup> Bug #1453678: New charms: midonet-host-agent, midonet-repository, midonet-api <Juju Charms Collection:New> <https://launchpad.net/bugs/1453678>
[15:08] <apuimedo> lazyPower: I was blocked due to other work
[15:08] <apuimedo> lazyPower: this week I got back to it
[15:08] <lazyPower> ack, just checking in, no pressure :) Wanted to make sure you weren't blocked on us.
[15:08] <apuimedo> lazyPower: no, not yet
[15:08] <apuimedo> thanks
[15:08] <apuimedo> all ready for the Charmers Summit?
[15:10] <lazyPower> not exactly - but getting there :)
[15:11] <apuimedo> :-)
[15:12] <lazyPower> apuimedo: so, we're going to have a couple sessions about rapid charming w/ layers to deliver app containers re-using components that mbruzek and I have built
[15:13] <lazyPower> based on our last convo, might be applicable to your efforts
[15:13] <apuimedo> lazyPower: sounds like it
[15:13]  * mbruzek waves at apuimedo
[15:13] <apuimedo> I'll have to make one last effort to get approval to join
[15:13] <apuimedo> mbruzek: hey!
[15:14] <mbruzek> hello apuimedo!
[15:15] <jamespage> lazyPower, apuimedo: can't really take credit for that - gnuoy did the work :-=)
[15:15] <apuimedo> gnuoy: good job on that ;-)
[15:28] <jamespage> apuimedo, two of those base layers should be 'neutron-api-subordinate' and 'neutron-edge-subordinate' - I need to flesh those out
[15:29] <apuimedo> jamespage: will there be a session about that in the charmers summit?
[15:29] <jamespage> apuimedo, hopefully
[15:34] <apuimedo> :-)
[15:34] <apuimedo> catbus1: nice nick
[15:34] <catbus1> :)
[15:40] <lazyPower> apuimedo: i'm going to mark 1453678 as in progress since you're working on it and it's sitting in the queue at 30 days, unreviewed, and it's not ready for re-review.
[15:40] <apuimedo> ok
[15:40] <lazyPower> when you're ready, make sure you update that bug to fix-committed or new and it'll make its way back in
[15:41] <apuimedo> lazyPower: ok, thanks!
[15:46] <apuimedo> lazyPower: is there some way to change history in launchpad's bzr?
[15:47] <lazyPower> apuimedo: in what context?
[15:47] <lazyPower> as in force push your repository?
[15:52] <jamespage> beisner, we'll need to do a general charm update for liberty to switch mysqldb -> pymysql
[15:53] <beisner> jamespage, huh where?
[15:57] <jamespage> beisner, all over
[15:57] <jamespage> mysql:// -> mysql+pymysql://
[15:59] <beisner> jamespage, in the db_uri in amulet tests?  or even broader?
[16:06] <jamespage> beisner, for liberty configuration file templates - and I guess the amulet tests as well
[16:07] <beisner> jamespage, gotcha.   kind of a weird db uri.  is that going to stick @ L?
[16:08] <beisner> they must have some add'l uri parser foo in L
[16:10] <jamespage> beisner, yeah - its in sqlalchemy
[16:11] <jamespage> most of our charms continue to work as they use the mysqldb syntax and explicitly install the mysqldb package
[16:11] <jamespage> but there are good reasons to switch to pymysql
[16:11] <jamespage> py3, better aio
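The change being discussed is a string-level rewrite of the SQLAlchemy connection URL in config templates. A minimal sketch, with a hypothetical helper name:

```python
def to_pymysql(db_uri):
    """Rewrite a plain mysql:// SQLAlchemy URL to name the pure-Python
    PyMySQL driver explicitly (py3-capable, better async behaviour).

    URLs that already carry a dialect+driver, or that are not MySQL at
    all, are returned unchanged.
    """
    prefix = "mysql://"
    if db_uri.startswith(prefix):
        return "mysql+pymysql://" + db_uri[len(prefix):]
    return db_uri
```

In practice this would be applied in the liberty configuration file templates (and mirrored in the amulet tests' expected db_uri values), per the discussion above.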
[16:14] <jamespage> beisner, hmm - although I think that coreycb may have tried to fix this up in sqlalchemy
[16:16] <beisner> jamespage, coreycb - trusty-liberty-proposed - basic deploy check ok, just 2 tempest fails;  http://10.245.162.77:8080/job/deploy_with_deployer/11245/consoleFull
[16:19] <beisner> jamespage, can you review this nova-compute ppc64el MP?  i may also do a separate proposal later to make it smarter / more automagic, but needed to at least expose these controls to have a working bundle.  https://code.launchpad.net/~1chb1n/charms/trusty/nova-compute/cpu-mode/+merge/269952
[16:24] <jamespage> beisner, one niggle
[16:24] <jamespage> beisner, also does that context get used for nova.conf?
[16:25] <beisner> jamespage, ack re: arch detection conditional;   yep, nova.conf.
[16:27] <beisner> jamespage, on the topic of arch detection, should we go as far as to make the nova-compute charm just do those things when deployed on ppc64el, and indicate that in the config.yaml config descriptions?
[16:27] <beisner> ie.  just work
[16:29] <jamespage> beisner, yes
[16:29] <jamespage> skip if its not ppc64el
[16:29] <jamespage> no-op
[16:29] <beisner> jamespage, and if user sets options explicitly, use those in all cases?
[16:30] <beisner> except non ppc64el of course
[16:30] <beisner> ie. explict trumps automagic
[16:30] <beisner> explicit even
[16:38] <benbc> Hello. What is the normal approach to installing different versions of software? Do users expect to see charms named for different versions (e.g. `juju deploy postgresql-9.3`)? Or are charms parameterized with the version number (e.g. `juju deploy --config config.yaml postgresql` with `version: 9.3` in config.yaml?) or do all charms in practice just install the latest version of the software?
[16:39] <benbc> Or some other possibility that hasn't occurred to me?
[16:44] <jamespage> beisner, oh wait - I see what you're saying
[16:44] <jamespage> beisner, ok - how about we reword that config option to be a boolean - as its on/off true/false
[16:44] <jamespage> provide a suitable default, and only apply it on ppc64el
[16:44] <beisner> jamespage, int is a valid value in that cmd
[16:45] <beisner> on/off/int
[16:45] <jamespage> oh
[16:45] <jamespage> what does int do?
[16:46] <beisner> there are smt modes, which can be set by ints
[16:48] <jamespage> beisner, I need to eod - my brain is fried
[16:48] <beisner> jamespage, np.  thx for the input.  i won't be doing much with that until next wk, no worries.
[16:48] <jamespage> beisner, ok - it sounds like on/off/int is valid then but that might be better modelled as two config options - smt on/off
[16:49] <jamespage> and then an optional 'int' value if its turned on?
[16:49] <jamespage> does 'on' do a sane default?
[16:49] <beisner> jamespage, or no config options and it all just works ;-)   jk, kind of.
[16:49] <beisner> jamespage, no it would need to be off for nova-compute
[16:49] <jamespage> beisner, we should provide an opinionated default for our experience, with knobs for experts
[16:50] <jamespage> does that make sense
[16:50] <jamespage> ?
[16:50] <beisner> jamespage, yep, agree
[16:50] <jamespage> and make sure that if someone tries this on amd64 - it no-op's
[16:50] <jamespage> that's my guidance - I'll review again on monday if you want to update today :-)
[16:50] <jamespage> ttfn
[16:50] <beisner> jamespage, ack, will adjust next wk  or on my next ppc64el endeavor.
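The two-option modelling jamespage suggests (an smt on/off switch plus an optional int mode, no-op off ppc64el, explicit settings trumping auto-detection) could look roughly like this in config.yaml. The option names and defaults here are illustrative only, not what was merged:

```yaml
options:
  smt:
    type: boolean
    default: false
    description: |
      Enable SMT on ppc64el compute hosts. Ignored (no-op) on all
      other architectures. An explicitly set value always overrides
      any auto-detected behaviour.
  smt-mode:
    type: int
    default: 0
    description: |
      Optional SMT mode (e.g. 2, 4, 8), applied only when smt is
      true; 0 means use a sane platform default.
```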
[16:50] <jamespage> beisner, oh - btw - I've pushed proposed->updates for liberty
[16:50] <jamespage> its OK for now
[16:50] <beisner> woot
[16:51] <jamespage> beisner, I've also switched the default mysql dialect in sqlalchemy to be pymysql
[16:51] <jamespage> so that the existing mysql:// string dtrt
[16:51] <beisner> ah nice
[17:35] <skylerberg> beisner: I am getting CI failures on my patch (https://code.launchpad.net/~sberg-l/charms/trusty/nova-compute/tintri-interface/+merge/269998).
[17:36] <skylerberg> I think the issue is on the CI server's end, because it looks like a deployment is timing out and my commit shouldn't have any impact on that process really.
[17:39] <beisner> o/  skylerberg, commented, re-queued the job.
[17:40] <skylerberg> beisner: thanks!
[17:43] <beisner> skylerberg, welcome
[19:53] <DrewT> anyone here have experience deploying openstack using the postgresql charm? I can't get the keystone charm to populate the db after db_sync creates the schema
[20:05] <marcoceppi> DrewT: I've not tried actually, I've always just kind of used mysql
[20:05] <marcoceppi> beisner: do we have a test in OSCI for postgresql?
[21:00] <beisner> hi marcoceppi - pgsql isn't watched by uosci
[21:01] <marcoceppi> beisner: DrewT was having some issues with psql and keystone, was wondering if it was tested at all
[21:01] <beisner> marcoceppi, we exercise mysql + keystone || percona xtradb + keystone, but not pgsql
[21:37] <dbainbri> with the docker charm I can deploy docker and then use juju run to execute a container on the docker instance, but what I am looking for is a way to expose one or more servers running docker as a "machine" onto which I can "place" docker based charms, so that in the Juju UI I can create configurations based on docker charms and then commit them to be placed, at which point they will be deployed to those systems that are running docker
[21:48] <beisner> thedac, ok, rabbits released to wolves @ https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b/+merge/270102    ...thanks!
[21:48] <thedac> beisner: great, thanks
[21:50] <marcoceppi> dbainbri: you should talk with mbruzek and lazyPower, they might be able to help you there
[21:51] <dbainbri> marcoceppi: thx.
[21:54] <mbruzek> dbainbri: There is a lot of docker in your question.  Let me ask a few questions.
[21:54] <dbainbri> mbruzek: fire away
[21:55] <mbruzek> dbainbri: How would the Juju UI create configurations ?
[21:56] <mbruzek> One could certainly write a charm that installs docker, and as a configuration option lets you configure which docker image to pull and run.
[21:56] <mbruzek> That configuration option could be changed via the UI (or command line).
[21:56] <mbruzek> I don't think I understand your question correctly.
[21:56] <dbainbri> mbruzek: i am a newbie wrt Juju so please bear with me. You mean configurations for the containers or for the docker instances?
[21:57] <marcoceppi> mbruzek: it sounds like, and correct me if I'm wrong dbainbri, that you have a docker solution for something, but you want to wrap a charm around it so you can deploy it with docker, get density, but still use juju to manage it (relations, etc)
[21:57] <dbainbri> I would like to point Juju at a bunch of docker instances (hosts running docker)
[21:58] <mbruzek> dbainbri: I am not sure we can do that at the moment.  Juju would have to have deployed those docker instances to be able to orchestrate/manage them.
[21:58] <dbainbri> mbruzek: I would like Juju to treat hosts running docker as a machine on which "docker" charms can be placed.
[21:58] <mbruzek> ah
[21:59] <mbruzek> dbainbri: That does not work at the moment, we do have a container technology called LXC that is very similar to Docker, but it models a machine container, rather than an application container.
[21:59] <mbruzek> But since LXC is not Docker you can't run your favorite Docker image in LXC.
[22:01] <dbainbri> mbruzek: is there a writeup of that somewhere?
[22:01] <mbruzek> Juju can currently treat the Virtual Machines that you get from Amazon, GCE, OpenStack as LXC hosts and you can put many LXC images inside a VM.
[22:02] <dbainbri> and those LXC images are "placed" much like a charm on a host in Juju?
[22:03] <mbruzek> dbainbri: yes.
[22:03] <mbruzek> dbainbri: I am looking for some documentation for you, the problem is LXC is also our "local" cloud story.
[22:03] <dbainbri> that sounds like what I am looking for except I am looking for that on a local hosts in Juju and with docker
[22:04] <mbruzek> Using LXC you can make your computer look like a cloud, so you can deploy multiple LXC images to your desktop or laptop
[22:04] <marcoceppi> the problem with this is that you still have to code up how to install the software onto the LXC container; the LXC image is just a base cloud image
[22:04] <mbruzek> But that is not what you are looking for, unfortunately that is all I find in the search results.
[22:04] <dbainbri> any desire / plan to add this same capability with docker or the open container work?
[22:05] <marcoceppi> dbainbri: yes, we're actively working on something like this in juju though I'm not sure if it's exactly how you described it
[22:05] <dbainbri> marcoceppi: where are my thoughts off from what is being worked on?
[22:07] <marcoceppi> dbainbri: we're working to expose "workloads"/process running in juju, so you could deploy the docker charm, add workloads. Or deploy kubernetes, rocket, KVMs, and the charm would tell juju that it's running these items
[22:08] <mbruzek> dbainbri: We don't have as good of integration with Docker as you want.  Even the current work I am not sure it will be what you are looking for.
[22:08] <marcoceppi> dbainbri: it wouldn't directly allow you to configure it, but we have tools that will let you wrap docker in charms, so if you have a foo server in docker, you can easily build a charm around that, exposing configuration, etc, then deploy it and manage it with juju
[22:09] <marcoceppi> mbruzek lazyPower do you guys have examples using the docker layer stuff?
[22:09] <mbruzek> marcoceppi: we are writing that up now, it is very rough and not what dbainbri has described
[22:10] <marcoceppi> mbruzek: right, but if you take the docker composer items, and use juju deploy --to
[22:10] <mbruzek> That is writing a docker charm, he wants to deploy charms to a docker host
[22:10] <dbainbri> marcoceppi: i thought i saw a docker-seed charm that could be expanded to run a single (or multiple) docker containers in a charm, but it really is just a wrapper to run a static set of containers
[22:11] <mbruzek> dbainbri: That is where we are at the moment
[22:11] <mbruzek> dbainbri: https://jujucharms.com/docs/stable/charms-deploying#deploying-to-specific-machines-and-containers
[22:13] <mbruzek> dbainbri: So when you deploy by default Juju gives you a new Virtual Machine.  You can use "--to <machine number>/lxc/1" to deploy the same charm to an LXC container
[22:13] <mbruzek> I know that is not docker but that is how we use containers in Juju at the moment.
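The `--to <machine>/lxc/<n>` placement mbruzek describes can also be captured declaratively in a juju-deployer bundle. A rough sketch; the service name and charm revision are illustrative, and the exact `to:` syntax varies between deployer/bundle format versions:

```yaml
services:
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: lxc:1        # place this unit in an LXC container on machine 1
```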
[22:13] <dbainbri> mbruzek: i suppose the "docker-seed" charm could take a config file that is a set of containers to start and how to connect them (a docker-compose file for example)
[22:13] <mbruzek> The Juju GUI also has a tab called "machine view" where you can deploy to LXC containers within a VM as well.
[22:14] <dbainbri> or use Juju to deploy a bunch of docker instances and then Kubernetes or compose to layer containers on them.
[22:14] <mbruzek> dbainbri: Yeah I think you are on to something there.
[22:14] <mbruzek> dbainbri: We have kubernetes charms, but that creates a cluster for you and then you would have to deploy things to the cluster with kubectl
[22:14] <dbainbri> does every charm map to a VM ?
[22:15] <marcoceppi> almost always
[22:15] <mbruzek> dbainbri: Except for the "subordinate" charms, which can share a VM with another charm.
[22:15] <marcoceppi> mbruzek: and manual placement via --to
[22:15] <marcoceppi> and containers which really aren't vms
[22:16] <marcoceppi> and in the case of bare metal
[22:16] <mbruzek> dbainbri: but yes almost always a VM.  And you can pack many LXC containers on one VM similar to Docker
[22:16] <marcoceppi> I think "machine" may be more appropriate verbiage
[22:16] <dbainbri> mbruzek: subordinate charms, sounds interesting. do subordinate charms show up in the UI as first class citizens?
[22:16] <mbruzek> yes
[22:16] <mbruzek> https://jujucharms.com/docs/1.24/authors-subordinate-services
[22:16] <dbainbri> so could "docker container charms" be subordinate charms that you could dynamically relates to a docker charm?
[22:17] <marcoceppi> dbainbri: yes
[22:17] <mbruzek> Subordinates don't use container technology at all, they just share the filesystem with a VM.
[22:17] <dbainbri> ah, ok
[22:17] <marcoceppi> mbruzek: well, what he described could work
[22:17] <dbainbri> so i couldn't make a subordinate charm that essentially did a "docker run"
[22:17] <marcoceppi> mbruzek: docker base charm, that manages installing docker, subordinate that provides a relation that describes the container it's installing
[22:18] <mbruzek> dbainbri: and one strategy could be to deploy a VM running Docker (via a docker charm) and then deploy a bunch of subordinates that know how to start up their own docker services.
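The docker-principal-plus-subordinates strategy sketched here would hinge on the subordinate's metadata.yaml declaring a container-scoped relation to the docker charm. Everything below is hypothetical (charm name, interface name) and just illustrates the shape:

```yaml
name: foo-container        # hypothetical subordinate wrapping one docker image
subordinate: true
requires:
  docker-host:             # relation/interface names are illustrative only
    interface: docker
    scope: container       # subordinates attach per-unit to their principal
```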
[22:18] <dbainbri> marcoceppi: yes something like that
[22:18] <marcoceppi> okay, I think we're straying a bit in this conversation
[22:18] <mbruzek> There you go
[22:19] <marcoceppi> So, from a juju perspective you have services, units, and machines
[22:19] <dbainbri> mbruzek: would agree we are straying. just looking at possibilities. would prefer that docker instances could be treated as objects on which docker based charms could be placed, but interested in what can be done now as well.
[22:19] <marcoceppi> services are charms, units are the number of machines that are running in that service (think scale out), and machines are the resources in a cloud that are deployed
[22:20] <marcoceppi> with that model, you could create a charm for each docker service you want to deploy, and then have juju provision one machine and force each unit of the charm to live on that single machine.
[22:20] <marcoceppi> the second workflow is a subordinate workflow, but subordinates aren't first class citizens, they don't get assigned to machines (and machines are a full operating system so either bare metal, a cloud instance, or an LXC container)
[22:21] <marcoceppi> instead they are attached to primary services, so think of things like monitoring agents or logging agents
[22:21] <marcoceppi> it doesn't make sense to have them on their own machine, but instead on an existing workload
[22:21] <marcoceppi> so you could use a subordinate model to drop workloads on a docker service
[22:21] <dbainbri> marcoceppi: with respect to your first option. ultimately you could build out several units for "docker" and then the docker services are forced to one of those machines
[22:22] <marcoceppi> dbainbri: So the first solution isn't the worst, i think for what you're describing it's the best course of action until juju grows app containers as a first class citizen
[22:23] <dbainbri> marcoceppi: * nod *
[22:23] <marcoceppi> dbainbri: if you force X primary services onto a single machine, you run the risk of collisions, so if the charm was clever enough (ie, is docker installed? no - install, otherwise just docker run) and the containers didn't conflict with resources
[22:24] <marcoceppi> dbainbri: if you were to "scale out" any of those charms, they'd just get a fresh machine from the provider, whereas you can't scale out a subordinate without scaling the base. So if you have foo and bar subordinates, and you scale the docker primary service, you get an additional foo and bar container running
[22:25] <marcoceppi> I feel like I'm not doing the explanation justice, it's a pretty niche scenario, we call it "hulk-smashing" and it typically ends up in broken deployments. (ie, forcing mysql and mariadb to the same machine in juju will break because they'll stomp on each other)
[22:25] <mbruzek> (use the same files)
[22:26] <marcoceppi> this is why we have LXC support in juju, you can have one machine in juju, but juju can create LXC containers, full system containers, on this machine, so MySQL and MariaDB could be deployed to two LXC containers on one machine without conflicting
[22:26] <marcoceppi> the charm has no idea, it just has root on an operating system and runs as expected
[22:26] <dbainbri> marcoceppi: with docker that conflict would likely be at exposing ports on the hosts.
[22:26] <marcoceppi> dbainbri: exactly, so you'd have to make sure port was configurable in each image, and you'd have to make it a configuration option on each of the charms
[22:27] <marcoceppi> dbainbri: so you could map each port yourself in the deployment
[22:27] <marcoceppi> again, it's not the prettiest, and it's an edgecase, but there are ways to achieve what you've described
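Making the host port a per-charm config option, as marcoceppi suggests, comes down to each charm building its `docker run` invocation from config rather than hard-coding the port. A minimal sketch with a hypothetical helper:

```python
def docker_run_args(image, host_port, container_port=80, name=None):
    """Build the `docker run` argument list for one charm-managed
    container. host_port would come from the charm's config, so
    co-located charms can each map their own port and avoid collisions.
    """
    args = ["docker", "run", "-d",
            "-p", "%d:%d" % (host_port, container_port)]
    if name:
        args += ["--name", name]
    args.append(image)
    return args

# a charm hook would then hand this list to subprocess.check_call()
```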
[22:27] <marcoceppi> dbainbri: I'd be happy to write up a little example set of charms if you're interested
[22:28] <marcoceppi> dbainbri: also, not sure where you are in the world, but we've got a juju charmer summit coming up in two weeks: http://insights.ubuntu.com/event/juju-charmer-summit-2015/ if you're interested in attending and chatting more about this
[22:29] <dbainbri> marcoceppi: i am interested if you are willing to do the write up ;) i am on the left coast (US) so not near DC (used to live in Boston, but now in CA)
[22:30] <marcoceppi> dbainbri: sure, I'll try to create a few real simple examples
[22:30] <marcoceppi> dbainbri: I'm not really proficient in docker, but mbruzek and lazyPower have some great examples already
[22:32] <dbainbri> i will start by playing with the docker-seed and expanding it, seeing what i can do there. but any info from those with more knowledge is more than appreciated.
[22:32] <dbainbri> thx everyone, just for this chat, very helpful.
[22:32] <mbruzek> welcome