[02:25] <wallyworld> babbageclunk: here's that refactor PR https://github.com/juju/juju/pull/8815
[03:35] <babbageclunk> wallyworld: yeah, looking
[03:36] <babbageclunk> (Weird, I don't know why I didn't get a notification about that)
[03:36] <wallyworld> babbageclunk: awesome. will be some work ahead but i think i have line of sight to removing IAASModel/CAASModel
[03:36] <wallyworld> we'll see how it goes
[04:11] <anastasiamac_> wallyworld: babbageclunk (or anyone else keen), PTAL https://github.com/juju/juju/pull/8816 - invalidate credential callback when coming from bootstrap
[04:11] <wallyworld> ok
[04:16] <wallyworld> anastasiamac_: lgtm assuming it's been tested live
[04:51] <babbageclunk> wallyworld: reviewed! I complained about naming a bit, ping if you want to discuss.
[04:51] <wallyworld> ok, ty, looking
[04:53] <wallyworld> babbageclunk: i specifically avoided the parent interface containing all 3 - that was the point of the split. i needed to be able to pass in something providing filesystem but not volume
[04:53] <wallyworld> or did i not understand?
[04:55] <babbageclunk> wallyworld: the parent interface would have 3 methods, one each returning the new interfaces.
[04:55] <babbageclunk> wallyworld: (or nil, if this model doesn't support that interface)
[04:56] <wallyworld> hmmm. that essentially is what we have now with model.IAASModel() and model.CAASModel() and is what i was trying to get away from
[04:56] <babbageclunk> well, instead you'd call model.FilesystemAccess()
[04:56] <wallyworld> the IAASModel() and CAASModel() methods return err, not just nil though
[04:57] <wallyworld> i'll take a look to see if it works nicely
[04:57] <babbageclunk> well, you could do that too. This just matched what you had already (checking for nil for that argument).
[04:58] <wallyworld> i was trying to avoid the existing pattern
[04:58] <wallyworld> it seems more idiomatic to pass in things satisfying smaller interfaces
[04:59] <wallyworld> i guess it's just the constructor you want changed
[05:00] <wallyworld> the api struct would still have 3 attributes representing the smaller interfaces
[05:00] <wallyworld> assuming i understand your point correctly
[05:00] <babbageclunk> Yeah, I think so
[05:01] <wallyworld> ok, i'll dive in and see. i'll just do a bit more work on the followup PR first
[05:01] <wallyworld> will probably push changes tomorrow, we'll see how i go
[05:01] <babbageclunk> I mean, it matches what you're doing in functions like StorageAttachmentInfo or ClassifyDetachedStorage
[05:01] <babbageclunk> ok
[05:02] <wallyworld> yeah, it's messy, thanks for helping me untangle it
[05:02] <wallyworld> it will still be a mess even when i'm done
[05:02] <wallyworld> the state vs model stuff is still in so many places
[05:18] <babbageclunk> wallyworld: yeah, it's definitely still not going to be super-elegant
[05:18] <wallyworld> got to start somewhere right
[08:09] <BlackDex> how do i add a local charm to the bundle, i can't find it any more, and i forgot howto ;)
[08:13] <BlackDex> is it just "charm: /path/to/local/charm" ?
[09:43] <zeestrat> BlackDex: Yeah, last I checked it's just a path. Not sure if it's only relative or not though.
[09:56] <BlackDex> i'll see about that when i get to it
[09:56] <BlackDex> thx :)
[11:53] <rick_h_> BlackDex: starts with a .
[12:01] <BlackDex> so relative then
[12:01] <BlackDex> thx rick_h_
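The local-charm syntax BlackDex was after can be sketched as a minimal bundle. The charm name, path, and the `applications` key are illustrative, per the juju 2.x bundle format; the important bit is that the path starts with `.` (relative) or `/` (absolute):

```shell
# Write a minimal bundle that deploys a charm from a local directory.
cat > bundle.yaml <<'EOF'
applications:
  mycharm:
    charm: ./mycharm    # local charm directory, relative to bundle.yaml
    num_units: 1
EOF

# Then deploy it (requires a bootstrapped controller):
# juju deploy ./bundle.yaml
```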
[12:12] <KingJ> If I set a http-proxy and a https-proxy, do I also need to set an apt-http-proxy and apt-https-proxy or will Juju use the proxies defined by http-proxy and https-proxy?
[12:16] <BlackDex> KingJ: You need to set all
[12:17] <BlackDex> apt-* are set in the /etc/apt/* special files
[12:17] <BlackDex> http(s)-* are set as environment variables
[12:17] <BlackDex> if applications support that they will try to use it
[12:17] <BlackDex> most python apps do
[12:18] <KingJ> Ahhh right, that explains why I have a few units with apt issues then - I thought just setting the general proxy would be enough.
[12:18] <rick_h_> KingJ: right, because many folks will run mirrors/etc for apt that won't follow the normal http rules for traffic
[12:18] <KingJ> I've just set the apt-* proxies in my model config - will the machines/units pick that up automatically?
[12:18] <BlackDex> keep in mind that it will then traffic everything over the proxy
[12:18] <rick_h_> KingJ: so it's the cost of flexibility tbh
[12:18] <rick_h_> BlackDex: you trying out 2.4 rc yet?
[12:18] <BlackDex> if you have a local network which you do not want to be proxied
[12:18] <BlackDex> you need to set that
[12:19] <BlackDex> rick_h_: no, not yet
[12:19] <BlackDex> but i like the new stuff in it :)
[12:19] <KingJ> I'm looking to proxy all http(s) traffic, but not any other traffic.
[12:20] <BlackDex> i know that if you set the proxy, and deploy something like openstack, even the openstack services will use the proxy, even when they are on the same subnet
[12:20] <BlackDex> best is to exclude the local network or some special IP's if that is the case
[12:21] <KingJ> Ah... that would be problematic. I'll set no-proxy to the local network then.
[12:21] <BlackDex> you can add those exceptions to the "no-proxy" setting of the model
[12:21]  * rick_h_ is looking for 2.4 rc feedback so starts bugging folks bdx magicaltrout zeestrat TheAbsentOne 
[12:21] <BlackDex> i don't know if using a CIDR works these days
[12:21] <BlackDex> else you need to add every IP in the subnet ;)
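Pulling the thread together: `http-proxy`/`https-proxy` become environment variables on the machines, `apt-http-proxy`/`apt-https-proxy` land in apt's own config files, and `no-proxy` lists what to bypass. A sketch of setting all of them on the current model — the proxy host and CIDR are made-up, and (per the discussion above) whether a CIDR is honoured in `no-proxy` varied by Juju version:

```shell
# Write the model-level proxy keys to a file and apply them to the model.
cat > proxy-config.yaml <<'EOF'
http-proxy: http://squid.internal:3128       # exported as env vars on machines
https-proxy: http://squid.internal:3128
apt-http-proxy: http://squid.internal:3128   # written to /etc/apt config
apt-https-proxy: http://squid.internal:3128
no-proxy: localhost,127.0.0.1,10.0.0.0/24    # bypass proxy for local traffic
EOF

# Apply (requires a bootstrapped controller):
# juju model-config proxy-config.yaml
```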
[12:23] <BlackDex> rick_h_: if i have some spare time (and i do not have much) i will try maas 2.4 myself :)
[12:24] <rick_h_> BlackDex: :) cool. we're going to have another rc2 for a bug in the oracle provider but I want to start panning folks for feedback before we go final.
[12:24] <TheAbsentOne> rick_h_: unfortunately I won't be much of a help as I'm not even allowed to try it out, I don't even have access to my controllers
[12:24]  * rick_h_ would rather catch any issues in rc and fix vs having to spin the .1 in a hurry
[12:24] <rick_h_> TheAbsentOne: on JAAS?
[12:25] <rick_h_> TheAbsentOne: or some other reason you don't have access?
[12:25] <KingJ> BlackDex: When I set these values in the model config, will Juju push it out to the machines automatically? Or do I need to do something extra to ensure that the machine is reconfigured with the appropriate config?
[12:25] <rick_h_> KingJ: Juju will handle it
[12:25] <BlackDex> hmm
[12:25] <BlackDex> ah great
[12:25] <TheAbsentOne> I don't even know how they installed juju; all I know is my machines are managed by a vmware-cluster and I (my user) have 2 models as my playground :P
[12:25] <rick_h_> TheAbsentOne: oh, I see
[12:25] <BlackDex> it didn't do that when models were first introduced, if i'm correct
[12:26] <TheAbsentOne> I'm busy with a repo that might interest you too though rick_h_
[12:26] <BlackDex> at least i had some issues then
[12:26] <rick_h_> BlackDex: hmmm, let's say that I'd expect Juju to and if it doesn't please file a bug :)
[12:26] <BlackDex> haha
[12:26] <BlackDex> i haven't checked it lately
[12:27] <BlackDex> same goes for the cidr in the no-proxy
[12:27] <BlackDex> i created a script which creates a bootstrap config adding all the IPs of a subnet i enter
[12:27] <rick_h_> BlackDex: so I do know there's updates around that in 2.4 specifically for the issue you mention
[12:27] <BlackDex> else it was too much work
[12:27] <KingJ> Right, http(s)-proxy set back to default, apt-http(s)-proxy set, and no-proxy set to a CIDR... let's see how this goes :)
[12:28] <rick_h_> BlackDex: so there's new proxy values that the charms can selectively use or not use vs setting the main system ones for all traffic and having to manage no-proxy for large groups of IPs
[12:28] <BlackDex> KingJ: when you `juju ssh X` and execute `env` you should see the proxy settings
[12:28] <KingJ> /etc/apt/apt.conf.d/95-juju-proxy-settings seems to have picked up the settings
[12:29] <KingJ> env isn't showing any proxy settings
[12:29] <BlackDex> also check if the proxy is set in the /etc/apt/preferences.d if i'm correct
[12:29] <BlackDex> oke cool
[12:29] <KingJ> /etc/apt/preferences.d/ is empty
[12:29] <BlackDex> conf.d it is yea
[12:29] <BlackDex> did you log in after the changes
[12:29] <BlackDex> or was it an old connection?
[12:30] <KingJ> New connection... I think. It was an LXC container so I just exited and re-ran lxc exec
[12:30] <BlackDex> hmm
[12:30] <BlackDex> don't know if it works like that
[12:31] <BlackDex> i think you need to `juju ssh application/0` to it
[12:31] <KingJ> Ah right, let me try that then
[12:32] <KingJ> Hrm, connected in that way, ran env and it's hanging. Hmm.
[12:33] <BlackDex> hanging?
[12:33] <BlackDex> on env :s
[12:33] <KingJ> Yeah, not quite what i'd expect heh
[12:34] <KingJ> Ah hold on, I think i've spotted something odd that could be causing issues at a lower level - MAAS DHCP assigned this LXC container a .254 address, that's probably not going to play too well with things.
[12:35] <BlackDex> ;)
[12:39] <KingJ> Right now that's corrected in MAAS, probably easiest to tear down the model and recreate :)
[12:39] <BlackDex> KingJ: you can provide the model config during the add-model command
[12:40] <BlackDex> which includes all the config you want like http-* apt-* and no-proxy stuff
[12:40] <KingJ> Can I put the model config in a yaml file and run juju add-model model.yaml ?
[12:45] <BlackDex> yea
[12:45] <BlackDex> KingJ: https://docs.jujucharms.com/2.3/en/models-config
[12:46] <BlackDex> it is stated over there
[12:46] <KingJ> Perfect, let's try this...
[12:47] <BlackDex> just simple "http-proxy: http://maas:8000"
[12:47] <BlackDex> without the " of course
[12:47] <KingJ> Silly question, what key would I use to set the model name?
[12:48] <rick_h_> KingJ: you have to do that at add-model time
[12:48] <KingJ> Ah right, so juju add-model name, then juju model-config file.yaml
[12:48] <BlackDex> no key
[12:48] <rick_h_> KingJ: right
[12:48] <BlackDex> `juju add-model --config myconfig.yaml default`
[12:48] <rick_h_> or I think add-model takes a --config
[12:48] <rick_h_> yea
[12:48] <BlackDex> where default is the model name
[12:49] <BlackDex> no special stuff you need to do in the yaml regarding the model-name
[12:49] <BlackDex> only the settings with each setting on a new line
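The flow just described, as a sketch (model name and values are arbitrary): the YAML carries only the settings, one per line, and the model name goes on the command line.

```shell
# Settings only -- no model-name key goes in the file.
cat > myconfig.yaml <<'EOF'
http-proxy: http://maas:8000
apt-http-proxy: http://maas:8000
no-proxy: localhost,127.0.0.1
EOF

# The model name ("default" here) is a command-line argument:
# juju add-model --config myconfig.yaml default
```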
[12:49] <KingJ> Excellent, that all seemed to work
[12:53] <KingJ> Ok, time to deploy the bundle again. Thanks for all your help so far :)
[12:53] <BlackDex> yw :)
[12:53] <BlackDex> goodluck
[12:54] <BlackDex> if you are using juju 2.3 you can use the --dry-run now :)
[12:54] <BlackDex> Very nice
[12:54] <BlackDex> it filters some basic errors out of it
[12:54] <BlackDex> not all, like bad config options of the charms
[12:54] <BlackDex> mistyped etc..
[12:54] <rick_h_> BlackDex: cool, glad to hear you're using that and finding it useful
[12:55] <KingJ> Ah yeah, that would have been a good idea, but I think the bundle should be OK now - i've made enough revisions to it over the past few days heh. I'm on 2.4-rc1 at the moment because of a bionic related issue.
[12:55] <BlackDex> ah!
[12:56] <BlackDex> that would explain some issues i had with 2.3.x and bionic then
[12:56] <BlackDex> didn't look any further
[12:56] <BlackDex> no time ;)
[12:56] <BlackDex> used xenial again
[12:56] <BlackDex> i normally want to wait for the .1 release anyway
[12:56] <BlackDex> rick_h_: Yea, i really like it
[12:57] <BlackDex> now it needs to be extended to check if all the config options are valid ;)
[12:57] <rick_h_> KingJ: cool, let me know if you hit any 2.4 issues.
[12:57] <KingJ> https://bugs.launchpad.net/juju/+bug/1764317 is the one I ran in to on 2.3.x and made me jump on to 2.4-beta. This is a new greenfield environment, albeit lab, so I wanted to jump on to the latest and greatest.
[12:57] <mup> Bug #1764317: bionic LXD containers on bionic hosts get incorrect /etc/resolve.conf files <bionic> <cdo-qa> <cdo-qa-blocker> <foundations-engine> <kvm> <lxd> <network> <uosci> <juju:Fix Committed by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1764317>
[12:58] <BlackDex> KingJ: yea netplan is a bit too new for my taste
[12:59] <BlackDex> didn't expect it to be in an LTS release
[12:59] <KingJ> I think it was in 17.10, but it's still quite a big change. I like it conceptually but still a few rough edges around.
[12:59] <BlackDex> i had some issues with netplan and bonding
[12:59] <KingJ> BlackDex: This bug? https://bugs.launchpad.net/maas/+bug/1774666
[12:59] <mup> Bug #1774666: Bond interfaces stuck at 1500 MTU on Bionic <cdo-qa> <foundations-engine> <mtu> <netplan> <cloud-init:Fix Committed by chad.smith> <MAAS:Invalid> <cloud-init (Ubuntu):Confirmed> <netplan.io (Ubuntu):Confirmed> <cloud-init (Ubuntu Xenial):New> <netplan.io (Ubuntu Xenial):Invalid> <cloud-init (Ubuntu Artful):New> <netplan.io (Ubuntu Artful):Invalid> <cloud-init (Ubuntu Bionic):New> <netplan.io (Ubuntu Bionic):Invalid> <cloud-init (Ubuntu Cosmic):Confirmed> <netplan.io (Ubuntu Cosmic):Confirmed> <https://launchpad.net/bugs/1774666>
[13:00] <BlackDex> no, not that one, but that is nasty also
[13:00] <BlackDex> it didn't connect
[13:00] <BlackDex> or it didn't create the LACP bonding the right way
[13:01] <KingJ> Huh interesting, i've not had any problems with the bond formation itself (using 802.3ad), but the MTU issue is affecting me. It's not a blocker at least, just slightly less optimal.
[13:38] <stickupkid> rick_h_: here are the QA steps for the PR https://github.com/juju/juju/pull/8818
[13:38] <rick_h_> stickupkid: cool ty, I'm going to try a slightly different tack and see if it works and if so share how that might be made a little easier
[13:39] <stickupkid> rick_h_: it assumes you don't already have a tmp folder in your $HOME dir
[13:39] <rick_h_> stickupkid: k, lol at using the charm as the resource to itself :)
[13:40] <stickupkid> rick_h_: easiest way without forking the world!
[13:40] <rick_h_> stickupkid: made me smile
[13:43] <MrOldest2> hello
[13:48] <u0_a274> hi
[13:48] <u0_a274> hello
[13:48] <u0_a274> come on
[13:50] <rick_h_> having fun?
[13:53] <u0_a274> hi
[14:11] <zeestrat> rick_h_: got a rough eta for when y'all want to cut a GA? I'd love to kick the tires but got some pto coming up.
[14:48] <BlackDex> hmm, how do i upgrade a local charm?
[14:48] <BlackDex> do i need to create a new folder, or can i overwrite the current and just tell juju to upgrade the charm?
[14:49] <BlackDex> i probably need to update the revision then i think?
[15:36] <zeestrat> BlackDex: You can overwrite, but it's probably good hygiene to do a clean build. Juju should bump the revision automatically when upgrading locally.
[15:45] <BlackDex> clean build ?
[15:45]  * BlackDex whistles ;)
[15:46] <BlackDex> i'm currently just doing dirty hacks to get vmware working with a charm directly instead of manually hacking the configs afterwards with ansible or `juju run` scripts
[15:47] <BlackDex> modified the nova-compute charm
[15:55] <TheAbsentOne> is relate the new word for add-relation? :O
[16:04] <rick_h_> BlackDex: just reuse the current space and use the path on the upgrade command.
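So the local-charm upgrade flow, per rick_h_'s suggestion, is to edit or rebuild the charm in the directory you originally deployed from and point the upgrade command back at that same path; Juju bumps the local revision itself. A sketch — the application name and path are hypothetical:

```shell
# Edit/rebuild the charm in place, then hand the same path to upgrade-charm:
juju upgrade-charm nova-compute --path ./nova-compute-vmware
```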
[16:05] <rick_h_> TheAbsentOne: a nicer alias heh
[16:05] <TheAbsentOne> it's kinda romantic xD
[16:08] <TheAbsentOne> rick_h_: if you have time, want to browse through these folders: https://github.com/Ciberth/gdb-use-case/tree/master/mininimalexamples
[16:08] <TheAbsentOne> Your detective eye will immediately see if something is wrong. I haven't tested/deployed them yet; I will in a bit, normally
[16:09] <rick_h_> TheAbsentOne: run charm proof on each?
[16:09] <TheAbsentOne> I will as soon as I'm on a ubuntu machine x)
[16:09] <BlackDex> rick_h_: it works
[16:10] <BlackDex> i can now deploy/upgrade my nova-compute-vmware charm
[16:10] <rick_h_> BlackDex: sweet
[16:10] <TheAbsentOne> rick_h_: you know by any chance an example charm using mongo?
[16:10] <TheAbsentOne> some sort of webapp or something
[16:10] <BlackDex> i really need to get more into the charms
[16:10] <rick_h_> TheAbsentOne: hmm...not really. There was the old mongodb cluster bundle
[16:10] <BlackDex> like pushing them to the charm-store etc..
[16:10] <rick_h_> So mongonwith itself
[16:10] <BlackDex> if i want
[16:11] <TheAbsentOne> hmm
[16:11] <rick_h_> BlackDex: yea handy even if you just use for yourself
[16:11] <BlackDex> indeed :)
[16:11] <BlackDex> local is nice, but git/launchpad/store is better
[16:11] <TheAbsentOne> and I was surprised that the mongodb database interface layer wasn't on the layer-index, this one: https://github.com/tengu-team/interface-mongodb-database
[16:13] <BlackDex> now i have to check if it all works of course and that openstack is able to use vmware, but that is the next step
[16:35] <kwmonroe> TheAbsentOne: you can query the store for charms that use mongo -- https://jujucharms.com/q/?requires=mongodb.  here's how telegraf uses mongo: https://git.launchpad.net/telegraf-charm/tree/reactive/telegraf.py#n386 and here's something similar for graylog: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py#n465
[16:37] <TheAbsentOne> gonna try this one out in a few hours kwmonroe: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mongo/mongo-proxy/reactive/mongo-proxy.py#L26
[16:37] <TheAbsentOne> should work right? :/
[16:40] <kwmonroe> yup TheAbsentOne, that'll work, but note that your request_mongodb function will fire every time a hook runs.  iow, you'll render that mongo template at least every 5 minutes (when update-status runs).
[16:41] <kwmonroe> TheAbsentOne: to prevent that, consider adjusting the decorator to "when(mongodb.connected); when_not(template.rendered); blah blah blah; set_flag(template.rendered)"
[16:42] <TheAbsentOne> hmm that's not good xD how would I solve that in a clean way setting up a flag and a when_not?
[16:42] <TheAbsentOne> ow lol xD
[16:42] <kwmonroe> you got it :)
[16:42] <TheAbsentOne> awesome!
[16:42] <kwmonroe> also, i said it will run on every hook invocation -- i meant it will run with every hook as long as mongodb.connected is set.  but you already knew that :)
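kwmonroe's suggested guard can be illustrated with a tiny self-contained toy. This is not real charms.reactive code — flags are modelled as a plain set and the decorator logic is inlined as if-checks — but it shows why the handler only does work once even though it's dispatched on every hook:

```python
# Toy model of the @when('mongodb.connected') / @when_not('template.rendered')
# guard: the handler is dispatched on every hook (e.g. update-status every
# 5 minutes), but only renders once, then gates itself with a flag.
flags = {"mongodb.connected"}   # set by the (simulated) relation
render_count = 0

def request_mongodb():
    """Stand-in for the charm handler that renders the mongo template."""
    global render_count
    # @when('mongodb.connected')  @when_not('template.rendered')
    if "mongodb.connected" not in flags or "template.rendered" in flags:
        return
    render_count += 1               # render the template (simulated)
    flags.add("template.rendered")  # set_flag('template.rendered')

for _ in range(5):  # five hook dispatches
    request_mongodb()
print(render_count)  # -> 1: rendered once, then gated by the flag
```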
[16:42] <TheAbsentOne> thx for saying that though, I didn't think about that at all
[16:43] <TheAbsentOne> I hope the collection might be of use to others too x)
[16:44] <kwmonroe> fo sho
[16:45] <kwmonroe> TheAbsentOne: you might also consider the case where mongodb is connected, but the connection string changes (perhaps a new mongo cluster member arrives and the address changes, or perhaps the port changes).  in that case, you may want to check for that in your request_mongodb function and only render if the mongodb relation data has changed... graylog does that here: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py#n472 <-- see it returns if the data hasn't changed since the initial invocation.
[16:47] <kwmonroe> TheAbsentOne: one other thing -- the tengu-team interface isn't in the layer index because there's already a mongodb interface that points to https://github.com/cloud-green/juju-relation-mongodb.  so when you include interface:mongodb in your layer.yaml, you'll get that one.
[16:48] <TheAbsentOne> correct but as I understood the tengu team interface was meant as a proxy between mongodb interface and another one
[16:48] <TheAbsentOne> however I'm not sure about the changing connection string
[16:48] <TheAbsentOne> if it changes my function won't run, right? So what use is that check
[16:49] <TheAbsentOne> if I add a when_not(template.rendered) flag that is
[16:50] <kwmonroe> right TheAbsentOne -- i was just saying there's 2 ways of handling that render function.  either do it once and set a flag + a when_not so the function doesn't execute again, or leave it the way it is and return if ! data_changed.
[16:51] <TheAbsentOne> ah, I understand, my bad. I'll go with the flag, I think it's more clear and it is more "reactive" programming
[16:51] <kwmonroe> the latter is more robust because it'll handle the case of a changed connection string.  the former means you'd be making an assertion that you never want to re-render that template as long as it's been done once.
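The alternative kwmonroe describes — re-render only when the relation data actually changed — can be sketched the same way. The `data_changed` here is a toy stand-in for the helper of the same name in charms.reactive, which remembers a hash of the last value seen under a key:

```python
import hashlib
import json

_seen = {}  # key -> hash of the last value observed

def data_changed(key, value):
    """Toy version of the charms.reactive data_changed helper."""
    digest = hashlib.sha1(json.dumps(value, sort_keys=True).encode()).hexdigest()
    changed = _seen.get(key) != digest
    _seen[key] = digest
    return changed

renders = []

def request_mongodb(conn_str):
    # Runs on every hook while connected, but only renders on real changes.
    if not data_changed("mongodb.conn", conn_str):
        return
    renders.append(conn_str)  # stand-in for rendering the template

request_mongodb("mongo-1:27017")  # first sighting: renders
request_mongodb("mongo-1:27017")  # unchanged: skipped
request_mongodb("mongo-2:27017")  # cluster address changed: re-renders
print(renders)  # -> ['mongo-1:27017', 'mongo-2:27017']
```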
[16:52] <TheAbsentOne> kwmonroe: if you have time would you mind checking my mysql folder too? It uses both the mysql-root and mysql-shared interface. I would love to hear your thoughts. Same remark about the rendering function I need to add a when_not
[16:52] <kwmonroe> yup, will do TheAbsentOne
[16:52] <TheAbsentOne> that's true too. hmm, since it's just a minimal example I'll stick to the flags, but I'll add a note I think
[16:52] <TheAbsentOne> first some cooking x)
[16:52] <kwmonroe> +1
[17:27] <rick_h_> zeestrat: sorry, missed your question. We're waiting for feedback atm. RC's are promised to be able to upgrade to final
[17:28] <rick_h_> zeestrat: we've got one oracle bug that'll cause us to do a rc2 this week I think?
[17:28] <rick_h_> zeestrat: and hopefully get some positive feedback and feel good calling it final
[21:14] <magicaltrout> kev i'm gonna get my chaps going on hadoop storage in a week or so
[21:14] <magicaltrout> before i do so, anything i need to know in advance other than "currently we don't support it"
[21:15] <magicaltrout> kwmonroe
[21:15] <kwmonroe> magicaltrout: first thing i think is probably top priority... don't expect my irc client to highlight "kev".
[21:15] <magicaltrout> i know
[21:16] <rick_h_> LoL
[21:16] <magicaltrout> not sure why i did that
[21:16] <magicaltrout> can you change your nick?
[21:16] <magicaltrout> thats the easy fix
[21:16] <rick_h_> magicaltrout: has a pet name for kwmonroe :p
[21:16] <magicaltrout> when the lovely kevin comes up in discussion in the office
[21:16] <magicaltrout> its usually kev
[21:17] <magicaltrout> I apologise
[21:17] <magicaltrout> when people refer to me, its usually dickhead, so to be honest you're a step above
[21:17] <rick_h_> Well we tech types do hate typing long variable names
[21:18] <kwmonroe> mag, second thing i would start with is right here: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L612.  we hard code hadoop_storage_dirs to those.  i feel like this would be a great place to replace with a "@when storage attached, hadoop_data_dirs = hookenv.storage_get(location)"
[21:19] <magicaltrout> cool will do
[21:19] <magicaltrout>  < kwmonroe> mag, second thing i would start with is right here: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L612.  we hard code
[21:19] <magicaltrout>                   hadoop_storage_dirs to those.  i feel like this would be a great place to replace with a "@when storage attached, hadoop_data_dirs = hookenv.storage_get(location)"
[21:19] <magicaltrout> meh
[21:19] <kwmonroe> yes yes, you have mastered the act of middle clicking your mouse
[21:19] <kwmonroe> now on to hdfs storage!
[21:20] <magicaltrout> sad times
[21:20] <kwmonroe> :)
[21:23] <kwmonroe> magicaltrout: feel free to schedule a hangout when you're ready -- i have some ideas that cory_fu and bdx have helped mull over.
[21:24] <magicaltrout> is that prior to or post embarking on storage kwmonroe ?
[21:32] <kwmonroe> magicaltrout: you mean cory_fu bdx and myself having ideas?  that's pre-embarking.  we had a meeting about what it would look like.  at its simplest, the charms that cared (namenode / datanode) would define 2 storage bits in their metadata.yaml -- data1 and data2.  the operator would attach relevant storage to those charms and we would use that location in lieu of that hard coded part in layer-apache-bigtop-base.
[21:32] <admcleod> 'big data' ?
[21:32] <kwmonroe> ban admcleod
[21:33]  * admcleod flee
[21:33] <kwmonroe> hmph, not working
[21:33] <magicaltrout> someones gotta do data
[21:33] <magicaltrout> its not all openstack :P
[21:34] <kwmonroe> magicaltrout: that approach assumes a fixed set of storage in metadata.yaml -- and that's not ideal.  what if i wanted 5 disks instead of just 2?  what if i wanted a pre-configured mdraid device?  what if i wanted xyz?
[21:34] <magicaltrout> yeah makes sense
[21:34] <kwmonroe> so it has flaws, but it gets us *at least* what we have now with the ability to provision storage outside of "mkdir -p /data/1 /data/2", which is all we do now.
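The idea — replace the hard-coded dirs with the locations of attached storage — can be sketched with the hook tools stubbed out. `storage_list`/`storage_get` mirror the charmhelpers hook-tool wrappers of the same names, but are passed in here as plain callables so the sketch runs standalone; the paths are made-up:

```python
def hadoop_storage_dirs(storage_list, storage_get):
    """Use attached Juju storage locations if any, else the current defaults."""
    dirs = [storage_get("location", sid) for sid in storage_list()]
    return dirs or ["/data/1", "/data/2"]  # today's hard-coded fallback

# Simulated hook tools: two attached "data" storage instances.
dirs = hadoop_storage_dirs(
    storage_list=lambda: ["data/0", "data/1"],
    storage_get=lambda attr, sid: "/srv/" + sid,
)
print(dirs)  # -> ['/srv/data/0', '/srv/data/1']
```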
[21:35] <magicaltrout> don't see the problem with that :P
[21:36] <magicaltrout> i'm moving house this week and next, but it'd be good to get a call with me, you and my 2 interns soon to be full time employees i hope, so they can look really scared and we can talk storage
[21:36] <magicaltrout> are you around the week of the 25th?
[21:37] <kwmonroe> yup, i'll iron a tie for maximum professionalism
[21:38] <kwmonroe> that gives me 2 weeks to learn how to tie a tie
[21:39] <magicaltrout> cool, i'll see what days they're around and ship over some rough ideas, i can tell you thursday is out cause it appears england will be losing to belgium in the world cup that night ;)
[23:55] <wallyworld> vino_: here is where the charm (zip) uploads are processed https://github.com/juju/juju/blob/develop/apiserver/charms.go#L205
[23:57] <wallyworld> and here is where we do the processing to update the charm doc in state https://github.com/juju/juju/blob/develop/apiserver/charms.go#L390
[23:57] <vino_> wallyworld: ok i will have a look.
[23:57] <wallyworld> in those places we look inside the zip to to get the metadata etc, so can extract version as well
[23:58] <wallyworld> and use that to update the charm doc
[23:58] <vino_> ok. nothing has to be done on the client side.