[00:34] <cholcombe> firl, oh.  you might want to join #maas
[00:34] <cholcombe> firl, i don't know the answer to that question
[00:35] <firl> It’s more about the charms not working, that’s why I was asking here
[00:35] <cholcombe> ah ok
[01:15] <lazyPower> firl - there's some work on that front w/ the latest charms
[01:15] <lazyPower> firl - you can use a juju action to prepare a node for maintenance aiui
[01:55] <lborda> lazyPower, o/
[01:56] <lazyPower> o/
[01:56] <lborda> lazyPower, do you have an idea whether there's a function to return back to the CLI when a juju set is invoked with the wrong value? I thought log() was it but it's not... I'm doing this in keystone_context.py https://pastebin.canonical.com/151407/ (I'm looking at the charmhelpers here but I've no clue)
[01:57] <marcoceppi> magicaltrout: nice re: tutorial @ apachecon. I know a few of the big data team will be there, I'll work with jcastro to kick off an email with you and them to make sure we have the right amount of support for you
[01:58] <lazyPower> lborda OH! you mean extended status feature?
[01:58] <lazyPower> lborda - as in status-set blocked "Incorrect config value"
[01:58]  * lazyPower was trying to recall what you meant by return to cli
[02:00] <lborda> i mean, for example, juju service keystone log-level=BLA where BLA is not in ['WARNING'...]; I want to block the config-changed behaviour before it screws up the service
[02:00] <lborda> lazyPower, ^
[02:00] <lborda> lazyPower, juju service set keystone log-level=bla
[02:00] <lazyPower> yep, status-set blocked and return.
[02:04] <lborda> lazyPower, really ? do you have an example, is this supposed to be done in the context ? ex keystone_context.py ?
[02:05] <lazyPower> lborda i'm not really familiar with the innards of the keystone charm so thats a hard one for me to answer, sorry :(
[02:08] <lborda> lazyPower, tks i will do some research here on other charms that could have implemented config value checking
[02:09] <lazyPower> lborda - if thats all you're wanting to do is just validate config vs the value, you use status-set to pipeline a message back thats viewable in `juju status`, the logic you take from there varies from charm to charm.
[02:10] <lazyPower> and using log() is fine too, but the juju logs tend to be spammy and its easy to miss one liners in there
[02:11] <lborda> lazyPower, agree, but I'd think that status-set or the checking I am trying to do should be executed before config-changed or status-set... I just want to make sure the user does not enter a weird parameter and leave the service in an unknown state
[02:12] <lazyPower> unless you exit > 0, checking it before, during, or after isn't going to help; you can still get into a nebulous state. Suggested you pick a sensible default, and if the user typos it, it defaults to like WARN or something.
[02:12] <lazyPower> that way the service is always configured properly, it has a sensible default, and if they typo it, it'll do its best to hand-hold them. that or let it blow up so it's obvious it happened.
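lazyPower's validate-and-fall-back suggestion could be sketched as a small pure-Python helper. This is a hedged sketch: `validate_log_level` and the set of levels are illustrative, not actual keystone charm code, and in a real config-changed hook you would pair the fallback with charmhelpers' `status_set("blocked", ...)` (or the `status-set` hook tool) so the bad value is surfaced in `juju status`:

```python
# Sketch of the "sensible default" validation lazyPower suggests.
# validate_log_level is a hypothetical helper, not keystone charm code;
# a real hook would also call status_set("blocked", "invalid log-level")
# when the raw value fails validation.
VALID_LOG_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}

def validate_log_level(value, default="WARNING"):
    """Return value if it's a known log level, else fall back to default."""
    level = (value or "").strip().upper()
    return level if level in VALID_LOG_LEVELS else default
```

A config-changed hook could then render the template with `validate_log_level(config("log-level"))`, so a typo never reaches the running service.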
[02:14] <lazyPower> lborda - when you say keystone_contexts.py - thats a template rendering context, right?
[02:14] <lborda> lazyPower, yeah, I am not sure if I am being too zealous, but I see that checking happens when it's a boolean, so I am assuming I can do that too...
[02:14] <lborda> $ juju service set keystone use-syslog=Tasdfasdfasdf
[02:14] <lborda> ERROR option "use-syslog" expected boolean, got "Tasdfasdfasdf"
[02:15] <lborda> lazyPower, yes it's the template
[02:15] <lazyPower> you cant do anything client side, as its a string input it accepts anything that is a string
[02:16] <lazyPower> there was a feature request to support ENUM at one point, not sure what happened there, as it doesn't exist in juju as we know it today.
[02:17] <lborda> lazyPower, hummm I see. it would be nice if we could specify the acceptable values in the config.yaml file so juju would know beforehand...
[02:18] <lazyPower> Yeah, any validation will have to be done in the charm code itself. It's not the most ideal thing, but it certainly gets the job done.
[02:20] <lborda> lazyPower, agree... the boolean check is not done by the charm itself imo... I searched by the output message and couldn't find it... it's the agent's response in this case...
[02:20] <lazyPower> lborda - right, the client will stop you from putting in string types in ints and bools, but a string data type is pretty broad and validation is up to the charm
[02:21] <lazyPower> s/charm/charm author/
[02:21] <lazyPower> There's quite a few charms that don't do validation and blindly take the config value and populate the template with it. Assuming you know what you're doing it.
[02:22] <lazyPower> s/doing it/doing with it/
[02:23] <lborda> lazyPower, agree, well maybe I am being too careful... :)
[02:24] <lazyPower> If it reduces the number of papercuts end users feel with the charms, you're doing the work of a devops humanitarian and we appreciate you for the consideration :)
[02:24] <lborda> lazyPower, i will use the log() anyway, at least the user will know what happened... and will look into whether I can modify the status-set to include the error too
[02:24] <lborda> lazyPower, lol yeah tks saving the world one bit at the time :)
[02:31] <lborda> lazyPower, tks! EOD bye
[05:43] <suchvenu> Hi
[05:44] <suchvenu> I have created a new interface by name db2 and have written the provides and requires parts of it. While deploying the charm I am getting an error: AttributeError: 'db2Provides' object has no attribute 'set_ready'
[05:45] <suchvenu> Any idea what this error means?
[10:31] <jacekn> hello. Can one of the charmers please tell me what is the delay on https://bugs.launchpad.net/charms/+bug/1538573 ?
[10:31] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1538573>
[13:14] <magicaltrout> i changed my launchpad id
[13:15] <magicaltrout> what happens in charm world. Will the charm change url as well once a new build goes through?
[13:15] <rick_h__> magicaltrout: it gets very upset
[13:15] <rick_h__> magicaltrout: it doesn't get info on the rename so it doesn't know who you are
[13:16] <rick_h__> magicaltrout: there's work to try to sync the two systems better, but it's in the design state atm
[13:16] <magicaltrout> well i'm not bothered about the old ones going missing
[13:16] <magicaltrout> no one uses them
[13:21] <magicaltrout> oh well whats done is done! :)
[13:22] <rick_h__> magicaltrout: sorry, you've found one of our problem spots we're just starting to work on
[13:29] <marcoceppi> magicaltrout: just reupload 'em all ;)
[13:30] <magicaltrout> marcoceppi: i pushed a new bzr push to the new repo, does that count?
[13:30] <magicaltrout> clearly not being able to see the build process, i'm just guessing :)
[13:30] <marcoceppi> magicaltrout: it should eventually count
[13:30]  * marcoceppi can't wait for charm push to exist
[13:30] <magicaltrout> also I filed a bug for cards the other day and i have another, but I forgot which repo
[13:31] <magicaltrout> which is it?
[13:32] <magicaltrout> hmm well juju-cards wasn't where I sent the bug to
[13:33] <magicaltrout> oh
[13:33] <magicaltrout> jujucharms.com
[13:33] <magicaltrout> which is right?
[13:35] <marcoceppi> magicaltrout: what's the issue? (curious while I figure out where to put it)
[13:35] <marcoceppi> magicaltrout: https://github.com/CanonicalLtd/jujucharms.com
[13:35] <magicaltrout> marcoceppi: if i click on a card the url is wrong and just dumps me on a 404
[13:35] <magicaltrout> was the original bug
[13:35] <magicaltrout> because my charm isn't in the recommended namespace i guess
[13:36] <marcoceppi> magicaltrout: that shouldn't be a problem.
[13:36] <magicaltrout> http://www.meteoriteconsulting.com/spinning-up-pentaho-data-integrations-quickly-with-juju/
[13:36] <magicaltrout> well its there
[13:36] <magicaltrout> you tell me :P
[13:36] <magicaltrout> and the same on another non wordpress page i'm working on
[13:36] <magicaltrout> so its not like WP is munging the url
[13:37] <marcoceppi> magicaltrout: oh bother.
[13:37] <marcoceppi> magicaltrout: I see, it's not respecting the username, wtf
[13:37] <magicaltrout> nope
[13:37] <magicaltrout> https://github.com/CanonicalLtd/jujucharms.com/issues/216
[13:37] <magicaltrout> but i'd assume the fix is pretty trivial
[13:37] <marcoceppi> Yeah, but it'll take a few days to get out
[13:38] <magicaltrout> well its no biggie, just something I noticed
[13:38] <marcoceppi> magicaltrout: I have an alternative implementation of cards, an earlier release, if you wanted to make sure those worked until the new release
[13:38] <magicaltrout> but also, css descriptors like footer are pretty common on existing websites, the cards css should get a prefix or something
[13:38] <magicaltrout> was bug 2
[13:39] <marcoceppi> tjat
[13:39] <marcoceppi> that's also a good point
[13:39] <magicaltrout> i'll dump it into jujucharms issues as juju-cards has none
[13:39] <magicaltrout> and you lot can figure it out
[13:40] <magicaltrout> okay 216 and 225 are my cards bugs
[13:41] <magicaltrout> ooh i lied
[13:41] <magicaltrout> 225 is a bit different
[13:42] <magicaltrout> scratch that i'll close it
[13:42] <magicaltrout> 216 it is then
[13:47] <marcoceppi> magicaltrout: I found the repo
[13:49] <urulama> marcoceppi, jamespage: fyi, we had turned off ingestion in the morning due to ceph problems. that seems to be resolved now and we'll reenable it again. just in case you're missing any new charms in the store
[13:50] <marcoceppi> urulama: thanks for the info magicaltrout ^^
[13:50] <magicaltrout> aye
[13:50] <magicaltrout> can someone answer jacekn as well
[13:50] <magicaltrout> he's asked about his charm on a couple of days now and doesn't know what's happening :)
[13:51] <magicaltrout> https://bugs.launchpad.net/charms/+bug/1538573
[13:51] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1538573>
[14:46] <magicaltrout> marcoceppi: you got any idea about my leadership election q on the ML
[14:47] <magicaltrout> I'm trying to get these PDI tests ironed out so I can do other stuff, as they've taken 4 days or something already :)
[14:47] <marcoceppi> magicaltrout: yeah, I was looking at it thinking about how we could solve this. I think adding the feature to amulet is the way to go
[14:48] <marcoceppi> magicaltrout: basically, you should be able to at anytime query self.d.unit['service'].leader() to get back the UnitSentry for that leader
[14:48] <marcoceppi> for that service*
[14:49] <magicaltrout> yeah but in a test context, I don't see how that helps me wait for leader election to be detected by juju and rerun
[14:50] <magicaltrout> it's like amulet should have something like a wait_until() type function, where we could pass in various core things the platform does.
[14:55] <marcoceppi> magicaltrout: well, if you take out a leader, everytime you run the leader command in amulet it'll pull fresh again
[14:56] <marcoceppi> magicaltrout: so you'd basically: stand up environment, test it's idle, remove the leader, test it's idle, get new leader, continue with testing
[14:58] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/tests/04-test-leaderelection.py#L24
[14:58] <magicaltrout> well you can see what I've started trying here
[14:58] <magicaltrout> I want 1 node, check the status, and get the leader ip from my message
[14:58] <magicaltrout> then stand up 2 more and make sure the charm hasn't changed the leader ip
[14:58] <magicaltrout> then i want to switch off unit 0
[14:59] <magicaltrout> and check that the ip has changed, and also verify some configs on the units
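The "remove the leader, re-query until a new one appears" flow marcoceppi describes can be expressed as generic polling logic. This is a sketch under assumptions: `get_leader` stands in for whatever leader lookup amulet ends up exposing (e.g. the proposed `self.d.unit['service'].leader()`), so the names here are illustrative, not amulet API:

```python
import time

def wait_for_new_leader(get_leader, old_leader, timeout=300, interval=5):
    """Poll get_leader() until leadership moves off old_leader, or time out.

    get_leader is a caller-supplied callable (e.g. wrapping an amulet
    leader lookup); it should return the current leader, or None while
    election is still in progress.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        leader = get_leader()
        if leader is not None and leader != old_leader:
            return leader
        time.sleep(interval)
    raise TimeoutError("no new leader elected within %ss" % timeout)
```

In a test you would kill unit 0, then call this with the old leader's identity and assert the returned leader (and its reported IP) differs.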
[15:06] <marcoceppi> sure
[15:06] <marcoceppi> let me see if I can add this into amulet to help smooth out the experience
[15:08] <marcoceppi> but first, breakfast
[15:11]  * magicaltrout needs to avoid the house today as I've just discovered that the 1 year old didn't fancy a nap today and the mrs is fuming.....
[16:40] <verterok> hi, I'm having a problem with the charm build step
[16:40] <verterok> created a new interface, and added it to interfaces.juju.solutions: http://interfaces.juju.solutions/interface/conn-check
[16:41] <verterok> but when trying to build the charm, it fails to fetch it: "build: No fetcher for url:  https://git.launchpad.net/~ubuntuone-hackers/charms/+source/conn-check-interface"
[16:43] <verterok> after looking at the code, looks like this is the problem: https://github.com/juju/charm-tools/blob/master/charmtools/build/fetchers.py#L73
[16:43] <verterok> only github is supported for hosting interfaces/layers?
[16:48] <marcoceppi> verterok: no, git is supported for lp as well, stub has some layers/interface in lp git
[16:48] <verterok> marcoceppi: yeah, saw stub layer/iface, so I used lp git
[16:49] <verterok> marcoceppi: then looks like charmtools/build/fetchers.py has a bug: https://github.com/juju/charm-tools/blob/master/charmtools/build/fetchers.py#L73 (or I'm not understanding the code)
[16:52] <verterok> marcoceppi: looks like Icey already filed a bug about this: https://github.com/juju/charm-tools/issues/124
[16:54] <jcastro> stokachu: ditto the ghost charm
[16:54] <stokachu> jcastro: ok
[16:54] <marcoceppi> verterok: I'll see if we can patch that
[16:55] <stokachu> i can do those tonight probably
[16:55] <jcastro> stokachu: actually, if you could just make a general effort to submit all your layered stuff to the proper stuff that would be swell
[16:55] <jcastro> stokachu: what I'd like to do is ping each upstream for each layered charm we have
[16:55] <stokachu> jcastro: you're swell
[16:55] <jcastro> to have them check it out.
[16:55] <stokachu> jcastro: sure thing man, ill get my charms pushed up
[16:55] <stokachu> all my layers are on interfaces.juju.solutions already
[16:56] <jcastro> yeah, I mean more for the end-usery ones, like ghost, etc.
[16:56] <stokachu> jcastro: ok cool, yea ill get them submitted
[16:56] <jcastro> sort of like how you trust something when you see it on github.com/projectname/project instead of github.com/~someguyyouneverheardof/project
[16:57] <stokachu> jcastro: i have blind faith
[16:58] <verterok> marcoceppi: thanks
[16:58] <marcoceppi> verterok: interesting, it seems it should work...
[16:58] <marcoceppi> verterok: https://github.com/juju/charm-tools/blob/master/charmtools/fetchers.py#L121
[16:58] <verterok> I can workaround it with a local interfaces path
[16:59] <verterok> marcoceppi: right, but build/fetchers.py#L73 is truncating the url
[16:59] <marcoceppi> verterok: I see now, it expects a flatter namespace for gitLP, stub has his at git.launchpad.net/<project>
[16:59] <verterok> ah, will change mine and try again
[16:59] <verterok> thanks
[17:00] <marcoceppi> verterok: that line isn't used, L74 is the one that does the actual job, not sure what u is used for
[17:00] <marcoceppi> verterok: we need to make the fetch pattern smarter
[17:00] <marcoceppi> verterok: but you don't have to put layers under the charms distro
[17:23] <verterok> marcoceppi: should I move it out of charms/+source?
[17:25] <verterok> marcoceppi: also, is there a convention on where/how to put source vs built charms (when using composer)?
[17:27] <marcoceppi> verterok: yes. first, it's called build, not composer, anymore; second, we suggest layers be their own project and built charms just get uploaded to the store. there's a new charmstore cli coming out (the charm command) that allows you to just "push" a charm to the store, so no more ingestion
[17:30] <verterok> marcoceppi: ah, cool. thanks for the details
[17:31] <verterok> marcoceppi: one last question, same for interfaces? a project for each?
[17:33] <marcoceppi> verterok: basically. we've been prefixing them as layer-<NAME> and interface-<NAME> as a convention
[17:33] <marcoceppi> verterok: but yeah, they're all their own software projects
[17:33] <verterok> got it, thanks
[17:39] <jcastro> lazyPower: how finished do you consider yout redmine layered charm?
[17:39] <lazyPower> not at all
[17:39] <lazyPower> its a learning tool in its current shape
[17:42] <lazyPower> jcastro - i have one thing to call out about the status of the redmine layered charm - TAL at the readme for the  layer repository  - https://github.com/chuckbutler/redmine-layer
[17:44] <marcoceppi> lazyPower: hah!~
[19:37] <marcoceppi> cory_fu: are there any implications to this merge or is it transparent to users?
[19:37] <marcoceppi> https://github.com/juju-solutions/charms.reactive/pull/58
[19:41] <cory_fu> It should be transparent
[19:44] <marcoceppi> cory_fu: love the mirgration and unit tests <3
[19:45] <cory_fu> :)  The migration will still leave deployed charms suffering from the issue, but at least they'll work the same as they did before
[19:45] <jcastro> http://askubuntu.com/questions/743934/trying-to-upgrade-juju-from-1-25-3-to-1-25-4
[19:46] <jcastro> can someone help with this one?
[19:47] <rick_h__> jcastro: https://launchpad.net/~juju/+archive/ubuntu/proposed
[19:47] <rick_h__> jcastro: since it's just in proposed you need either upload tools or to update the streams I believe.
[19:48] <marcoceppi> jcastro: are you answering?
[19:49] <jcastro> marcoceppi: no, I am on 2.0
[19:50] <jcastro> rick_h__: he's using the stable ppa though, why would he need to use the proposed streams?
[19:51] <rick_h__> jcastro: oh hmm, not sure then
[19:54] <marcoceppi> rick_h__ jcastro he has 1.25.3 installed, he probably saw the .4 email
[19:54] <marcoceppi> rick_h__: jcastro happy to reply
[19:56] <jcastro> marcoceppi: yeah I am unsure what to say
[19:56] <marcoceppi> jcastro: cool, np
[19:56] <jcastro> marcoceppi: is the solution using the proposed streams?
[19:56] <marcoceppi> jcastro: kind of
[19:56] <jcastro> I feel like users should never have to care about streams unless they want to
[19:56] <jcastro> just as a showerthought
[20:00] <alexisb> jcastro, I dont disagree with you
[20:03] <marcoceppi> jcastro: it's not a streams issue
[20:03] <marcoceppi> well, not directly
[20:19] <marcoceppi> jcastro: http://askubuntu.com/questions/743934/cant-upgrade-juju-from-1-25-3-to-1-25-4-due-to-missing-tools/743991#743991
[20:24] <marcoceppi> jcastro: we could clean that up for the docs as well
[20:34] <marcoceppi> aisrael tvansteenburgh charmhelpers 0.7.0 released with resource-get support
[20:36] <beisner> thedac, looking good on https://review.openstack.org/#/c/290032/2 ... with jamespage's +1 pending the full amulet, i think it's clear to land.  lmk if you're ready for that.
[21:03] <marcoceppi> lazyPower: I think you submitted that pr against the wrong repo
[21:03] <lazyPower> haha
[21:03] <lazyPower> WHOOPS
[21:03] <marcoceppi> <3
[21:04] <lazyPower> oh man look at that commit stream of the diff too
[21:25] <thedac> beisner: I think that is ready. What is the process for that? I thought it was automated
[21:31] <beisner> thedac, yep it is.  once it's got a code-review +2 and a workflow +1, it'll go through upstream gate and merge if all is well.
[21:32] <thedac> Oh, it *is* merged. \o/
[21:33] <beisner> yeah! :-)
[21:35] <thedac> beisner: and you should be happy about This one being merged https://review.openstack.org/#/c/289535/ Should fix your mitaka dashboard woes
[21:39] <beisner> thedac, yes, i am.  thanks for the fixes!
[21:45] <magicaltrout> aww crap my karaf submission was a tutorial as well.... 6 hours of apachecon tutorials to write
[21:45] <magicaltrout> worst day of my life
[21:49] <lazyPower> wow, that seems...excessive
[21:49] <marcoceppi> magicaltrout: hah, damn dude
[21:49] <magicaltrout> indeed
[21:50] <magicaltrout> that'll teach me to push the tutorial button
[21:50] <lazyPower> We might be able to recycle that material though
[21:50] <lazyPower> a focused big data primer
[21:50] <lazyPower> magicaltrout - would <3 a sneak peek at those as you put em together
[21:50] <magicaltrout> yeah well i suspect there will be a bunch of bundles coming out the backend of this process
[21:51] <lazyPower> no pressure, and if that request adds to it, tell me to pound sand :)
[21:51] <magicaltrout> na, i usually crowd-build my presentation/tutorial stuff anyway
[21:52] <magicaltrout> because there's a lot more knowledgeable people than me who won't be attending ;)
[21:55] <marcoceppi> tvansteenburgh: why was fetchers.py updated?
[21:56] <tvansteenburgh> marcoceppi: ported improvements from bundletester
[21:56] <marcoceppi> tvansteenburgh: ah, cool, it's not actually used for pull-source though?
[21:56] <tvansteenburgh> correct
[21:56] <tvansteenburgh> well actually it is, for charms
[21:57] <tvansteenburgh> the new CharmStoreDownloader will be used
[21:57] <marcoceppi> tvansteenburgh: hum, we should deprecate that for charm pull when that lands instead
[21:57] <tvansteenburgh> marcoceppi: maybe, i don't grok the difference
[21:58] <marcoceppi> tvansteenburgh: well, we're duplicating functions, charm pull will down the download and extract from the source, no need to have a method that may drift from that
[21:58] <marcoceppi> i can english.
[22:05] <tvansteenburgh> magicaltrout: fyi http://pythonhosted.org/amulet/amulet.html
[22:10] <magicaltrout> oooh bloomin marvelous
[22:10] <magicaltrout> thanks tvansteenburgh
[22:11] <tvansteenburgh> magicaltrout: sure thing. no doubt they can be improved. contributions welcome
[22:12] <marcoceppi> tvansteenburgh: I've got some feedback for you on pull-source, overall +1 but some UX stuff is rough
[22:12] <magicaltrout> yeah tvansteenburgh when I've written a life times worth of tutorials, finished the pdi and gitlab charms, released saiku 3.8 and finished that charm, remind me and i'll have a play ;)
[22:12] <tvansteenburgh> :)
[22:12] <tvansteenburgh> marcoceppi: thanks, will address
[22:13] <marcoceppi> <3
[22:16] <magicaltrout> so here's a question tech boys.... if you were doing tutorials and had content to distribute for classroom stuff
[22:16] <magicaltrout> is virtualbox still the way to go?
[22:16] <magicaltrout> (esp if wifi and stuff is dodgy)
[22:30] <lazyPower> hey cory_fu - i had an idea today that i'd like to run by you
[22:30] <cory_fu> Sure thing
[22:31] <lazyPower> in my experience with layer:docker - i've managed to use the layer in both a principal and a subordinate role. I find that often when i use the subordinate role i'm connecting to the principal charm anyway, and i incur a bit of a penalty running the install on each subordinate
[22:31] <lazyPower> would it be good or accepted practice to make that a layer.yaml option, and short-circuit the install routine based on that config? that way for each sub that i know already has docker installed
[22:32] <lazyPower> it just skips along, and the layer retains default behavior to run that routine - so anyone including it always gets what they're expecting?
[22:32] <cory_fu> That, or intelligently detect if docker is already installed
[22:32] <cory_fu> Since you want to be idempotent anyway
[22:33] <cory_fu> But yeah, a layer option for "don't install by default" is reasonable.
[22:33] <lazyPower> that's my thought, as using it as a baseline to get layer:basic, and the libs it ships with (charms.docker, et al) is nice, but i don't need the extra bullets of many subordinates trying to control the version of docker on the host
[22:34] <cory_fu> Yeah
[22:34] <lazyPower> i like it, its juju centric, and minimal work for me to implement. thanks o/
[22:34] <cory_fu> :)
[22:35] <marcoceppi> lazyPower: you mean as an option defined in the layer or something that the reactive framework would read?
[22:35] <lazyPower> i mean define it in layer.yaml as an option
[22:36] <lazyPower> so any subordinate you build on top of it, has a snippet in the readme you can copy/paste into your subordinate and skip the whole install bits, and override the provides/requires as needed. let me link you to the beginnings of me thinking this - it came up while i was writing the logspout subordinate
[22:37] <lazyPower> so layer:docker can be used in either a principal or subordinate setting - https://github.com/chuckbutler/layer-logspout/blob/master/layer.yaml - and this little deletes line is all thats required to make a subordinate layer connect to a principal layer:docker based charm.
[22:37] <cory_fu> lazyPower: Another option, and I don't know if this is a good one, is to check metadata.yaml and see if the current charm is defined as subordinate and drive the install based on that
[22:37] <marcoceppi> lazyPower: sounds reasonable, if it's using the existing layer options
[22:37] <cory_fu> That's kind of a terrible idea, though
[22:37] <cory_fu> Since, someday, you might want a subordinate that does install docker
[22:38] <marcoceppi> I now have more questions than answers
[22:38] <tvansteenburgh> marcoceppi: can you tell me what version of path.py you have in whatever env you tested pull-source in?
[22:38] <lazyPower> if i updated layer:docker to include layer.yaml based options, to tune that behavior - the output charm packs extra bloat in terms of files, but the runtime doesn't incur the penalty of trying to do *anything* to manage the runtime version.
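As a sketch, the layer.yaml option and cory_fu's detect-if-already-installed idempotence check can combine into one guard. Hedged: the option name `skip-install` is hypothetical (layer:docker may spell it differently), and `layer_opts` stands in for the dict charms.reactive reads from layer.yaml:

```python
import shutil

def docker_install_needed(layer_opts):
    """Decide whether the install routine should run at all.

    layer_opts is the options dict from layer.yaml; "skip-install" is a
    hypothetical option name used for illustration. Even without the
    option set, an existing docker binary short-circuits the install
    (cory_fu's idempotence suggestion).
    """
    if layer_opts.get("skip-install", False):
        return False
    return shutil.which("docker") is None
```

A subordinate that knows its principal already manages docker would set the option in its layer.yaml; every other consumer keeps the default install behaviour.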
[22:38] <marcoceppi> tvansteenburgh: ugh, mf path.py
[22:38] <tvansteenburgh> lol, you can thank whit for that
[22:39] <lazyPower> :D
[22:39] <marcoceppi> tvansteenburgh: path.py==7.4
[22:39] <cory_fu> I'm a fan of path.py, but we have had a spot of bother with the versions of it, haven't we?
[22:39] <lazyPower> yep
[22:39] <marcoceppi> cory_fu: it's a constant source of pain for me.
[22:40] <lazyPower> it bit us in charm land, and its bit marco in tooling land
[22:40] <tvansteenburgh> marcoceppi: okay thanks.
[22:40] <lazyPower> and packaging come to think of it
[22:40] <tvansteenburgh> marcoceppi: are we stuck with that version for packaging reasons?
[22:41] <marcoceppi> I lose sleep over three things: pondering the meaning of life, my neighbors TV, and path.py
[22:41] <tvansteenburgh> marcoceppi: if so that's fine, i'll just patch the prob. upgrading would fix it though
[22:41] <marcoceppi> tvansteenburgh: so, 8.1 is in Xenial
[22:41] <marcoceppi> 7.4 is what I packaged for trusty, but we can move to 8.1 if it fixes this
[22:42] <tvansteenburgh> marcoceppi: yeah your last error doesn't happen with 8.1
[22:42] <tvansteenburgh> 8.1.2 is what i have
[22:42] <marcoceppi> tvansteenburgh: that's in xenial, that's what we'll use http://packages.ubuntu.com/search?keywords=python-path
[22:42] <marcoceppi> someone finally packaged it
[22:43] <marcoceppi> tvansteenburgh: please update requirements.txt
[22:43] <tvansteenburgh> marcoceppi: okay thanks, will do
[22:43]  * marcoceppi remakes and retests
[22:45] <marcoceppi> tvansteenburgh: that works