=== redir is now known as redir_afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[08:07] urulama, morning - I'm working through https://github.com/juju/charmstore-client/issues/61 whilst beisner is on leave
[08:07] urulama, is there a nice way I can login once, and then propagate the usso credentials to a number of machines so that they can all push and publish charms?
[08:21] jamespage: hm. if you copy the store-usso-token in ~/.local/share/juju across machines, this should do the trick
[08:22] we need to provide you with better instructions on how to hook up CI
[08:29] urulama, ok tried that, but charm whoami still thinks I'm not logged in on machines other than the one I generated that file on
[08:30] jamespage: hm. seems like ~/.go-cookies are still used. try copying that file
[08:33] urulama, trying that now
[08:34] urulama, +1 that fixed me up
[08:34] thanks for the help
[08:44] tinwood, thedac, gnuoy, beisner: OK charm push on change landing for master and stable branches is now live.
[08:44] jamespage, excellent!
[08:45] thanks to urulama for helping get the auth sorted out
[08:46] +1 ta
[08:46] urulama, excellent too! :)
[08:46] i think the proper way would be to create a "bot" user to be used by CI
[08:47] tinwood: ty, it has some rough edges still, but, we'll get there :)
[08:47] gnuoy, https://review.openstack.org/#/c/320817/
[08:47] urulama, agreed - we already have a bot user
[08:48] gnuoy, if you have time :-)
=== Guest6887 is now known as BrunoR
[08:54] Hi all, I have a question about openstack charms, do you have any plan to develop charms for projects in the big tent other than the core projects? e.g. magnum, murano ... etc
[08:57] godleon, we're working on a way to make it easy to charm said projects - and the current team may pick off a few of those, but we'd love to have other contributors who know and use those projects working on the charms :-)
[08:58] godleon, we find that works best rather than the 'read the docs -> write the charm' approach :-)
[08:59] jamespage: thanks! I will spend some time to read the docs.
[09:00] godleon, still wip but worth a look - https://github.com/openstack-charmers/openstack-community/blob/master/openstack-api-charm-creation-guide.md
[09:00] jamespage: And I have another question about nova-compute with LXD, is it possible to have two virt-types (KVM & LXD) simultaneously on the same OpenStack platform?
[09:01] godleon, yes - but not on the same servers
[09:01] godleon, one sec - let me pick out the reference for that
[09:04] jamespage: wow, you are so kind. :)
[09:05] godleon, http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md
[09:05] np :-)
[09:09] jamespage: can I manage multiple hypervisors' workloads in one Horizon portal?
[09:09] godleon, yes
[09:12] jamespage: How about the performance comparison between LXD and docker, have you ever done this kind of test?
[09:12] no
[09:12] godleon, they target different spaces ...
[09:12] system containers vs application containers
[09:12] mutable vs immutable
[09:13] jamespage: ok, I didn't have this concept, sorry about that.
[09:16] jamespage: ok, I will dig into the LXD and multiple hypervisor architecture to evaluate if it can help me solve the problems in my project. Really appreciate the information. :)
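For reference, the credential-propagation workaround that got CI working above amounts to copying two files from a machine where `charm login` has already been run. A minimal sketch, assuming passwordless ssh/scp access to the CI machines; the host names are placeholders, and only the two file paths come from the conversation:

    #!/bin/bash
    # Copy charm store credentials from this (already logged-in) machine to CI workers.
    set -e
    CI_HOSTS="ci-worker-1 ci-worker-2"   # placeholder host names

    for host in $CI_HOSTS; do
        ssh "$host" 'mkdir -p ~/.local/share/juju'
        # USSO token used by the charm command
        scp ~/.local/share/juju/store-usso-token "$host":.local/share/juju/
        # macaroon cookie jar -- copying this was what actually fixed it above
        scp ~/.go-cookies "$host":
        # verify the copied credentials are picked up
        ssh "$host" charm whoami
    done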
[09:16] godleon, I did a talk about this in austin - let me dig out the video
[09:17] jamespage: great
[09:17] godleon, https://youtu.be/u511z0BGnw4
[09:18] godleon, enjoy my shirt :)
[09:19] godleon, the demo is missing due to some video problems - I'll ping gsamfira to see if he's re-recorded the demo for us yet...
[09:19] jamespage: haha, will do. Many thanks!
[09:19] jamespage: wow, good! Thanks!
[09:20] godleon: should I leave my email here?
=== urulama is now known as urulama|swap
[09:50] jamespage, do you happen to know if the juju native bundle unit placement syntax is documented anywhere?
[09:50] gnuoy, probably but not sure where
[09:50] gnuoy, what are you trying to do?
[09:50] lxc:nova-compute=1
[09:51] jamespage, fwiw I know it's lxd now
[09:51] gnuoy, that does not work
[09:52] gnuoy, you have to target the actual machines
[09:52] jamespage, ok, I can live with that, what's the syntax for targeting actual machines
[09:52] ?
[09:52] gnuoy, read this - https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
[09:52] jamespage, ta
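The placement style jamespage points at above targets machines declared in the bundle rather than service units. A minimal sketch of what that looks like in a native juju 2.0-style bundle; the service names, charm ids and machine numbers are illustrative only and are not taken from openstack-base:

    # Illustrative native-bundle placement: units go to machines declared in the
    # bundle, and containers are placed as lxd:<machine> (lxc: in older bundles).
    cat > example-bundle.yaml <<'EOF'
    machines:
      "0":
        series: xenial
      "1":
        series: xenial
    services:
      nova-compute:
        charm: cs:xenial/nova-compute
        num_units: 2
        to: ["0", "1"]
      mysql:
        charm: cs:xenial/mysql
        num_units: 1
        to: ["lxd:0"]
    EOF
    juju deploy ./example-bundle.yaml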
[10:44] gnuoy, could you take a peek at https://code.launchpad.net/~james-page/charm-helpers/newton-opening/+merge/295693
[10:44] ?
[11:51] jamespage, I fixed up the cinder.conf commit message. I never realized you could edit it directly within Gerrit.
=== BlackDex_ is now known as BlackDex
[12:39] gnuoy, https://review.openstack.org/#/c/320817/
[12:39] all good to go
[12:40] jamespage, approved your ch change too
[12:41] gnuoy, ta
[12:54] tinwood, I really like the clean look of the barbican charm. The only thing that jars a little for me is the setup_amqp_req, setup_database and setup_endpoint being part of barbican.py. They seem standard and not specific to barbican at all. Can we push them down into the layer or module?
[12:55] * tinwood hadn't considered that.
[12:56] gnuoy, I'll take another look. I'm sure there's a dependency on one of them in the charm, but the other two could probably move.
[13:00] tinwood, do you agree in principle to moving them?
[13:04] gnuoy, setup_amqp_req() and setup_database() are both independent of the charm and could happily go elsewhere. setup_endpoint() seems more problematic, in that it needs access to some of the charm's properties. I'm wondering whether that might be better elsewhere?
[13:06] tinwood, it was originally part of the charm class wasn't it? do you feel uncomfortable with it going back there?
[13:11] gnuoy, I'm not sure. The `keystone` object is an interface class instance. It would be possible to have a `register_endpoints` with a sig register_endpoints(keystone : `keystone-interface`) which then turns around and calls the register_endpoints() on the interface, assuming all OpenStackCharms will have service_type, region ... admin_url.
[13:13] tinwood, It's safe to say all charms calling register_endpoints will have tose attributes
[13:13] s/tose/those/
[13:13] I'm actually in favour of pushing the setup_amqp_req() and setup_database() back to the handlers file in the charm.
[13:14] gnuoy, but I did want to keep all the handlers together.
[13:14] hmm. It's because reactive forces us to put some functions in the reactive directory, but we're putting charm code in lib/
[13:16] gnuoy, we need a better way to do feature discovery in the cloud - https://review.openstack.org/#/c/320972/1
[13:16] I have this type of config option...
[13:17] have/hate
[13:17] godleon - i've done some benchmarking in terms of starting containers but that's not much of a telling story. we both launch containers silly fast if you have the images cached. but jamespage was spot on with them targeting different spaces so it's comparing apples to pineapples.
[13:18] jamespage, I take it cinder-backup doesn't register an endpoint in keystone?
[13:18] gnuoy, hmm
[13:18] it might
[13:18] gnuoy, however...
[13:18] if only there was some sort of service catalogue...
[13:18] gnuoy, I'd not want to query the service catalog for this; it should be done using charm semantics
[13:49] cory_fu, kjackal, kwmonroe: I've got a question about Bigtop automagic, using the layer-apache-bigtop-spark charm as an example: Does Bigtop know to tell puppet to install spark simply because there is a "spark" entry in the "hosts" dict that we pass to render_site_yaml? Or is there additional configuration info in the charm that I'm missing?
[13:51] petevg: It's actually the roles that define what gets installed: https://github.com/juju-solutions/layer-apache-bigtop-spark/blob/master/lib/charms/layer/bigtop_spark.py#L28
[13:52] cory_fu: got it. That makes sense, now that I think about it.
[13:52] Thank you :-)
[13:54] petevg: The Bigtop Puppet scripts have two methods for selecting what gets installed: components or roles. Roles are more fine-grained, and let you specify things more precisely, while components infer a lot more based on what hosts are provided
[13:56] petevg: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/site.yaml has a list of the components you can choose from, while https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/manifests/cluster.pp has a (not 100% complete, it seems) listing of roles
[13:57] Ooh, useful. Will bookmark that, and also link in the README.
[13:58] Unfortunately, there doesn't seem to be much in the way of documentation of this stuff outside of the Puppet scripts themselves.
[13:59] Adding some would be a good patch to submit, I think
[14:32] Is it possible to deploy an openstack bundle on Juju with the manual provider? I can see from the store that the default bundle requires MAAS.
[14:35] marcoceppi stop spamming me! :P
[14:40] arosales: they've fixed my CS login, if it makes you feel any better, I'm also listed in 0 teams ;)
[15:02] magicaltrout: stop opening bugs on the wrong project ;)
[15:02] magicaltrout: glad they got you sorted
[15:02] just doing what arosales told me :P
[15:02] arosales: stop opening bugs on the wrong project ;)
[15:02] magicaltrout: I hear the ~charmer group is a good group to be in
[15:02] * marcoceppi labored over that issue template
[15:03] marcoceppi: ?
[15:03] Ctrl -a
[15:03] magicaltrout: :sadpanda:
[15:03] hehe
[15:04] arosales: charm-tools is only for proof, build, inspect, layers, create, and a few other things. Everything else (login, whoami, push, pull, grant, publish) is charmstore-client
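For reference, the split marcoceppi describes above, as a quick cheat sheet. The command names come straight from that message; the two example invocations are placeholders:

    # charm-tools (authoring):    charm build, charm proof, charm create, charm layers, charm inspect
    # charmstore-client (store):  charm login, charm whoami, charm push, charm pull, charm publish, charm grant
    charm build       # build the layer in the current directory (charm-tools)
    charm whoami      # check your charm store identity (charmstore-client)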
[15:04] magicaltrout: you should at least be in charm contributor and apachefoundation
[15:05] marcoceppi: how is any normal human supposed to know that?
[15:05] marcoceppi: I install charm tools
[15:05] arosales: well, because I created an issue template that tells people this
[15:05] I look at charm tools
[15:05] templates are for wimps
[15:05] Version
[15:05] arosales: https://github.com/juju/charm-tools/issues/new
[15:05] * marcoceppi tries so hard
[15:06] marcoceppi: you know that rule about website loading speeds
[15:06] it also applies to placeholder text ;)
[15:06] okay, so trim the fat, got it
[15:06] remove the first para for a start
[15:06] I know you like thanking people, but there is a time and a place... namely a juju charmer summit
[15:07] yeah, I was just thinking that as well
[15:07] you also have a typo in para 2
[15:07] agains isnt' a word
[15:08] marcoceppi: how hard is it for us to move the bug?
[15:08] arosales: I already did
[15:08] hyper links don't work, so just remove the hyperlink markup
[15:08] marcoceppi: so not very hard in general then
[15:09] arosales: it is very very annoying, because gh doesn't have a way to "move" issues
[15:09] and only admins of both repos can do it, so a select few in general
[15:09] But something a person can do
[15:10] arosales: it's super SUPER dirty, ugly, and not friendly
[15:10] I am +1 for the template
[15:10] But
[15:10] magicaltrout: I've been mulling over the idea of `charm bug` or something that can collect 90% of this from the command line
[15:10] Let's not make it cumbersome on someone giving feedback
[15:10] magicaltrout: not sure if people would actually use it
[15:11] I updated the issue template as well
[15:11] ideally we have one place like Juju to submit all bugs and then triage from there
[15:11] Make it easy for
[15:11] Contributors
[15:12] But that's ideal
[15:12] I don't think charm bug is a bad idea, if i'm already in the command line, copying that stuff out of my terminal is clearly harder than me typing charm bug
[15:12] arosales: sure, if we do that on Launchpad.
[15:12] because you can not move issues around in gh
[15:13] but we can in lp, but now you're subjecting people to lp
[15:13] Well gh repo admin can I think
[15:13] But that's technical
[15:14] All I am saying is take the burden off someone trying to give feedback
[15:14] don't subject people to LP, most people will have a GH login, not so with LP. Finding anything in LP, including your own code is a pain in the ass :)
[15:14] +1 on the template helping them
[15:14] arosales: you can not move issues between repos, pretend I never said that
[15:14] Get to the right spot
[15:15] * marcoceppi spins up a third, bugzilla site ;)
[15:16] juj deploy cs:bugzilla juju-charm-website-massive-bug-aggregator
[15:16] +
[15:16] u
[15:16] My feedback is make it easy on contributors even if it isore back end woek
[15:17] .. If it is more back end work
[15:18] +1 for templates to help folks get to the right place, but if that fails let's just take care of it on the back end, even if copy paste
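`charm bug` is only an idea being floated above, not an existing subcommand. Purely as a hypothetical sketch of what such a helper might gather before a user opens an issue; the script and output file names are made up, and the commands it calls are ordinary existing ones:

    #!/bin/bash
    # Hypothetical "charm bug"-style helper: collect the environment details the
    # issue template asks for, so they can be pasted into a new GitHub issue.
    {
        echo "## Environment"
        lsb_release -ds                        # Ubuntu release
        juju version 2>/dev/null || juju --version
        dpkg-query -W charm-tools 2>/dev/null  # installed charm-tools version, if any
        echo "## Steps to reproduce"
        echo "(fill in)"
    } > bug-report.md
    echo "Paste bug-report.md into https://github.com/juju/charm-tools/issues/new"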
[15:24] magicaltrout: would you mind if I demo'ed your mesos bundle?
[15:24] not at all arosales
[15:24] just bear in mind currently it doesn't support > 1 master
[15:25] which i appreciate defeats the point slightly, but there is a fix in the works for that, it's just wiring as opposed to anything technical
[15:25] magicaltrout: attending mesoscon in Denver next week and would like to present your bundle at a lightning talk
[15:25] yeah i was thinking about showing up to MesosCon europe with a talk if they fancied it
[15:26] magicaltrout: could you point me at your latest bundle?
[15:26] ah yeah i've not published it yet :P
[15:26] should probably do that
[15:27] https://github.com/buggtb/dcos-master-charm / https://github.com/buggtb/dcos-agent-charm / https://github.com/buggtb/dcos-interface
[15:27] currently
[15:27] magicaltrout: thanks
[15:28] i'll try and get the multi-master finished this week and get it published
[15:28] magicaltrout: oh no worries on working on it this weekend on my account
[15:29] https://www.irccloud.com/pastebin/7DJdv9mG
[15:30] it's like you're talking to me in a webpage....
[15:30] magicaltrout: have you seen the work SaMnCo and data art have done on mesos
[15:30] I've not seen it, but he tried to hook us all up pre-apachecon and we all said hi then it went quiet
[15:30] Dang phone client
[15:31] i mailed SaMnCo the other day to try and reboot it, but i've noticed if I mail him with more than one issue a day, the others don't get a response, so that went unanswered ;)
[15:31] I'll try to follow up with SaMnCo
[15:31] magicaltrout: could you
[15:31] Add marcoceppi and I to cc?
[15:31] I'm not sure what they are up to, but it would be cool to get all of this stuff aligned, hipster tech and all that
[15:33] will do
[15:34] i also need to get a talk submitted to Oslo Devops Days, so I'll probably submit something similar to what I did in that ApacheCon talk with some more hadoop-y stuff to pad it out
[16:00] magicaltrout: good stuff, thanks
[16:20] petevg: Thank you for the README PR. Docs are something (I, at least) have to pay more attention to. Good work!
[16:21] kjackal: thanks. Nice to hear that it's appreciated :-)
=== frankban is now known as frankban|afk
[16:44] cory_fu: what's wrong with me? why can't i see an issues tab here? https://github.com/juju-solutions/layer-apache-bigtop-nodemanager
[16:45] and why does going to https://github.com/juju-solutions/layer-apache-bigtop-nodemanager/issues redirect me to PRs?
[16:45] That repo doesn't have issues enabled, apparently. Probably some weirdness because of how it was forked
[16:45] We should probably re-own it anyway, so it's easier to make PRs
[16:46] kwmonroe: Anyway, issues are enabled now
[16:46] gracias!
=== redir_afk is now known as redir
[17:38] openstack-charmers: when using nova-lxd, am I confined to using local storage, or does ceph work in some way as a backend for nova-lxd that I'm unaware of?
[17:49] gnuoy, you about or eod?
[17:55] Is there a way to get the full log from a unit from the Juju CLI?
[18:00] hatch - yep
[18:00] juju debug-log -i --replay
[18:00] you'll need to pass the unit in the -i flag
[18:00] hatch if you don't want to tail after it's done, pass -F
[18:00] or, juju debug-log --help for the full list of awesome
[18:01] lazyPower: ahhh that's it - I didn't read the docs for --replay because I didn't want to 'replay' them, I just wanted to see them
[18:01] heh
[18:01] thanks :)
[18:01] ye :) that'll get ya sorted
[18:01] cheers
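Putting the answer above into a single example. The unit name is a placeholder, and the exact form the --include argument expects can differ between juju releases:

    # Replay the full log for one unit from the beginning, then keep following it
    juju debug-log --replay -i unit-mysql-0
    # Depending on the juju version, the unit may instead be written as mysql/0:
    #   juju debug-log --replay -i mysql/0
    # `juju debug-log --help` lists the remaining filter and tail options.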
[18:20] how do I destroy a service with units in error?
[18:20] hatch juju remove-machine --force #
[18:21] hatch then juju destroy-service will cleanly remove the service/charm from the controller
[18:21] that or juju resolved mything/0 all the way down as it fail-cycles until it's removed
[18:21] thanks lazyPower
[18:22] saying to destroy a service and getting no feedback is not very good
[18:22] i don't know that i'm following what you're calling attention to. How did you not get feedback?
[18:22] I can spam destroy-service and because units are in error nothing happens
[18:22] and I get no feedback
[18:23] so I just sit here wondering why Juju is broken :)
[18:23] ah, well, do you expect the command to block until it's removed?
[18:23] at the very least I'd expect a message telling me why it's not doing what I told it to do
[18:23] it's doing the right thing in my mind. it's a one-shot declaring a state. "Remove this thingy!!" and it's trying its best to get there, and if it fails to do so it reports that.
[18:24] that's a disconnect between blocking commands and the fire/forget style state change you've defined.
[18:24] no it doesn't
[18:24] it doesn't report anything
[18:24] juju status sure does
[18:24] I guarantee it doesn't
[18:25] https://gist.github.com/hatched/9a374d20d007e019d3ec2045cf7edc1f
[18:25] that's funny. workload-status: error
[18:25] where does that say that I have destroyed the service about 10 times?
[18:25] message: hook failed 'install'
[18:25] yep the hook failed - so why can't I destroy the service?
[18:26] lazyPower: I'm with hatch. juju destroy-service, IMO, should just take down errored units. who cares if a unit is in an error state, I'm explicitly removing it
[18:26] you're making an assumption about what it should be doing
[18:26] and i don't agree with your assumptions
[18:26] "i said destroy service, and it's still here D:"
[18:26] what if you want to debug that while it's in life: dying?
[18:26] so you prefer for the command to just return with no message to the user
[18:26] and root out what the cause was?
[18:26] no, i want you to admit that you're conflating two issues
[18:27] I definitely agree that if we KNOW that destroy-service is not going to work, we tell the user
[18:27] I said to Juju to destroy the service - I want it to destroy the service
[18:27] thing is
[18:27] juju status --format=yaml
[18:27] the life is going to be dying
[18:27] but it's not dying
[18:27] it received and is working towards that destructive state
[18:27] it's going to sit there forever
[18:27] juju destroy-service foo
[18:27] the fact there's a hook error is independent of whatever state change you just told it to take
[18:27] ERROR: can't destroy service, unit in error state: foo/0
[18:27] ^^^ this
[18:27] 100x this
[18:28] if we want to be pedantic and conservative and not destroy an errored unit automatically.... at least tell the user we're not going to do it
[18:28] thing is, it IS going to do it if you resolve it and it doesn't further error
[18:28] if you're not going to do what the user intends to do then at least tell them why
[18:28] it's not "uncommitting" that destroy directive
[18:28] WARNING: unit foo/0 in errored state, will not be destroyed until resolved.
[18:29] that makes more sense
[18:29] i'm +1 to that
[18:29] throw me a bone, FFS. The user shouldn't need to know the internal details of exactly how everything works... we should help them to use juju
[18:29] YES!
[18:29] but i still stand that it's 2 separate issues.
[18:30] I'm fine with it being two separate issues - do we or do we not destroy errored units automatically? and, if we don't, we need to tell the user explicitly.
[18:30] And honestly, I wish destroy-service were synchronous, like destroy-model is now
[18:31] maybe with a flag to make it async... or vice versa.... I shouldn't ever need to type watch juju status in order to figure out WTF is going on in juju
[18:31] +1
[18:31] i'm going to leave this alone
[18:31] lol
[18:31] what?
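To keep the mechanics being argued about here in one place, the two routes lazyPower suggests at the top of this exchange for a service whose units are stuck in an error state, as a sketch with placeholder service, unit and machine names:

    # Route 1: clear the hook error so the queued destroy can proceed;
    # repeat as the unit fail-cycles until it goes away.
    juju resolved foo/0

    # Route 2: forcibly remove the machine hosting the errored unit,
    # then remove the (now unit-less) service.
    juju remove-machine 3 --force
    juju destroy-service foo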
[18:31] this is going off the rails into a really bad gripe session
[18:31] lol true, sorry :)
[18:32] I'm going to file a bug about the destroy-service messaging
[18:32] natefinch: I watch juju status all the damn time
[18:32] It is the only way to figure out what is going on behind the curtain
[18:32] That wizard is up to some crazy stuff folks, I constantly watch juju status and tail the debug log
[18:32] mbruzek: that's my point. You shouldn't have to. Like the way destroy-model is synchronous and actually tells you what it's doing.
[18:32] what's fun is when something errors, you destroy it, then figure out it was like a race condition and the charm can recover... it goes to started, then immediately destroys itself
[18:32] that's the best
[18:33] lazyPower: sorry I told it to do something - I expect it to listen :)
[18:33] I don't much care what it wants
[18:33] again. it listened
[18:33] you weren't prepared for the sequence of papercuts afterwards
[18:34] this is like going to a restaurant, placing an order with the server and then the server walks away
[18:35] you sit there forever waiting for your food
[18:35] it was sitting in the kitchen, but you forgot to tell them to deliver it
[18:35] not that you knew you had to say that
[18:35] because the server didn't tell you that
[18:35] look i'm not saying you're wrong, i'm saying the way you're conveying it and griping is wrong, because you're telling me it didn't do something that it is celary doing
[18:35] *clearly
[18:35] how is it doing it? It isn't destroying the service
[18:35] did you juju remove-machine # --force?
[18:36] I did
[18:36] the service should have gone then
[18:36] it's dead
[18:36] has no units
[18:36] it did
[18:36] is in state: dying
[18:36] then how did it not destroy?
[18:36] so destroy-service isn't destroy service
[18:36] it's "mark for destroy sometime in the future when a predetermined list of requirements have been satisfied - oh but you don't know what those are"
[18:37] that's quite the command ;)
[18:37] it seems to me that if destroy-service had a --force flag, it would help a lot.
[18:37] that's not true either
[18:38] hatch - this. x1000 is this conversation
[18:38] https://xkcd.com/386/
[18:38] lol except this _is_ important
[18:38] users need to be informed about what's going on
[18:38] and if it's not doing what they said to do they need to know why
[18:40] Has anyone opened a bug about this problem? With the problem statement, and expected result. I think others would like to see this and possibly comment on whether it is working as designed or a problem that will get fixed next release.
[18:40] mbruzek: here is one I filed earlier https://bugs.launchpad.net/juju-core/+bug/1568160
[18:40] Bug #1568160: destroying a unit with an error gives no user feedback
[18:41] I actually totally forgot I filed that one
[18:41] haha
[18:41] I file a lot of bugs
[18:41] :)
[18:42] mbruzek: I think there are really two points here - 1) why errors block removal actions and 2) lack of user feedback in any case
[18:52] hatch: natefinch: lazyPower: I commented on the bug ^. I think the actual problem is the charm code had a bug, and would not destroy because a hook returned non zero and that is literally how Juju hooks work.
[18:52] * lazyPower smirks
[18:52] https://bugs.launchpad.net/juju-core/+bug/1585750
[18:52] Bug #1585750: Destroying a service in error gives no feedback
[18:55] mbruzek: your comment doesn't really apply
[18:56] mbruzek: if you want to fix the error then fix the error
[18:56] you wouldn't destroy it if you wanted to fix it would you?
[18:56] in the charm code rather than fix juju
[18:56] stop isn't executed without first destroying the service or removing a unit
[18:56] marcoceppi: ping
[18:56] you may not know it's a problem until you've already issued the destroy stanza, and by your proposal - there is no way to really debug it. it's just LOL bye.
[18:56] magicaltrout: yo
[18:56] Actually hatch that is a valid test case when developing a charm. I want to make sure that the charm goes down cleanly
[18:57] hey, 2 things
[18:57] a) warning: bugs-url and homepage are not set. See set command.
[18:57] hmm nm
[18:57] hatch: There is no other way to call the stop hook that I know of.
[18:57] ignore that one
[18:57] b) https://jujucharms.com/u/apachesoftwarefoundation/ why did the charm I published earlier vanish?
[18:57] magicaltrout: are you logged in
[18:58] i am logged in
[18:58] mbruzek: This is the first time I've ever heard of any reason to not clean up on error
[18:58] hatch: If the charm *-broken relations, or stop hooks, did something really important, I would TOTALLY want to know if they worked or not.
[18:59] mbruzek: but put yourself in the user's shoes - how are they supposed to know any of this?
[18:59] Like backing up data to S3 or doing other important clean up things
[18:59] i fully had a charm there earlier that I went to look at and stuff
[18:59] magicaltrout - it's likely permissions
[18:59] mbruzek: there is literally 0 feedback or help given to the user
[18:59] magicaltrout - pastebin the output of charm show cs:~apachefoundation/mything
[18:59] hatch: I *am* thinking of the users. I would propose that you are using the wrong command.
[18:59] If there are errors in the hook, remove-machine
[19:00] or resolve the errors
[19:00] mbruzek: In the long term, 99.9999% of users will not be charm authors
[19:00] ok so I'm supposed to know that when I try to destroy a service, if there is an error I need to remove the machines
[19:00] but I have to actually check juju status
[19:00] often
[19:00] to know what commands to run
[19:00] marcoceppi: http://paste.ubuntu.com/16691257/
[19:01] magicaltrout Read:
[19:01] - apachesoftwarefoundation
[19:01] that's the issue
[19:01] magicaltrout charm grant cs:~apachesoftwarefoundation/trusty/joshua-full --acl=read everyone
[19:01] the reason you saw it was because you were logged into the store, and had access to "read" the charm
[19:01] if you checked it in private mode, you would not have seen it.
[19:02] hatch you and natefinch, when a hook returns a non-zero return code Juju pauses to let the admin fix the stuff. If the stop hooks were doing some legit backup sequence, the admin would totally want to know if that is not working
[19:02] you are correct lazyPower it is perms related.... that said jujucharms.com says I'm logged in
[19:02] ooo
[19:02] weird
[19:02] mbruzek: fine, but how do they know that?
[19:02] and once I granted permissions it appeared, but i'm on the homepage and can see my face
[19:02] which tends to indicate i am logged in
[19:02] how do they know to resolve the units?
[19:02] anyhoo, weird, but works, thanks a lot
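The charm-visibility fix above comes down to the read ACL on the published charm. A sketch of the push, publish and grant flow under a group namespace; the charm name is a placeholder, and only the grant form is taken from the log:

    # Push the local charm into the group namespace (prints the new revision id)
    charm push . cs:~apachesoftwarefoundation/trusty/mycharm

    # Publish that revision
    charm publish cs:~apachesoftwarefoundation/trusty/mycharm-0

    # Without a public read ACL, only logged-in group members can see the charm
    charm grant cs:~apachesoftwarefoundation/trusty/mycharm --acl=read everyone

    # The Read:/Write: lists in the output are the ACLs to check
    charm show cs:~apachesoftwarefoundation/trusty/mycharm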
[19:03] hatch - clairvoyance obviously
[19:03] lazyPower: this is how my experience has been, yes
[19:03] as i said, feedback is fine
[19:03] hatch: Charms throw all kinds of errors, resolve is the mechanism to force Juju to proceed
[19:03] I think that if the unit is in an error state *before* calling juju destroy-unit, we should warn the user and ask them if they want to destroy it anyway
[19:03] but i do take issue with wholesale deletion of a service in error state
[19:03] unless you make it flag based where that's not default behavior
[19:03] as cmars suggested which is nice middle ground
[19:04] cmars thanks for being the voice of reason in this
[19:04] reason? me? what? :)
[19:04] you're the only one who piped in with adding --i-mean-it
[19:04] :)
[19:04] which is nice middle ground
[19:04] keep default behavior, give the hatches of the world an option to wholesale delete
[19:04] I really don't understand why there is so much resistance to working hard on user experience with the CLI
[19:04] there is a problem, let's work on a fix
[19:05] not drag our heels because this is how it is
[19:05] hatch - because you're campaigning for something that's really destructive and ops people will not appreciate it
[19:05] i think mbruzek has a good point, but i've certainly screwed up a charm with repeated `juju upgrade-charm --force-units ... ; juju resolved --retry` that I have no confidence its state reflects any kind of debuggable reality
[19:05] yup
[19:05] we've all been there but that's an edge case
[19:06] there's also the case where you're trying out different versions of charms, not really developing on them, just evaluating them
[19:06] I think the problem might just be that a hook error can mean two wildly different things - one is "something external is wrong, you have to do this concrete step to fix it before I can continue" and the other is "there's a bug in the hook, good luck!"
[19:06] oops, i got the old precise version.. nooo
[19:07] lazyPower: I strongly disagree that leaving the user to "just have to know" what to do is a good idea
[19:07] with reactive it's easier to be resilient in the face of externalities
[19:07] you should always do what they expect to do
[19:07] hatch - i didn't say don't give the user feedback
[19:07] i said don't wholesale delete my thing unless you are absolutely sure that's what i want
[19:07] again, conflating 2 issues
[19:08] please stop and re-read my exact statement of my stance in the 3 lines above
[19:08] because it's really exhausting repeating myself on this
[19:08] hmm
[19:08] lazyPower: I just want user feedback with instructions on how to do what the user is intending
[19:08] lazyPower: what do you think the user wants to do when they type 'destroy-service' ?
[19:08] it's certainly not to sit there
[19:08] ERROR: unit is in an errored state, can't be removed. To remove anyway, use juju destroy-unit foo/0 --force.
[19:08] ^ this
[19:28] lazyPower, good! thanks for your information. And..... the image caching is for LXD or for both?
[19:28] lxd requires you to import the image before you can even launch the container, so to be fair, both
[19:33] lazyPower, hmm...........it makes sense.
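On the image-cache point above: plain LXD wants an image present in its local store before a container can be launched. A minimal sketch of warming that cache by hand; the alias and container name are placeholders, and this is about standalone LXD usage rather than anything nova-lxd does internally:

    # Copy the xenial image from the public ubuntu: remote into the local image store
    lxc image copy ubuntu:16.04 local: --alias xenial

    # Subsequent launches from the cached image are fast
    lxc launch xenial demo-container
    lxc image list    # show what is cached locally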
[19:42] marcoceppi, lazyPower - what's the right way to fix this? https://github.com/juju-solutions/layer-basic/pull/70
[19:42] ryebot: why are we just seeing this now?
[19:43] marcoceppi still trying to figure that out - best guess is some package deps changed and are pulling in 8.1.1 now
[19:43] cynerva, you experimented with this - can you comment?
[19:44] marcoceppi ryebot - not much info, i'm seeing the issue too, on xenial but not trusty
[19:44] Cynerva: sure, but when I did a xenial deploy last week I didn't get this error
[19:45] marcoceppi yeah - seems like something changed today
[19:45] so weird
[19:45] marcoceppi yeah, last few hours, happened to both of us independently at roughly the same time
[19:45] xenial has always had 8.1.1
[19:46] marcoceppi yeah, but I think it wasn't being pulled in before
[19:46] marcoceppi so we pulled in 7.1.2 first
[19:46] perhaps you used virtualenv: true, and that locked your pip version in a virtualenv instead of using system pip?
[19:46] charm build and layer-basic haven't changed in a bit
[19:46] or perhaps that's the path forward?
[19:47] * lazyPower is not positive, but thinks that would have some good results, isolating from system deps and creating your own python tree of goodness
[19:47] lazyPower: it's annoying to have to do that, but possible workaround
[19:47] well we are getting kind of funky with python across series these days
[19:47] considering xenial is py3 default, trusty is py2 default, and we're somewhere in the middle of all that
[19:48] * marcoceppi does tests
[19:48] lazyPower: py3 is on both trusty and xenial
[19:48] that's not a problem, it's the versions of pip that's problematic
[19:48] I can't remember why we had to upper bound pip for trusty, but we did
[19:50] huh, maybe not. It looks like I just upper bound it for no good reason
[19:50] ryebot: I think your pull request is OK actually
[19:51] * ryebot reopens his PR victoriously!
[19:58] \o/
[20:01] marcoceppi - i think at the time there was a dependency issue like the frequent path.py woes. i don't recall the exact details, but it was a temporary solution to a temporary problem
[20:03] lazyPower: possibly
[20:08] marcoceppi: what's the latest beta?
[20:08] beta7
[20:09] is there a ppa for that?
[20:11] ppa:juju/devel
[20:11] aye ta
[20:11] or grab charmbox:devel
=== mramm_ is now known as mramm
=== natefinch is now known as natefinch-afk
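For the beta question at the end, a sketch of installing juju from the devel PPA mentioned above. The PPA name comes from the log; the apt package name is an assumption and may be juju or juju-2.0 depending on the series:

    sudo add-apt-repository -y ppa:juju/devel
    sudo apt-get update
    # package name is an assumption -- check `apt-cache search juju` after adding the PPA
    sudo apt-get install -y juju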