=== redir is now known as redir_afk | ||
=== scuttle|afk is now known as scuttlemonkey | ||
=== scuttlemonkey is now known as scuttle|afk | ||
jamespage | urulama, morning - I'm working through https://github.com/juju/charmstore-client/issues/61 whilst beisner is on leave | 08:07 |
jamespage | urulama, is there a nice way I can log in once, and then propagate the usso credentials to a number of machines so that they can all push and publish charms? | 08:07 |
urulama | jamespage: hm. if you copy the store-usso-token in ~/.local/share/juju across machines, this should do the trick | 08:21 |
urulama | we need to provide you with better instructions on how to hook up CI | 08:22 |
jamespage | urulama, ok tried that, but charm whoami still thinks I'm not logged in on machines other than the one I generated that file on | 08:29 |
urulama | jamespage: hm. seems like ~/.go-cookies are used still. try copying that file | 08:30 |
jamespage | urulama, trying that now | 08:33 |
jamespage | urulama, +1 that fixed me up | 08:34 |
jamespage | thanks for the help | 08:34 |
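
(For reference — a minimal sketch of the credential copy urulama describes above. The `ci-worker` hostname is a placeholder; the file paths are the ones mentioned in the conversation.)

```bash
# Copy charm store credentials from a machine where `charm login` has already
# been run to another machine that needs to push/publish charms.
# "ci-worker" is a placeholder hostname.
ssh ci-worker 'mkdir -p ~/.local/share/juju'
scp ~/.local/share/juju/store-usso-token ci-worker:.local/share/juju/
scp ~/.go-cookies ci-worker:
# Verify on the target machine -- it should now report the logged-in user.
ssh ci-worker 'charm whoami'
```
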
jamespage | tinwood, thedac, gnuoy, beisner: OK charm push on change landing for master and stable branches is now live. | 08:44 |
tinwood | jamespage, excellent! | 08:44 |
jamespage | thanks to urulama for helping get the auth sorted out | 08:45 |
jamespage | +1 ta | 08:46 |
tinwood | urulama, excellent too! :) | 08:46 |
urulama | i think proper way would be to create a "bot" user to be used by CI | 08:46 |
urulama | tinwood: ty, it has some rough edges still, but, we'll get there :) | 08:47 |
jamespage | gnuoy, https://review.openstack.org/#/c/320817/ | 08:47 |
jamespage | urulama, agreed - we already have a bot user | 08:47 |
jamespage | gnuoy, if you have time :-) | 08:48 |
=== Guest6887 is now known as BrunoR | ||
godleon | Hi all, I have a question about openstack charms, do you have any plan to develop charm for projects in big tent other than core projects? e.g. magnum, murano ... etc | 08:54 |
jamespage | godleon, we're working on a way to make it easy to charm said projects - and the current team may pick off a few of those, but we'd love to have other contributors who know and use those projects working on the charms :-) | 08:57 |
jamespage | godleon, we find that works best rather than the 'read the docs -> write the charm' approach :-) | 08:58 |
godleon | jamespage: thanks! I will spend some time to read the docs. | 08:59 |
jamespage | godleon, still wip but worth a look - https://github.com/openstack-charmers/openstack-community/blob/master/openstack-api-charm-creation-guide.md | 09:00 |
godleon | jamespage: And I have another question about nova-compute with LXD, is it possible to have two virt-types (KVM & LXD) simultaneously on the same openstack platform? | 09:00 |
jamespage | godleon, yes - but not on the same servers | 09:01 |
jamespage | godleon, one sec - let me pick out the reference for that | 09:01 |
godleon | jamespage: wow, you are so kind. :) | 09:04 |
jamespage | godleon, http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md | 09:05 |
jamespage | np :-) | 09:05 |
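
(A rough sketch of the multi-hypervisor layout discussed above — the README linked by jamespage is the authoritative reference. The application names are placeholders, and the `virt-type` option is assumed from the nova-compute charm's configuration.)

```bash
# Two nova-compute applications on separate machines, one per hypervisor type.
# Names are placeholders; see README-multihypervisor.md (linked above) for the
# full set of relations and supporting charms.
cat > nova-lxd-config.yaml <<'EOF'
nova-compute-lxd:
  virt-type: lxd    # assumed nova-compute charm option; the default is kvm
EOF
juju deploy nova-compute nova-compute-kvm
juju deploy nova-compute nova-compute-lxd --config nova-lxd-config.yaml
```
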
godleon | jamespage: can I manage multiple hypervisor's workload in one Horizon portal? | 09:09 |
jamespage | godleon, yes | 09:09 |
godleon | jamespage: How about the performance comparison between LXD and docker, have you ever done this kind of test? | 09:12 |
jamespage | no | 09:12 |
jamespage | godleon, they target different spaces ... | 09:12 |
jamespage | system containers vs application containers | 09:12 |
jamespage | mutable vs immutable | 09:12 |
godleon | jamespage: ok, I didn't have this concept, sorry about that. | 09:13 |
godleon | jamespage: ok, I will dig into the LXD and multiple hypervisor architecture to evaluate if it can help me solve the problems in my project. Really appreciate the information. :) | 09:16 |
jamespage | godleon, I did a talk about this in Austin - let me dig out the video | 09:16 |
godleon | jamespage: great | 09:17 |
jamespage | godleon, https://youtu.be/u511z0BGnw4 | 09:17 |
jamespage | godleon, enjoy my shirt :) | 09:18 |
jamespage | godleon, the demo is missing due to some video problems - I'll ping gsamfira to see if he's re-recorded the demo for us yet... | 09:19 |
godleon | jamespage: haha, will do. Many thanks! | 09:19 |
godleon | jamespage: wow, good! Thanks! | 09:19 |
godleon | godleon: should I leave my email here? | 09:20 |
=== urulama is now known as urulama|swap | ||
gnuoy | jamespage, do you happen to know if the juju native bundle unit placement syntax is documented anywhere? | 09:50 |
jamespage | gnuoy, probably but not sure where | 09:50 |
jamespage | gnuoy, what are you trying to do? | 09:50 |
gnuoy | lxc:nova-compute=1 | 09:50 |
gnuoy | jamespage, fwiw I know it's lxd now | 09:51 |
jamespage | gnuoy, that does not work | 09:51 |
jamespage | gnuoy, you have to target the actual machines | 09:52 |
gnuoy | jamespage, ok, I can live with that, what's the syntax for targeting actual machines | 09:52 |
gnuoy | ? | 09:52 |
jamespage | gnuoy, read this - https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml | 09:52 |
gnuoy | jamespage, ta | 09:52 |
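
(An illustrative fragment of the machine-targeting placement jamespage points at — the openstack-base bundle.yaml linked above is the real, working example; the charms, series, and machine numbers here are made up. Newer Juju releases spell the container type `lxd:` rather than `lxc:`.)

```bash
# Deploy a bundle that places units on explicit machines rather than using
# the old deployer-style "lxc:nova-compute=1" syntax.
cat > placement-example.yaml <<'EOF'
machines:
  "0":
    series: trusty
services:
  nova-compute:
    charm: cs:trusty/nova-compute
    num_units: 1
    to: ["0"]        # directly on machine 0
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["lxc:0"]    # in a container on machine 0
EOF
juju deploy placement-example.yaml
```
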
jamespage | gnuoy, could you take a peek at https://code.launchpad.net/~james-page/charm-helpers/newton-opening/+merge/295693 | 10:44 |
jamespage | ? | 10:44 |
coreycb | jamespage, I fixed up the cinder.conf commit message. I never realized you could edit it directly within Gerrit. | 11:51 |
=== BlackDex_ is now known as BlackDex | ||
jamespage | gnuoy, https://review.openstack.org/#/c/320817/ | 12:39 |
jamespage | all good to go | 12:39 |
gnuoy | jamespage, approved your ch change too | 12:40 |
jamespage | gnuoy, ta | 12:41 |
gnuoy | tinwood, I really like the clean look of the barbican charm. The only things that jar a little for me are the setup_amqp_req, setup_database and setup_endpoint being part of barbican.py. They seem standard and not specific to barbican at all. Can we push them down into the layer or module? | 12:54 |
* tinwood hadn't considered that. | 12:55 | |
tinwood | gnuoy, I'll take another look. I'm sure there's a dependency on one of them in the charm, but the other two could probably move. | 12:56 |
gnuoy | tinwood, do you agree in principle to moving them? | 13:00 |
tinwood | gnuoy, setup_amqp_req() and setup_database() are both independent of the charm and could happily go elsewhere. setup_endpoint() seems more problematic, in that it needs access to some of the charm's properties. I'm wondering whether that might be better elsewhere? | 13:04 |
gnuoy | tinwood, it was originally part of the charm class wasn't it? do you feel uncomfortable with it going back there? | 13:06 |
tinwood | gnuoy, I'm not sure. The `keystone` object is an interface class instance. It would be possible to have a `register_endpoints` with a sig register_endpoints(keystone : `keystone-interface`) which then turns around and calls the register_endpoints() on the interface, assuming all OpenStackCharms will have service_type, region ... admin_url. | 13:11 |
gnuoy | tinwood, It's safe to say all charms calling register_endpoints will have tose attributes | 13:13 |
gnuoy | s/tose/those/ | 13:13 |
tinwood | I'm actually in favour of pushing the setup_amqp_req() and setup_database() back to the handlers file in the charm. | 13:13 |
tinwood | gnuoy, but I did want to keep all the handlers together. | 13:14 |
tinwood | hmm. It's because reactive forces us to put some functions in the reactive directory, but we're putting charm code in lib/ | 13:14 |
jamespage | gnuoy, we need a better way to do feature discovery in the cloud - https://review.openstack.org/#/c/320972/1 | 13:16 |
jamespage | I have this type of config option... | 13:16 |
jamespage | have/hate | 13:17 |
lazyPower | godleon - i've done some benchmarking in terms of starting containers but thats not much of a telling story. we both launch containers silly fast if you have the images cached. but jamespage was spot on with them targeting different spaces so its comparing apples to pineapples. | 13:17 |
gnuoy | jamespage, I take it cinder-backup doesn't register an endpoint in keystone? | 13:18 |
jamespage | gnuoy, hmm | 13:18 |
jamespage | it might | 13:18 |
jamespage | gnuoy, however... | 13:18 |
gnuoy | if only there was some sort of service catalogue... | 13:18 |
jamespage | gnuoy, I'd not want to query the service catalog for this; it should be done using charm semantics | 13:18 |
petevg | cory_fu, kjackal, kwmonroe: I've got a question about Bigtop automagic, using the layer-apache-bigtop-spark charm as an example: Does Bigtop know to tell puppet to install spark simply because there is a "spark" entry in the "hosts" dict that we pass to render_site_yaml? Or is there additional configuration info in the charm that I'm missing? | 13:49 |
cory_fu | petevg: It's actually the roles that define what gets installed: https://github.com/juju-solutions/layer-apache-bigtop-spark/blob/master/lib/charms/layer/bigtop_spark.py#L28 | 13:51 |
petevg | cory_fu: got it. That makes sense, now that I think about it. | 13:52 |
petevg | Thank you :-) | 13:52 |
cory_fu | petevg: The Bigtop Puppet scripts have two methods for selecting what gets installed: components or roles. Roles are more fine-grained, and let you specify things more precisely, while components infer a lot more based on what hosts are provided | 13:54 |
cory_fu | petevg: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/hieradata/site.yaml has a list of the components you can choose from, while https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/manifests/cluster.pp has a (not 100% complete, it seems) listing of roles | 13:56 |
petevg | Ooh, useful. Will bookmark that, and also link in the README. | 13:57 |
cory_fu | Unfortunately, there doesn't seem to be much in the way of documentation of this stuff outside of the Puppet scripts themselves. | 13:58 |
cory_fu | Adding some would be a good patch to submit, I think | 13:59 |
deanman | Is it possible to deploy an openstack bundle on Juju with the manual provider? I can see from the store that the default bundle requires MAAS. | 14:32 |
magicaltrout | marcoceppi stop spamming me! :P | 14:35 |
magicaltrout | arosales: they've fixed my CS login, if it makes you feel any better, I'm also listed in 0 teams ;) | 14:40 |
marcoceppi | magicaltrout: stop opening bugs on the wrong project ;) | 15:02 |
arosales | magicaltrout: glad they got you sorted | 15:02 |
magicaltrout | just doing what arosales told me :P | 15:02 |
marcoceppi | arosales: stop opening bugs on the wrong project ;) | 15:02 |
arosales | magicaltrout: I hear the ~charmer group is a good group to be in | 15:02 |
* marcoceppi labored over that issue template | 15:02 | |
arosales | marcoceppi: ? | 15:03 |
magicaltrout | Ctrl -a <backspace> | 15:03 |
marcoceppi | magicaltrout: :sadpanda: | 15:03 |
magicaltrout | hehe | 15:03 |
marcoceppi | arosales: charm-tools is only for proof, build, inspect, layers, create, and a few other things. Everything else (login, whoami, push, pull, grant, publish) is charmstore-client | 15:04 |
arosales | magicaltrout: you should at least be in charm contributor and apachefoundation | 15:04 |
arosales | marcoceppi: how is any normal human supposed to know that? | 15:05 |
arosales | marcoceppi: I install charm tools | 15:05 |
marcoceppi | arosales: well, because I created an issue template that tells people this | 15:05 |
arosales | I look at charm tools | 15:05 |
magicaltrout | templates are for wimps | 15:05 |
arosales | Version | 15:05 |
marcoceppi | arosales: https://github.com/juju/charm-tools/issues/new | 15:05 |
* marcoceppi tries so hard | 15:05 |
magicaltrout | marcoceppi: you know that rule about website loading speeds | 15:06 |
magicaltrout | it also applies to placeholder text ;) | 15:06 |
marcoceppi | okay, so trim the fat, got it | 15:06 |
magicaltrout | remove the first para for a start | 15:06 |
magicaltrout | I know you like thanking people, but there is a time and a place... namely a juju charmer summit | 15:06 |
marcoceppi | yeah, I was just thinking that as well | 15:07 |
magicaltrout | you also have a typo in para 2 | 15:07 |
magicaltrout | agains isn't a word | 15:07 |
arosales | marcoceppi: how hard is it for us to move the bug? | 15:08 |
marcoceppi | arosales: I already did | 15:08 |
magicaltrout | hyper links don't work, so just remove the hyperlink markup | 15:08 |
arosales | marcoceppi: so not very hard in general then | 15:08 |
marcoceppi | arosales: it is very very annoying, because gh doesn't have a way to "move" issues | 15:09 |
marcoceppi | and only admins of both repos can do it, so a select few in general | 15:09 |
arosales | But something a person can do | 15:09 |
marcoceppi | arosales: it's super SUPER dirty, ugly, and not friendly | 15:10 |
arosales | I am +1 for the template | 15:10 |
arosales | But | 15:10 |
marcoceppi | magicaltrout: I've been mulling over the idea of `charm bug` or something that can collect 90% of this from the command line | 15:10 |
arosales | Let's not make it cumbersome on someone giving feedback | 15:10 |
marcoceppi | magicaltrout: not sure if people would actually use it | 15:10 |
marcoceppi | I updated the issue template as well | 15:11 |
arosales | ideally we have one place like Juju to submit all bugs and then triage from there | 15:11 |
arosales | Make it easy for | 15:11 |
arosales | Contributors | 15:11 |
arosales | But that's ideal | 15:12 |
magicaltrout | I don't think charm bug is a bad idea, if i'm already in the command line, copying that stuff out of my terminal is clearly harder than me typing charm bug | 15:12 |
marcoceppi | arosales: sure, if we do that on Launchpad. | 15:12 |
marcoceppi | because you can not move issues around in gh | 15:12 |
marcoceppi | but we can in lp, but now you're subjecting people to lp | 15:13 |
arosales | Well gh repo admin can I think | 15:13 |
arosales | But that's technical | 15:13 |
arosales | All I am saying is take the burden off someone trying to give feedback | 15:14 |
magicaltrout | don't subject people to LP, most people will have a GH login, not so with LP. Finding anything in LP, including your own code is a pain in the ass :) | 15:14 |
arosales | +1 on the template helping them | 15:14 |
marcoceppi | arosales: you can not move issues between repos, pretend I never said that | 15:14 |
arosales | Get to the right spot | 15:14 |
* marcoceppi spins up a third, bugzilla site ;) | 15:15 | |
magicaltrout | juj deploy cs:bugzilla juju-charm-website-massive-bug-aggregator | 15:16 |
magicaltrout | + | 15:16 |
magicaltrout | u | 15:16 |
arosales | My feedback is make it easy on contributors even if it isore back end woek | 15:16 |
arosales | .. If it is more back end work | 15:17 |
arosales | +1 for templates to help folks get to the right place, but if that fails let's just take care of it on the back end, even if copy paste | 15:18 |
arosales | magicaltrout: would you mind if I demo'ed your mesos bundle? | 15:24 |
magicaltrout | not at all arosales | 15:24 |
magicaltrout | just bear in mind currently it doesn't support > 1 master | 15:24 |
magicaltrout | which i appreciate defeats the point slightly, but there is a fix in the works for that, its just wiring as opposed to anything technical | 15:25 |
arosales | magicaltrout: attending mesoscon in Denver next week and would like to present your bundle at a lightning talk | 15:25 |
magicaltrout | yeah i was thinking about showing up to MesosCon europe with a talk if they fancied it | 15:25 |
arosales | magicaltrout: could you point me at your latest bundle? | 15:26 |
magicaltrout | ah yeah i've not published it yet :P | 15:26 |
magicaltrout | should probably do that | 15:26 |
magicaltrout | https://github.com/buggtb/dcos-master-charm / https://github.com/buggtb/dcos-agent-charm / https://github.com/buggtb/dcos-interface | 15:27 |
magicaltrout | currently | 15:27 |
arosales | magicaltrout: thanks | 15:27 |
magicaltrout | i'll try and get the multi-master finished this week and get it published | 15:28 |
arosales | magicaltrout: oh no worries on working on it this weekend on my account | 15:28 |
arosales | https://www.irccloud.com/pastebin/7DJdv9mG | 15:29 |
magicaltrout | its like you're talking to me in a webpage.... | 15:30 |
arosales | magicaltrout: have you seen the work SaMnCo and data art have done on mesos | 15:30 |
magicaltrout | I've not seen it, but he tried to hook us all up pre-apachecon and we all said hi then it went quiet | 15:30 |
arosales | Dang phone client | 15:30 |
magicaltrout | i mailed SaMnCo the other day to try and reboot it, but i've noticed if I mail him with more than one issue a day, the others don't get a response, so that went unanswered ;) | 15:31 |
arosales | I'll try to follow up with SaMnCo | 15:31 |
arosales | magicaltrout: could you | 15:31 |
arosales | Add marcoceppi and I to cc? | 15:31 |
magicaltrout | I'm not sure what they are upto , but it would be cool to get all of this stuff aligned, hipster tech and all that | 15:31 |
magicaltrout | will do | 15:33 |
magicaltrout | i also need to get a talk submitted to Oslo Devops Days, so I'll probably submit something similar to what I did in that ApacheCon talk with some more hadoop-y stuff to pad it out | 15:34 |
arosales | magicaltrout: good stuff, thanks | 16:00 |
kjackal | petevg: Thank you for the README PR. Docs are something (at least) I have to pay more attention to. Good work! | 16:20 |
petevg | kjackal: thanks. Nice to hear that it's appreciated :-) | 16:21 |
=== frankban is now known as frankban|afk | ||
kwmonroe | cory_fu: what's wrong with me? why can't i see an issues tab here? https://github.com/juju-solutions/layer-apache-bigtop-nodemanager | 16:44 |
kwmonroe | and why does going to https://github.com/juju-solutions/layer-apache-bigtop-nodemanager/issues redirect me to PRs? | 16:45 |
cory_fu | That repo doesn't have issues enabled, apparently. Probably some weirdness because of how it was forked | 16:45 |
cory_fu | We should probably re-own it anyway, so it's easier to make PRs | 16:45 |
cory_fu | kwmonroe: Anyway, issues are enabled now | 16:46 |
kwmonroe | gracias! | 16:46 |
=== redir_afk is now known as redir | ||
bdx | openstack-charmers: when using nova-lxd, am I confined to using local storage, or does ceph work in some way as a backend for nova-lxd that I'm unaware of? | 17:38 |
tinwood | gnuoy, you about or eod? | 17:49 |
hatch | Is there a way to get the full log from a unit from the Juju CLI? | 17:55 |
lazyPower | hatch - yep | 18:00 |
lazyPower | juju debug-log -i --replay | 18:00 |
lazyPower | you'll need to pass the unit in the -i flag | 18:00 |
lazyPower | hatch if you dont want to tail after its done, pass -F | 18:00 |
lazyPower | or, juju debug-log --help for the full list of awesome | 18:00 |
hatch | lazyPower: ahhh that's it - I didn't read the docs for --replay because I didn't want to 'replay' them, I just wanted to see them | 18:01 |
hatch | heh | 18:01 |
hatch | thanks :) | 18:01 |
lazyPower | ye :) that'll get ya sorted | 18:01 |
lazyPower | cheers | 18:01 |
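
(A concrete form of the command lazyPower describes; the unit name is a placeholder and the exact filter format can vary between Juju versions — `juju debug-log --help` lists the options.)

```bash
# Dump the full log history for one unit and keep tailing it.
# "unit-wordpress-0" is a placeholder; check `juju debug-log --help` for the
# include/exclude filter format your Juju version expects.
juju debug-log --replay -i unit-wordpress-0
```
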
hatch | how do I destroy a service with units in error? | 18:20 |
lazyPower | hatch juju remove-machine --force # | 18:20 |
lazyPower | hatch then juju destroy-service will cleanly remove the service/charm from the controller | 18:21 |
lazyPower | that or juju resolved mything/0 all the way down as it fail-cycles until its removed | 18:21 |
hatch | thanks lazyPower | 18:21 |
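
(The two routes lazyPower outlines, spelled out — `mything` and machine `3` are placeholders.)

```bash
# Option 1: force-remove the machine hosting the failed unit, then remove the
# now-unitless service cleanly.
juju remove-machine 3 --force
juju destroy-service mything

# Option 2: mark the service for destruction and keep resolving the failed
# hook until the dying units work their way through removal.
juju destroy-service mything
juju resolved mything/0
```
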
hatch | saying to destroy a service and getting no feedback is not very good | 18:22 |
lazyPower | i dont know that i'm following what you're calling attention to. How did you not get feedback? | 18:22 |
hatch | I can spam destroy-service and because units are in error nothing happens | 18:22 |
hatch | and I get no feedback | 18:22 |
hatch | so I just sit here wondering why Juju is broken :) | 18:23 |
lazyPower | ah, well, do you expect the command to block until its removed? | 18:23 |
hatch | at the very least I'd expect a message telling me why it's not doing what I told it to do | 18:23 |
lazyPower | its doing the right thing in my mind. its a one-shot declaring a state. "Remove this thingy!!" and its trying its best to get there, and if it fails to do so it reports that. | 18:23 |
lazyPower | thats a disconnect between blocking commands and the fire/forget style state change you've defined. | 18:24 |
hatch | no it doesn't | 18:24 |
hatch | it doesn't report anything | 18:24 |
lazyPower | juju status sure does | 18:24 |
hatch | I guarantee it doesn't | 18:24 |
hatch | https://gist.github.com/hatched/9a374d20d007e019d3ec2045cf7edc1f | 18:25 |
lazyPower | thats funny. workload-status: error | 18:25 |
hatch | where there says that I have destroyed the service about 10 times? | 18:25 |
lazyPower | message: hook failed 'install' | 18:25 |
hatch | yep the hook failed - so why can't I destroy the service? | 18:25 |
natefinch | lazyPower: I'm with hatch. juju destroy-service, IMO, should just take down errored units. who cares if a unit is in an error state, I'm explicitly removing it | 18:26 |
lazyPower | you're making an assumption about what it should be doing | 18:26 |
lazyPower | and i dont agree with your assumptions | 18:26 |
lazyPower | "i said destroy service, and its still here D:" | 18:26 |
lazyPower | what if you want to debug that while its in life: dying? | 18:26 |
hatch | so you prefer for the command to just return with no message to the user | 18:26 |
lazyPower | and root out what the cause was? | 18:26 |
lazyPower | no, i want you to admit that you're conflating two issues | 18:26 |
natefinch | I definitely agree that if we KNOW that destroy-service is not going to work, we tell the user | 18:27 |
hatch | I said to Juju to destroy the service - I want it to destroy the service | 18:27 |
lazyPower | thing is | 18:27 |
lazyPower | juju status --format=yaml | 18:27 |
lazyPower | the life is going to be dying | 18:27 |
hatch | but it's not dying | 18:27 |
lazyPower | it received and is working towards that destructive state | 18:27 |
hatch | it's going to sit there forever | 18:27 |
natefinch | juju destroy-service foo | 18:27 |
lazyPower | the fact there's a hook error is regardless of what state change you just told it to take | 18:27 |
natefinch | ERROR: can't destroy service, unit in error state: foo/0 | 18:27 |
hatch | ^^^ this | 18:27 |
hatch | 100x this | 18:27 |
natefinch | if we want to be pedantic and conservative and not destroy an errored unit automatically.... at least tell the user we're not going to do it | 18:28 |
lazyPower | thing is, it IS going to do it if you resolve it and it doesn't further error | 18:28 |
hatch | if you're not doing to do what the user intends to do then at least tell them why | 18:28 |
lazyPower | its not "uncommitting" that destroy directive | 18:28 |
natefinch | WARNING: unit foo/0 in errored state, will not be destroyed until resolved. | 18:28 |
lazyPower | that makes more sense | 18:29 |
lazyPower | i'm +1 to that | 18:29 |
natefinch | throw me a bone, FFS. The user shouldn't need to know the internal details of exactly how everything works... we should help them to use juju | 18:29 |
hatch | YES! | 18:29 |
lazyPower | but i still stand that its 2 sep. issues. | 18:29 |
natefinch | I'm fine with it being two separate issues - do we or do we not destroy errored units automatically? and, if we don't, we need to tell the user explicitly. | 18:30 |
natefinch | And honestly, I wish destroy-service were synchronous, like destroy-model is now | 18:30 |
natefinch | maybe with a flag to make it async... or vice versa.... I shouldn't ever need to type watch juju status in order to figure out WTF is going on in juju | 18:31 |
hatch | +1 | 18:31 |
lazyPower | i'm going to leave this alone | 18:31 |
hatch | lol | 18:31 |
mbruzek | what? | 18:31 |
lazyPower | this is going off the rails into a really bad gripe session | 18:31 |
natefinch | lol true, sorry :) | 18:31 |
hatch | I'm going to file a bug about the destroy-service messaging | 18:32 |
mbruzek | natefinch: I watch juju status all the damn time | 18:32 |
mbruzek | It is the only way to figure out what is going on behind the curtain | 18:32 |
mbruzek | That wizard is up to some crazy stuff folks, I constantly watch juju status and tail the debug log | 18:32 |
natefinch | mbruzek: that's my point. You shouldn't have to. Like the way destroy-model is synchronous and actually tells you what it's doing. | 18:32 |
lazyPower | what's fun is when something errors, you destroy it, then figure out it was a race condition and the charm can recover... it goes to started, then immediately destroys itself | 18:32 |
lazyPower | thats the best | 18:32 |
hatch | lazyPower: sorry I told it to do something - I expect it to listen :) | 18:33 |
hatch | I don't much care what it wants | 18:33 |
lazyPower | again. it listened | 18:33 |
lazyPower | you weren't prepared for the sequence of papercuts afterwards | 18:33 |
hatch | this is like going to a restaurant, placing an order with the server and then the server walks away | 18:34 |
hatch | you sit there forever waiting for your food | 18:35 |
hatch | it was sitting in the kitchen, but you forgot to tell them to deliver it | 18:35 |
hatch | not that you knew you had to say that | 18:35 |
hatch | because the server didn't tell you that | 18:35 |
lazyPower | look i'm not saying you're wrong, i'm saying teh way you're conveying it and griping is wrong, beause you're telling me it didnt do something that it is celary doing | 18:35 |
lazyPower | *clearly | 18:35 |
hatch | how is it doing it? It isn't destroying the service | 18:35 |
lazyPower | did you juju remove-machine # --force? | 18:35 |
hatch | I did | 18:36 |
lazyPower | teh service should have gone then | 18:36 |
lazyPower | its dead | 18:36 |
lazyPower | has no units | 18:36 |
hatch | it did | 18:36 |
lazyPower | is in state: dying | 18:36 |
lazyPower | then how did it not destroy? | 18:36 |
hatch | so destroy-service isn't destroy service | 18:36 |
hatch | it's "mark for destroy sometime in the future when a pretermined list of requirements have been satisfied - oh but you don't know what those are" | 18:36 |
hatch | that's quite the command ;) | 18:37 |
natefinch | it seems to me that if destroy-service had a --force flag, it would help a lot. | 18:37 |
lazyPower | thats not true either | 18:37 |
lazyPower | hatch - this. x1000 is this conversation | 18:38 |
lazyPower | https://xkcd.com/386/ | 18:38 |
hatch | lol except this _is_ important | 18:38 |
hatch | users need to be informed about what's going on | 18:38 |
hatch | and if it's not doing what they said to do they need to know why | 18:38 |
mbruzek | Has anyone opened a bug about this problem? With the problem statement, and expected result. I think others would like to see this and possibly comment on either it is working as designed or a problem that will get fixed next release. | 18:40 |
hatch | mbruzek: here is one I filed earlier https://bugs.launchpad.net/juju-core/+bug/1568160 | 18:40 |
mup | Bug #1568160: destroying a unit with an error gives no user feedback <destroy-unit> <observability> <juju-core:Triaged> <https://launchpad.net/bugs/1568160> | 18:40 |
hatch | I actually totally forgot I filed that one | 18:41 |
hatch | haha | 18:41 |
hatch | I file a lot of bugs | 18:41 |
hatch | :) | 18:41 |
hatch | mbruzek: I think there are really two points here - 1) why errors block removal actions and 2) lack of user feedback in any case | 18:42 |
mbruzek | hatch: natefinch: lazyPower: I commented on the bug ^. I think the actual problem is the charm code had a bug, and would not destroy because a hook returned non-zero and that is literally how Juju hooks work. | 18:52 |
* lazyPower smirks | 18:52 | |
hatch | https://bugs.launchpad.net/juju-core/+bug/1585750 | 18:52 |
mup | Bug #1585750: Destroying a service in error gives no feedback <juju-core:New> <https://launchpad.net/bugs/1585750> | 18:52 |
hatch | mbruzek: your comment doesn't really apply | 18:55 |
hatch | mbruzek: if you want to fix the error then fix the error | 18:56 |
hatch | you wouldn't destroy it if you wanted to fix it would you? | 18:56 |
mbruzek | in the charm code rather than fix juju | 18:56 |
lazyPower | stop isn't executed without first destroying the service or removing a unit | 18:56 |
magicaltrout | marcoceppi: ping | 18:56 |
lazyPower | you may not know its a problem until you've already issued the destroy stanza, and by your proposal - there is no way to really debug it. its just LOL bye. | 18:56 |
marcoceppi | magicaltrout: yo | 18:56 |
mbruzek | Actually hatch that is a valid test case when developing a charm. I want to make sure that the charm goes down cleanly | 18:56 |
magicaltrout | hey, 2 things | 18:57 |
magicaltrout | a) warning: bugs-url and homepage are not set. See set command. | 18:57 |
magicaltrout | hmm nm | 18:57 |
mbruzek | hatch: There is no other way to call the stop hook that I know of. | 18:57 |
magicaltrout | ignore that one | 18:57 |
magicaltrout | b) https://jujucharms.com/u/apachesoftwarefoundation/ why did the charm I publish earlier vanish? | 18:57 |
marcoceppi | magicaltrout: are you logged in | 18:57 |
magicaltrout | i am logged in | 18:58 |
hatch | mbruzek: This is the first time I've ever heard of any reason to not clean up on error | 18:58 |
mbruzek | hatch: If the charm *-broken relations, or stop hooks, did something really important, I would TOTALLY want to know if they worked or not. | 18:58 |
hatch | mbruzek: but put yourself in the users shoes - how are they supposed to know any of this? | 18:59 |
mbruzek | Like backing up data to S3 or doing other important clean up things | 18:59 |
magicaltrout | i fully had a charm there earlier that I went to look at and stuff | 18:59 |
lazyPower | magicaltrout - its likely permissions | 18:59 |
hatch | mbruzek: there is literally 0 feedback or help given to the user | 18:59 |
mbruzek | hatch: I *am* thinking of the users. I would propose that you are using the wrong command. | 18:59 |
lazyPower | magicaltrout - pastebin teh output of charm show cs:~apachefoundation/mything | 18:59 |
hatch | mbruzek: so what command am I supposed to use? | 18:59 |
mbruzek | If there are errors in the hook, remove-machine | 18:59 |
mbruzek | or resolve the errors | 19:00 |
natefinch | mbruzek: In the long term, 99.9999% of users will not be charm authors | 19:00 |
hatch | ok so I'm supposed to know that when I try to destroy a service, if there is an error I need to remove the machines | 19:00 |
hatch | but I have to actually check juju status | 19:00 |
hatch | often | 19:00 |
hatch | to know what commands to run | 19:00 |
magicaltrout | marcoceppi: http://paste.ubuntu.com/16691257/ | 19:00 |
lazyPower | magicaltrout Read: | 19:01 |
lazyPower | - apachesoftwarefoundation | 19:01 |
lazyPower | thats the issue | 19:01 |
lazyPower | magicaltrout charm grant cs:~apachesoftwarefoundation/trusty/joshua-full --acl=read everyone | 19:01 |
lazyPower | the reason you saw it was because you were logged into the store, and had access to "read" the charm | 19:01 |
lazyPower | if you checked it in private mode, you would not have seen it. | 19:01 |
mbruzek | hatch you and natefinch, when a hook returns a non zero return code Juju pauses to let the admin fix the stuff. If the stop hooks were doing some legit backup sequence, the admin would totally want to know if that is not working | 19:02 |
magicaltrout | you are correct lazyPower it is perms related.... that said jujucharms.com says I'm logged in | 19:02 |
lazyPower | ooo | 19:02 |
lazyPower | weird | 19:02 |
hatch | mbruzek: fine, but how do they know that? | 19:02 |
magicaltrout | and once I granted permissions it appeared, but i'm on the homepage and can see my face | 19:02 |
magicaltrout | which tends to indicate i am logged in | 19:02 |
hatch | how do they know to resolve the units? | 19:02 |
magicaltrout | anyhoo, weird, but works, thanks a lot | 19:02 |
lazyPower | hatch - clairvoyance obviously | 19:03 |
hatch | lazyPower: this is how my experience has been, yes | 19:03 |
lazyPower | as i said, feedback is fine | 19:03 |
mbruzek | hatch: Charms throw all kinds of errors, resolve is the mechanism to force Juju to proceed | 19:03 |
natefinch | I think that if the unit is in an error state *before* calling juju destroy-unit, we should warn the user and ask them if they want to destroy it anyway | 19:03 |
lazyPower | bu ti do take issue with wholesale deletion of a service in error state | 19:03 |
lazyPower | unless you make it flag based where thats not default behavior | 19:03 |
lazyPower | as cmars suggested which is nice middle ground | 19:03 |
lazyPower | cmars thanks for being the voice of reason in this | 19:04 |
cmars | reason? me? what? :) | 19:04 |
lazyPower | you're the only one who piped in with adding --i-mean-it | 19:04 |
cmars | :) | 19:04 |
lazyPower | which is nice middle ground | 19:04 |
lazyPower | keep default behavior, give the hatches of the world an option to wholesale delete | 19:04 |
hatch | I really don't understand why there is so much resistance to working hard on user experience with the CLI | 19:04 |
hatch | there is a problem, lets work on a fix | 19:04 |
hatch | not drag our heels because this is how it is | 19:05 |
lazyPower | hatch - because you're campaigning for something that's really destructive and ops people will not appreciate it | 19:05 |
cmars | i think mbruzek has a good point, but i've certainly screwed up a charm with repeated `juju upgrade-charm --force-units ... ; juju resolved --retry` to the point that I have no confidence its state reflects any kind of debuggable reality | 19:05 |
lazyPower | yup | 19:05 |
lazyPower | we've all been there but thats an edge case | 19:05 |
cmars | there's also the case where you're trying out different versions of charms, not really developing on them, just evaluating them | 19:06 |
natefinch | I think the problem might just be that a hook error can mean two wildly different things - one is "something external is wrong, you have to do this concrete step to fix it before I can continue" and the other is "there's a bug in the hook, good luck!" | 19:06 |
cmars | oops, i got the old precise version.. nooo | 19:06 |
hatch | lazyPower: I strongly disagree that leaving the user to "just have to know" what to do is a good idea | 19:07 |
cmars | with reactive its easier to be resilient in the face of externalities | 19:07 |
hatch | you should always do what they expect to do | 19:07 |
lazyPower | hatch - i didnt say dont give the user feedback | 19:07 |
lazyPower | i said dont wholesale delete my thing unless you are absolutely sure thats what i want | 19:07 |
lazyPower | again, conflating 2 issues | 19:07 |
lazyPower | please stop and re-read my exact statement of my stance in teh 3 lines above | 19:08 |
lazyPower | because its really exhausting repeating myself on this | 19:08 |
cmars | hmm | 19:08 |
hatch | lazyPower: I just want user feedback with instructions on how do what the user is intending | 19:08 |
hatch | lazyPower: what do you think the user wants to do when they type 'destroy-service' ? | 19:08 |
hatch | it's certainly not sit there | 19:08 |
natefinch | ERROR: unit is in an errored state, can't be removed. To remove anyway, use juju destroy-unit foo/0 --force. | 19:08 |
hatch | ^ this | 19:08 |
godleon | lazyPower, good! thanks for your information. And..... the image cache is for LXD or for both? | 19:28 |
lazyPower | lxd requires you to import the image before you can even launch the container, so to be fair, both | 19:28 |
godleon | lazyPower, hmm...........it makes sense. | 19:33 |
ryebot | marcoceppi, lazyPower - what's the right way to fix this? https://github.com/juju-solutions/layer-basic/pull/70 | 19:42 |
marcoceppi | ryebot: why are we just seeing this now? | 19:42 |
ryebot | marcoceppi still trying to figure that out - best guess is some package deps changed and are pulling in 8.1.1 now | 19:43 |
ryebot | cynerva, you experimented with this - can you comment? | 19:43 |
Cynerva | marcoceppi ryebot - not much info, i'm seeing the issue too, on xenial but not trusty | 19:44 |
marcoceppi | Cynerva: sure, but when I did a xenial deploy last week I didn't get this error | 19:44 |
Cynerva | marcoceppi yeah - seems like something changed today | 19:45 |
marcoceppi | so weird | 19:45 |
ryebot | marcoceppi yeah, last few hours, happened to both of us independently at roughly the same time | 19:45 |
marcoceppi | xenial has always had 8.1.1 | 19:45 |
ryebot | marcoceppi yeah, but I think it wasn't being pulled in before | 19:46 |
ryebot | marcoceppi so we pulled in 7.1.2 first | 19:46 |
lazyPower | perhaps you used virtualenv: true, and that locked your pip version in a virtualenv instead of using system pip? | 19:46 |
marcoceppi | charm build and layer-basic haven't changed ina bit | 19:46 |
lazyPower | or perhaps thats the path forward? | 19:46 |
* lazyPower is not positive, but thinks that would have some good results, isolating from system deps and creating your own python tree of goodness | 19:47 | |
marcoceppi | lazyPower: it's annoying to have to do that, but a possible workaround | 19:47 |
lazyPower | well we are getting kind of funky with python across series these days | 19:47 |
lazyPower | considering xenial is py3 default, trusty is py2 default, and we're somewhere in the middle of all that | 19:47 |
* marcoceppi does tests | 19:48 | |
marcoceppi | lazyPower: py3 is on both trusty and xenial | 19:48 |
marcoceppi | that's not a problem, it's the versions of pip that's problematic | 19:48 |
marcoceppi | I can't remember why we had to upper bound pip for trusty, but we did | 19:48 |
marcoceppi | huh, maybe not. It looks like I just upper bound it for no good reason | 19:50 |
marcoceppi | ryebot: I think your pull request is OK actually | 19:50 |
* ryebot reopens his PR victoriously! | 19:51 | |
lazyPower | \o/ | 19:58 |
lazyPower | marcoceppi - i think at the time there was a dependency issue like the frequent path.py woes. i dont recall the exact details, but it was a temporary solution to a temporary problem | 20:01 |
marcoceppi | lazyPower: possibly | 20:03 |
magicaltrout | marcoceppi: whats the latest beta? | 20:08 |
marcoceppi | beta7 | 20:08 |
magicaltrout | is there a ppa for that? | 20:09 |
lazyPower | ppa:juju/devel | 20:11 |
magicaltrout | aye ta | 20:11 |
lazyPower | or grab charmbox:devel | 20:11 |
=== mramm_ is now known as mramm | ||
=== natefinch is now known as natefinch-afk |