stokachu | catbus1, i should be around for a bit just ping me if you come back to it | 00:46 |
---|---|---|
stokachu | balloons, around? | 01:45 |
pranav_ | Hi. I have a question regarding Cinder charm. | 06:06 |
pranav_ | If I locally modify cinder.conf through the shell and then modify config through the juju-gui | 06:07 |
pranav_ | the local changes get overridden. Any workaround for this? | 06:07 |
junaidali | pranav_: I haven't used the juju gui for deployment. Have you tried modifying the template file in /var/lib/juju/agents/unit-cinder-0/charm/templates/ and then writing the changes using the GUI? | 07:13 |
pranav_ | junaidali: No haven't tried that. We tried to see if any relation hooks get fired on modification but no good. Let me check the templates | 07:20 |
pranav_ | One question: to modify the template file, our application has to be running in the same unit, correct? Or is there a way to change the templates via relations? | 07:22 |
junaidali | imo, if you want some changes to be retained, you should probably look at changing the template file, because whenever you change any config, all the files are re-written from the template files | 07:36 |
junaidali | if you've more than one units of charm, update template file in each unit | 07:37 |
pranav_ | I see. In which case our application always has to co-exist with the cinder charm. | 07:41 |
pranav_ | Cool! We're gonna try changing the templates, which I feel should work for us. | 07:50 |
pranav_ | Thanks for the help Junaid :) | 07:51 |
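A rough sketch of the approach junaidali describes, assuming the stock cinder charm layout; the template filename, the units listed, and the config key used to trigger a re-render are all illustrative, and the agents/unit-* directory name differs per unit:

```bash
# Edit the template on every cinder unit (the agent dir name varies per unit):
for unit in cinder/0 cinder/1; do
    juju ssh "$unit" "sudo sed -i 's/^debug = .*/debug = True/' \
        /var/lib/juju/agents/unit-*/charm/templates/cinder.conf"
done
# Any config change now rewrites cinder.conf from the modified template:
juju set-config cinder debug=False   # 'juju config' on newer Juju releases
```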
=== aluria` is now known as aluria | ||
anrah | Hi! I'm starting to write tests to my charms, however I seem to have a problem: | 08:33 |
anrah | 2017-02-07 10:32:51 Error getting env api endpoints, env bootstrapped? | 08:33 |
anrah | 2017-02-07 10:32:51 Command (juju api-endpoints -e localhost-localhost:admin/default) Output: | 08:33 |
anrah | http://paste.ubuntu.com/23946536/ | 08:35 |
anrah | juju version is 2.0.2 | 08:35 |
anrah | and amulet is 1.18.2 | 08:37 |
anrah | somehow it looks like amulet is using old juju commands that are not available on 2.0? | 08:37 |
junaidali | anrah: what's the version of juju-deployer you have? | 08:47 |
anrah | 0.6.4 | 08:48 |
anrah | I upgraded it but no help | 08:51 |
junaidali | upgraded to which version? i think juju-deployer doesn't yet support 2.0. You can get a 2.0-supported package from Tim Van Steenburgh's PPA https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa | 08:54 |
anrah | 0.10.0 is the new version | 08:54 |
junaidali | oh okay, you have the newest version... | 08:54 |
anrah | I think the problem is not with juju-deployer but at the very first lines, where amulet is trying to get information about the controller | 08:55 |
anrah | as it tries to run this command: Command (juju api-endpoints -e localhost-localhost:admin/default) | 08:56 |
anrah | and there is no juju api-endpoints command on juju 2.0 | 08:56 |
junaidali | yes, you're right | 08:56 |
anrah | so.. Amulet is useless with Juju 2.0 ? | 08:57 |
junaidali | wait, someone will help you. It is probably late for many people here | 09:03 |
anrah | Thanks! | 09:05 |
anrah | I think I mixed the Python versions... as amulet uses Python 3 and juju-deployer Python 2, I had an old version of juju-deployer in the Python 2 packages | 09:19 |
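A quick way to check for the Python 2/3 mix anrah diagnosed; treat this as a sketch, since the exact package names depend on how juju-deployer was installed:

```bash
which juju-deployer                              # which copy is on $PATH?
pip2 list 2>/dev/null | grep -i juju-deployer    # stale Python 2 install?
pip3 list 2>/dev/null | grep -i juju-deployer    # the stack amulet (py3) drives
```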
lazyPower | Zic heyo, i'm starting to hack on that issue you uncovered last night. | 09:22 |
lazyPower | Zic - should have something for you to look at before we head into the kubernetes hacking sessions later today. This might take a bit, but i'm hoping to have something for you to set up in a staging capacity shortly. | 09:23 |
Zic | lazyPower: hello, I just got to my office, you're so synced :) | 09:25 |
lazyPower | Zic :) Blame it on Belgium | 09:25 |
Zic | lazyPower: how much jetlag does it cause you? Did you come from London/Canonical? | 09:26 |
lazyPower | Zic - i came from KC MO, so much further than London | 10:24 |
Zic | lazyPower: so "jetlag" was the right word :D I was not so sure "jetlag" is applicable if you just... take the boat or the train :) | 10:37 |
lazyPower | indeed. i'll be leaving tomorrow, so now that i'm adjusted to the timezone, it's time to leave | 10:41 |
lazyPower | @stub hey stub, question for you re: layer-leadership if you're around | 10:42 |
stub | yo | 10:42 |
lazyPower | stub, can i just invoke it once and give it a dict? or do i need to call it every time with key=val pairs for each data point i want the leader to push? | 10:42 |
lazyPower | charms.leadership.leader_set({'service_key': '/etc/kubernetes/serviceaccount.key', 'foo': 'bar'}) | 10:43 |
stub | that method accepts keyword arguments or a dictionary, so either leader_set({'foo': 'bar','baz':False}) or leader_set(foo='bar', baz=False) | 10:43 |
lazyPower | oh fantastic | 10:44 |
lazyPower | stub <3 | 10:44 |
stub | although now I look at the signature, leader_set(settings='hello') will fail, so don't call your leadership setting 'settings' until I fix that :-P | 10:46 |
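For reference, charms.leadership's leader_set/leader_get wrap Juju's leadership hook tools, so the same data can be pushed from a hook shell; a minimal sketch using the key names from the example above:

```bash
# Only the elected leader may write leadership settings:
if [ "$(is-leader)" = "True" ]; then
    leader-set service_key=/etc/kubernetes/serviceaccount.key foo=bar
fi
leader-get service_key   # readable on any unit
```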
=== rogpeppe1 is now known as rogpeppe | ||
Zic | lazyPower: as you are on Europe time now, I will be available until 19:00 if you need any tests :) | 11:37 |
lazyPower | Zic - i'm just now deploying the first attempt at fixing, currently waiting on convergence | 11:37 |
lazyPower | Zic - but as i'm at a conference it's hard to dedicate any reasonable amount of time, so thank you for being patient | 11:37 |
Zic | I understand, no worries :) | 11:37 |
Zic | my customer continues to adapt their code to k8s with minikube installed locally for now | 11:38 |
magicalt1out | kwmonroe: have you come up with an apachecon talk yet? | 13:09 |
magicalt1out | don't want to end up submitting something similar to you guys | 13:09 |
=== magicalt1out is now known as magicaltrout | ||
xnox | let the fun begin | 14:12 |
xnox | $ juju deploy easyrsa | 14:12 |
xnox | ERROR cannot resolve URL "cs:easyrsa": charm or bundle not found | 14:12 |
xnox | juju deploy cs:~containers/easyrsa-6 works. | 14:14 |
marcoceppi | xnox: yup, it's not yet in a promoted namespace | 14:20 |
xnox | ok | 14:20 |
marcoceppi | xnox: you can expect it to complete review in time for the 1.6.0 release of kubernetes | 14:20 |
xnox | working off the instructions from https://jujucharms.com/u/containers/easyrsa/ | 14:21 |
xnox | I cannot do "juju deploy tls-client" and the relation name is wrong too? | 14:21 |
marcoceppi | tls-client is a placeholder for connecting to something that consumes tls | 14:21 |
marcoceppi | xnox: what's your end goal? | 14:21 |
xnox | currently testing that this charm (by itself) works on s390x. | 14:22 |
xnox | assuming since i have managed to deploy easyrsa, it does. | 14:22 |
marcoceppi | xnox: you'll probably want something that connects | 14:22 |
marcoceppi | it's python so it'll probably work | 14:23 |
marcoceppi | I don't think it uses anything that's not interpreted | 14:23 |
xnox | i am getting a lot of _juju_complete_2_0: command not found | 14:23 |
marcoceppi | xnox: where? in your terminal? | 14:24 |
xnox | yes | 14:24 |
xnox | when i try to tab complete things | 14:25 |
xnox | (i'm running zesty) | 14:25 |
marcoceppi | xnox: are you using the snap? | 14:25 |
xnox | no | 14:25 |
xnox | whatever is in zesty as deb | 14:25 |
xnox | where is source code for https://jujucharms.com/u/containers/kubernetes-master/11 ? | 14:36 |
marcoceppi | xnox: https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes-master | 14:39 |
xnox | marcoceppi, how would I find that from the charmstore? | 14:39 |
marcoceppi | xnox: you wouldn't, unless it's in the readme | 14:40 |
xnox | marcoceppi, but that's the source code, rather than published charm revision? | 14:41 |
xnox | does that correspond to the v11 of the charm? | 14:41 |
marcoceppi | xnox: no, that's the upstream code | 14:41 |
xnox | marcoceppi, how can I clone the actual charm? | 14:41 |
marcoceppi | xnox: just download the .zip or use `charm pull ~containers/kubernetes-master` | 14:41 |
xnox | it's missing resources and binaries for my architecture, so i need to modify it and submit merge proposals back. But also test before I do all that. | 14:42 |
marcoceppi | nope | 14:42 |
xnox | what do you mean nope? | 14:42 |
marcoceppi | resources aren't delivered in the source code | 14:43 |
marcoceppi | you can deploy kubernetes-master from the store, and attach your kubernetes resource during that time from a local mirror | 14:43 |
xnox | =) | 14:43 |
xnox | sure, but i want to fix it for everyone on s390x, not just me. | 14:43 |
marcoceppi | xnox: well, that's a slight change, but it's on our roadmap. s390x support either just landed upstream or will soon | 14:44 |
marcoceppi | once it does we can include it in the charm officially | 14:44 |
xnox | landed | 14:44 |
marcoceppi | xnox: either way, you'll want to submit pull requests to the repo mentioned above | 14:44 |
xnox | ok | 14:45 |
marcoceppi | xnox: as the charms are built from that source, using `charm build` | 14:45 |
xnox | and after charm build, i can test my merge proposal locally, right? | 14:45 |
xnox | (last time i did charms, there was no `charm build` and no reactive anything) | 14:45 |
marcoceppi | xnox: modify the code; charm build, juju deploy builds/kubernetes-master --resource kubernetes=s390x.tar.gz; test | 14:46 |
marcoceppi | xnox: I'm a bit busy at the moment, but i'd love to help you more | 14:46 |
xnox | ok, thanks for your time. | 14:46 |
marcoceppi | Very interested in getting s390x support (and ppc64el) added to the charms | 14:46 |
marcoceppi | we have two routes we're chasing for that, so we should chat more | 14:46 |
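The modify/build/test loop marcoceppi outlines, expanded into one sequence; the build output location depends on $JUJU_REPOSITORY, and the resource tarball name is hypothetical:

```bash
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/cluster/juju/layers/kubernetes-master
# ...edit the layer source...
charm build                                  # assembles the layer into builds/
juju deploy "$JUJU_REPOSITORY/builds/kubernetes-master" \
    --resource kubernetes=./kubernetes-s390x.tar.gz
```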
admcleod | it appears my 'juju deploy' is using a proxy, although i don't have http_proxy or https_proxy set - any ideas? | 14:47 |
lazyPower | stokachu - both the snap and deb packages are exhibiting some odd behavior. it's no longer waiting for the deployment | 14:52 |
lazyPower | stokachu - i'll try to get more details about this and get a proper bug filed, but just FYI'ing you that there's one in flight incoming. | 14:52 |
stokachu | lazyPower, thanks im debugging right now | 14:52 |
lazyPower | oh has someone already reported? | 14:52 |
stokachu | yea arosales | 14:53 |
lazyPower | Zic - i've got a branch in flight, not going to be ready by EOD today. | 14:53 |
lazyPower | Zic - however, i will def get some additional testing on this and ping you in the PR so you can track its progress | 14:53 |
xnox | i am confused about where and how a resource ends up packaged and uploaded to the charm store. | 14:59 |
rick_h | xnox: how so? using the charm command? | 15:02 |
xnox | rick_h, but i'm failing to pinpoint where the resource is assembled/downloaded from, e.g. a makefile or some declarative syntax. | 15:03 |
rick_h | xnox: https://jujucharms.com/docs/2.0/developer-resources ? | 15:05 |
xnox | aha! thank you | 15:06 |
xnox | rick_h, is there a "$ charm resource-get" command? | 15:08 |
arosales | lazyPower: in case you're interested, the bug I filed was https://github.com/conjure-up/spells/issues/45 | 15:08 |
lazyPower | arosales - that's consistent with what's happened in the k8s flavors of spells | 15:08 |
lazyPower | thanks, i'll sub to this | 15:08 |
* arosales thought it was the deploy_done step, but it seems more prevalent if you're hitting it as well. Good news is stokachu is aware and working on a fix. | 15:09 | |
rick_h | xnox: so there's a resource-get hook for the charm itself. I don't see a charm pull-resource like there is a charm pull command for outside of a running unit :( | 15:10 |
xnox | rick_h, also that makes little sense... so the resource has binaries, but they are x86-64 binaries. | 15:10 |
xnox | rick_h, how does one envision having arch specific resources? e.g. similar to how juju-tools are arch specific. | 15:10 |
xnox | rick_h, can I register s390x resource; amd64 resource; etc? with the same name? | 15:10 |
xnox | or shall I have everything inside the resource tarball and do it myself in the charm (and keep just one resource name) | 15:11 |
marcoceppi | xnox: well we're doing one of two things | 15:11 |
xnox | it seems like it's already part of the API that a tarball with top-level binaries must be supported | 15:11 |
marcoceppi | xnox: the first is you can add a named resource like `kubernetes-s390x` and rename kubernetes to `kubernetes-x86` | 15:12 |
rick_h | xnox: no, so I think you'd need to define them as different resources and upload them at publish time and then the hooks would have to have an if block that states "if arch = xxx resource-get xxx" | 15:12 |
xnox | eeewwww | 15:12 |
marcoceppi | xnox: packing every arch in a tarball is expensive for everyone | 15:12 |
Zic | lazyPower: just saw your message, ACK, thanks for your work while travelling anyway... it's tough :) | 15:12 |
marcoceppi | xnox: to be honest we're moving to snaps for delivering kubernetes | 15:12 |
xnox | marcoceppi, renaming a resource breaks all deployments that use custom resource. | 15:13 |
marcoceppi | which handles architecture specific payloads (and adds security / process confinement) | 15:13 |
marcoceppi | xnox: well, we'd deprecate the kubernetes resource | 15:13 |
xnox | ack. | 15:13 |
marcoceppi | keep it, but deliver an empty blob | 15:13 |
marcoceppi | the charm would say: if the blob's empty, look for an arch-named resource key | 15:13 |
marcoceppi | that way we wouldn't break existing custom stuff | 15:14 |
marcoceppi | we can do that tastefully, but I think we're going to move away from charm resources | 15:14 |
xnox | about it being expensive: the tarball is currently 78MB if we explode it 3 times. It's ok if we download that but then remove unused arches. | 15:14 |
xnox | ./foo -> | 15:14 |
xnox | ./x86_64/foo | 15:15 |
xnox | ./s390x/foo | 15:15 |
xnox | ./ppc64el/foo | 15:15 |
xnox | that would keep the resource backwards compatible enough, e.g. look for ./<arch>/foo, and failing that take ./foo and run with it. | 15:15 |
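Sketches of both ideas from this exchange, written as hook-shell fragments; the resource names are hypothetical, and `resource-get` is Juju's hook tool that prints the local path of a fetched resource:

```bash
# (a) rick_h's per-arch resources, selected by an if/fallback in the hook:
ARCH=$(dpkg --print-architecture)                      # amd64, s390x, ppc64el...
TARBALL=$(resource-get "kubernetes-${ARCH}" || resource-get kubernetes)

# (b) xnox's single tarball with per-arch subdirs and a flat-layout fallback:
tar -xzf "$TARBALL" -C /tmp/kube
if [ -d "/tmp/kube/$(uname -m)" ]; then
    install -m 0755 /tmp/kube/"$(uname -m)"/* /usr/local/bin/
else
    install -m 0755 /tmp/kube/* /usr/local/bin/        # old top-level layout
fi
```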
Zic | lazyPower: do you have any recommendation for doing the fix's work manually while waiting for the fix? my customer/devs work on their minikube locally while the cluster is b0rken, so it's fine, but I need to do some testing myself (like connecting our k8s cluster to our Nagios) | 15:16 |
lazyPower | Zic - you can manually sync those files, but i'd rather have this run during a charm upgrade so it's consistent, if that's ok | 15:17 |
lazyPower | Zic - otherwise i'm encouraging you to make a snowflake | 15:17 |
Zic | lazyPower: bonus question: how can I explain to my customer that this problem was not discovered in your testing environment, given that for him the steps to reproduce are just a reboot? | 15:21 |
Zic | I don't want to put all the fault on your shoulders :/ | 15:21 |
lazyPower | Zic - we're in beta and HA testing hasn't been part of the release process. #honestanswer | 15:21 |
Zic | what can cause this in our environment and not in yours? | 15:21 |
lazyPower | Zic - we've been more focused on single master with in-place upgrades, once that's rock solid, we were going to do HA testing. | 15:22 |
lazyPower | you just happened to beat us to it :) | 15:22 |
Zic | \o/ | 15:22 |
lazyPower | Zic and thanks to that testing, we should get HA bits done before we're done with the upgrade testing that we're still working through. A lot of the plumbing is there | 15:22 |
lazyPower | but minor release updates are different from major release updates | 15:22 |
lazyPower | so we're working through *that* story this cycle, i.e. 1.5.x => 1.6.x | 15:23 |
stokachu | lazyPower, are you guys seeing errors like (ServerError(...), 'json: cannot unmarshal string into Go value of type uint64') | 15:27 |
stokachu | arosales, ^ | 15:27 |
arosales | stokachu: checking | 15:29 |
stokachu | arosales, lazyPower, can you guys also try with `sudo snap refresh conjure-up --edge` | 15:30 |
stokachu | i got some more debugging in there | 15:30 |
arosales | stokachu: not seeing that in my stack trace, where else should I look? | 15:30 |
stokachu | arosales, ~/.cache/conjure-up/conjure-up.log | 15:30 |
arosales | stokachu: not finding that string in my conjure-up.log | 15:33 |
stokachu | arosales, ok give that --edge version a go | 15:33 |
* arosales now going to try with "sudo snap refresh conjure-up --edge | 15:33 | |
stokachu | thanks | 15:33 |
Zic | hmm, what exactly does "juju enable-ha" do? Especially as I'm on manual-cloud provisioning with no MaaS, does it spin up a new juju controller machine? | 15:40 |
Zic | jcastro: hello, the k8s meeting we talked about last week is tomorrow, right? | 15:45 |
Zic | I should have time to come this time :) | 15:46 |
stokachu | arosales, how's it going? | 15:48 |
marcoceppi | Zic: on manual, nothing much; however if you `juju switch controller; juju add-machine <user>@<ip>; juju add-machine <user>@<ip>; juju enable-ha --to <machine-id1>,<machine-id2>` you can get HA for the controller. However, there are some caveats with the manual provider | 15:48 |
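The same recipe spelled out; the user and IPs are placeholders, and the machine IDs passed to enable-ha are the ones add-machine reports:

```bash
juju switch controller
juju add-machine ssh:ubuntu@10.0.0.11   # becomes machine 1
juju add-machine ssh:ubuntu@10.0.0.12   # becomes machine 2
juju enable-ha --to 1,2
```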
Zic | marcoceppi: thanks! was what I'm looking for :) | 15:49 |
arosales | stokachu: hitting a snap issue atm | 15:51 |
arosales | working through that | 15:51 |
stokachu | arosales, ok | 15:51 |
Zic | marcoceppi: I think we're going to deploy a MaaS anyway if we have other infras like the one we are preparing with CDK, as our interest in Juju will also be in OpenStack in the coming months | 15:52 |
marcoceppi | Zic: I think it'll help with a lot of the headaches you've had | 15:52 |
Zic | the main reason I didn't deploy a MaaS was because we have a kind of similar product developed internally, but with fewer features and for only one infra; even if MaaS has more features, it's a bit redundant (and our internal system is strongly tied to other tools we use here...) | 15:56 |
Zic | but as we're planning to deploy more CDK, and test Juju for OpenStack in the coming months... I will give it a try :) | 15:56 |
arosales | stokachu: got a stack at a different point now | 16:01 |
arosales | http://paste.ubuntu.com/23948408/ | 16:01 |
stokachu | arosales, paste ~/.cache/conjure-up/conjure-up.log | 16:01 |
SimonKLB | when reset: false is set in the bundletester yaml manifest, are charms already in the model supposed to be upgraded if the test is running a newer version of the charms, or not? | 16:02 |
SimonKLB | i suppose this is up to amulet? | 16:02 |
marcoceppi | Zic: sure, understood. If that system has an api, you could probably bridge the two, by adding your thing as a cloud provider, if you're comfortable with golang | 16:03 |
arosales | stokachu: http://paste.ubuntu.com/23948419/ | 16:03 |
arosales | stokachu: odd, it's calling out not registering a controller | 16:03 |
arosales | conjure should make one for me if I don't have one, but I'm going to bootstrap and try again just for a data point | 16:03 |
stokachu | arosales, yea it can't find your aws creds for some reason | 16:04 |
stokachu | do you have ~/.local/share/juju/credentials.yaml file? | 16:04 |
arosales | stokachu: http://paste.ubuntu.com/23948429/ <--- with a controller bootstrapped | 16:06 |
Zic | marcoceppi: Go is one of my "needed skills of 2017", as I'm only skilled in C today (and Bash/Python of course, but I do not place Go in the scripting category) | 16:06 |
stokachu | arosales, ok remove ~/.local/share/juju/credentials.yaml | 16:06 |
arosales | stokachu: and yes I have a ~/.local/share/juju/credentials.yaml file | 16:06 |
arosales | ? | 16:06 |
stokachu | arosales, look in that file: is one of the accounts for aws incomplete? | 16:07 |
arosales | I still get a stack when I conjure-up with a controller ready | 16:07 |
stokachu | i wonder if we're failing on that | 16:07 |
arosales | I bootstrapped just fine | 16:07 |
stokachu | arosales, pastebin ~/.cache/conjure-up/conjure-up.log | 16:07 |
arosales | http://paste.ubuntu.com/23948438/ | 16:08 |
stokachu | arosales, ah! ok that was the bug i was wrestling with | 16:08 |
stokachu | arosales, i pushed an update `sudo snap refresh conjure-up --edge` | 16:08 |
stokachu | there is an issue with the constraints parameters we're doing | 16:08 |
arosales | snap refresh conjure-up --edge | 16:09 |
stokachu | yea | 16:09 |
arosales | previously didn't give me an update | 16:09 |
arosales | said I was at the latest | 16:09 |
arosales | ran it again . . . | 16:09 |
stokachu | what about now? | 16:09 |
arosales | and looks to be downloading | 16:09 |
stokachu | arosales, should be rev56 | 16:09 |
arosales | snap refresh conjure-up --edge | 16:09 |
arosales | conjure-up (edge) 2.1.0 from 'canonical' refreshed | 16:09 |
arosales | any way to tell which rev? | 16:09 |
stokachu | whats snap list show? | 16:09 |
arosales | conjure-up 2.1.0 56 canonical classic | 16:09 |
stokachu | yea | 16:09 |
arosales | rev 56 | 16:09 |
stokachu | try that | 16:09 |
arosales | ok | 16:10 |
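The debugging steps used in this exchange, collected in one place; paths are conjure-up's and Juju's defaults:

```bash
tail -n 100 ~/.cache/conjure-up/conjure-up.log   # conjure-up's own log
less ~/.local/share/juju/credentials.yaml        # any incomplete aws entry?
sudo snap refresh conjure-up --edge              # pull the fixed build
snap list conjure-up                             # confirm the revision (56 here)
```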
Zic | marcoceppi: at first I preferred Rust to Go, as it's more like C (no garbage collector or runtime), but Go is everywhere today, so I'm beginning to train at it :) | 16:11 |
arosales | stokachu: good that it is now waiting for applications to deploy | 16:11 |
arosales | still getting an odd message about the size of my terminal though | 16:11 |
stokachu | arosales, ok good, yea need to debug the constraints | 16:12 |
arosales | but if the spell works . . . . | 16:12 |
stokachu | arosales, yea you gotta be 134x42 | 16:12 |
stokachu | arosales, are you using gnome-terminal? | 16:12 |
arosales | whatever ships with xenial | 16:12 |
arosales | gnome 3.18.3 | 16:13 |
stokachu | arosales, so if you go to the menu item 'Terminal' and select 132x43 | 16:13 |
stokachu | that's what we test the UI on | 16:13 |
stokachu | that size | 16:13 |
arosales | I def have a terminal > 132x43 | 16:13 |
stokachu | hmm ok i just tested it and it works as expected | 16:14 |
stokachu | arosales, those are rows and columns | 16:14 |
stokachu | nothing to do with the resolution | 16:14 |
arosales | but I did go ahead and try to select Terminal --> 132x43 and then `conjure-up bigdata` and I still get the warning | 16:15 |
arosales | stokachu: no stack and waiting for the deploy though . . . | 16:15 |
stokachu | arosales, ok cool | 16:15 |
arosales | stokachu: got to relocate | 16:19 |
arosales | but looks to be coming up | 16:19 |
stokachu | arosales, ill be here | 16:19 |
stokachu | arosales, cool thanks | 16:19 |
arosales | not sure how conjure will handle a suspend | 16:19 |
arosales | we'll see | 16:19 |
arosales | getting kicked out of this room | 16:19 |
stokachu | are you mid deploy? | 16:19 |
arosales | thanks for the speedy work on the bug | 16:19 |
stokachu | np | 16:19 |
arosales | yes deploying to aws atm. . . . | 16:19 |
stokachu | ok yea that'd be interesting to see | 16:20 |
cholcombe | gnuoy, i'm having trouble finding the ceph nrpe scripts. Do you know where they are? | 16:46 |
lutostag | cory_fu: kwmonroe: any chance I can get write access/collab to https://github.com/juju/juju-crashdump when you all get a chance? | 17:14 |
cory_fu | lutostag: You should already have admin access. Let me get you the invite link again | 17:15 |
cory_fu | lutostag: https://github.com/juju/juju-crashdump/invitations | 17:16 |
lutostag | cory_fu: indeed, thanks! | 17:17 |
catbus1 | stokachu: conjure-down command not found, how do I destroy the deployment? | 17:55 |
=== frankban is now known as frankban|afk | ||
mmcc | catbus1: all conjure-down does is 'juju destroy-model -y' under the hood. If you can tell which model it created, you can destroy it yourself and nothing else needs to happen | 18:02 |
catbus1 | mmcc: got it, thanks. | 18:03 |
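What that amounts to in practice; the model name is whatever conjure-up generated:

```bash
juju models                          # find the model conjure-up created
juju destroy-model -y <model-name>   # what conjure-down runs under the hood
```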
=== frankban|afk is now known as frankban | ||
=== frankban is now known as frankban|afk | ||
hml | help please: juju ssh nova-compute/0 is failing with an ssh key failure… however, running ssh-keygen to remove the offending line can’t find the ssh_known_hosts file listed. | 19:36 |
hml | where would it reside? | 19:37 |
=== scuttle|afk is now known as scuttlemonkey | ||
stokachu | catbus1, you install via snap? | 19:43 |
catbus1 | stokachu: no, via apt, added ppa:conjure-up/next first. 16.04 | 19:43 |
catbus1 | stokachu: the snap install with the --classic flag doesn't work, it says --classic flag is unknown | 19:44 |
stokachu | catbus1, ok if you remove that package and do `sudo snap install conjure-up --classic --beta` | 19:44 |
stokachu | catbus1, you need to be using snap 2.21 | 19:44 |
stokachu | which is in the xenial-updates | 19:44 |
catbus1 | hm, snap is not installed on the node. | 19:45 |
catbus1 | ok, I will remove conjure-up and start from snap. | 19:45 |
stokachu | catbus1, thanks | 19:45 |
catbus1 | stokachu: to confirm, by snap 2.21, you mean 'snap' package, or 'snapd'? I do have snapd and snap-confine 2.21 installed. | 19:47 |
stokachu | catbus1, yea snapd version 2.21 | 19:47 |
catbus1 | weird, snap install conjure-up with --classic flag works now. | 19:49 |
stokachu | catbus1, :) | 19:50 |
catbus1 | stokachu: conjure-up is installed fine. Will redeploy openstack soon. | 19:52 |
stokachu | catbus1, \o/ | 19:52 |
stokachu | catbus1, ill be around lemme know how it goes | 19:52 |
catbus1 | stokachu: Thank you! | 19:52 |
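The install path that worked here, per stokachu; on 16.04 the --classic flag needs snapd 2.21+ from xenial-updates:

```bash
sudo apt remove conjure-up              # drop the old deb from the PPA
sudo apt install snapd                  # then upgrade it if still below 2.21
snap version                            # confirm snapd >= 2.21
sudo snap install conjure-up --classic --beta
```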
catbus1 | stokachu: Running 'conjure-up' says -bash: /usr/bin/conjure-up: No such file or directory | 21:16 |
catbus1 | conjure-up 2.1.0 56 canonical classic | 21:17 |
catbus1 | it works on another machine. reinstalling | 21:29 |
stokachu | catbus1, you probably need to log out and back in | 21:37 |
stokachu | if you previously apt removed the deb package | 21:37 |
catbus1 | got it | 21:38 |
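The "No such file or directory" symptom is typically bash's command hash still pointing at the removed /usr/bin/conjure-up; logging out and back in clears it, or it can be cleared in place:

```bash
hash -r              # forget cached command locations
type conjure-up      # should now resolve to /snap/bin/conjure-up
```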
stormmore | anyone played with running Nexus3 in a container? | 22:56 |