=== thumper is now known as thumper-dogwalk | ||
huhaoran | hey, charmers, my juju seems to be stuck, is there some way to restart juju itself? | 02:58 |
=== thumper-dogwalk is now known as thumper | ||
magicaltrout | I'VE FINALLY FOUND A DONALD TRUMP SUPPORTER! | 03:43 |
magicaltrout | my life is complete | 03:43 |
Mian | Hi, does anyone here know something about the Xenial version of mongodb charm? it's not available in the charm store right now | 04:09 |
Mian | is there a schedule or calendar as to when we will release mongodb charm on Xenial? appreciate any advice/clue on it | 04:10 |
magicaltrout | there was some movement on it not too long ago | 04:11 |
magicaltrout | but clearly hasn't landed | 04:11 |
magicaltrout | it needs rewriting and updating | 04:11 |
magicaltrout | i think marcoceppi was hacking on it | 04:11 |
Mian | I'd like to deploy OpenStack entirely on the Xenial release; Ceilometer has a Xenial charm, but it would be great if the underlying mongodb charm were available too | 04:12 |
Mian | magicaltrout++ | 04:15 |
magicaltrout | can't help you directly, but if you ask on the juju mailing list you'd get a response | 04:15 |
magicaltrout | it appears the lazy folk aren't around, but on the mailing list it gives them a chance for an offline response and also registers some interest in making it happen | 04:15 |
Mian | appreciate your advice :), that's what I need and it's helpful | 04:16 |
Mian | will inquire on the mailing list | 04:17 |
magicaltrout | no problem | 04:17 |
=== fginther is now known as fginther|away | ||
=== fginther|away is now known as fginther | ||
=== menn0 is now known as menn0-afk | ||
pragsmike | greets | 06:12 |
marcoceppi | o/ pragsmike | 06:24 |
marcoceppi | Mian: I have the start of a new Xenial MongoDB charm, I'm just unable to finish it | 06:25 |
pragsmike | morning. or is it evening? | 06:25 |
marcoceppi | it's morning where I am | 06:25 |
pragsmike | back home yet? | 06:25 |
pragsmike | i'm just writing up my notes from the charmer's summit | 06:26 |
pragsmike | it's taking me days to unpack all that | 06:26 |
Mian | marcoceppi: thanks for this update, it's great to know that we're working on it | 06:26 |
Mian | marcoceppi: A customer is trying to deploy with MongoDB on the Xenial series in their lab; it would be even better if we had an estimate of when the Xenial MongoDB charm might be released :) | 06:28 |
=== redir is now known as redir-afk | ||
marcoceppi | Mian: well, I have 0 time to work on it, despite my desire to build a better mongodb charm. If someone wants to push it the rest of the way, it's on GH https://github.com/marcoceppi/layer-mongodb | 06:29 |
Mian | marcoceppi: I can understand it :) | 06:30 |
Mian | I guess encouraging the customer to get involved in tinkering with it in their lab might be a win-win strategy if they want it | 06:35 |
Mian | marcoceppi: thanks very much for this update | 06:35 |
junaidali | has the set-config option changed in juju 2.0rc1? I'm not able to set config using the juju set-config command | 06:42 |
junaidali | also juju get-config is not working | 06:42 |
junaidali | got it.. the command is now $ juju config | 06:43 |
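A minimal sketch of the renamed command junaidali found in juju 2.0; the application name "myapp" and the option names are placeholders:

```
# juju 2.0 folds set-config/get-config into a single `juju config` command
juju config myapp                  # show all config options for an application
juju config myapp some-option      # read a single option
juju config myapp some-option=val  # set an option
```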
pragsmike | junaidali: reading the scrollback it looks like you were having the same problem with container networking that i am | 06:49 |
junaidali | yes pragsmike, rc1 has the fix. | 06:50 |
pragsmike | hey, released 4 hours ago! I'm going to go try it! | 06:51 |
=== frankban|afk is now known as frankban | ||
=== menn0-afk is now known as menn0 | ||
pragsmike | hot damn! The openstack bundle is deploying with containers on the correct network, for the first time on my NUC cluster! Thanks to all! | 08:11 |
magicaltrout | boom! | 08:19 |
marcoceppi | pragsmike: nice! | 08:21 |
pragsmike | Now I can sleep :) | 08:31 |
pragsmike | I find that I have to go kick a container or two on each deploy, as they get stuck "pending", but logging into the container host and doing lxc stop / lxc start fixes that | 08:34 |
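A rough sketch of the workaround pragsmike describes for containers stuck in "pending"; the machine number and container name below are placeholders, not taken from the log:

```
juju status                          # spot the container stuck in "pending"
juju ssh 1                           # log into the machine hosting that container
lxc list                             # find the container's name, e.g. juju-xxxxxx-1-lxd-0
lxc stop juju-xxxxxx-1-lxd-0
lxc start juju-xxxxxx-1-lxd-0
```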
=== jamespag` is now known as jamespage | ||
fernari | Hello all! Quite new charmer here and one question came to mind while writing my own charms | 11:16 |
fernari | I am using reactive charms and building my custom software and its dependencies with juju | 11:16 |
fernari | one key dependency is consul and I've managed to build a base layer which configures the consul and redis cluster correctly | 11:17 |
fernari | then i'm using that layer on my actual charm and that works correctly, let's call that "control-software" | 11:18 |
fernari | Now I have a problem as "client-software" needs to join the consul cluster and I can't get the consul-layer's peer information out which I need to correctly join the existing cluster | 11:19 |
fernari | I'm trying to get the related units like this: cluster_rid = hookenv.relation_ids('control') | 11:22 |
fernari | that returns nothing on the client-charm and cluster:0 on the control-charm and after that I'm able to get ip-addresses of the peers | 11:23 |
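For context, a minimal sketch of the charmhelpers calls fernari is describing; note that relation_ids only returns relations the running unit actually participates in, which is why the client charm sees nothing for a peer relation declared in another charm's layer. Names here are illustrative:

```python
from charmhelpers.core import hookenv

def peer_addresses(relation_name='control'):
    """Collect the private addresses of units on the named relation."""
    addresses = []
    for rid in hookenv.relation_ids(relation_name):   # empty list on the client charm
        for unit in hookenv.related_units(rid):
            addr = hookenv.relation_get('private-address', unit=unit, rid=rid)
            if addr:
                addresses.append(addr)
    return addresses
```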
marcoceppi | fernari: howdy! it might help if you pastebin (paste.ubuntu.com or gist.github.com) some of the metadata.yaml files for your control-software and client (redacted of sensitive information) | 11:34 |
fernari | marcoceppi: cheers mate! http://paste.ubuntu.com/23211107/ | 11:50 |
fernari | I think that the problem is that I am building the peer-relationship on the layer below the "control-plane" | 11:51 |
marcoceppi | fernari: so, peers, as I imagine you've found, are just for the units deployed from your charm. So if you have three units of control-plane, they'll all talk to each other | 11:51 |
marcoceppi | fernari: you'll also need to "provide" the raft interface, so it can be connected to worker/client | 11:51 |
fernari | yep | 11:51 |
marcoceppi | and in doing so, you can publish on the relation wire the data you need for it to join the cluster | 11:52 |
marcoceppi | fernari: http://paste.ubuntu.com/23211109/ | 11:53 |
marcoceppi | something like that, you can't use the relation name "control" again, since it's already declared as a peer, but if you can think of another name for the relation replace my placeholder with it | 11:53 |
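A hedged reconstruction of the kind of metadata.yaml marcoceppi is suggesting; the charm name and the "cluster-join" relation name are placeholders rather than the contents of his paste, while the "raft" interface comes from the conversation above:

```yaml
name: control-software
peers:
  control:               # peer relation: control-software units talk to each other
    interface: raft
provides:
  cluster-join:          # placeholder name; "control" can't be reused here
    interface: raft      # client-software relates to this to learn how to join
```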
fernari | Right, I'll try that! | 11:54 |
fernari | Learning curve for those relations is quite steep, but slowly getting forward :) | 11:55 |
marcoceppi | fernari: it's the most powerful aspect of juju, but also one of the hardest to wrap a head around | 11:58 |
marcoceppi | fernari: let us know if we can be of any additional assistance! | 11:58 |
fernari | sure thing :) | 11:59 |
=== petevg_afk is now known as petevg | ||
kjackal | kwmonroe: cory_fu: The hadoop bundle deployed on trusty seems pretty stable http://imgur.com/a/amvac. Let's talk at the sync about what we're going to do with it | 14:06 |
jamespage | marcoceppi, hey - could I request a release of charm-helpers? | 14:37 |
jamespage | I've pushed a few changes to support use of application-version across our OpenStack charms, and need to work on pulling the same feature into reactive charms | 14:37 |
rick_h_ | jamespage: he was on site at a customer thing this morning, so it might be best to go with email on that | 14:37 |
jamespage | rick_h_, ack | 14:38 |
jamespage | marcoceppi, alternatively I'd be happy to RM charm-helpers releases if you wanna give me perms | 14:38 |
geetha | Hi, the `juju attach` command is taking a long time to upload resources on an s390x machine. Can anybody please suggest what the issue might be here? | 15:17 |
kwmonroe | hi geetha -- how large is the resource? | 15:22 |
rick_h_ | geetha: just the normal networking type issues one might see tbh. | 15:24 |
geetha | kwmonroe: Resource size is 1.5 GB. But on an x86 machine it doesn't take that much time. | 15:25 |
junaidali | Hey guys, how can we set an install_key in juju? I want to pass a GPG key. I've tried `juju set <charm-name> install_keys="$(cat GPG-KEY)"`, and also tried using a yaml file, but it errors out with "yaml.scanner.ScannerError: mapping values are not allowed here" | 15:26 |
rick_h_ | geetha: can/have you tested the network between the different machines? maybe checking mb/s on a curl or rsync or something? | 15:27 |
junaidali | I'm able to manually add the GPG key using the apt-key add command on the node | 15:27 |
rick_h_ | junaidali: what charm is this? | 15:28 |
junaidali | Hey rick_h_, its a plumgrid-charm http://bazaar.launchpad.net/~junaidali/charms/trusty/plumgrid-director/trunk/files | 15:30 |
junaidali | install_key config is available in charmhelpers | 15:31 |
junaidali | so instead of writing my own code, i'm using charmhelper to parse and add the key | 15:31 |
junaidali | rick_h_, fyi PLUMgrid is an SDN solution provider | 15:33 |
rick_h_ | junaidali: hmm, yea so it's a string, not sure what the command turns into with newlines/etc | 15:33 |
geetha | rick_h_: I kept the resources on the same host machine. | 15:33 |
geetha | EX: juju attach ibm-was-base ibm_was_base_installer=/root/repo/WAS_BASE/was.repo.9000.base.zip | 15:35 |
rick_h_ | geetha: ? the client is on the same machine as the controller? | 15:35 |
junaidali | this is what 'juju get' outputs for install_keys when i pass the value using a yaml file: http://paste.ubuntu.com/23211867/ | 15:38 |
kjackal | kwmonroe: how did you find out about this healthcheck resource manager is doing on the namenode? Is it on the RM's logs? | 15:38 |
junaidali | rick_h_: this is when i pass the value using the actual GPG key file ' juju set <charm> install_keys="$(cat GPG-KEY)"' http://paste.ubuntu.com/23211872/ | 15:39 |
kjackal | kwmonroe: I was thinking about the other thing that you said about not timing the start of the service correctly. That should affect trusty as well, but since I do not see it now, perhaps there is some timing issue there that is not always present | 15:40 |
rick_h_ | beisner: do you recall who worked on that charm and any idea how to package that up for the key to go through there? ^ | 15:40 |
kwmonroe | kjackal: i learned about it here: http://johnjianfang.blogspot.com/2014/10/unhealthy-node-in-hadoop-two.html and then i saw in the nodemanager logs stuff like "received SIGTERM, shutting down", which seems to be what happens when the RM detects that the NM is unhealthy. | 15:41 |
kjackal | kwmonroe: thanks, interesting | 15:42 |
kwmonroe | kjackal: and "unhealthy" means it either failed to respond to a heartbeat, or it violated resource constraints (like "using 2.6GB of 2.1GB vmem", which i also saw in the NM logs) | 15:42 |
kjackal | kwmonroe: Ah, I see where you are going with that | 15:43 |
kwmonroe | kjackal: so i hypothesized that either the network connectivity was failing the heartbeat, or our yarn.nodemanager.vmem-check-enabled: false setting was not taking effect when the NM started up (hence failing the resource constraint) | 15:44 |
geetha | rick_h_, We are using local lxd provider. | 15:44 |
kwmonroe | kjackal: both of which seem to be resolved by rebooting the slave | 15:44 |
kwmonroe | geetha: what does 'juju version' say? | 15:45 |
geetha | kwmonroe: Tried with beta-15, beta-18 and now using juju 2.0-rc1. | 15:48 |
kwmonroe | ok geetha, but what does the actual output return? for example, mine is: | 15:50 |
kwmonroe | $ juju version | 15:50 |
kwmonroe | 2.0-beta18-xenial-amd64 | 15:50 |
geetha | It's 2.0-rc1-xenial-s390x | 15:51 |
kwmonroe | cool geetha -- i just wanted to double check the series and arch were xenial-s390x | 15:52 |
beisner | hi rick_h_ /me searches backscroll for context of 'the charm' | 15:53 |
geetha | ok kevin :) | 15:54 |
kwmonroe | geetha: i'm not sure what would be causing a significant slowdown on s390x vs x86. 1.5GB is a really large resource though.. how long has the 'juju attach' been running? | 15:55 |
beisner | rick_h_, i can't tell where the code repo is from the cs: charm pages for plumgrid. hence, not able to view authors/contributors. sorry i don't have more info on that. | 15:57 |
kwmonroe | hey dooferlad -- you had the good insights into resources being slow prior to beta-12 (https://bugs.launchpad.net/juju/+bug/1594924). can you think of any reason why geetha's 1.5GB resource would take significantly longer to attach on s390x vs x86? (both using 2.0-rc1 in lxd) | 15:58 |
mup | Bug #1594924: resource-get is painfully slow <2.0> <resources> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1594924> | 15:58 |
geetha | I have 5 resources to be attached as part of the WAS deployment; 1.5GB is the largest one and it's taking more than 15 min, and the other resources are around 1 GB each. In total it's taking around 1 hr for deployment. | 16:02 |
kwmonroe | geetha: and how long does the same deploy typically take on x86? | 16:02 |
geetha | On x86, I have tried with beta-18 and it's taking around 20 min for deployment. | 16:06 |
=== CyberJacob is now known as zz_CyberJacob | ||
junaidali | Hi beisner: the value from install_keys is actually handled by charmhelpers. But passing a value gives an error whether I set it using a yaml file or using the key file itself. It seems like either there is another way of setting the value, or charmhelpers should have a better way of handling this. | 16:09 |
junaidali | for a multi-line value, i usually use a yaml file to set a config but that isn't working in the present case. | 16:10 |
kwmonroe | gotcha geetha.. so 1.5GB in 15 minutes is roughly 13Mbps. perhaps the s390x is not tuned very well to xfer that much data over the virtual network or to dasd. can you benchmark the s390x as rick_h_ suggested? perhaps try to do a normal scp of the local resource to the controller's /tmp directory. you can find the controller ip with "juju show-status -m controller", and then "scp </path/to/resource.tgz> ubuntu@<controller-ip>/tmp" | 16:12 |
beisner | junaidali, unfortunately, i'm not familiar with those specific charms or using that config option with them. | 16:12 |
junaidali | thanks beisner, np. I will try again to figure out a way. | 16:13 |
kwmonroe | geetha: forgot a colon in that scp.. it would be something like "scp resource.tgz ubuntu@1.2.3.4:/tmp" | 16:14 |
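Putting kwmonroe's benchmark suggestion together, with the colon fix; the resource path is the one from geetha's earlier example and the controller IP is a placeholder:

```
juju show-status -m controller                  # note the controller machine's IP
scp /root/repo/WAS_BASE/was.repo.9000.base.zip ubuntu@<controller-ip>:/tmp/
# compare the observed transfer rate with the ~13 Mbps implied by `juju attach`
```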
beisner | junaidali, i see install_keys is a string type option, so it should be usable in a bundle yaml by passing a string value. what that string value should be, i'm not sure. | 16:14 |
junaidali | beisner, key usually looks like this one http://paste.ubuntu.com/23212091/ but fails at this line in charmhelpers http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/__init__.py#L120 | 16:19 |
geetha | ok kevin, let me try that. | 16:20 |
beisner | junaidali, so are you saying this doesn't work? http://paste.ubuntu.com/23211872/ | 16:23 |
junaidali | yes, that was the juju get command output | 16:24 |
beisner | junaidali, is there a specific error you're seeing? | 16:26 |
junaidali | let me share the output with you | 16:27 |
junaidali | http://paste.ubuntu.com/23212121/ | 16:28 |
junaidali | beisner: http://paste.ubuntu.com/23212121/ | 16:28 |
thedac | This feels like a quoting problem | 16:29 |
thedac | stub: if you are around do you know the proper way to send a GPG key for charmhelpers.fetch.configure_sources? | 16:29 |
thedac | the yaml load is choking on the ':' in Version: GnuPG v1.4.11 (GNU/Linux) | 16:30 |
thedac | junaidali: I can tell you add_source has had much more developer attention than configure_source. It might be a big ask but moving to add_source is probably the best option. http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/ubuntu.py#L212 | 16:38 |
thedac | meaning using add_source directly rather than configure_sources. | 16:40 |
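A rough sketch of what thedac is suggesting, calling add_source directly so the ASCII-armored key never has to pass through yaml parsing; it assumes the charm's install_sources and install_keys options hold plain strings, and it is not the actual plumgrid charm code:

```python
from charmhelpers.core import hookenv
from charmhelpers.fetch import add_source, apt_update

cfg = hookenv.config()
# add_source accepts a full armored key directly and passes it to apt-key
add_source(cfg['install_sources'], key=cfg['install_keys'])
apt_update()
```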
=== frankban is now known as frankban|afk | ||
CorvetteZR1 | hello. i'm trying to do juju bootstrap --upload-tools using a xenial image but get connection refused on port 22 | 16:54 |
CorvetteZR1 | google turned up some old posts with people having similar issues, but i haven't found any solution | 16:54 |
CorvetteZR1 | i can see the container is running, but bootstrap can't auth using ssh key-auth. any suggestions? | 16:55 |
=== redir-afk is now known as redir | ||
cory_fu | @kjackal Per your comment on https://github.com/juju-solutions/layer-apache-kafka/pull/13 I realized that I did it the other way for Apache Zeppelin, and I'm leaning toward changing Apache Kafka to have a single, arch-independent resource, since we expect it to work on most platforms. If you want to deploy to a platform that requires a custom build for some reason, then you would have to attach it at deploy time. Seem reasonable? | 19:05 |
cory_fu | kwmonroe: ^ | 19:05 |
cory_fu | Downside is if we find a platform that we *do* want to support that always requires a custom build, but the upside is that we are more optimistic about platforms and open it up to deploy on, e.g., Power | 19:06 |
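A hedged sketch of the deploy-time attach cory_fu mentions; the charm name, resource name, and path are placeholders:

```
juju deploy apache-kafka --resource kafka=./kafka-custom-build.tgz   # supply a custom build at deploy time
juju attach apache-kafka kafka=./kafka-custom-build.tgz              # or swap it in afterwards
```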
kjackal | ok cory_fu | 20:18 |
kjackal | cory_fu: sorry for the delay, I am in the scala world | 20:18 |
kwmonroe | kjackal: cory_fu and i chatted about layer-hadoop-client (https://github.com/juju-solutions/layer-hadoop-client/pull/10). i think we've agreed that the silent flag is no longer needed, and layer-hadoop-client will *only* report status when deploying the 'hadoop-client' charm. since you authored the 'silent' flag in the first place, care to weigh in? | 20:40 |
cory_fu | ha, I already weighed in by merging it. :p If you have a -1, we can talk about reverting it | 20:41 |
kjackal | cory_fu: kwmonroe: looks good! Thank you | 20:42 |
kwmonroe | lol cory_fu.. i think the only place we'd be exposed is if we had a charm that included layer-hadoop-client and did not set the 'silent' flag. that charm might rely on hadoop-client to set the 'waiting for plugin' status | 20:42 |
cory_fu | Except that it should now be optional anyway, so it's moot | 20:42 |
kwmonroe | your optimism inspires me | 20:44 |
kwmonroe | (and i was talking about plugin, not java, which is not optional for hadoop-client) | 20:45 |
cory_fu | :) | 20:45 |
cory_fu | Oh, the plugin | 20:45 |
cory_fu | Huh | 20:45 |
cory_fu | Yeah, that's probably going to bite us. | 20:45 |
kwmonroe | well, i did some scouring, and all the charms that include hadoop-client have something like this: https://github.com/juju-solutions/layer-apache-flume-hdfs/blob/master/reactive/flume_hdfs.py#L8 | 20:45 |
cory_fu | But I think those charms should be fixed. I think status messages should really only be set in the charm layer | 20:45 |
kwmonroe | so they're doing their own blocking if/when hadoop isn't ready | 20:46 |
cory_fu | Well, that's nice | 20:46 |
kwmonroe | and when i say "i did some scouring", i mean i grepped the 5 charms on my laptop. | 20:46 |
kwmonroe | but i'm sure the other 50 are fine | 20:46 |
kwmonroe | optimism. | 20:47 |
=== natefinch is now known as natefinch-afk | ||
theoeprator | struggling to bootstrap, log here: http://paste.ubuntu.com/23213182/ | 21:04 |
surtin | anyone around that can explain why quickstart gives me an error that it can't find the default model, even though it's clearly there? running 16.04. just trying to install landscape-scalable | 21:20 |
surtin | just end up with juju-quickstart: error: environment land-test:admin@local/default not found | 21:23 |
theoeprator | bueller? | 21:24 |
surtin | and when i first ran the command i got juju-quickstart: error: error: flag provided but not defined: -e | 21:24 |
tvansteenburgh | surtin: you don't need quickstart, just `juju deploy landscape-scalable` | 21:25 |
petevg | theoeprator: it looks like juju is having trouble talking to the controller -- see the error connecting to port 22 in the first few lines of your log. If you do "lxc list", what do you get in response? | 21:26 |
surtin | alright | 21:26 |
theoeprator | theoperator@hype:~$ lxc list | 21:26 |
theoeprator | +-------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:26 |
theoeprator | | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | | 21:26 |
theoeprator | +-------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:26 |
theoeprator | | admin | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0 | | 21:26 |
theoeprator | +-------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:26 |
theoeprator | | vpn | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0 | | 21:26 |
theoeprator | +-------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:26 |
theoeprator | theoperator@hype:~$ | 21:26 |
theoeprator | those are both containers I set up separately | 21:27 |
surtin | hmm nope, says the charm or bundle not found. guess the instructions on the page are outdated or something | 21:28 |
petevg | theoeprator: It looks like juju wasn't able to create the container. Are you running out of disk space? Do you have any interesting messages in /var/log/lxd/lxd.log? | 21:29 |
tvansteenburgh | surtin: which page? | 21:29 |
surtin | https://help.landscape.canonical.com/LDS/JujuDeployment16.06 | 21:29 |
tvansteenburgh | surtin: dpb1 is the landscape guy, he might be able to help. i'm not sure which bundle you should use | 21:30 |
tvansteenburgh | surtin: the only one i see in the store is this https://jujucharms.com/q/landscape?type=bundle | 21:30 |
surtin | yeah i saw that as well | 21:31 |
dpb1 | surtin: http://askubuntu.com/search?q=deploy+landscape+bundle+andreas | 21:31 |
dpb1 | blah | 21:31 |
dpb1 | surtin: http://askubuntu.com/questions/549809/how-do-i-install-landscape-for-personal-use | 21:31 |
dpb1 | that one | 21:31 |
dpb1 | notice the higher rated answer. talks about using juju as well. | 21:32 |
theoeprator | nothing interesting in the lxd log. all just lvl=info stuff | 21:33 |
surtin | ok will try it out | 21:33 |
surtin | ty | 21:34 |
theoeprator | i don’t think I’m out of disk space, i don’t have any quotas setup and this is a clean box with ~2TB free | 21:34 |
petevg | surtin: are you using juju 1, or juju 2? (juju --version should tell you). | 21:34 |
theoeprator | the last error on the bootstrap output is 2016-09-21 21:34:20 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.0.1:8443/1.0: Unable to connect to: 192.168.0.1:8443 | 21:35 |
theoeprator | where’s it getting that 192.168.0.1 from? | 21:35 |
theoeprator | that’s not the address of the host | 21:35 |
theoeprator | it appears to have the container running at one point, as it runs several apt commands inside it | 21:37 |
petevg | theoeprator: it looks like your lxd machines are set up to use your physical network, rather than a private network. Is that something you set up when you created the other lxd machines? | 21:39 |
petevg | We're moving beyond my grade level as far as juju and lxd internals go, but juju may be getting confused by that network config. | 21:41 |
theoeprator | sorry, wifi failed | 21:45 |
petevg | theoeprator: no worries. Did you see my last message? (I was puzzled to see lxd machines talking directly to what looks like your physical network, rather than using virtual ips, and asked whether that was something that you had setup when you setup the other containers.) | 21:46 |
theoeprator | yep, the containers are bridged to the host’s network | 21:47 |
petevg | theoeprator: that might be what's breaking things. We're unfortunately moving beyond the realm of things that I know a lot about, but juju typically expects to be able to herd its machines about on a private/virtual network, and then selectively expose them to the real world. | 21:48 |
petevg | I could be very wrong, though ... you might try bootstrapping again, and watching "lxd list", to see if the machine gets created. | 21:48 |
petevg | theoeprator: another possibility is that you've setup your router to block port 22 -- do you have strict security rules setup for your local network? | 21:49 |
petevg | *lxc list, I mean. | 21:50 |
theoeprator | running bootstrap again, the container is up right now | 21:51 |
theoeprator | theoperator@hype:~$ lxc list | 21:51 |
theoeprator | +---------------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:51 |
theoeprator | | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | | 21:51 |
theoeprator | +---------------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:51 |
theoeprator | | admin | RUNNING | 192.168.0.36 (eth0) | fd00:fc:8df3:eb62:216:3eff:feae:96f2 (eth0) | PERSISTENT | 0 | | 21:51 |
theoeprator | +---------------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:52 |
theoeprator | | juju-f71b38-0 | RUNNING | 192.168.0.41 (eth0) | fd00:fc:8df3:eb62:216:3eff:fe20:8e53 (eth0) | PERSISTENT | 0 | | 21:52 |
theoeprator | +---------------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:52 |
theoeprator | | vpn | RUNNING | 192.168.0.29 (eth0) | fd00:fc:8df3:eb62:216:3eff:fef0:29c9 (eth0) | PERSISTENT | 0 | | 21:52 |
theoeprator | +---------------+---------+---------------------+---------------------------------------------+------------+-----------+ | 21:52 |
theoeprator | theoperator@hype:~$ | 21:52 |
theoeprator | now got the same error as before and the container is gone again | 21:52 |
blahdeblah | theoeprator: Please use pastebin.ubuntu.com for more than 2 lines | 21:53 |
theoeprator | my apologies | 21:53 |
blahdeblah | No problem - it just makes it a bit noisy for those monitoring a lot of channels | 21:53 |
petevg | theoeprator: hmmm ... my guess is that juju is having trouble with the network settings, but I'm not sure quite what. If I were troubleshooting, my next step would be to ssh into the container and check out the logs (check /var/log/juju). | 22:01 |
petevg | theoeprator: failing that, you might try posting to the mailing list -- someone who knows more than I do should get back to you. | 22:04 |
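A sketch of the troubleshooting petevg suggests, assuming the bootstrap container sticks around long enough to inspect; the container name is the one from the `lxc list` output above:

```
lxc exec juju-f71b38-0 -- bash               # get a shell in the bootstrap container
tail -n 100 /var/log/cloud-init-output.log   # did cloud-init / apt finish cleanly?
ls /var/log/juju/ && tail -n 100 /var/log/juju/*.log
```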
theoeprator | ok, thanks Pete | 22:05 |
petevg | you're welcome! Sorry that I didn't have an immediate fix for you. | 22:07 |
surtin | petevg: juju 2 | 22:24 |
petevg | surtin: you def. want to avoid quickstart then. It's a juju 1 tool. Sounds like those instructions are in need of an update ... | 22:37 |
surtin | yeah seems that way | 22:52 |
magicaltrout | definitely going to get to play with some Juju stuff @ JPL and DARPA after a few meetings this week | 22:58 |
magicaltrout | might also get some buy-in for JPL to extend Marathon for LXC/LXD support, although that's a longer shot | 22:58 |
petevg | magicaltrout: awesome news :-) | 23:47 |