[13:36] Any reason a charm would show an error in juju-gui but not on the command line? I reboot the juju-gui machine and it goes away for a while and then comes back.
[13:39] bleepbloop: what error?
[13:41] rick_h_: 1 hook failed: "shared-db-relation-changed" for the mysql/0 charm; however, running juju resolved --retry mysql/0 gives back ERROR unit "mysql/0" is not in an error state
[13:42] bleepbloop: hmm, no. The GUI gets error statuses by asking juju about things. If you reload the juju-gui (just reload the browser window) it'll re-ask juju.
[13:43] bleepbloop: if it keeps coming back then it would seem Juju is telling one thing to the GUI and another at the CLI.
[13:45] rick_h_: I just completely closed the tab and reopened it in my browser, and tried incognito mode, and both still report the same error in the GUI, so it seems to be reporting two different things. Is there a way to debug and see what is being reported to the GUI?
[13:46] bleepbloop: what browser are you using?
[13:46] rick_h_: chrome, tried safari but the page wouldn't load
[13:46] bleepbloop: so in chrome, if you open the developer tools (ctrl-shift-j in ubuntu)
[13:47] bleepbloop: and go to the network tab, there's a filter icon that allows you to filter all traffic by a type. You're looking for "WebSockets"
[13:47] bleepbloop: once you have that selected, reload the page with ctrl-r and you should see a single item there. That lets you investigate the data juju is sending the GUI.
[13:53] rick_h_: okay, I see where it's sending "Data":{"hook":"shared-db-relation-changed","relation-id":54,"remote-unit":"nova-compute-lxc/1"}},"AgentStatus":{"Err":null,"Current":"idle","Message":"","Since":"2015-06-24T13:06:53Z","Version":"","Data":{}}}]
[13:53] bleepbloop: ok, so that looks like no error there
[13:54] bleepbloop: so the thing is, if it comes up ok, to watch that, because it's a continuous live-updating channel from juju to the GUI
[13:54] bleepbloop: and see if/when something comes in from Juju that makes the GUI think an error is there
[13:54] rick_h_: Sorry, that was the data on the data element on "WorkloadStatus":{"Err":null,"Current":"error","Message":"hook failed: \"shared-db-relation-changed\""
[13:54] bleepbloop: ah yea, so there Juju is telling the GUI that the hook failed
[13:57] rick_h_: might removing the relation that is giving the error and re-adding it help?
[13:58] bleepbloop: possibly
[14:01] rick_h_: okay, a couple points of interest in the mysql charm log: "juju-log shared-db:66: This charm doesn't know how to handle 'shared-db-relation-joined'." and "Access denied for user 'root'@'localhost' (using password: YES)"
[14:02] bleepbloop: hmm, yea not sure on the mysql charm. I've not used it myself.
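A minimal CLI sketch of the workflow discussed above, for reference (the unit log path assumes the standard juju 1.x layout; <other-service> is a placeholder for whichever service holds the shared-db relation):

    # Compare what the CLI reports with what the GUI shows
    juju status mysql
    # Inspect the unit's log for the failing hook (standard 1.x log path)
    juju ssh mysql/0 "tail -n 100 /var/log/juju/unit-mysql-0.log"
    # Retry the failed hook once the underlying problem is fixed
    juju resolved --retry mysql/0
    # If the relation itself is suspect, drop it and re-add it
    juju destroy-relation mysql <other-service>
    juju add-relation mysql <other-service>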
[14:05] rick_h_: no problem, thanks for your help anyway. It seems that the mysql charm has managed to lose its password; the password in the mysql.passwd file doesn't work.
[14:07] bleepbloop: :(
[14:40] rick_h_: Thanks anyway for helping. Seems to be a bug with juju honestly, just not sure what's causing it, and I probably couldn't provide enough details to be useful on this one.
[14:43] I'm having a problem with upgrading from 1.23.3 to 1.24.0 - I triggered the upgrade and now the jujud process stops listening on port 17070, and a message in the log says "fatal "api": must restart: an agent upgrade is available". Any ideas how I can recover from this?
[14:44] I can get it to listen for a very short time by restarting the service, but it fails after a few seconds or so with the same message each time.
[14:54] o/
[14:55] When using high availability, is the vip just a random IP on your network or is there a specific thing it should be set to?
[14:55] specifically with the hacluster gem
[14:55] charm*
[16:17] bleepbloop: as I understand it, the VIP interface should be set to your management interface
[16:19] lazyPower: so an IP on the management interface of my choosing?
[16:19] I do believe so
[16:19] I'm 60% sure that's correct
[16:19] if that helps :)
[16:20] lazyPower: lol, I'll give it a go since it's more than I know about it
[16:33] Is there any way to have two juju environments on one host?
[16:39] Bialogs: very soonish
[16:44] Bialogs: Multi Environment state server is coming soon, there's been some buzz about that in our recent office hours
[16:44] marcoceppi: was that last week's? we touched on it briefly while thumper was there
[16:45] lazyPower: yes
[16:55] marcoceppi: lazyPower: thanks for the info
[17:59] Odd_Bloke: ping
[18:03] aisrael: Pong.
[18:05] Odd_Bloke: the three ubuntu-repository-charm MPs you have -- should they be squashed and tested together?
[18:12] aisrael: The plan was to land the charm helper update first, then the other branches; but we want them all in, so I think you can test them all together.
[18:23] Odd_Bloke: ok, thanks. Did you see rcj's comment on the charm-helper update? Any thoughts about that?
[18:27] aisrael: It's fixed by the handle_mounted_ephemeral_disk branch.
[18:27] Odd_Bloke: ok, cool. I hoped that was the case. Thanks!
[18:29] :)
[18:33] Is there any way to specify machines in a bundle? Like using something like to: 5 in the bundles.yaml?
[19:55] Bialogs: there is, let me find the docs
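The placement-directive docs linked later in the log cover this; as a minimal sketch of the idea (charm URLs are illustrative, and the exact directive forms should be checked against that page):

    my-bundle:
      services:
        mysql:
          charm: "cs:trusty/mysql"
          num_units: 1
          to: "0"              # co-locate with the bootstrap node
        mediawiki:
          charm: "cs:trusty/mediawiki"
          num_units: 1
          to: "lxc:mysql=0"    # an LXC container on mysql's first unit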
[19:55] Hey, I'm sat looking at my openstack web interface trying to configure my environments.yaml for juju. I keep getting weird error messages which have no suggestions on what to do. Currently staring at:
[19:55] ERROR failed to bootstrap environment: index file has no data for cloud {RegionOne http://192.168.5.92:5000/v2.0/} not found
[19:56] Besides it not making grammatical sense, I don't know what/where the index file is and how I can put "data for cloud" in there
[20:03] And weirdly I get 2 different errors if I try to use keypair auth vs userpass auth
[20:21] jackweirdy: you'll need to follow this guide
[20:21] jackweirdy: https://jujucharms.com/docs/stable/howto-privatecloud
[20:22] Thanks marcoceppi :)
[20:22] The tools mirror seems to be dead :/ https://streams.canonical.com/tools
[20:22] (or the docs outdated)
[20:23] jackweirdy: the first half of that doc is a lot of pre-knowledge, skip down towards the bottom
[20:23] also, https://streams.canonical.com/juju/tools/
[20:23] Ah, seems to be /juju/tools
[20:23] thanks. Is there a way I can file a PR against the docs?
[20:23] jackweirdy: https://github.com/juju/docs
[20:23] Awesome, thanks :)
[20:24] jackweirdy: https://streams.canonical.com/juju/tools/releases/ - that's the directory you probably want; it took me far too long to find, so I figured I would help spare the pain
[20:24] Thanks :)
[20:38] jackweirdy: someone from our docs team will review your merge soon, thanks for the fix!
[20:38] No worries :)
[21:07] marcoceppi: ping
[21:10] Bialogs: pong
[21:10] marcoceppi: Did you get around to finding those docs?
[21:12] Bialogs: you'll have to refresh my memory on which docs
[21:13] Having a brainfart when it comes to amulet. Where does juju_agent.py live and what's responsible for putting it there?
[21:14] marcoceppi: Specifying the machines that juju deploys to in a bundle
[21:14] aisrael: I think it gets dumped in /tmp, and .setup() does it iirc
[21:14] Bialogs: ah, one moment
[21:15] marcoceppi: ta, thx
[21:15] Bialogs: https://jujucharms.com/docs/1.18/charms-bundles#bundle-placement-directives
[21:19] marcoceppi: any idea why a setup() would time out when the deployment stands up?
[21:19] aisrael: deployer freaking? hooks not ready
[21:19] which reminds me, we should update amulet to use extended status for 1.24 and greater
[21:20] marcoceppi: I'm kind of wondering if I'm hitting a bug due to 1.24 and extended status
[21:20] aisrael: you shouldn't
[21:20] extended status is 1.24 compat
[21:20] extended status is backwards compat
[21:20] http://pastebin.ubuntu.com/11770001/
[21:20] agent-state still remains in juju status output
[21:21] what's amulet doing?
[21:21] amulet just hangs on d.sentry.wait()
[21:22] interesting
[21:22] I suspect it's something this test or the charm is doing, which may be causing amulet some trouble.
[21:29] marcoceppi: Thanks, that clarifies a lot, but I still don't see one type of example: how to specify multiple machines when deploying two services. Would the syntax look like "to: 1 to: 2"?
[21:40] Bialogs: two services, or two units?
[21:40] also, why force which unit one goes to if it's just two services?
[21:41] juju will just create two machines for those services if they each only have one unit
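On the "to: 1 to: 2" question above, a minimal sketch: the placement directive can be written as a list with one entry per unit (this assumes the list form supported by the bundle deployer; the entries are illustrative, so check the linked placement-directives page for the exact forms):

    my-bundle:
      services:
        mysql:
          charm: "cs:trusty/mysql"
          num_units: 2
          to: ["0", "lxc:0"]   # first unit on the bootstrap node, second in an LXC container on machine 0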
[22:03] marcoceppi: Let's say I'm deploying mysql with the bundle and I need two units of mysql, one unit on machine 1, the other on machine 2.
[22:04] marcoceppi: All of my machines are not the same, and sometimes Juju selects incorrectly from what I need
[22:05] Bialogs: could you expand more on your setup?
[22:09] marcoceppi: sure... I'm trying to deploy the kubernetes bundle and the documentation says to deploy to machines. I have specific machines I do not want docker running on because they are running openstack services
[22:10] Bialogs: are you using maas?
[22:10] marcoceppi: yes
[22:10] Bialogs: what you'll want to do is tag the machines in MAAS that you want for each service
[22:10] then you can use constraints instead of explicit placement
[22:11] marcoceppi: would you mind explaining?
[22:11] by doing `constraints: [tags=tag-in-maas]`
[22:11] you want to set a constraint on the service, and placement won't do that for you since the machines MAAS gives juju are arbitrary
[22:12] you want to make sure services always either end up in the same pool of machines, or the same exact machine
[22:12] you can tag these machines in juju
[22:12] err
[22:12] in maas
[22:12] either add one tag to all machines you want juju to use for kubernetes
[22:12] ie, set a kubes tag on them, or tag each service individually: "use this machine for docker, this for etcd, etc"
[22:13] then you can set the bundle to use those constraints for the services and MAAS will only give machines that match that constraint
[22:13] Amazing! Thank you so much for this information
[22:14] `juju help constraints` on the command line for more information, and here is constraints in bundles: https://jujucharms.com/docs/1.18/charms-bundles#service-constraints-in-a-bundle
[22:14] Bialogs: no worries! Hopefully this will help streamline what you're trying to do
[22:19] I wish I knew what I was trying to do ;)
[22:55] Bialogs: hey there
[22:55] Bialogs: I'm one of the Kubernetes charm developers o/
[23:01] lazyPower: Oh hey! Saw some strange issue earlier today where the Pod object wasn't defined, but we have been having all sorts of issues today and I'm writing that off as a fluke. Hope you won't mind if I ping you if I run into anything more...
[23:01] Bialogs: certainly. Can I also make a recommendation?
[23:01] Yeah, go ahead
[23:01] we haven't backported an update to the charms/bundle in a bit; we've got a lot of active work tracking kubes, and our charms were accepted into the kubernetes repository (I have an outstanding todo to update the store copy of the charms)
[23:02] if you clone this repository: https://github.com/GoogleCloudPlatform/kubernetes
[23:03] and follow the docs here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/juju.md#launch-kubernetes-cluster
[23:03] you'll be running our current reference implementation of k8s
[23:05] I stumbled across this documentation earlier today, and that probably explains why I got my earlier issue, as I had deployed the implementation in the charm store
[23:14] Bialogs: If you have any issues with any of it, feel free to ping me and let me know :) I'll be in and out over the next couple days due to Conference + travel
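Putting the tagging advice from the 22:10-22:13 exchange into bundle form, a minimal sketch (the tag name and charm URLs are placeholders; the actual services come from the kubernetes bundle, and constraints are shown here in the string form, so check the service-constraints doc linked above for the exact syntax):

    my-bundle:
      services:
        docker:
          charm: "<docker-charm>"       # placeholder charm URL
          num_units: 1
          constraints: "tags=kubes"     # only MAAS nodes tagged 'kubes' are handed to juju
        etcd:
          charm: "<etcd-charm>"         # placeholder charm URL
          num_units: 1
          constraints: "tags=kubes"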