=== natefinch-afk is now known as natefinch
[08:09] hi all, i have to use the juju-reboot command at the end of the install hook, but i got an error - juju.worker.uniter.context context.go:559 updating agent status: cannot set invalid status "rebooting"
[08:09] in the log file
[08:09] is it ok?
[08:10] gennadiy: that sounds more like you are calling "status-set rebooting" before you call "juju-reboot". Is that true?
[08:10] no
[08:10] the syntax for status-set would be: status-set maintenance "rebooting"
[08:10] i think this command tries to set it up
[08:11] hm, nm, it does look like rebooting is one of the status codes
[08:11] gennadiy:
[08:11] sorry, can you do "juju status" and pastebin it?
[08:11] also if you try to use `juju-reboot --now` - the install process will never finish
[08:12] It sounds like there might be a version mismatch.
[08:12] juju version - 1.25.0-trusty-amd64
[08:13] currently the juju status of my service is started, but i see the error in the unit's log
[08:14] 2015-11-05 08:07:49 ERROR juju.worker.uniter.context context.go:559 updating agent status: cannot set invalid status "rebooting"
2015-11-05 08:07:49 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
2015-11-05 08:07:49 ERROR juju.worker runner.go:212 fatal "uniter": machine needs to reboot
2015-11-05 08:07:49 ERROR juju.api.watcher watcher.go:84 error trying to stop watcher: connection is shut down
2015-11-05 08:07:49 ERROR juju.api.watcher watcher.g
[08:15] it's agent.conf from the machine
[08:16] tag: machine-162
datadir: /var/lib/juju
logdir: /var/log/juju
nonce: machine-0:3538ecd9-69aa-4c7f-840f-77857c0dc303
jobs:
- JobHostUnits
upgradedToVersion: 1.24.6
[08:16] seems we use different versions for the juju master and the juju agents
[08:16] am i right?
[09:23] jamespage, fwiw I can't create the openvswitch-odl to neutron-gateway relationship with standard amulet; amulet refuses to create a relation when the interface isn't specified, and juju refuses to create a relation when the interface is specified on both sides if that interface is juju-info
[11:58] I'm trying to deploy openstack liberty with juju and my installation is unable to finish because I can't configure keystone... my problem is the same as the one described in https://bugs.launchpad.net/charms/+source/keystone/+bug/1509382
[11:58] Bug #1509382: shared-db-relation-changed hook failure, unable to establish connection http://localhost:35347/v2.0/tenants
[11:58] Any hints?
=== jog_ is now known as jog
[15:34] Hi - I'm working on some OpenStack charms, and I've got a problem with the nova-compute charm where I'm not sure what the right solution is
[15:37] I need to install nova-api-metadata (in another charm), but the neutron_plugin_changed hook on nova-compute is purging it when metadata-shared-secret isn't set
[15:43] anyone here that knows how to set the http proxy for apt with maas 1.8?
[15:50] matt_dupre: this sounds symptomatic of managing config in the wrong charm. Are you sending the data you need over the neutron_plugin interface so it can be rendered?
[15:51] matt_dupre: if not, can you send a quick mail to the list that I can direct the OpenStack charmers team at to follow up on? I know that most of the openstack charms manage the config files, so there are no conditions like you're outlining above, where nova is wiping your plugin config
[16:07] actually matt_dupre - a bug would be best
[16:07] matt_dupre - https://bugs.launchpad.net/charms/+source/nova-compute/+filebug
[16:10] lazypower: Thanks - can I just check exactly what you think is wrong?
[16:10] I'm still trying to figure out exactly which components have responsibility for which bits of metadata
[16:11] matt_dupre There's nova-api/neutron-api interfaces to send config data over, so the openstack charm in question manages that template file
[16:11] if it doesn't get the bits it needs to manage, it will overwrite anything you do in your subordinate when it re-runs its template generator
[16:41] stokachu i'm moving us here, as we're talking about some stuff that the public will find interesting (hopefully) - looking @ interface layers, and writing reactive charms
[16:42] lazypower: ok cool
[16:43] going to try and reproduce the add-unit issue with global relation scope
[16:43] hi all, i know if we try to deploy a bundle with juju-gui it will not expose services (juju gui ignores expose: true). a few days ago it was confirmed in this channel too. but i can't find a link to this bug. can somebody help me with this?
[16:45] o/ gennadiy
[16:46] gennadiy: it doesn't appear that it was filed - https://bugs.launchpad.net/juju-gui?field.searchtext=expose
[16:48] stokachu cory_fu - so for reference, we're talking about https://pythonhosted.org/charms.reactive/charms.reactive.relations.html#charms.reactive.relations.scopes
[16:50] stokachu: Regardless of the scope, handlers on each unit should still fire. Also, handlers should still fire for each unit that's added to the conversation, though the data on the conversation may not change (and if the interface layer or handlers skip on no change, they could be filtered out that way)
[16:50] cory_fu: ok, im going to try to reproduce this on a clean environment
[16:50] i was getting install hook failures because my nodejs event wasn't being acted on
[16:51] Right. Conversation scopes are only about how the unit data are grouped, not how handlers are triggered.
[16:51] so something else is going on?
[16:51] ah, cory_fu - i thought you could negate some of the behavior with scopes
[16:51] my mistake
[16:58] Conversation scopes are definitely confusing. Each state set by an interface layer is associated with a conversation. The conversation can include one or more related units. When a state handler is triggered, the Relation class contains a list of all the conversations that the state applies to, and each conversation has a list of related units that it includes. Any data sent out to a conversation has the same data sent to all related units, and the data coming in from a conversation is aggregated across all units (expected to be the same, or not set for all but one of the units)
[17:01] cory_fu : we need you + a white board + video/audio feed
[17:01] The idea is that the relation is a two-way evolving conversation between the two endpoints. And different units can be at different stages of the conversation. But sometimes it makes sense to group units of a service into a single conversation. And sometimes you fully expect to only be having a single conversation.
[17:01] Very true. And I like that we now have scopes for relation conversations
[17:02] having an illustration of what units are participating, and how, would be helpful to illustrate the scopes
[17:03] One important point to note is that conversations are about the communication protocol, and thus should be entirely handled by the Relation class in the interface layer. They should *never* be exposed outside of the interface layer.
[17:03] Right, at that point you're only working with a databag that has the data points
[17:03] or is waiting to receive the data points to relay over the wire
[17:08] To the charm layers, i.e., the state handlers, the Relation class should provide a meaningful API. So you should be able to ask it things like "What databases are being requested and by whom" or "What datanodes are currently registered"
[17:09] ok looks like i can't reproduce the add-unit issue
[17:09] so far all units are handling my nodejs reaction properly
[17:11] stokachu: What layers is this involving?
[17:11] cory_fu: includes: ['layer:basic', 'layer:nodejs', 'layer:nginx', 'interface:mysql']
[17:12] cory_fu: and this https://github.com/battlemidget/juju-layer-ghost/blob/master/reactive/ghost.py
[17:12] is the top level one
[17:16] stokachu: Which handler were you seeing not trigger?
[17:16] And what were you doing an add-unit on? The ghost charm?
[17:16] cory_fu: well it looks like it is triggering on this run, but my upstart service is in an infinite loop
[17:16] cory_fu: yea, add-unit -n 3 ghost
[17:16] the nodejs.install_runtime wasn't being called on my previous deploy
[17:17] and node wasn't available
[17:29] stokachu: I'm a little confused by how the nodejs and nginx layers are structured. Is there ever a case where you would be using those layers but not want nginx or nodejs installed?
[17:30] cory_fu: dont quite follow
[17:31] The point of these layers is to provide nginx and nodejs. Why do you have them blocking install until the lower layer explicitly requests it?
[17:32] I would expect each of those layers to just handle the install hook and install the software.
[17:32] i was having issues where the install hook was being requested in a loop
[17:33] so i put the install hooks in the topmost (ghost) layer
[17:33] and just left reactive states in the other layers
[17:33] also if i have an install hook in the nginx layer and an install hook in the ghost layer
[17:34] how will they react?
[17:34] As long as you're using charms.reactive.hook in a file under reactive/, they should work together just fine, though you won't be able to determine which runs first
[17:35] so that's my problem, as ghost was running prior to the nginx install hook
[17:35] Also, a given @hook block should only get called once per Juju hook invocation
[17:35] so if i have 2 install hooks in 2 separate layers does that count as 1 juju hook invocation?
[17:37] without being able to call install hooks for each layer separately i can't build on the previous layer
[17:37] so i have to rely on calling out reactive states
[17:38] @hook handlers are just like state handlers, except that they run before any state handlers.
[17:38] You can have multiple, from any layers, and they will all be called
[17:38] but if my ghost hook handler runs before the nginx one
[17:38] how will i know when nginx.available is set
[17:39] In general, you probably don't want to have @hook handlers in your charm layer, with the exception of config-changed
[17:39] You just want to have a @when('nginx.available')
[17:39] so my charm layer being ghost?
[17:39] Right
[17:39] and what about the nginx layer
[17:39] that has the install hook?
[17:40] Sure. As can the nodejs layer
[17:40] ok lemme try that
[17:40] Have you looked at how I did the apache-php layer? https://github.com/johnsca/apache-php
[17:40] yea that's what i was basing it off of
[17:40] but i was hitting problems as described above
[17:41] things firing and reinstalling the ghost application in a loop etc
[17:41] My guess is that you need to add a @when_not('ghost.installed') or @only_once to prevent that
[17:41] ah ok
[17:42] cory_fu: one last thing, can i have 2 separate @when decorators for a single method
[17:42] to represent or?
[17:42] Because state handlers will get re-run every time if their conditions are met.
[17:42] You can, but it's always and. There's no way to say @when(A or B) :(
[17:43] ok that may be a nice feature to request
[17:43] I'll probably be adding a @when_any
[17:43] cory_fu: and are conditions persisted
[17:43] stokachu: Yes, states are persisted until being removed. This is so that states set by relations are persisted until all required relation states are met
[17:44] You can remove a state at any time, though, with remove_state. But a given layer should only remove states that it set
[17:44] ok makes sense
[17:44] that may be my problem too, with things running in a loop
[17:44] (Or states that it is using as a communication channel, possibly)
[17:45] so if i run juju deploy, juju upgrade-charm
[17:45] any state that isn't removed is persisted
[17:45] ?
[17:45] Yes
[17:45] ok good to know, is that documented?
[17:45] i may file a bug on it
[17:54] https://plus.google.com/hangouts/_/hoaevent/AP36tYfzoBSPiWHfP96qjEEawO6jFVkMeJg1WPdT6LbMApFNuQEvMA?hl=en&authuser=0
[17:54] we're having office hours in 5 minutes if anyone wants to join!
[17:55] stokachu: I guess it's not explicitly called out, no
[18:31] marcoceppi_, where were all the benchmarking charms you used in the demo, e.g. siege, collector?
[18:33] dweaver - here's the siege charm itself - https://github.com/juju-solutions/siege
[18:34] lazypower, thanks
[18:34] dweaver - The collector + gui for benchmark ui (bui) will be released soon. I'm sure we will announce to the list when that happens
[19:08] bootstrapping juju errors out with "/var/lib/juju/tools/1.25.0-trusty-amd64/tools.tar.gz: No such file or directory\nERROR failed to bootstrap environment: subprocess encountered error code 1" http://paste.ubuntu.com/13114841/
[19:08] Can anyone help me with this issue?
[19:12] urthmover: (7) Failed to connect to streams.canonical.com port 443: Connection timed out\ntools from https://streams.canonical.com/juju/tools/releases/juju-1.25.0-trusty-amd64.tgz
[19:12] make sure you've set up networking properly for MAAS
[19:12] a common problem is not enabling IP forwarding on the MAAS server
[19:13] https://wiki.ubuntu.com/OpenStack/Installer/debugging/multi-install
[19:13] stokachu: thank you...checking
[19:37] stokachu: I've applied that NATing and am retrying the juju bootstrap. Is there a way to test if it's working properly before I wait around for the install to finish?
[19:38] urthmover: you can do a simple juju bootstrap outside of the installer
[19:38] urthmover: https://jujucharms.com/docs/1.25/config-maas
[19:38] stokachu: ok...(looking up that syntax)
[19:38] stokachu: awesome, thanks for the link
[19:38] np
[19:38] run juju bootstrap --debug
[19:45] cory_fu: you mind doing a quick look over my two layers, https://github.com/battlemidget/juju-layer-nginx and https://github.com/battlemidget/juju-layer-node
[19:45] i did some refactoring
[19:45] Sure
[19:48] cory_fu: i also updated https://github.com/battlemidget/juju-layer-ghost/blob/master/reactive/ghost.py to make use of the 2 emitted states from nginx and nodejs
[19:48] but it doesn't seem to be firing; the config-changed hook is run for the ghost charm though
[19:48] stokachu: dude you're a genius....I think it just went farther....scheeeet
[19:49] schWeeeet
[19:49] urthmover: haha
[19:49] :)
[19:49] stokachu: Your nodejs layer is setting 'nodejs.available' but your ghost layer is looking for 'nodejs.installed'
[19:49] urthmover: if that works then you can destroy the environment and re-run the installer
[19:49] cory_fu: crap
[19:50] But otherwise it looks really good
[19:50] stokachu: actually this is re-running the installer......it goes pretty fast because it's all on a local vmware workstation instance
[19:50] cory_fu: ok cool thanks :)
[19:50] Although
[19:50] stokachu: the only goofy part (as I test this) is I have to manually start the machines...because wakeonlan doesn't work right
[19:50] You also need to add a state to indicate that ghost is installed, or add @only_once to the install handler
[19:51] cory_fu: so https://github.com/battlemidget/juju-layer-ghost/blob/master/reactive/ghost.py#L49 - add the @only_once decorator additionally?
[19:51] Yep
[19:51] ok cool
[19:52] cory_fu: do you have any example of using only_once?
[19:53] urthmover: yea that sucks; if you use straight bare metal with KVM it supports virsh, so you don't have to manually power those vm's on/off
[19:53] assuming you aren't on a windows machine
[19:56] cory_fu: like this: http://paste.ubuntu.com/13115254/
[19:56] ?
[19:57] stokachu: Here's a higher level question for you then. I use a vmware cluster that is controlled by a hosting provider. I have fullish vcenter access, but not api or cli access. I'm able to build guests at will. How would you recommend I build out this environment to support openstack and containers (docker, rkt)?
[19:59] urthmover: so MAAS supports VMware as a power type, so that may be your best bet for utilizing that
[19:59] stokachu: my only mission is to support self contained *nix "server/containers" so that developers can have root, install their software, run their apps, and probably break 'em often. I'm trying to move away from monolithic web servers where I'm not willing to give up root access
[19:59] as for the layout, just make sure your VMs have 2 disks and 2 nics
[20:00] i think autopilot is a minimum of 5-6 servers, all having 2 disks and 2 nics
[20:00] stokachu: from what little I've read though, you need cli or api access to control vmware from maas
[20:00] urthmover: sec, lemme ask
[20:01] stokachu: if building a case to request cli/api access from the hosting provider is necessary, I'm willing to do that, but the process is long and turnaround on those changes takes months with this big stupid provider
[20:01] stokachu: thank you
[20:02] stokachu: I'm not opposed to waiting all that time if that makes a difference
[20:03] stokachu: Yep!
[20:03] cory_fu: perfect thanks
[20:03] urthmover: so far maas supports vmware workstation, esxi
[20:03] urthmover: im still waiting on what is supported vcenter wise
[20:04] and what the requirements are (if it's only cli atm)
[20:04] stokachu: ok
[20:04] stokachu: perfect..thank you very much for asking
[20:05] stokachu: I joined #juju-dev thinking maybe the conversation is in there
[20:05] urthmover: nah, you want #maas
[20:05] stokachu: ok cool I'll jump in there
[20:05] and montpillo is the guy who knows
[20:06] stokachu: I'll watch for him to speak up. thank you a bunch for your help thus far
=== ajmitch_ is now known as ajmitch
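[Editor's note] The reactive-handler semantics cory_fu describes in the discussion above - states persist across hooks until removed, stacked @when conditions AND together (there is no @when(A or B) yet), and a @when_not('ghost.installed') or @only_once guard keeps an install handler from re-running in a loop - can be sketched with a toy dispatcher. This is an illustrative model only, not the real charms.reactive API: the single handler() decorator here stands in for the separate @when/@when_not decorators, and all names are hypothetical.

```python
# Toy model of the charms.reactive state-dispatch semantics discussed above.
_states = set()    # states persist across invocations until explicitly removed
_handlers = []     # registered (required, forbidden, fn) triples

def set_state(state):
    _states.add(state)

def remove_state(state):
    _states.discard(state)

def handler(required=(), forbidden=()):
    """Register fn to run when ALL required states are set (AND semantics)
    and no forbidden state is set (the @when_not guard)."""
    def decorator(fn):
        _handlers.append((set(required), set(forbidden), fn))
        return fn
    return decorator

def dispatch():
    """Run every eligible handler. Handlers re-run on every dispatch while
    their conditions hold, which is why install_ghost guards itself below."""
    for required, forbidden, fn in list(_handlers):
        if required <= _states and not (forbidden & _states):
            fn()

installs = []  # records each time the install handler actually runs

@handler(required=('nginx.available', 'nodejs.available'),
         forbidden=('ghost.installed',))
def install_ghost():
    installs.append('ghost')
    set_state('ghost.installed')   # persisted guard: handler won't re-run
```

Without the `forbidden` guard, `install_ghost` would run on every dispatch once both base-layer states were set - the reinstall loop described in the conversation.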