[00:00] paste.ubuntu.com
[00:00] http://paste.ubuntu.com/7196567/
[00:01] ghartmann: ok, I'll log a bug
[00:01] it will be related to your config
[00:01] back up .juju
[00:01] delete it
[00:01] juju init -w
[00:01] and try again
[00:02] error: flag provided but not defined: -w
[00:03] it seems that force is the only flag there
[00:03] sorry, that has changed as well
[00:03] the whole thing might have changed to be juju generate-config
[00:03] https://bugs.launchpad.net/juju-core/+bug/1301663
[00:03] <_mup_> Bug #1301663: cmd/juju: panic during bootstrap
[00:04] ah thanks
[00:05] ghartmann: with a fresh config
[00:05] does the panic still happen?
[00:05] ghartmann: what I would like to do is find the specific yaml stanza that is causing the panic
[00:05] so we can make a test case
[00:06] I get ERROR environment has no access-key or secret-key
[00:07] sorry, it's on the amazon provider
[00:07] ghartmann: hang on, you just said you were using local
[00:07] ok
[00:07] it seems to work
[00:07] ok, so let's deal with these problems one at a time
[00:08] ghartmann: juju destroy-environment -y local
[00:08] then put the old .juju back
[00:08] or just the local: stanza from your config
[00:08] and see if it happens again
[00:08] I installed juju, ran init, switched to local, created the ssh keys and bootstrapped
[00:10] this time I already had the ssh keys created, ran generate-config, switched to local and bootstrapped
[00:13] ghartmann: can you put your original .juju back and see what happens?
[00:14] I have removed it before
[00:15] what I have noticed is that beforehand the ssh folder in .juju didn't have the keys there
[00:15] and now it has
[00:15] it seems that when the config is generated it copies the folder in
[00:16] yes, I think it does that
[00:16] so the problem seems to be related to the sequence
[00:16] anyway, it works now
[00:16] I just need to configure the bridge and I should be good to go
[00:16] thanks!
[00:17] ok
[00:18] by the way, to set up the bridge: is there a better way to expose lxc services to the internal lan? I am going through setting up the bridge manually
[00:58] if I'm gonna use maas for virtual machines and juju, I have to install LXC on my MAAS server, correct? Not too familiar with this yet, but running lxc-create now for a node, hope I'm on the right track
[00:59] interesting
[01:00] webbrandon: false
[01:02] wha, wha. damn. need to find some more straightforward docs
[01:02] webbrandon: if you want to use maas for virtual machines, the way we deploy openstack is this
[01:03] 1. deploy the maas controller
[01:03] 2. enroll all the machines with maas
[01:03] 3. deploy a juju environment on your maas cluster
[01:03] 4. use juju to deploy openstack
[01:03] and scale out the various services like storage and compute to consume all the available machines on your maas cluster
[01:04] 5. optional: use juju to deploy environments on your openstack cloud
[01:06] webbrandon: you +could+ use maas and manually call add-machine to create virtual machines on those physical maas nodes
[01:06] but you'd probably find that the networking gets all screwed up
[01:18] hook failed: config-changed, unexpected token near fi
[01:19] err, unexpected token 'fi'
[01:20] davechen1y: thank you
[01:20] Valduare: this is coming from your hook
[01:20] which is written in bash
[01:21] which has a syntax error
[01:21] win8
[01:21] how many times am I going to do that today?
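(For reference, the fresh-config sequence ghartmann is walked through above, collected into one place; a sketch against the 1.18-era CLI, where generate-config replaced init:)

    mv ~/.juju ~/.juju.bak    # back up the old configuration first
    juju generate-config      # writes a fresh ~/.juju/environments.yaml
    juju switch local         # select the local provider
    juju bootstrap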
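(The "unexpected token 'fi'" reported above is bash rejecting a malformed if-block in the charm hook; a minimal illustration of the usual cause, with hypothetical hook contents:)

    # Broken: an "if" with no "then" makes bash fail at the closing "fi":
    if [ -f /etc/myapp.conf ]
        echo "config present"
    fi

    # Fixed:
    if [ -f /etc/myapp.conf ]; then
        echo "config present"
    fi

    # "bash -n hooks/config-changed" syntax-checks a hook without running it.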
[01:21] it didn't throw this error last time I did the same charm
[01:21] Valduare: that may be true, but that doesn't mean there isn't a syntax error
[01:22] how do I find out which line from the logs
[01:22] oh, it did tell me
[01:22] line 33
[01:22] looking now
[01:23] ok, so how can I fix the syntax and re-try deploying
[01:23] Valduare: this is a local charm?
[01:23] do I use the juju tools and copy it locally?
[01:24] Valduare: this is a local charm?
[01:24] ya
[01:24] juju help upgrade-charm
[01:28] http://pastebin.com/83r1Uwn2 I don't see a syntax error on line 33
[01:28] Ola. I see that subordinate services are used for things such as AWS ELB or RDS. What I don't understand is how I can model an RDS instance that can be used by a number of services
[01:41] ah, and oddly it just turned green all by itself
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[08:47] marcoceppi: did I get some time to work on the wordpress charm?
[10:15] is there a recommended way to set up the lxc provider network as a bridge?
[10:39] Once a peer relation is joined, will a unit remain in it until the unit is destroyed? Even if it is the only unit in the service for a period of time?
[10:40] It seems to be, but I'm unsure if that can be relied on
[10:41] If it *can* be relied on, then I might be able to work around the split-brain problem in Bug #1258485
[10:41] <_mup_> Bug #1258485: support leader election for charms
[10:42] (if a unit joins a relation that has never had a leader, then unit 0 will become the leader even if it hasn't joined yet)
[10:46] no, that still fails in a pathological case (dropping unit 0 before the other units can see each other in the peer relation)
[12:22] stub: I'm not sure I understand the question
[12:22] are you asking if the peer relationship membership remains after the peering units are destroyed?
[12:23] It doesn't matter... but I was asking if it remained when all-but-one units were destroyed
[12:23] Create a single-unit service: the unit isn't in the peer relation. Add a new unit: both get added to the peer relation. Remove one unit, and the remaining unit remains in the peer relation.
[12:55] stub: it's in the pool, but the hooks don't fire
[12:55] so yeah, it's still in the relationship, but with only one unit, the hooks won't fire against themselves.
=== vladk is now known as vladk|offline
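(A minimal sketch of the naive lowest-unit-number election stub is discussing, written as a bash peer-relation hook. The relation name "cluster" is hypothetical, and the pathological case stub notes above still defeats it:)

    #!/bin/bash
    # hooks/cluster-relation-changed: relation-list shows the peer units
    # this unit can currently see (it does not include the unit itself).
    my_id=${JUJU_UNIT_NAME##*/}
    leader=$my_id
    for unit in $(relation-list); do
        id=${unit##*/}
        if [ "$id" -lt "$leader" ]; then
            leader=$id
        fi
    done
    if [ "$leader" = "$my_id" ]; then
        juju-log "unit $my_id is acting as leader"
    fi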
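(For Valduare's earlier "fix the syntax and re-try" question, the usual loop with a local charm is to edit the hook in the local repository and then, assuming the charm lives at ~/charms/precise/mycharm:)

    juju upgrade-charm --repository ~/charms mycharm
    juju resolved --retry mycharm/0   # re-run the failed hook on the unit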
[14:18] is there documentation that lists what hooks are called in what order when?
[14:18] so juju upgrade-charm calls upgrade-charm and then config-changed?
[14:19] actually, that's the other way round I think - config then upgrade
[14:20] bloodearnest: you were right the first time
[14:20] upgrade-charm, *then* config-changed
[14:20] then any peering hooks on peer-config-changed
[14:20] lazyPower: so, interesting question then:
[14:22] upgrade-charm deploys my new payload and restarts the server. This fails because the config format has changed slightly, and the new code can't process the old config format, so the hook fails
[14:22] lazyPower: this is a "fat" charm, as currently being discussed on the ML
[14:23] bloodearnest: charms have to retain backwards compat; what you're proposing would fail charm store review
[14:23] even if the config value is not used, it should be marked in the config description as deprecated, and handled appropriately
[14:24] lazyPower: yeah, but it's never going in the charmstore...
[14:24] wait, I may be going overboard, the config of what?
[14:24] you mean the charm config? or do you mean the service config?
[14:24] lazyPower: by config format I mean the application config format, not the charm config, which hasn't changed
[14:24] ahhhh ok, sorry, I went overboard :)
[14:24] heh
[14:24] a bit preoccupied with my active sprint
[14:24] morning
[14:25] so in this case, an app under rapid development, a config value changed from a list to a dict
[14:25] well, there are 2 things you can do. Either fully deprecate the old config of the application and do a full upgrade with a backup of the environment dropped in /mnt, or have a precondition check with 2 paths to handle the application version: one for the old app + config, and one for the new app + config
[14:25] right
[14:26] this is trivial when using config management solutions like ansible: you write 2 separate roles and assign the role based on the predicate configuration value
[14:26] Valduare: o/
[14:26] or just juju resolved, as the config-changed hook will fix everything :)
[14:26] lazyPower: am using ansible
[14:27] bloodearnest: I would suggest not restarting the service until config-changed has run, then
[14:27] so it upgrades the app along with the configuration before the restart
[14:27] lazyPower: ah yes - that sounds better
[14:37] marcoceppi: if you exited a relation gracefully, would you see relation-broken?
[14:37] or does one _always_ see relation-broken and -departed when gracefully severing a relation?
[14:37] arosales: yes, you'll always get a relation-broken event
[14:37] ah ok
[14:37] marcoceppi: thanks for the clarification
[14:37] tvansteenburgh ^
[14:38] tvansteenburgh: that was my misinterpretation
[14:52] lazyPower: turns out, that was exactly what I was doing; the problem was the config-changed hook not doing *all* configs
[15:10] bloodearnest: but you have a clear path moving forward. *thumbsup*
=== vladk|offline is now known as vladk
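(A sketch of lazyPower's suggestion at [14:27] above: upgrade-charm stages the new payload without restarting, and config-changed, which juju runs immediately afterwards, rewrites the config and performs the restart. The helper names are hypothetical:)

    #!/bin/bash
    # hooks/upgrade-charm: unpack the new application payload but do not
    # restart; juju runs config-changed right after this hook.
    set -e
    install_new_payload                # hypothetical: unpack/upgrade the app
    touch /var/run/myapp-needs-restart

    #!/bin/bash
    # hooks/config-changed: render config in the new format, then restart
    # only if the upgrade flagged it.
    set -e
    write_app_config                   # hypothetical: render new-format config
    if [ -f /var/run/myapp-needs-restart ]; then
        service myapp restart
        rm /var/run/myapp-needs-restart
    fi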
[15:23] does anyone have time to review/fix up the issues with owncloud?
[15:24] jcastro: is that something I can do?
[15:25] what command removes a dead machine from juju
[15:26] juju destroy-machine
[15:27] tvansteenburgh, yeah, but post-sugar
[15:28] we also have seafile, which is apparently awesome: http://manage.jujucharms.com/~negronjl/precise/seafile
[15:30] tvansteenburgh: that just puts it into "life: dead" mode
[15:30] I'm on manual provisioning
[15:31] ValDuare: ok, sorry, I don't know anything about manual provisioning
[15:40] does remove-machine not work?
[15:42] let me see
[15:42] nope
[15:44] what does juju say the status of the machine is in `juju status`?
[16:10] rbasak: I erroneously marked bug 1301464 as committed in quickstart; switched back to confirmed. Could you please re-triage it? thanks
[16:10] <_mup_> Bug #1301464: The mega-watcher for machines does not include containers addresses
[16:19] frankban: no problem. Done.
[16:23] Hi #juju
[16:23] question for you
[16:23] According to the log, config-get returns an int in scientific notation, and that makes it hard to do math on the value.
[16:23] 2014-04-03 16:04:47 INFO config-changed ++ config-get upload-maxsize
[16:23] 2014-04-03 16:04:47 INFO config-changed + upload_maxsize=3e+07
[16:24] Is config-get supposed to return 3e+07?
[16:24] rbasak: thanks! also I have a question for you and jamespage: we have a branch introducing joyent support in quickstart, not yet landed. It's not much code, but it's also a nice new feature (for quickstart and juju-core 1.18 itself). Should we keep that out for now? I mean, until we fix the bug introduced by core changes and we rebuild the distro package? We'd like to include that, but if you prefer otherwise it's not a problem
[16:26] mbruzek: not really, I've never seen this
[16:26] mgz: have you seen juju truncate long numbers? ^^
[16:27] 30000000 is less than the max int for 32-bit values, but in scientific notation I can not get bc to use it.
[16:28] here are my solutions: http://pastebin.ubuntu.com/7199425/
[16:28] marcoceppi: is that just the log, though?
[16:29] frankban: as you have probably guessed, that branch would need a feature freeze exception. It's easiest for me if you don't land it and release without it.
[16:29] frankban: the alternative would be for me to cherry-pick the specific fixes we need in a distro patch taken from bzr commits, without bumping the upstream version number.
[16:30] mgz: I don't think so; I get a syntax error when using bc and can repeat the syntax error when I use scientific notation
[16:30] frankban: that's only slightly more work for me, if the number of commits I need to cherry-pick is small and they all apply correctly without conflicts.
[16:30] rbasak: ack, so I mean, it's not possible to include both the new feature and the bug fix as part of that exception, correct?
[16:31] frankban: if you want an exception, you only need it for the feature. Bugs don't need exceptions.
[16:31] frankban: then we can land it all together. If you want to do it that way.
[16:31] frankban: you'll need to get the exception approved in time, though, and I assume the release team are pretty busy given how long it took them to approve the previous exception.
[16:32] frankban: Laney also asked for more QA assurances for future exceptions, so we'll want to document that too. I can certainly prepare the proposed package first and put it in a PPA for testing.
[16:33] rbasak: OIC. So I guess we'll just release 1.3.1 with the bug fix. I believe it will be ready Mon or Tue
[16:33] frankban: sounds good.
[16:33] tvansteenburgh: <3
[16:34] rbasak: it will include the bug fix for the juju-core mega-watcher and the --ppa flag, for discriminating between distro and pypi packages. I planned to have a module including something like JUJU_CORE_SOURCE = 'ppa', which can be changed to 'distro' to switch defaults
[16:34] mbruzek: I don't see any uses of float format verbs in our code. Are you sure it's not specific to that charm?
[16:35] frankban: that sounds great
[16:35] rbasak: cool, thank you
[16:35] mgz: upload-maxsize:
[16:35] type: int
[16:35] default: 30000000
[16:35] description: "Maximum file size (in bytes) that users can upload into Sugar as attachments."
[16:35] frankban: please just make sure that it only includes bug fixes, so that we don't find later that we have to ask for an exception
[16:36] mgz: I do not get that value as a number out of config-get
[16:36] rbasak: sure, just those two changes, no new features, just a new patch number version, sounds good?
=== roadmr is now known as roadmr_afk
[16:36] frankban: sounds great :)
=== roadmr_afk is now known as roadmr
[16:42] mgz: why not do smart filtering?
[16:42] err, mbruzek ^^
[16:43] mbruzek: like 30M for 30 megabytes
[16:43] 1G
[16:43] etc
[16:43] The number has to be in bytes for the sugar config file, but in MB for the php file
[16:43] So either way I have to do math on it.
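(A sketch of the math mbruzek needs in a bash hook: printf's float parsing normalizes the e-notation that bc rejects. Whether the php value should be computed with 10^6 or 2^20 per "MB" is charm-specific:)

    raw=$(config-get upload-maxsize)   # may come back as "3e+07"
    bytes=$(printf '%.0f' "$raw")      # normalizes to "30000000"
    mb=$((bytes / 1000000))            # decimal MB for the php file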
[16:43] mbruzek: it'd be better to do MB from user input
[16:44] I defaulted to the byte number that sugar admins would be used to
[16:44] to bytes for sugar
[16:44] I could also change it to a string
[16:44] But it is an int, and I thought that should work.
[16:45] sugar admins are used to setting 30000000
[16:45] mbruzek: to be clear, what does config-get in a hook return? a mangled version or the bytes?
[16:45] mgz: config-get returns with the scientific notation
[16:45] you should be able to reproduce it with a stub charm then, and file a bug
[16:46] 2014-04-03 16:45:53 INFO config-changed ++ config-get upload-maxsize
[16:46] 2014-04-03 16:45:53 INFO config-changed + upload_maxsize=3e+07
[16:46] against juju-core?
[16:47] yes.
[16:47] a skeleton charm with a config.yaml with one entry, a config-changed hook, and a tests/ script should reproduce it
=== lazyPower is now known as lazyPower-travel
[18:13] I've got some ARM64 VMs, on which I'm trying to deploy with the manual provider.
[18:14] bootstrap: done. juju-gui deployed to the bootstrap node: done (and working)
[18:14] but I juju add-machine'd a couple of other nodes and, while add-machine completes without error and the agent is running, juju status shows the nodes stuck in pending
[18:14] machine-0: 2014-04-03 18:14:16 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "manual:10.0.128.7": no instances found
[18:15] is what debug-log shows
=== CyberJacob|Away is now known as CyberJacob
[18:21] argh, I am having a hell of a time trying to get MAAS to act the way I want.
[18:22] I'm done messing with it for now. Just going to have to pay out the behind to use a service for now.
[18:29] actually I am just going to follow marcoceppi's tut. Not what I want, but I guess good for the interim of testing
[18:29] not tut - blog post
[18:49] Any place to get info about net-ju ?
[18:56] sinzui: hey, do you know who is working on the joyent support? anyone that might currently be around?
[18:56] bac, bodie_ and wallyworld_ in #juju-dev
[18:57] thanks sinzui
[18:58] Is it possible to deploy juju on a couple of servers, using only LXC as the cloud machines?
=== vladk is now known as vladk|offline
[19:09] xagaba_: yes, you can use deploy --to lxc:
[19:11] marcoceppi: any place to get info about that deploy mode?
[19:12] xagaba_: https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-machines
[19:13] cool!
[19:13] marcoceppi: thanks, I'll take a look
=== roadmr is now known as roadmr_afk
[19:21] what does "instance-state: missing" mean?
[19:56] nessita: well? talk
[19:56] :p
[19:56] hello everyone! my units are not moving from the pending state; the log shows:
[19:56] 2014-04-03 19:54:54 ERROR juju.worker.uniter.charm git.go:211 git command failed: exec: "git": executable file not found in $PATH path: /var/lib/juju/agents/unit-solr-jetty-0/state/deployer/update-20140403-165454781105382 args: []string{"init"}
[19:56] complete logs: https://pastebin.canonical.com/107753/
[19:57] I ssh'd into elasticsearch/0 and confirmed the unit can access the internet
[19:57] Until yesterday I was running saucy with the juju/stable + juju/devel PPAs
[19:57] yesterday I upgraded to trusty
[19:57] and since the units were not starting, I cleaned up everything and retried
[19:58] same error
[19:58] any ideas? :-)
=== roadmr_afk is now known as roadmr
=== sputnik1_ is now known as sputnik13net
[21:12] what does "instance-state: missing" mean?
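(The repro charm mgz describes above would be roughly the following two files, plus a metadata.yaml; the option name and values are only for illustration:)

    # config.yaml
    options:
      upload-maxsize:
        type: int
        default: 30000000
        description: "test value"

    # hooks/config-changed
    #!/bin/bash
    juju-log "config-get returned: $(config-get upload-maxsize)"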
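(The container placement marcoceppi points xagaba_ at, per the docs link above, looks like this; the service and machine numbers are only examples:)

    juju deploy mysql --to lxc:0      # new LXC container on machine 0
    juju add-unit mysql --to lxc:1    # second unit, in a container on machine 1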
=== tvansteenburgh is now known as tvan-afk
=== thumper is now known as thumper-gym
=== CyberJacob is now known as CyberJacob|Away