[00:00] <davechen1y> paste.ubuntu.com
[00:00] <ghartmann> http://paste.ubuntu.com/7196567/
[00:01] <davechen1y> ghartmann: ok, i'll log a bug
[00:01] <davechen1y> it will be related to your config
[00:01] <davechen1y> backup .juju
[00:01] <davechen1y> delete it
[00:01] <davechen1y> juju init -w
[00:01] <davechen1y> and try again
[00:02] <ghartmann> error: flag provided but not defined: -w
[00:03] <ghartmann> it seems that force is the only flag there
[00:03] <davechen1y> sorry, that has changed as well
[00:03] <davechen1y> the whole thing might have changed to be juju generate-config
[00:03] <davechen1y> https://bugs.launchpad.net/juju-core/+bug/1301663
[00:03] <_mup_> Bug #1301663: cmd/juju: panic during bootstrap <juju-core:Triaged> <https://launchpad.net/bugs/1301663>
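The reset that davechen1y walks through, using the renamed command from the discussion above (a sketch, not a tested recipe; `juju switch local` is an assumption about how the local environment gets selected):

```shell
# Back up the existing config rather than deleting it outright, then
# regenerate a fresh one and retry the bootstrap.
mv ~/.juju ~/.juju.bak    # "backup .juju / delete it"
juju generate-config      # replaces the older `juju init -w`
juju switch local         # assumed: select the local provider stanza
juju bootstrap
```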
[00:04] <ghartmann> ah thanks
[00:05] <davechen1y> ghartmann: with a fresh config
[00:05] <davechen1y> does the panic still happen ?
[00:05] <davechen1y> ghartmann: what I would like to do is find the specific yaml stanza that is causing the panic
[00:05] <davechen1y> so we can make a test case
[00:06] <ghartmann> I get ERROR environment has no access-key or secret-key
[00:07] <ghartmann> sorry it's on amazon provider
[00:07] <davechen1y> ghartmann: hang on, you just said you were using local
[00:07] <ghartmann> ok
[00:07] <ghartmann> it seems to work
[00:07] <davechen1y> ok, so let's deal with these problems one at a time
[00:08] <davechen1y> ghartmann: juju destroy-environment -y local
[00:08] <davechen1y> then put the old .juju back
[00:08] <davechen1y> or just the local: stanza from your config
[00:08] <davechen1y> and see if it happens again
[00:08] <ghartmann> I installed juju, ran init, switched to local, created the ssh keys and bootstrap
[00:10] <ghartmann> this time I already had the ssh keys created, ran generate-config, switched to local and bootstrapped
[00:13] <davechen1y> ghartmann: can you put your original .juju back and see what happens ?
[00:14] <ghartmann> I have removed it before
[00:15] <ghartmann> what I have noticed is that beforehand the ssh folder in .juju didn't have the keys there
[00:15] <ghartmann> and now it has
[00:15] <ghartmann> it seems that when the config is generated it copies the folder in
[00:16] <davechen1y> yes, i think it does that
[00:16] <ghartmann> so the problem seems to be related with the sequence
[00:16] <ghartmann> anyway it works now
[00:16] <ghartmann> I just need to configure the bridge and should be good to go
[00:16] <ghartmann> thanks !
[00:17] <davechen1y> ok
[00:18] <ghartmann> by the way, to setup the bridge. is there a better way to expose lxc services to the internal lan ? I am going through setting up the bridge manually
[00:58] <webbrandon> if im gonna use maas for virtual machines and juju i have to install LXC on my MAAS server correct?  Not too familiar with this yet, but running lxc-create now for the node, hope im on the right track
[00:59] <Valduare> interesting
[01:00] <davechen1y> webbrandon: false
[01:02] <webbrandon> wha, wha. damn. need to find some more straightforward docs
[01:02] <davechen1y> webbrandon: if you want to use maas for virtual machines the way we deploy openstack is this
[01:03] <davechen1y> 1. deploy maas controller
[01:03] <davechen1y> 2. enroll all the machines with maas
[01:03] <davechen1y> 3. deploy a juju environment on your maas cluster
[01:03] <davechen1y> 4. use juju to deploy openstack
[01:03] <davechen1y> and scale out the various services like storage and compute to consume all the available machines on your maas cluster
[01:04] <davechen1y> 5. optional, use juju to deploy environments on your openstack cloud
[01:06] <davechen1y> webbrandon: you +could+ use maas and manually call add-machine to create virtual machines on those physical maas nodes
[01:06] <davechen1y> but you'd probably find that the networking gets all screwed up
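davechen1y's numbered workflow, sketched as commands (a rough illustration only; the charm names, environment name, and unit counts are examples, not a tested openstack deployment):

```shell
# 1-2. deploy the MAAS controller and enroll machines (done via the MAAS UI/CLI)

# 3. deploy a juju environment on the maas cluster
juju bootstrap -e maas

# 4. use juju to deploy openstack services, then scale them out
juju deploy nova-compute
juju deploy swift-storage
juju add-unit -n 4 nova-compute   # consume the remaining maas nodes

# 5. (optional) configure a new juju environment that targets the openstack cloud
```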
[01:18] <Valduare> hook failed : config changed   unexpected token near fi
[01:19] <Valduare> err unexpected token 'fi'
[01:20] <webbrandon> davechen1y thank you
[01:20] <davechen1y> Valduare: this is coming from your hook
[01:20] <davechen1y> which is written in bash
[01:21] <davechen1y> which has a syntax error
[01:21] <davechen1y> win8
[01:21] <davechen1y> how many times am i going to do that today ?
[01:21] <Valduare> it didnt throw this error last time I did the same charm
[01:21] <davechen1y> Valduare: that may be true, but that doesn't mean there isn't a syntax error
[01:22] <Valduare> how do I find out which line from the logs
[01:22] <Valduare> oh it did tell me
[01:22] <Valduare> line 33
[01:22] <Valduare> looking now
[01:23] <Valduare> ok so how can I fix the syntax and re-try deploying
[01:23] <davechen1y> Valduare: this is a local charm ?
[01:23] <Valduare> do I use the juju tools and copy it locally?
[01:24] <davechen1y> Valduare: this is a local charm ?
[01:24] <Valduare> ya
[01:24] <davechen1y> juju help upgrade-charm
[01:28] <Valduare> http://pastebin.com/83r1Uwn2 I dont see a syntax error on line 33
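For context, bash's "unexpected token `fi'" almost always means an `if` with a missing `then` (or missing semicolon) somewhere above the reported line, so line 33 itself can look fine. A self-contained illustration, checked with `bash -n`:

```shell
# Write a deliberately broken hook fragment -- the `then` is missing,
# so bash only notices something is wrong when it reaches `fi`.
broken=$(mktemp)
cat > "$broken" <<'EOF'
if [ -f /etc/myapp.conf ]
    echo "config present"
fi
EOF

# `bash -n` parses without executing, a safe way to lint a hook script.
if bash -n "$broken" 2>/dev/null; then
    echo "no syntax error"
else
    echo "syntax error detected"
fi
rm -f "$broken"
```

Once the hook is fixed in the local charm directory, something like `juju upgrade-charm <service> --repository <repo>` pushes the fix and `juju resolved --retry <unit>` re-runs the failed hook (flag spellings per the 1.x-era client that `juju help upgrade-charm` above describes).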
[01:28] <lemao> Hello. I see that subordinate services are used for things such as AWS ELB or RDS. What I don't understand is how I can model an RDS instance that can be used by a number of services
[01:41] <Valduare> ah and oddly it just turned green all by itself
[08:47] <overm1nd> marcoceppi did I get some time to work on the wordpress charm?
[10:15] <ghartmann> is there a recommended way to setup lxc provider network as a bridge ?
[10:39] <stub> Once a peer relation is joined, will a unit remain in it until the unit is destroyed? Even if it is the only unit in the service for a period of time?
[10:40] <stub> It seems to be, but I'm unsure if that can be relied on
[10:41] <stub> If it *can* be relied on, then I might be able to work around the split-brain problem in Bug #1258485
[10:41] <_mup_> Bug #1258485: support leader election for charms <juju-core:Triaged> <postgresql (Juju Charms Collection):New> <https://launchpad.net/bugs/1258485>
[10:42] <stub> (if a unit joins a relation that has never had a leader, then unit 0 will become the leader even if it hasn't joined yet)
[10:46] <stub> no, that still fails in a pathological case (dropping unit 0 before the other units can see each other in the peer relation)
[12:22] <lazyPower> stub: i'm not sure i understand the question
[12:22] <lazyPower> are you asking if the peer relationship membership remains after the peer'ing units are destroyed?
[12:23] <stub> It doesn't matter... but I was asking if it remained when all-but-one units were destroyed
[12:23] <stub> Create a single unit service, the unit isn't in the peer relation. Add a new unit, both get added to the peer relation. Remove one unit, and the remaining unit remains in the peer relation.
[12:55] <lazyPower> stub: its in the pool, but the hooks dont fire
[12:55] <lazyPower> so yeah its still in the relationship, but with only one unit, the hooks wont fire against themselves.
[14:18] <bloodearnest> is there documentation that lists what hooks are called in what order when?
[14:18] <bloodearnest> so juju upgrade-charm calls upgrade-charm and then config-changed?
[14:19] <bloodearnest> actually, that's the other way round I think - config then upgrade
[14:20] <lazyPower> bloodearnest: you were right the first time
[14:20] <lazyPower> upgrade-charm, *then* config-changed
[14:20] <lazyPower> then any peering hooks on peer-config-changed
[14:20] <bloodearnest> lazyPower: so, interesting question then:
[14:22] <bloodearnest> upgrade-charm deploys my new payload, and restarts the server. This fails because the config format has changed slightly, and the new code can't process the old config format, so the hook fails
[14:22] <bloodearnest> lazyPower: this is a "fat" charm, as currently being discussed on the ML
[14:23] <lazyPower> bloodearnest: charms have to retain backwards compat, what you're proposing would fail charm store review
[14:23] <lazyPower> even if the config value is not used, it should be marked in the config description as deprecated, and handled appropriately
[14:24] <bloodearnest> lazyPower: yeah, but it's never going in the charmstore...
[14:24] <lazyPower> wait i may be going overboard, the config of what?
[14:24] <lazyPower> you mean the charm config? or do you mean the service config?
[14:24] <bloodearnest> lazyPower: by config format I mean application config format, not charm config, which hasn't changed
[14:24] <lazyPower> ahhhh ok, sorry i went overboard :)
[14:24] <bloodearnest> heh
[14:24] <lazyPower> a bit preoccupied with my active sprint
[14:24] <Valduare> morning
[14:25] <bloodearnest> so in this case, an app under rapid development, a config value changed from a list to a dict
[14:25] <lazyPower> well there are 2 things you can do. Either fully deprecate the old config of the application, and do a full upgrade with a backup of the environment dropped in /mnt, or have a precondition check with 2 paths to handle the application version, one for the old app + config, and one for the new app + config
[14:25] <bloodearnest> right
[14:26] <lazyPower> this is trivial when using config management solutions like ansible, you write 2 sep. roles and assign the role based on the predicate configuration value
[14:26] <lazyPower> Valduare: o/
[14:26] <bloodearnest> or just juju resolved, as the config-changed hook will fix everything :)
[14:26] <bloodearnest> lazyPower: am using ansible
[14:27] <lazyPower> bloodearnest: i would suggest not restarting the service until config-changed has run then
[14:27] <lazyPower> so it upgrades the app along with the configuration before the restart
[14:27] <bloodearnest> lazyPower: ah yes - that sounds better
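A minimal sketch of lazyPower's suggestion (all function and file names here are hypothetical): `upgrade-charm` unpacks the payload and records that a restart is pending; `config-changed` regenerates the application config and only then restarts, so the new code never starts against the old config format:

```shell
# Flag-file handoff between the two hooks, using $CHARM_DIR as scratch space.
CHARM_DIR=${CHARM_DIR:-$(mktemp -d)}

upgrade_charm() {
    # ... unpack the new payload here (placeholder) ...
    touch "$CHARM_DIR/.restart-pending"     # defer the restart
}

config_changed() {
    # ... regenerate the application's config file here (placeholder) ...
    if [ -f "$CHARM_DIR/.restart-pending" ]; then
        rm "$CHARM_DIR/.restart-pending"
        echo "restarting service after config rewrite"
    fi
}

# juju runs upgrade-charm first, then config-changed, per lazyPower above.
upgrade_charm
config_changed
```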
[14:37] <arosales> marcoceppi: if you exited a relation gracefully would you see relation broken?
[14:37] <arosales> or does one _always_ see relation broken and departed with gracefully severing a relation?
[14:37] <marcoceppi> arosales: yes, you'll always get a relation-broken event
[14:37] <arosales> ah ok
[14:37] <arosales> marcoceppi: thanks for the clarification
[14:37] <arosales> tvansteenburgh ^
[14:38] <arosales> tvansteenburgh: that was my misinterpretation
[14:52] <bloodearnest> lazyPower: turns out, that was exactly what I was doing, problem was the config-changed hook not doing *all* configs
[15:10] <lazyPower> bloodearnest: but you have a clear path moving forward. *thumbsup*
[15:23] <jcastro> does anyone have time to review/fix up the issues with owncloud?
[15:24] <tvansteenburgh> jcastro: is that something i can do?
[15:25] <ValDuare> what command removes a dead machine from juju
[15:26] <tvansteenburgh> juju destroy-machine <machine_num>
[15:27] <jcastro> tvansteenburgh, yeah but post-sugar
[15:28] <jcastro> we also have seafile, which is apparently awesome: http://manage.jujucharms.com/~negronjl/precise/seafile
[15:30] <ValDuare> tvansteenburgh: that just puts it to life: dead   mode
[15:30] <ValDuare> im on manual provision
[15:31] <tvansteenburgh> ValDuare: ok sorry, i don't know anything about manual provisioning
[15:40] <jcastro> does remove-machine not work?
[15:42] <ValDuare> let me see
[15:42] <ValDuare> nope
[15:44] <jcastro> what does juju say the status is of the machine in `juju status`?
[16:10] <frankban> rbasak: I erroneously marked bug 1301464 as committed in quickstart, switched back to confirmed. could you please re-triage it? thanks
[16:10] <_mup_> Bug #1301464: The mega-watcher for machines does not include containers addresses <addressability> <api> <juju-gui> <juju-core:Fix Committed by wallyworld> <juju-core (Ubuntu):Triaged> <juju-quickstart (Ubuntu):Confirmed> <https://launchpad.net/bugs/1301464>
[16:19] <rbasak> frankban: no problem. Done.
[16:23] <mbruzek> Hi #juju
[16:23] <mbruzek> question for you
[16:23] <mbruzek> According to the log config-get returns an int in scientific notation and that makes it hard to do math on the value.
[16:23] <mbruzek> 2014-04-03 16:04:47 INFO config-changed ++ config-get upload-maxsize
[16:23] <mbruzek> 2014-04-03 16:04:47 INFO config-changed + upload_maxsize=3e+07
[16:24] <mbruzek> Is config get supposed to return 3e+07 ?
[16:24] <frankban> rbasak: thanks! also I have a question for you and jamespage: we have a branch introducing joyent support in quickstart, not yet landed. it's a small change but it's also a nice new feature (for quickstart and juju-core 1.18 itself). Should we keep that out for now? I mean, until we fix the bug introduced by core changes and we rebuild the distro package? We'd like to include that, but if you prefer otherwise it's not a problem
[16:26] <marcoceppi> mbruzek: not really, I've never seen this
[16:26] <marcoceppi> mgz: have you seen juju truncate long numbers? ^^
[16:27] <mbruzek> 30000000 is less than max int for 32 bit values, but in scientific notation I can not get bc to use it.
[16:28] <tvansteenburgh> here are my solutions: http://pastebin.ubuntu.com/7199425/
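One workaround of the kind tvansteenburgh pasted (a guess at the approach, since the pastebin contents aren't reproduced here): normalize the scientific notation back to a plain integer with `printf` before doing any math.

```shell
# config-get handed back the int option as "3e+07", which bc rejects.
# printf's %f conversion accepts scientific notation, so round-trip it:
upload_maxsize=$(printf '%.0f' "3e+07")
echo "$upload_maxsize"                 # 30000000

# With a plain integer, ordinary arithmetic works again,
# e.g. bytes -> decimal megabytes for the php file:
echo $(( upload_maxsize / 1000000 ))   # 30
```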
[16:28] <mgz> marcoceppi: is that just the log though?
[16:29] <rbasak> frankban: as you have probably guessed that branch would need a feature freeze exception. It's easiest for me if you don't land it and release without it.
[16:29] <rbasak> frankban: the alternative would be for me to cherry-pick the specific fixes we need in a distro patch taken from bzr commits, without bumping the upstream version number.
[16:30] <mbruzek> mgz I don't think so, I get a syntax error when using bc and can repeat the syntax error when I use scientific notation
[16:30] <rbasak> frankban: that's only slightly more work for me, if the number of commits I need to cherry-pick are small, and they all apply correctly without conflicts.
[16:30] <frankban> rbasak: ack, so I mean, it's not possible to include both the new feature and the bug fix as part of that exception, correct?
[16:31] <rbasak> frankban: if you want an exception, you only need it for the feature. Bugs don't need exceptions.
[16:31] <rbasak> frankban: then we can land it all together. If you want to do it that way.
[16:31] <rbasak> frankban: you'll need to get the exception approved in time though, and I assume the release team are pretty busy given how long it took them to approve the previous exception.
[16:32] <rbasak> frankban: Laney also asked for more QA assurances for future exceptions, so we'll want to document that too. I can certainly prepare the proposed package first and put it in a PPA for testing.
[16:33] <frankban> rbasak: OIC. so I guess we'll just release 1.3.1 with the bug fix. I believe it will be ready Mon or Tue
[16:33] <rbasak> frankban: sounds good.
[16:33] <marcoceppi> tvansteenburgh: <3
[16:34] <frankban> rbasak: it will include the bug fix for juju-core mega-watcher and the --ppa flag, for discriminating between distro and pypi packages. I planned to have a module including something like JUJU_CORE_SOURCE = 'ppa', which can be changed to 'distro' to switch defaults
[16:34] <mgz> mbruzek: I don't see any uses of float format verbs in our code. are you sure it's not specific to that charm?
[16:35] <rbasak> frankban: that sounds great
[16:35] <frankban> rbasak: cool thank you
[16:35] <mbruzek> mgz upload-maxsize:
[16:35] <mbruzek>     type: int
[16:35] <mbruzek>     default: 30000000
[16:35] <mbruzek>     description: "Maximum file size (in bytes) that users can upload into Sugar as attachments."
[16:35] <rbasak> frankban: please just make sure that it only includes bugfixes though, so that we don't find later that we have to ask for an exception
[16:36] <mbruzek> mgz I do not get that value as a number out of config-get
[16:36] <frankban> rbasak: sure, just those two changes, no new features, just a new patch number version, sounds good?
[16:36] <rbasak> frankban: sounds great :)
[16:42] <marcoceppi> mgz: why not do smart filtering?
[16:42] <marcoceppi> err mbruzek ^^
[16:43] <marcoceppi> mbruzek: like 30M for 30 Megabytes
[16:43] <marcoceppi> 1G
[16:43] <marcoceppi> etc
[16:43] <mbruzek> The number has to be in bytes for the sugar config file, but in MB for the php file
[16:43] <mbruzek> So either way I have to do math on it.
[16:43] <marcoceppi> mbruzek: it'd be better to do MB from user input
[16:44] <mbruzek> I defaulted to the byte number that sugar admins would be used to
[16:44] <marcoceppi> to bytes for sugar
[16:44] <mbruzek> I could also change it to a string
[16:44] <mbruzek> But it is an int, and I thought that should work.
[16:45] <mbruzek> sugar admins are used to setting 30000000
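marcoceppi's "smart filtering" idea can lean on coreutils' `numfmt`, which already parses SI suffixes (a sketch of the conversion, not what the charm actually does):

```shell
# Accept human-friendly values like 30M or 1G in charm config and convert
# to bytes for the sugar config file; --from=si makes 30M exactly 30000000.
numfmt --from=si 30M    # 30000000
numfmt --from=si 1G     # 1000000000
```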
[16:45] <mgz> mbruzek: to be clear, what does config-get in a hook return? a mangled version or the bytes?
[16:45] <mbruzek> mgz config-get returns with the scientific notation
[16:45] <mgz> you should be able to reproduce with a stub charm then, and file a bug
[16:46] <mbruzek> 2014-04-03 16:45:53 INFO config-changed ++ config-get upload-maxsize
[16:46] <mbruzek> 2014-04-03 16:45:53 INFO config-changed + upload_maxsize=3e+07
[16:46] <mbruzek> against juju-core?
[16:47] <mgz> yes.
[16:47] <mgz> a skeleton charm with config.yaml with one entry, a config-changed hook, and a tests/ script should reproduce
[18:13] <dannf> i've got some ARM64 vms, on which i'm trying to deploy with the manual provider.
[18:14] <dannf> bootstrap: done. juju-gui deployed to bootstrap node: done (and working)
[18:14] <dannf> but i juju add-machined a couple other nodes and, while add-machine completes w/o error and the agent is running, juju status shows the nodes stuck in pending
[18:14] <dannf> machine-0: 2014-04-03 18:14:16 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "manual:10.0.128.7": no instances found
[18:15] <dannf> is what debug-log shows
[18:21] <webbrandon> arrg I am having a hell of a time trying to get MAAS to act the way I want.
[18:22] <webbrandon> im done messing with it for now. Just have to pay out of my behind to use a service for now.
[18:29] <webbrandon> actually I am just going to follow marcoceppi tut.  Not what I want but I guess good for the interim of testing
[18:29] <webbrandon> not tut - blog post
[18:49] <xagaba_> Any place to get info about net-ju ?
[18:56] <bac> sinzui: hey do you know who is working on the joyent support?  anyone that might currently be around?
[18:56] <sinzui> bac, bodie_ and wallyworld_  in #juju-dev
[18:57] <bac> thanks sinzui
[18:58] <xagaba_> It's possible to deploy juju in a couple of servers to use only LXC as cloud machines ?
[19:09] <marcoceppi> xagaba_: yes, you can use deploy --to lxc:<machine-num>
[19:11] <xagaba_> marcoceppi: any place to get info about such deploy mode ?
[19:12] <marcoceppi> xagaba_: https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-machines
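The colocation syntax from that docs page, applied to xagaba_'s two-server setup (the service names and machine numbers are just examples):

```shell
# Place units inside LXC containers on existing machines instead of
# allocating new ones; <machine-num> values come from `juju status`.
juju deploy mysql --to lxc:1
juju deploy wordpress --to lxc:2
juju add-unit wordpress --to lxc:1   # scale onto the other server's containers
```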
[19:13] <sarnold> cool!
[19:13] <xagaba_> marcoceppi: thanks, I take a look
[19:21] <themonk> what does it mean by "instance-state: missing" ?
[19:56] <perrito666> nessita: well? talk
[19:56] <perrito666> :p
[19:56] <nessita> hello everyone! my units are not moving from the pending state, log shows:
[19:56] <nessita> 2014-04-03 19:54:54 ERROR juju.worker.uniter.charm git.go:211 git command failed: exec: "git": executable file not found in $PATH path: /var/lib/juju/agents/unit-solr-jetty-0/state/deployer/update-20140403-165454781105382 args: []string{"init"}
[19:56] <nessita> complete logs: https://pastebin.canonical.com/107753/
[19:56] <nessita> I ssh'd into elasticsearch/0 and confirmed the unit can access the internet
[19:57] <nessita> Until yesterday I was running saucy with juju/stable + juju/devel PPA's
[19:57] <nessita> yesterday I upgraded to trusty
[19:57] <nessita> and since the units were not starting, I cleaned up everything and retried
[19:57] <nessita> same error
[19:58] <nessita> any ideas? :-)
[21:12] <themonk> what does it mean by "instance-state: missing" ?