[00:31] paulczar_: you can use the -u flag when deploying with local: to upgrade the cache
[00:32] paulczar_: juju help deploy for more details
[01:12] jujucharms.com is just a demo, right? I don't actually deploy there.
[01:18] mrz: correct
[01:18] mrz: it's in "sandbox mode"
[01:19] I have discourse up and running but the default instance sizes are small.
[01:19] especially for postgres, and I bet I want that as block storage
[01:21] firewall-mode: global
[01:21] isn't doing what I thought it should
[01:34] is it known that the default juju-gui install doesn't just work? apache's configured for 8000 but the docs seem to read that it's just http
[01:34] or https
[01:34] and expose only opens 443/80
[01:36] oh I see. looks like haproxy fronts it
[01:37] http://15.185.176.57:8000/ just hangs, however.
[01:38] weird!
[01:38] took 10 mins for this to come online
[01:50] ha!
[01:50] I tried to remove one instance and it destroyed all of them
[01:53] Hi everybody!!!
[01:53] :)
[01:53] Anyone like Ubuntu?
[01:54] It's awesome
[01:56] okay, the security group limits are really hurting me
[02:25] That sucks
[02:34] wtf
[03:23] ping google.com
[05:27] Is a juju environment for multiple hosts possible without MAAS? I've poked around askubuntu.
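The -u flag mentioned at [00:31] is the local-charm cache upgrade discussed above. A minimal sketch of that workflow, assuming a juju 1.x local charm repository and using the discourse charm from this log as the example:

```shell
# Deploy from a local charm repository; -u refreshes the cached
# copy of the charm before deploying (repository path is illustrative).
juju deploy -u --repository=~/charms local:precise/discourse

# Full flag reference, as suggested in the log:
juju help deploy
```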
[05:55] ooo, just found juju add-machine ssh:user@host
[10:24] hi
[11:51] dhart: it is, manual provisioning is still very much a new feature
[11:52] dhart: I was about to link you to the mailing list thread, but it looks like you've already found it :)
[11:53] mrz: sec groups on what provider? HP Cloud?
[11:53] marcoceppi: thanks for the warning. :-) I'm happy to be a guinea pig, as I'm in the process of converting brute distribution scripts to juju charms.
[11:55] mrz: default juju-gui very much so does work, I use it constantly, you can change the default instance size when deploying from "small" to a different size using constraints https://juju.ubuntu.com/docs/charms-constraints.html
[11:56] dhart: we've got a new provider coming soon which will allow you to just use a machine as a "bootstrap" so you can completely use your "just a bunch of servers" as a Juju environment. Not sure of the timeline on that landing, though
[11:56] mrz: let me know if you have any problems with the discourse charm! :D
[11:59] I'm ok with a bootstrap host that uses the 'local' provider now but moves to the null provider later.
[12:48] <_mup_> Bug #1232736 was filed: juju debug-log : port 22: Connection refused
[16:12] Just starting out here - but when I run "juju generate-config", I don't get given a ~/.juju directory! Do I have to run this stuff as root?
[16:22] Never mind. Did an "apt-get update", juju upgraded, and now it's working.
[16:41] marcoceppi: yeah it works. it's a bit opaque what's happening once I instantiate an instance. it returns green but it's not done installing software
[18:05] is there a way to read data from environments.yaml from hooks?
[18:05] I need to use the ec2 security keys in a hook ...
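The manual-provisioning command found at [05:55] and the constraints advice at [11:55] can be sketched together; the hostname and sizes below are illustrative, not from the log:

```shell
# Enlist an existing SSH-reachable host into the environment
# (manual provisioning, a new feature at the time of this log):
juju add-machine ssh:ubuntu@203.0.113.10

# Deploy with explicit constraints instead of the default "small" size:
juju deploy --constraints "mem=4G cpu-cores=2" postgresql

# Constraints can also be adjusted per service after deployment:
juju set-constraints postgresql mem=8G
```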
want to read from environments.yaml rather than adding them in a second place
[18:42] hi
[18:43] anyone there?
[19:53] How do I open a port to all machines and have it stay open? I modified my AWS security policy but then juju reset it on me
[21:06] I'm using juju local. I rebooted my machine and suddenly juju status refuses to report on my bootstrap node and the cloud that was present prior to the reboot. Also, when I run juju bootstrap I'm getting a 500 error.
[21:07] however, wiping out ~/.juju/local seems to have cleared up whatever was halting progress...
[21:44] lazyPower: morning, what version of juju?
[22:53] I killed off an instance in hpcloud and juju doesn't know it
[22:53] error: no machines were destroyed: machine 11 has unit "discourse/0" assigned
[22:53] is there a consistency check or a force option?
[23:01] ha. no, I meant what's the latest news
[23:05] thumper: 1.14.1
[23:07] lazyPower: can I get you to run "apt-cache policy lxc" for me?
[23:07] I'm busy debugging some lxc issues anyway
[23:07] lazyPower: do you have copies of the actual error you were seeing?
[23:08] lazyPower: also, are you using an encrypted home drive?
[23:08] ahhh, I removed ~/.juju/local and it resolved itself
[23:08] I think what happened was juju was provisioning something when I power cycled and it left the juju setup in an inconsistent state
[23:09] If I do encounter this again I'll back up the ~/.juju/local directory for analysis. And I am indeed using an encrypted home directory
[23:14] hmm...
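The port question at [19:53] comes up because juju owns the provider security groups and will revert manual edits; ports are meant to be declared through juju itself. A sketch, using the discourse service from this log as the example (the port number is illustrative):

```shell
# Inside a charm hook: declare the port so juju manages the
# security-group rule and it survives juju's reconciliation.
open-port 8000/tcp

# From the client: expose the service so its opened ports are reachable.
juju expose discourse

# For the environments.yaml question at [18:05]: hooks run on the cloud
# machines and cannot see the client-side environments.yaml. The usual
# pattern is a charm config option (hypothetical name shown here):
juju set discourse ec2-access-key=...   # client side
KEY=$(config-get ec2-access-key)        # inside a hook
```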
[23:14] you may find that lxc still thinks you have machines lying around
[23:14] run this: "sudo lxc-ls --fancy"
[23:15] nice
[23:15] I noticed one of our guys had his juju environment outside his home dir because it was encrypted
[23:15] I don't know if this is a problem or not
[23:15] do you still have lxc machines running?
[23:16] you may also still have system upstart scripts hanging around
[23:16] I do, but they are part of the new cloud I just provisioned for testing.
[23:16] removing the ~/.juju/local directory doesn't fix everything
[23:16] hmm... ok
[23:16] mrz: there is a --force option I believe
[23:17] mrz: but I do wonder if you need to still force remove the unit
[23:17] mrz: this may actually be a problem, I recall similar problems elsewhere, but it is on our radar
[23:32] thumper: error: flag provided but not defined: --force
[23:32] can I manually edit the state machine somewhere?
[23:45] hmm...
[23:45] mrz: no
[23:45] mrz: let me dig
[23:48] mrz: does the machine as juju sees it have any units on it?
[23:48] mrz: is that the "discourse/0"?
[23:49] it thinks it does
[23:49] but machine 11 was destroyed in the hpcloud panel
[23:49] mrz: have you tried "juju destroy-unit discourse/0"?
[23:50] * thumper is trying to remember the internal communication.
[23:50] yes. it appears to run but juju status shows it still dying
[23:50] ah...
[23:53] poos
[23:53] just reading up, there is no --force
[23:53] and it is a bug that this leaves broken bits lying around
[23:54] sorry
[23:54] recall where state lives? something I can manually edit?
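The cleanup sequence discussed above can be sketched as follows; note that at the time of this log (juju 1.14.1) there was no --force flag, as thumper confirms, though one was added to destroy-machine in later releases:

```shell
# Check for leftover local-provider containers after an unclean shutdown:
sudo lxc-ls --fancy

# The order juju expects: remove the unit first, then the machine.
juju destroy-unit discourse/0
juju destroy-machine 11
```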
[23:54] state lives in a complicated mongo db series of documents
[23:54] manually editing is not at all recommended
[23:54] er, you had me at "mongo"
[23:55] :)
[23:55] I'll be raising the priority of this issue
[23:55] it hits quite a few people
[23:55] and we need to have a nicer resolution than "you're poked"
[23:55] firewall-mode: global doesn't do what I was told it'd do
[23:56] * thumper doesn't know much about the firewall bits
[23:56] hpcloud defaults to 10 security groups
[23:56] and every instance needs its own security policy
[23:56] where should I file bugs or check for dups?
[23:58] bugs.launchpad.net/juju-core
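The firewall-mode: global setting that mrz was trying is an environment-level option in environments.yaml. A minimal sketch, assuming an HP Cloud (openstack-type) environment as in this log; in "global" mode juju opens ports in one shared security group for the whole environment rather than one group per machine, which matters under HP Cloud's 10-security-group quota mentioned above:

```yaml
# environments.yaml fragment (environment name and type are illustrative)
environments:
  hpcloud:
    type: openstack
    firewall-mode: global   # default is "instance" (one group per machine)
```

Note that the setting takes effect at bootstrap time, so it cannot be flipped on a running environment.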