marcoceppi | paulczar_: you can use the -u flag when deploying with local:<charm> to upgrade the cache | 00:31 |
marcoceppi | paulczar_: juju help deploy for more details | 00:32 |
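A minimal sketch of the upgrade-on-deploy flow marcoceppi describes; the repository path and charm name here are placeholders, not anything from the log:

```shell
# Deploy from a local charm repository, refreshing the cached copy first (-u / --upgrade)
juju deploy --repository=~/charms -u local:precise/mycharm
# Full option list:
juju help deploy
```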
mrz | jujucharms.com is just demo right? i don't actually deploy there. | 01:12 |
marcoceppi | mrz: correct | 01:18 |
marcoceppi | mrz: it's in "sandbox mode" | 01:18 |
mrz | i have discourse up and running but the default instance sizes are small. | 01:19 |
mrz | especially for postgres and i bet i want that as block storage | 01:19 |
mrz | firewall-mode: global | 01:21 |
mrz | isn't doing what I thought it should | 01:21 |
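For reference, firewall-mode is an environment setting in ~/.juju/environments.yaml; a sketch, with the rest of the provider stanza left out as placeholders:

```yaml
environments:
  hpcloud:
    type: openstack        # placeholder provider details; credentials omitted
    # "global" shares one security group across the environment
    # instead of one per machine (the default "instance" mode)
    firewall-mode: global
```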
mrz | is it known that the default juju-gui install doesn't just work? apache's configured for 8000 but the docs seem to read that it's just http | 01:34 |
mrz | or https | 01:34 |
mrz | and expose only opens 443/80 | 01:34 |
mrz | oh i see. looks like haproxy fronts it | 01:36 |
mrz | http://15.185.176.57:8000/ just hangs, however. | 01:37 |
mrz | weird! | 01:38 |
mrz | took 10 mins for this to come online | 01:38 |
mrz | ha! | 01:50 |
mrz | i tried to remove one instance and it destroyed all of them | 01:50 |
alexrockz | Hi everybody!!! | 01:53 |
alexrockz | :) | 01:53 |
alexrockz | Anyone like Ubuntu? | 01:53 |
alexrockz | Its awesome | 01:54 |
mrz | okay, the security group limits are really hurting me | 01:56 |
alexrockz | That sucks | 02:25 |
=== defunctzombie is now known as defunctzombie_zz | ||
=== alexrockz is now known as alexrockzz | ||
=== alexrockzz is now known as alex_loves_Ubunt | ||
=== alex_loves_Ubunt is now known as Alex_Loves_Ubunt | ||
Alex_Loves_Ubunt | wtf | 02:34 |
=== Alex_Loves_Ubunt is now known as alexrockzlovesub | ||
=== alexrockzlovesub is now known as alexlovesubuntu | ||
alexlovesubuntu | There. | 02:35 |
alexlovesubuntu | Finally! | 02:36 |
=== defunctzombie_zz is now known as defunctzombie | ||
panthar_ | ping google.com | 03:23 |
=== alexlovesubuntu is now known as alexrockzzzzzzzz | ||
=== defunctzombie is now known as defunctzombie_zz | ||
dhart | Is a juju environment for multiple hosts possible without MAAS? I've poked around askubuntu. | 05:27 |
dhart | ooo, just found juju add-machine ssh:user@host | 05:55 |
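A small sketch of the manual-provisioning command dhart found; the user and address are placeholders:

```shell
# Enlist an existing, reachable host into the environment over SSH
juju add-machine ssh:ubuntu@192.0.2.10
# The new machine should then appear in:
juju status
```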
=== CyberJacob|Away is now known as CyberJacob | ||
dada | hi | 10:24 |
=== freeflying_away is now known as freeflying | ||
marcoceppi | dhart: it is, manual provisioning is still very much a new feature | 11:51 |
marcoceppi | dhart: I was about to link you to the mailing list thread, but it looks like you've already found it :) | 11:52 |
marcoceppi | mrz: sec groups on what provider? HP Cloud? | 11:53 |
dhart | marcoceppi: thanks for the warning. :-) I'm happy to be a guinea pig, as I'm in the process of converting brute distribution scripts to juju charms. | 11:53 |
marcoceppi | mrz: default juju-gui very much so does work, I use it constantly, you can change the default instance size when deploying from "small" to a different size using constraints https://juju.ubuntu.com/docs/charms-constraints.html | 11:55 |
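A sketch of overriding the default instance size with constraints, as in the docs marcoceppi links; the service name and sizes are just examples:

```shell
# Ask for a larger machine when deploying, e.g. for postgresql
juju deploy --constraints "mem=4G cpu-cores=2" postgresql
# Or set environment-wide defaults for new machines
juju set-constraints mem=2G
```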
marcoceppi | dhart: we've got a new provider coming soon which will allow you to just use a machine as a "bootstrap" so you can completely use your "Just a bunch of servers" as a Juju environment. Not sure the timeline on that landing though | 11:56 |
marcoceppi | mrz: let me know if you have any problems with the discourse charm! :D | 11:56 |
dhart | I'm ok with a bootstrap host that uses the 'local' provider now but moves to the null provider later. | 11:59 |
_mup_ | Bug #1232736 was filed: juju debug-log : port 22: Connection refused <juju:New> <https://launchpad.net/bugs/1232736> | 12:48 |
=== wedgwood is now known as Guest43744 | ||
=== freeflying is now known as freeflying_away | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== Guest43744 is now known as wedgwood | ||
scaine | Just starting out here - but when I run "juju generate-config", I don't get a ~/.juju directory! Do I have to run this stuff as root? | 16:12 |
scaine | Never mind. Did an "apt-get update", juju upgraded and now it's working. | 16:22 |
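For anyone hitting the same thing, a sketch of what a working install produces (no root needed):

```shell
juju generate-config          # writes a boilerplate config under your home directory
ls ~/.juju/environments.yaml  # edit this file with your provider credentials
```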
mrz | marcoceppi: yeah it works. it's a bit opaque what's happening once i instantiate an instance. it returns green but it's not done installing software | 16:41 |
paulczar_ | is there a way to read data from the environments.yaml from hooks ? | 18:05 |
paulczar_ | I need to use the ec2 security keys in a hook ... want to read from environments.yaml rather than adding them in a second place | 18:05 |
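The log doesn't show an answer here; one common workaround (an assumption on the editor's part, not something confirmed above) is to pass the keys in as charm config options and read them with config-get inside the hook:

```shell
# config.yaml would declare e.g. access-key / secret-key options (hypothetical names);
# inside a hook they can then be read with:
ACCESS_KEY=$(config-get access-key)
SECRET_KEY=$(config-get secret-key)
# and supplied from the client, e.g.:
#   juju set myservice access-key=... secret-key=...
```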
=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
nik | hi | 18:42 |
=== nik is now known as Guest537 | ||
Guest537 | any1 there | 18:43 |
ZonkedZebra | How do I open a port to all machines and have it stay open? I modified my AWS security policy but juju reset it on me | 19:53 |
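For context, juju manages the security-group rules itself, so rules added by hand tend to get reverted; the supported path (sketched with a placeholder service name) is open-port in the charm plus expose:

```shell
# Inside a charm hook: tell juju the unit wants this port reachable
open-port 8000/tcp
# From the client: actually open the firewall for the service
juju expose myservice
```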
=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== Dotted^ is now known as Dotted | ||
=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
lazyPower | I'm using juju local. I reboot my machine and suddenly juju status refused to report on my bootstrap node and the cloud that was present prior to the reboot. Also, when i run juju bootstrap i'm getting a 500 error. | 21:06 |
lazyPower | however wiping out ~/.juju/local seems to have cleared up whatever was halting progress... | 21:07 |
=== defunctzombie is now known as defunctzombie_zz | ||
=== CyberJacob is now known as CyberJacob|Away | ||
thumper | lazyPower: morning, what version of juju? | 21:44 |
mrz | i killed off an instance in hpcloud and juju doesn't know it | 22:53 |
mrz | error: no machines were destroyed: machine 11 has unit "discourse/0" assigned | 22:53 |
mrz | is there a consistency check or a force option? | 22:53 |
mrz | ha. no i meant what's the latest news | 23:01 |
lazyPower | thumper: 1.14.1 | 23:05 |
thumper | lazyPower: can I get you to run "apt-cache policy lxc" for me? | 23:07 |
thumper | I'm busy debugging some lxc issues anyway | 23:07 |
thumper | lazyPower: do you have copies of the actual error you were seeing? | 23:07 |
thumper | lazyPower: also, are you using an encrypted home drive? | 23:08 |
lazyPower | ahhh, i removed ~/.juju/local and it resolved itself | 23:08 |
lazyPower | i think what happened was juju was provisioning something when i power cycled and it left the juju setup in an inconsistent state | 23:08 |
lazyPower | If i do encounter this again i'll backup the ~/.juju/local directory for analysis. And I am indeed using an encrypted home directory | 23:09 |
thumper | hmm... | 23:14 |
thumper | you may find that lxc still thinks you have machines lying around | 23:14 |
thumper | run this "sudo lxc-ls --fancy" | 23:14 |
lazyPower | nice | 23:15 |
thumper | I noticed one of our guys had his juju environment outside his home dir because it was encrypted | 23:15 |
thumper | I don't know if this is a problem or not | 23:15 |
thumper | do you still have lxc-machines running? | 23:15 |
thumper | you may also still have system upstart scripts hanging around | 23:16 |
lazyPower | I do, but they are part of the new cloud i just provisioned for testing. | 23:16 |
thumper | removing the ~/.juju/local directory doesn't fix everything | 23:16 |
thumper | hmm... ok | 23:16 |
thumper | mrz: there is a --force option I believe | 23:16 |
thumper | mrz: but I do wonder if you need to still force remove the unit | 23:17 |
thumper | mrz: this may actually be a problem, I recall similar problems elsewhere, but it is on our radar | 23:17 |
mrz | thumper: error: flag provided but not defined: --force | 23:32 |
mrz | can i manually edit the state machine somewhere? | 23:32 |
thumper | hmm... | 23:45 |
thumper | mrz: no | 23:45 |
thumper | mrz: let me dig | 23:45 |
thumper | mrz: does the machine as juju sees it have any units on it? | 23:48 |
thumper | mrz: is that the "discourse/0" ? | 23:48 |
mrz | it thinks it does | 23:49 |
mrz | but machine 11 was destroyed in the hpcloud panel | 23:49 |
thumper | mrz: have you tried "juju destroy-unit discourse/0" | 23:49 |
* thumper is trying to remember the internal communication. | 23:50 | |
mrz | yes. it appears to run but juju status shows it still dying | 23:50 |
thumper | ah... | 23:50 |
thumper | poos | 23:53 |
thumper | just reading up, there is no --force | 23:53 |
thumper | and it is a bug that this leaves broken bits lying around | 23:53 |
thumper | sorry | 23:54 |
mrz | recall where state lives? something i can manually edit? | 23:54 |
thumper | state lives in a complicated mongo db series of documents | 23:54 |
thumper | manually editing is not at all recommended | 23:54 |
mrz | er, you had me at "mongo" | 23:54 |
mrz | :) | 23:55 |
thumper | I'll be raising the priority of this issue | 23:55 |
thumper | it hits quite a few people | 23:55 |
thumper | and we need to have a nicer resolution than "you're poked" | 23:55 |
mrz | firewall-mode: global doesn't do what I was told it'd do | 23:55 |
* thumper doesn't know much about the firewall bits | 23:56 | |
mrz | hpcloud defaults to 10 security groups | 23:56 |
mrz | and every instance needs its own security policy | 23:56 |
mrz | where should i file bugs or check for dups | 23:56 |
mrz | ? | 23:56 |
thumper | bugs.launchpad.net/juju-core | 23:58 |