[00:42] <cjohnston> hazmat: doubting you're still around, but I just got bug #1269519 again
[00:42] <_mup_> Bug #1269519: Error on allwatcher api <juju-core:Fix Released by rogpeppe> <juju-deployer:Fix Released> <https://launchpad.net/bugs/1269519>
[00:43] <cjohnston> is there a way of knowing which machine log is needed
[00:43] <hazmat> cjohnston, traceback from deployer run .... ideally with -vWd
[00:44] <cjohnston> hazmat: http://paste.ubuntu.com/6979251/
[00:44] <cjohnston> doesn't have -vWd tho
[00:44] <hazmat> cjohnston, juju ssh 0
[00:44] <hazmat> log is in /var/log/juju/machine-0.log afaicr
[00:44] <cjohnston> sounds right
[00:45] <cjohnston> hazmat: http://paste.ubuntu.com/6979254/ and all: http://paste.ubuntu.com/6979255/
[00:47] <hazmat> cjohnston, interesting.. thanks. i'm done for the night but that's helpful.. i'll talk to rog re the errors there, but given the synchronous py api it sounds like short-term env connections are the way to go... will review and give feedback tomorrow
[00:49] <hazmat> seems like i/o timeout is the eventual err from masking the previous ping timeout.. tbd..
[00:51] <cjohnston> thanks
[08:30] <hazmat> cjohnston, hmm.. actually the log level on these pastebins is missing the api level.. if you have a chance to run it again.. can you set JUJU_LOGGING_LEVEL="<root>=DEBUG"
[08:30] <hazmat> or just on the environment via juju set logging-config="<root>=DEBUG"
[08:37] <hazmat> cjohnston, what's the instance size on your state server in these envs?
[15:55] <cjohnston> hazmat: for juju set it wants a service name
[15:59] <hazmat> cjohnston, juju set-env
[15:59] <cjohnston> ack
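The logging change being suggested can be sketched as below; the variable and key names are as given in the session (juju 1.16-era CLI), so treat them as assumptions for other versions:

```shell
# One-off: raise verbosity for a single client run via the
# environment variable mentioned above.
JUJU_LOGGING_LEVEL="<root>=DEBUG" juju status

# Persistent: set it on the environment itself. Note juju set-env,
# not juju set, which expects a service name.
juju set-env logging-config="<root>=DEBUG"
```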
[16:46] <cjohnston> hazmat: http://paste.ubuntu.com/6982593/ and http://paste.ubuntu.com/6982594/
[16:48] <hazmat> cjohnston, you reproduced the state watcher gone away.. i think the logs might be too big for pastebin
[16:48] <hazmat> cjohnston, er.. did you reproduce the state watcher gone away?
[16:50] <cjohnston> I don't think so now that I look closer, but I do see:  WARNING discarding API open error: read tcp 127.0.0.1:37017: i/o timeout
[16:50] <cjohnston> ERROR connection is shut down
[18:15] <hazmat> cjohnston, what's the load on the state server machine like..
[18:15] <cjohnston> not sure
[18:44] <cjohnston> this 5 minute timeout sucks
[18:44] <cjohnston> the highest load I've seen so far has been 2
[18:44] <cjohnston> avg
[18:58] <cjohnston> hazmat: I saw it hit almost 3
[20:46] <Darkmantle> o/
[20:46] <Darkmantle> Need some help please. I have Ubuntu desktop with Ubuntu Server 12.04 VM running MAAS - juju successfully bootstrapped but status hangs
[20:47] <Darkmantle> Logs show that it can't connect to mongodb, i've double checked the DNS configuration and it's ok
[20:54] <Darkmantle> ?
[21:08] <Darkmantle> 2014-02-23 19:35:03 INFO juju.state open.go:68 opening state; mongo addresses: ["localhost:37017"]; entity ""
[21:08] <Darkmantle> 2014-02-23 19:35:03 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
[21:10]  * Darkmantle redeploys
[21:39] <Darkmantle> Ok I can't do juju status, it just hangs, any idea why?
[21:40] <Darkmantle> I've configured the DNS correctly - I get mongodb connection issues on juju node startup but they are all working ok, or so it seems
[21:41] <Darkmantle> Or it could be a DNS issue, if I try to do mongo name.master:37017 I get no address associated
[21:42] <Darkmantle> or via IP either
[21:51] <Darkmantle> ?
[21:51] <hazmat> Darkmantle, what version of juju?
[21:52] <Darkmantle> 1.16.6
[21:52] <Darkmantle> precise-amd64
[21:52] <hazmat> Darkmantle, can you pastebin  $ juju status --debug
[21:52] <Darkmantle> sure
[21:53] <hazmat> Darkmantle, in the maas ui .. do you see the bootstrap/state server allocated?
[21:54] <Darkmantle> Yes thats all done
[21:54] <Darkmantle> I can SSH into it and do anything
[21:54] <Darkmantle> Full network access, full connection to MAAS
[21:54] <hazmat> k
[21:54] <hazmat> Darkmantle, re pastebin.. apt-get install pastebinit .. handy cli pastebin client
[21:55] <Darkmantle> ah thanks
[21:55] <Darkmantle> It's a re-install so got nooo tools lol
[21:55] <hazmat> Darkmantle, --upload-tools is a reasonable workaround
[21:55] <Darkmantle> I did that
[21:55] <Darkmantle> I did once with once without
[21:56] <Darkmantle> http://pastebin.com/5i7rtX8U
[21:58] <Darkmantle> the cloud-init-output log has the normal can't connect to 127.0.0.1:37017 error too
[21:58] <Darkmantle> even though mongo can connect using mongo localhost:37017/juju
[21:58] <Darkmantle> Or no wait sorry, it gets init call() failed error
[21:59] <hazmat> Darkmantle, the mongo you pasted output from is not the same mongo that juju is running for itself..
[22:00] <Darkmantle> Surely it can connect?
[22:00]  * Darkmantle shrugs
[22:00] <hazmat> Darkmantle, it's a different process.. /etc/init/mongodb.conf vs /etc/init/juju-db.conf
[22:00] <Darkmantle> ah yeah
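The two separate mongod processes being distinguished here can be checked like this; the upstart job names are taken from the paths hazmat mentions (/etc/init/mongodb.conf vs /etc/init/juju-db.conf) and assumed correct for a precise state server:

```shell
# On the bootstrap node (juju ssh 0). Juju 1.16 runs its own mongod
# under the juju-db upstart job (port 37017), separate from a system
# mongodb installed from the archive (default port 27017).
sudo status juju-db          # juju's state-server mongo
sudo status mongodb          # the extraneous system mongo
ps aux | grep '[m]ongod'     # both processes, with their --port flags
```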
[22:00] <hazmat> Darkmantle, could you pastebin the /var/log/cloud-init-output.log
[22:01] <hazmat> that's fixed in the dev (>= 1.17) releases.. no extraneous mongo running.
[22:01] <Darkmantle> http://paste.ubuntu.com/6984186/
[22:01] <hazmat> fwiw
[22:02] <hazmat> looks good
[22:02] <Darkmantle> really?
[22:02] <Darkmantle> i assume DNS should point to the MAAS node's IP, right? cause it does
[22:02] <Darkmantle> and then MAAS points to .1 in the vlan
[22:04] <hazmat> Darkmantle, can your client machine connect to the server? ie.. telnet 97qay.master 37017
[22:05] <Darkmantle> Yes
[22:05] <Darkmantle> I checked all the ports
[22:05] <Darkmantle> ah no it can't by hostname hazmat , only by IP
[22:06] <Darkmantle> definitely DNS then
[22:07] <hazmat> sounds like
[22:07] <Darkmantle> ugh
[22:07] <hazmat> Darkmantle, you could add maas's dns server to your local resolv.conf maybe
[22:07] <Darkmantle> i have yeah
[22:07] <Darkmantle> i could update /etc/hosts since it's permanent
[22:07] <Darkmantle> would be easier
[22:08] <Darkmantle> and fixed
[22:08] <hazmat> cool
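The DNS diagnosis above (name fails, IP works) can be sketched as follows; the hostname comes from the session, and the IP address is a placeholder:

```shell
# If the IP connects but the name does not, resolution is the
# problem, not the server.
host 97qay.master                # should print the node's IP
telnet 97qay.master 37017        # by name
telnet 10.0.0.5 37017            # by IP (placeholder address)

# Workarounds discussed: point resolv.conf at the MAAS DNS server,
# or pin the name in /etc/hosts:
echo '10.0.0.5 97qay.master' | sudo tee -a /etc/hosts
```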
[22:09] <Darkmantle> quick question, just deployed juju-gui
[22:09] <Darkmantle> what next? how do i get to it, etc?
[22:10] <hazmat> Darkmantle, juju status juju-gui ... go to  https://$ip_address
[22:10] <Darkmantle> ah duh
[22:10] <Darkmantle> meh, errors
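A small sketch of getting to the GUI from status output; the public-address field name matches juju 1.16's YAML status, but treat the parsing as an assumption:

```shell
# Pull the gui unit's public address out of status and build the URL.
ip=$(juju status juju-gui | awk '/public-address:/ {print $2; exit}')
echo "https://$ip"               # open this in a browser
```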
[22:13] <Darkmantle> how long should machines be pending for? :L
[22:13] <hazmat> Darkmantle, on maas.. its a bit more dependent on hardware
[22:13] <Darkmantle> True. I gave the maas VM limited resources
[22:14] <Darkmantle> I was told it can cope well on 1g 1CPU
[22:14] <hazmat> Darkmantle, you can run juju debug-log -n 100
[22:14] <Darkmantle> cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[22:14] <hazmat> Darkmantle, or juju ssh 0  && less /var/log/juju/machine-0.log   which should have any provisioning bits
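The two log-inspection routes suggested here, sketched as commands (paths as given in the session; provisioning errors like the gomaasapi 409 CONFLICT above land in the machine-0 log):

```shell
# From the client: tail the last 100 lines of the consolidated log.
juju debug-log -n 100

# Or inspect the state server's own log directly.
juju ssh 0
less /var/log/juju/machine-0.log
```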
[22:15] <hazmat> Darkmantle, are there other registered/available machines in maas?
[22:15] <Darkmantle> ok there is only 1 node atm
[22:15] <Darkmantle> with juju on it
[22:15] <Darkmantle> im adding a juju-gui now
[22:15] <hazmat> Darkmantle, right.. 409 conflict.. means no additional nodes in maas to hand back to juju
[22:15] <Darkmantle> yeah
[22:15] <Darkmantle> i have to add a node first then deploy?
[22:15] <hazmat> Darkmantle, so there's hulk-smash/manual placement mode.. when deploying..
[22:16] <hazmat> Darkmantle, basically.. but re manual-placement.. you can place services onto existing machines with deploy --to=0
[22:16] <hazmat> for example.. where 0 is placeholder for any machine id in the juju env
[22:16] <Darkmantle> Thats true
[22:16] <Darkmantle> So I could add one i MAAS
[22:16] <Darkmantle> then deploy it there?
[22:17] <Darkmantle> I thought the point was to have machines for each service
[22:17] <hazmat> Darkmantle,  machines or containers..
[22:17] <Darkmantle> blah
[22:17] <Darkmantle> AH
[22:17] <Darkmantle> confused
[22:18] <Darkmantle> Ok I get it now
[22:19] <Darkmantle> I need to make a new node that I can deploy the services to?
[22:19] <hazmat> thumper-afk, ie. you can also do juju deploy --to=lxc:0  and juju will create an lxc container on machine 0 for the service...  deploy --help for more info on that placement stuff
[22:19] <hazmat> Darkmantle, yes.
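The placement options hazmat describes can be sketched as below; machine id 0 is the placeholder he names, and the charm names are illustrative:

```shell
# Reuse an existing machine instead of asking MAAS for new hardware.
juju deploy --to=0 juju-gui      # co-locate on machine 0

# Or isolate the service in an LXC container that juju creates on
# machine 0.
juju deploy --to=lxc:0 mysql

juju help deploy                 # full placement documentation
```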
[22:19] <Darkmantle> Shouldn't it make them automatically?
[22:19] <hazmat> Darkmantle, how? juju will request new machines from the provider.. but it can't buy new hardware for maas ;-)
[22:20] <Darkmantle> so i have to add the new VMs manually
[22:20] <Darkmantle> to host the juju services on
[22:20] <Darkmantle> thats loooong :P
[22:20] <Darkmantle> but ok
[22:21] <hazmat> Darkmantle, ie. if you're on ec2.. it will request new instances for services.. on maas.. the machines have to be registered.. there's maas auto-enlist for racks and dcs, but if you're creating vms as maas machines, you'll have to make new ones to have them show up
[22:21] <Darkmantle> fair enough
[22:22] <Darkmantle> so make new VM, then add it to maas, then deploy
[22:22] <Darkmantle> and it should automatically find it
[22:25] <hazmat> yup
[22:25] <Darkmantle> I need openstack or something to provision the VMs automatically
[22:25] <Darkmantle> Totally should've gone that way
[22:25] <hazmat> Darkmantle, or use local provider
[22:25] <Darkmantle> Totally still will at some point, MAAS is ok but not as detailed
[22:26] <Darkmantle> Or that
[22:26] <Darkmantle> Make my juju have the 12gb RAM / 6 CPU's and do it that way
[22:26] <hazmat> Darkmantle, maas on vms.. is really just a testing experience.. for charm dev / experiment.. i'd go with local provider.. or a cloud provider
[22:26] <hazmat> cloud envs are generally pretty cheap for short lived envs
[22:27] <hazmat> ie 10 machines for an hr ~ $1 usd.
[22:27] <Darkmantle> mhm
[22:28] <Darkmantle> hazmat i know its for testing
[22:28] <Darkmantle> in fact im running my own test web environment
[22:28] <Darkmantle> thats all
[22:28] <Darkmantle> blah now PXE borked
[22:29] <Darkmantle> there
[22:29]  * Darkmantle yawns