[01:44] <blairbo> the lxc bridge juju created assigned itself a subnet that already exists in my network. How do I change that so I can connect to "exposed" charms from outside the host
[01:44] <blairbo> ?
[01:52] <sarnold> blairbo: I think there's an /etc/default/lxc or lxc-net file that you can fiddle with; be aware though that juju's local provider is pretty touchy about what can and can't be changed with the lxc configuration, not all variables there can be changed and still have things work
[02:52] <blairbo> sarnold: well it's not worth a whole lot if I can't access the environment outside of the host. there must be some logic as to how it chooses the network address for the host bridge.
[02:54] <sarnold> blairbo: it's mostly intended for local development...
[02:55] <blairbo> sarnold: what would the ideal setup be for a private cloud?
[05:43] <sarnold> blairbo: I think 'private cloud' would probably be served better by maas or openstack, though I think the end goal of local provider or ssh provider might eventually give you what you'd like today..
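(For reference, the bridge subnet discussed above is typically set in /etc/default/lxc-net on Ubuntu. A minimal sketch, assuming the default lxcbr0 bridge and picking 10.0.4.0/24 as an example non-conflicting range; the exact values depend on your network, and the local provider may still be touchy about changes here:)

```shell
# /etc/default/lxc-net (sketch; pick a subnet that does not clash with yours)
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.4.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.4.0/24"
LXC_DHCP_RANGE="10.0.4.2,10.0.4.254"
LXC_DHCP_MAX="253"
```

After editing, restart the lxc-net service (e.g. `sudo service lxc-net restart`) so the bridge is recreated with the new range.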
[09:31] <AskUbuntu> Should MaaS and Juju get installed on one of my servers or on a client system? | http://askubuntu.com/q/347866
[12:37] <gnuoy> what process writes to all-machines.log on the bootstrap node ? The file is growing at an alarming rate, >2.5G in the past 30mins
[12:38] <gnuoy> weirdly, if I do  "head -1 all-machines.log" I get a seemingly endless stream to stdout.
[13:05] <gnuoy> I've stopped rsyslogd and wiped the file and started it up again and I have meedages from ~5 hours ago
[13:06] <gnuoy> *messages even
[13:09] <gnuoy> starting rsyslog for 1s results in 10M of log file which vi reckons is all on one line
[13:10] <gnuoy> wc reckons its 0 lines so I guess theres no eol
[13:14] <gnuoy> the messages seem to be prefixed with the unit name, which seems to be the bootstrap node for all messages I've checked. I did try deploying some charms for monitoring to the bootstrap node, I wonder if they're not playing nicely
[14:09] <marcoceppi> gnuoy: what environment are you using? HP Cloud?
[14:09] <gnuoy> private openstack
[14:27] <gnuoy> marcoceppi, I'm doing a redeploy without the charms to the bootstrap node and I'm seeing the same thing, all-machines.log is 2.2G and growing
[14:28] <gnuoy> urgh, sorry, one did go to the bootstrap node. let me redeploy
[15:38] <avoine> is there a new way to set the log level on units?
[15:51] <avoine> must be either development or logging-config environment variable
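(A sketch of the logging-config option mentioned above, as it worked in juju-core 1.x; the exact logger names and the `set-environment` spelling are assumptions to verify against your juju version:)

```shell
# Raise unit log verbosity on a running environment (juju 1.x)
juju set-environment logging-config="<root>=WARNING;unit=DEBUG"

# Or set it up front in environments.yaml:
#   logging-config: "<root>=WARNING;unit=DEBUG"
```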
[16:02] <jcastro> sidnei: hey, how's your puppet/juju these days?
[16:04] <jcastro> noodles775: same question!
[16:08] <noodles775> jcastro: sounds like a loaded question :P. I'm not using puppet other than to update internal stuff occasionally (I've got a branch somewhere of a puppet version of a test charm for an internal service that I did a while ago - it's not been touched for months though).
[16:11]  * noodles775 might still have the charm-helpers branch that added puppet support (masterless), just for declaring machine state etc.
[16:22] <kurt_> jcastro: who is working on the quantum-gateway charm?
[16:22] <kurt_> is that jamespage or adam_g?
[16:23] <jamespage> kurt_, me
[16:23] <kurt_> jamespage: is there a bug with being able to delete floating IPs?
[16:23] <kurt_> in quantum
[16:24] <jamespage> kurt_, not that I'm aware of
[16:24] <jamespage> what's the problem? what are you trying todo which is failing?
[16:25] <kurt_> I'll give you a paste bin in a sec - putting it together
[16:25] <jamespage> is the ip still assigned to an instance? if it is quantum won't let you delete it
[16:25] <kurt_> it isn't
[16:26] <kurt_> jamespage: http://pastebin.ubuntu.com/6133418/
[16:27] <jamespage> kurt_, quantum floatingip-list ?
[16:28] <kurt_> that's there...
[16:28] <kurt_> http://pastebin.ubuntu.com/6133425/
[16:29] <jamespage> kurt_, you need to delete that first
[16:29] <kurt_> I guess quantum doesn't support more than 1 floating IP list at a time
[16:29] <kurt_> I created another and it didn't give me errors
[16:30] <kurt_> jamespage: anyways, thanks.  I will give that a try
[16:30] <jamespage> kurt_, ?
[16:30] <kurt_> jamespage: huh?
[16:31] <jamespage> "<kurt_> I guess quantum doesn't support more than 1 floating IP list at a time"
[16:31] <kurt_> right
[16:31] <kurt_> more than 1 subnet
[16:31] <jamespage> ah
[16:31] <kurt_> it would seem it should
[16:32] <jamespage> kurt_, well a router is normally associated with a subnet
[16:32] <jamespage> and a range of floating IPs can be associated with the subnet as well
[16:32] <jamespage> kurt_, the nova-cloud-controller installs a helper for this
[16:32] <jamespage> quantum-ext-net
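(The delete flow jamespage is pointing at looks roughly like this with the quantum CLI; the floating IP must be disassociated from any instance first, and the ID placeholder is hypothetical:)

```shell
# List floating IPs and note the id of the one to remove
quantum floatingip-list

# If it is still associated with a port, disassociate it first
quantum floatingip-disassociate <FLOATINGIP_ID>

# Then delete it
quantum floatingip-delete <FLOATINGIP_ID>
```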
[16:33] <kurt_> I'll read up on that
[16:33] <kurt_> I'm loving quantum btw.  the more I use it, the more I like it
[17:00] <jcastro> hey guys
[17:00] <jcastro> due to some bw issues
[17:00] <jcastro> we're going to postpone the charm school for today
[17:52] <Therion87> Hello
[17:52] <Therion87> I'm using JuJu with AWS, just set it up, haven't deployed anything, and I already have an instance after running juju bootstrap
[17:52] <Therion87> Why is this?
[18:00] <kurt_> Therion87: when you bootstrap, you automatically create your first instance
[18:00] <kurt_> this is instance "0"
[18:00] <kurt_> aka your root node
[18:01] <kurt_> do a 'juju status' and you will see this
[18:02] <Therion87> Yea
[18:02] <Therion87> It was more a question of the purpose of it
[18:02] <kurt_> That is the core juju node.  You need this for all of the main juju functionality
[18:03] <Therion87> Ok
[18:03] <kurt_> You can use that node for other purposes too, like deploying other juju charms on
[18:04] <kurt_> RTFM on the "--to" functionality
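(The `--to` placement kurt_ mentions lets you co-locate a charm on the bootstrap node instead of spinning up a new instance; a minimal sketch using the juju 1.x CLI, with `mysql` as an arbitrary example charm:)

```shell
# Machine 0 is the bootstrap node created by `juju bootstrap`
juju status

# Deploy a charm onto machine 0 rather than provisioning a new machine
juju deploy mysql --to 0
```

Note that co-locating services this way shares the machine with the juju state server, so it is best suited to testing rather than production.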
[18:05] <adam_g> jamespage, is there a ceph charm change floating around to support setting the pg count by client when making pool?
[18:40] <kurt_> WOOT!  Fully working openstack instance with MAAS/juju/VMWare Fusion on Mac OS X. Finally!
[18:41] <kurt_> it took a little fiddling with quantum networking and the vnc console to get things working, but its up and running
[18:42] <kurt_> Yes, MAAS and juju work on VMWare Fusion, as well as on KVM.  I just proved it.
[19:41] <hatch> when following the lxc setup guide on juju.ubuntu.com and then deploying/exposing the GUI, the ip it's exposed to is an ip local to that machine - has there been any documentation on how to expose these services to the host machine?
[21:23] <marlinc> Juju is trying to access something on my MAAS server that doesn't exist..
[21:23] <marlinc> - - [20/Sep/2013:23:20:54 +0200] "GET /MAAS/api/1.0/files/provider-state/ HTTP/1.1" 404 276 "-" "Go http package"
[21:24] <marlinc> I just tried to run the juju bootstrap command
[21:24] <marlinc> I'm sorry. I meant the status command: error: file 'provider-state' not found
[21:25] <marlinc> What could it be.. running juju bootstrap doesn't work either. error: no tools available
[21:25] <kurt_> marlinc: try running again.  sometimes I've seen this on the first try
[21:25] <marlinc> Well I've ran it 5 times
[21:25] <kurt_> ah, try juju sync-tools first
[21:25] <marlinc> Well I've tried that too and that throws: error: environment has no access-key or secret-key
[21:25] <kurt_> you may get a EOF on one of the tool packages, at which time you should try again
[21:26] <marlinc> Well the sync command doesn't run at all
[21:27] <kurt_> juju destroy-environment; juju sync-tools
[21:28] <marlinc> Still the same problem
[21:28] <marlinc> https://gist.github.com/Marlinc/4461c6a36ddb36bd3840
[21:28] <marlinc> This is my environments file
[21:30] <marlinc> Ah it appears the MAAS tutorial isn't very up-to-date
[21:30] <kurt_> you don't have ssh keys generated
[21:30] <marlinc> I had to install charm-tools
[21:30] <marlinc> Now I can run bootstrap without issues
[21:30] <kurt_> are you doing this on MAAS?
[21:30] <marlinc> Yes
[21:30] <kurt_> ok, so you are good then?
[21:31] <marlinc> Yes for now
[21:31] <kurt_> do your MAAS nodes have access to the internet?
[21:31] <marlinc> They should have
[21:31] <kurt_> juju can be a little fiddly in the bootstrap process
[21:32] <kurt_> you need to generate ssh keys too
[21:32] <marlinc> I already got those :)
[21:32] <kurt_> it wasn't in your environments.yaml
[21:32] <kurt_> needs to be there
[21:33] <marlinc> Ah okay
[21:34] <marlinc> Well I'm following the quick start: http://maas.ubuntu.com/docs/juju-quick-start.html
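(For context, the missing-SSH-key problem kurt_ spotted is fixed in environments.yaml; a minimal MAAS provider sketch for juju 1.x, with the server address, API key, and series as placeholders you must fill in for your own setup:)

```yaml
# ~/.juju/environments.yaml (sketch)
environments:
  maas:
    type: maas
    maas-server: 'http://<your-maas-server>/MAAS/'
    maas-oauth: '<MAAS_API_KEY>'
    # Without this (or an inline authorized-keys entry),
    # bootstrap cannot install your SSH key on the nodes
    authorized-keys-path: ~/.ssh/id_rsa.pub
    default-series: precise
```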
[22:03] <utlemming> on raring, can you have multiple lxc environments using the devel ppa?
[22:04] <utlemming> I am seeing a case where I can use one environment, but if I try to use the other, it fails
[23:14] <dalek49> exit
[23:14] <dalek49> (apologies)