[00:12] <jose> sebas5384: well, you don't lose anything by trying, if it works then awesome! :)
[00:29] <sebas5384> thks jose
[00:29] <jose> np, and let me know if you need a hand with testing!
[02:05] <sebas5384> if there's some pt-br out there take a look http://blog.taller.net.br/taller-lanca-serie-de-infograficos/ :)
[02:05] <sebas5384> the next one is going to be about ubuntu juju :)
[09:07] <schegi> jamespage, online?
[09:08] <jamespage> schegi, I am
[09:11] <schegi> hey, i got some issues when redeploying a destroyed ceph charm. it seems like something is going wrong with the zapping of the disks. here is the log http://pastebin.com/mbtXtfKA. calling ceph-prepare-disk --zap-disk /dev/sdX manually solved the problem for me. just to let you know
[09:12] <schegi> jamespage, don't know if this is true for the ceph charm in the store, i used your network-split version.
[09:14] <schegi> btw anyone out there experienced with the hacluster charm??
[09:15] <jamespage> schegi, that would be me as well
[09:16] <jamespage> schegi, hmm - you might need to use the osd-reformat configuration option
[09:17] <schegi> yeah, that's the problem, it is set to yes.
[09:18] <jamespage> schegi, hmm - I've not exercised that option with btrfs - we test with xfs as default
[09:18]  * jamespage pokes at the code
[09:19] <schegi> here is my config, http://pastebin.com/mXBTyuLs
[09:20] <schegi> jamespage, might be a btrfs problem. but as i said, when i log in to the machine and perform the zapping manually the error state resolves and the odds populate.
[09:20] <schegi> osds
[09:24] <jamespage> schegi, the charm should perform a pre-zap prior to calling ceph-prepare-disk - looks like that is not enough....
[09:24] <jamespage> hmm
[09:29] <schegi> jamespage, the error is a bit strange. currently deploying ceph to 3 nodes and ceph-osd to 10 more nodes, and the error occurs from time to time but not reliably on all of the nodes. But if it occurs on a node, then for all of the disks. Btw the partition table in use is GPT.
[09:30] <schegi> And as described, just logging in to the node and performing ceph-prepare-disk --zap-disk /dev/sdX manually from the console resolves the error and the deployment succeeds.
[09:31] <schegi> But you have to do this for all of the disks on the node separately
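The per-disk manual fix described above can be scripted in one pass; a minimal dry-run sketch, using the command name from the conversation and a placeholder device list:

```shell
#!/bin/sh
# Dry-run sketch of the manual workaround: zap every OSD disk on the
# node in one loop instead of logging in per disk. The device list is
# a placeholder and the ceph-prepare-disk invocation is taken from the
# conversation -- adjust both, and remove the echo to actually zap the
# disks (destructive!).
DISKS="/dev/sdb /dev/sdc /dev/sdd"
for dev in $DISKS; do
    echo ceph-prepare-disk --zap-disk "$dev"
done
```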
[09:36] <jamespage> schegi, how ubiquitous is the --zap-disk option in ceph-prepare-disk? does that go back to older versions? if so I can tweak how that's called
[09:36]  * jamespage checks
[09:39] <schegi> jamespage, to be honest, i have no idea, i am just 4 weeks old with ceph/juju and all this. I am a scientist who just got hands on a bunch of hardware and is trying to make use of it.
[09:39] <schegi> :)
[09:40] <jamespage> schegi, it's sufficiently supported in older versions back to grizzly/raring so I'll poke that in
[09:42] <jamespage> schegi, I just nudged that into my network-splits branches
[09:43] <jamespage> if you want to give it a spin
[09:44] <jamespage> schegi, what question did you have about the hacluster charm?
[09:44] <schegi> sure, if i have to redeploy my ceph i'll try. currently i hang a bit on percona/corosync/pacemaker but i am not familiar enough with the whole thing to figure out why. could be caused by my network configuration. seems like all nodes in the corosync cluster are running separately and not connecting to each other.
[13:07] <g0d_51gm4> hi guys, I have a question for you. As a test, after installing MaaS and Juju I deployed juju-gui, and it works perfectly... why, each time I re-boot the host machine and re-launch the VMs (Region and Cluster Controller with the nodes), is juju's status always "WARNING discarding API open error: unable to connect to "wss://1.1.2.21:17070/environment/814bd22b-4438-4ea6-8f22-92b39b866a42/api"
[13:07] <g0d_51gm4> ERROR Unable to connect to environment "maas". Please check your credentials or use 'juju bootstrap' to create a new environment"? to resolve that I always have to remove the directories .juju/environments and .juju/ssh, re-sync juju, and bootstrap again. thanks
[13:09] <g0d_51gm4> and the ubuntu installer re-begins the installation of the OS? thanks a lot
[13:38] <g0d_51gm4> can anyone answer me?
[13:41] <avoine> g0d_51gm4: I guess the juju team are still sleeping at that time
[13:41] <avoine> most of them are in pst time I think
[13:42] <g0d_51gm4> avoine: maybe you're right!!! last week they gave support at this time!!!!
[13:43] <g0d_51gm4> it's strange but it's ok!!!
[13:52] <lazyPower> g0d_51gm4: in the MAAS gui are your nodes still marked as assigned to whichever user's credentials are plugged into juju?
[13:52] <lazyPower> if the machine is going through the fastpath installer again, that typically means the node is not registered with maas.
[13:52] <g0d_51gm4>  lazyPower: hi, how are you? anyway the answer to your q is yes
[13:53] <lazyPower> and the bootstrap node is online, with the juju processes running?
[13:53] <lazyPower> and the IP address of the node has not changed?
[13:53] <g0d_51gm4> lazyPower: all nodes are registered on MaaS
[13:53] <lazyPower> registered != assigned.
[13:53] <lazyPower> i'm talking the node is assigned to the user.
[13:54] <lazyPower> http://i.imgur.com/vGhLGZA.png
[13:54] <lazyPower> as you can see in this screenshot - i have several nodes registered. Only nodes 12, 5, 4, and 0 are actually assigned to my juju environment
[13:56] <g0d_51gm4> the ip address has not changed, it is always the same one used to allocate the node to MaaS
[13:58] <g0d_51gm4> after creating a juju environment I ran the following commands: "juju sync-tools -e maas" and "juju bootstrap -e maas --debug". with the last one I see the node start and the ubuntu installer run. at the end of this process the output is "Bootstrapping Juju machine agent
[13:58] <g0d_51gm4> Starting Juju machine agent (jujud-machine-0)
[13:58] <g0d_51gm4> 2014-07-15 13:32:12 INFO juju.cmd supercommand.go:329 command finished"
[13:59] <g0d_51gm4> I've seen your image and I've the same situation with my node.
[14:00] <g0d_51gm4> after that I've also deployed juju-gui and it works well.
[14:03] <marcoceppi> ahasenack: I think it is, at least I think it's coded that way
[14:03] <marcoceppi> ahasenack: is that causing a problem?
[14:03] <marcoceppi> ahasenack: re: calling config-changed
[14:04] <g0d_51gm4> the problem appears when I shut down the host machine (my PC). after I re-start the whole virtual environment and, from MaaS, start the node where I've deployed juju-gui, juju gives me that error!!!!
[14:06] <lazyPower> i don't exactly know, i've got a vmaas setup that comes back up when i restart the host.
[14:06] <lazyPower> i'm thinking, but not really coming up with anything that would be an inherent blocker
[14:06] <lazyPower> are you using KVM based virtual machines or bare metal?
[14:07] <g0d_51gm4> and another q is: after adding a second node to MaaS, how can I register the new node to the same juju environment used for the first one?
[14:08] <g0d_51gm4> lazyPower:I use KVM
[14:09] <lazyPower> g0d_51gm4: are you wanting to manually add machines? otherwise juju deploy will communicate with maas and maas will return a machine randomly to add to the environment.
[14:09] <ahasenack> marcoceppi: yes, it caused a problem with a reboot
[14:10] <ahasenack> marcoceppi: a race between juju running config-changed, which restarts a service
[14:10] <ahasenack> marcoceppi: and the normal service start that happens during boot
[14:10] <ahasenack> the same initscripts called twice, at the same time
[14:10] <g0d_51gm4> no, I've added the vm node via PXE
[14:14] <g0d_51gm4> http://imgur.com/4SylrD6
[14:14] <marcoceppi> ahasenack: I know config-changed is run at certain times during certain events, you may want to chat with a dev about it, but it sounds like the config-changed hook isn't 100% idempotent?
[14:14] <g0d_51gm4> http://imgur.com/1xJbvyk for the virsh
[14:14] <ahasenack> marcoceppi: it is, you can run it as many times as you want
[14:15] <marcoceppi> ahasenack: then why is there a race condition forming?
[14:15] <ahasenack> marcoceppi: because of the boot!
[14:15] <marcoceppi> ahasenack: which charm - out of curiosity
[14:15] <ahasenack> marcoceppi: during boot, ubuntu starts services
[14:15] <ahasenack> marcoceppi: landscape-server
[14:15] <ahasenack> marcoceppi: so you have two things calling "service somestuff start"
[14:15] <ahasenack> marcoceppi: config-changed, via juju
[14:15] <ahasenack> marcoceppi: and the boot process of the machine
[14:15] <marcoceppi> Okay, so I follow why there's a race condition. Upstart is bringing up a service while config-changed is doing the same
[14:15] <ahasenack> right
[14:16] <marcoceppi> ahasenack: why not just do `restart somestuff || start somestuff`
[14:16] <marcoceppi> instead of assuming that you will always need to start it
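A sketch of that `restart || start` pattern, with the upstart commands stubbed out as shell functions so the idempotent behaviour is visible (the service name somestuff is just the placeholder from the conversation):

```shell
#!/bin/sh
# Idempotent "make sure it's running" pattern: restart the service if
# it is already up, fall back to start if it is not, so the hook never
# fails just because boot has (or hasn't) already started the service.
# restart/start are stubs standing in for the real upstart commands.
service_running=no

restart() { [ "$service_running" = yes ] && echo "restarted $1"; }
start()   { service_running=yes; echo "started $1"; }

ensure_running() {
    restart "$1" || start "$1"
}

ensure_running somestuff   # not yet running -> falls through to start
ensure_running somestuff   # already running -> restart succeeds
```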
[14:16] <g0d_51gm4> from the first image: if I add a second node and leave it in ready status, what is the way to tell juju that a new node is present, and how do I add it to the same environment?
[14:16] <ahasenack> marcoceppi: we assume that if config-changed was called, something changed
[14:17] <marcoceppi> ahasenack: that's not a valid assumption
[14:17] <ahasenack> it was, from reading the doc
[14:17] <marcoceppi> any hook can be called at any time for any reason
[14:17] <ahasenack> which also states that config-changed is called every time the agent is started
[14:17] <marcoceppi> configuration change is just an example of where config-changed is invoked
[14:17] <ahasenack> so doc-wise, it's correct
[14:17] <ahasenack> marcoceppi: I wonder how many charms are tested regarding a reboot, though
[14:17] <ahasenack> we all seem so concerned with deployments
[14:18] <marcoceppi> g0d_51gm4: you can do a deploy, or allocate the node with `juju add-machine`
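As a sketch, the add-machine route looks like this in juju 1.x (environment name maas follows the conversation; the calls are echoed as a dry run since they need a bootstrapped environment):

```shell
#!/bin/sh
# Dry-run sketch: pull a Ready MAAS node into an existing environment.
# Which Ready node you get back is up to MAAS. Drop the echos to run
# the commands for real against a bootstrapped environment.
ENV=maas
echo juju add-machine -e "$ENV"   # allocate a node; no service on it yet
echo juju status -e "$ENV"        # the new machine should appear here
```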
[14:18] <ahasenack> marcoceppi: we are fixing it in our charm, sure. But heads up, a reboot can ruin your day
[14:18] <marcoceppi> ahasenack: good question, not sure, I'll make sure we add it to our testing infrastructure
[14:18] <g0d_51gm4> ok
[14:18] <marcoceppi> tvansteenburgh: ^^
[14:18] <g0d_51gm4> thanks marcoceppi
[14:18] <ahasenack> marcoceppi: we probably hit this because we start several services, like 8 or more
[14:19] <marcoceppi> tvansteenburgh: tl;dr during testing, when we stand up a deployment, we should stop/start the agents on all the machines and see how the charm reacts
[14:19] <marcoceppi> ahasenack: it's a good point to test for, this wasn't always the behaviour in juju
[14:20] <ahasenack> marcoceppi: still, I would question this decision, to have config-changed run at agent start
[14:20] <marcoceppi> ahasenack: if the hook is coded with the notion that it can be called at any time, and the right idempotency guards are in place, it /shouldn't/ be a problem
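One possible shape for such a guard against exactly this boot-vs-hook race is to serialize the competing service-start calls behind a file lock; a sketch under the assumption that flock(1) is available (the lock path is arbitrary and the real service command is stubbed with echo):

```shell
#!/bin/sh
# Serialize competing "service X start" callers (the boot process vs.
# the config-changed hook) behind a file lock so only one of them
# touches the service at a time.
LOCK="${TMPDIR:-/tmp}/demo-service.lock"

guarded_start() {
    (
        flock -x 9                      # wait for any concurrent caller
        echo "ensuring $1 is running"   # stand-in for restart||start
    ) 9>"$LOCK"
}

guarded_start landscape-demo
```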
[14:20] <g0d_51gm4> any suggestion to resolve the problem after the reboot of the host machine?
[14:21] <ahasenack> marcoceppi: I'm not convinced
[14:21] <ahasenack> marcoceppi: the hook can be called as many times as you want
[14:21] <ahasenack> marcoceppi: but sure, an admin could be calling "service foo restart" at the same time config-changed is being run by some other reason
[14:22] <marcoceppi> g0d_51gm4: I've not experienced that issue before, sorry
[14:28] <g0d_51gm4> I've recreated a new virtual environment with MaaS and Juju, bootstrapped the environment, deployed the juju-gui, and up to here everything works. then I rebooted the host machine, re-ran the whole vMaaS environment, checked juju's status, and I have the same problem with juju: "ERROR Unable to connect to environment "maas". Please check your credentials or use 'juju bootstrap' to create a new environment". to resolve that I
[14:28] <g0d_51gm4> always have to remove the directories .juju/environments and .juju/ssh, re-sync juju, and bootstrap again
[14:28] <lazyPower> g0d_51gm4: do you have your machines configured to only boot over pxe?
[14:29] <lazyPower> you may want to add hdd booting as well to your boot options, so it doesn't re-provision the machine on boot if it's assigned. it *shouldn't* be doing that anyway - but it's worth investigating.
[14:30] <g0d_51gm4> I've also made that.
[14:33] <g0d_51gm4> I've booted the node from its HD via MaaS, the node status is allocated, the ubuntu prompt works: but if I run that command the error is present!!!
[14:33] <lazyPower> can you juju ssh 0?
[14:34] <lazyPower> or ssh into the node and validate that the juju processes are running?
[14:34] <lazyPower> show me the output of -  initctl list | grep juju  on your bootstrap node
[14:34] <g0d_51gm4> the ssh connection using its hostname works
[14:39] <g0d_51gm4> the output is "jujud-unit-juju-gui-0 start/running, process 3760
[14:39] <g0d_51gm4> juju-db start/running, process 3667
[14:39] <g0d_51gm4> jujud-machine-0 start/running, process 3645"
[14:41] <g0d_51gm4> I've just stop the node via MaaS and re-start it
[14:41] <lazyPower> if you stop the node via maas, it's going to do a fresh cloud boot
[14:42] <lazyPower> meaning full pxe install, no juju environment will be left on the virtual machine. it's pristine, just as if you chose to spin up a new VM on a public cloud.
[15:03] <g0d_51gm4> just a second, at the same time i've also added the second node and used the command marcoceppi suggested to me before
[15:03] <g0d_51gm4> and the ubuntu's installer is started
[15:05] <g0d_51gm4> in case i've 2 juju environments (maas0 and maas1), to add the new node to maas2, in the command juju add-machine do I have to specify some other parameter?
[15:26] <schegi> does anyone know how to force juju to use a certain network interface? i got a setup with 3 different networks: 2 bonds over 2 1GBit networks, and 2 10GBit networks. I'd like juju to use one of the 1 Gig bonds but it always uses the 10GBit interfaces, e.g. when using juju ssh X. This also leads to charms identifying themselves with this particular interface.
[15:26] <rbasak> sinzui: around? On SRU review, arges found 1) a binary in the upstream tarball for 1.18.4, and 2) that it changed from 1.18.1.
[15:26] <g0d_51gm4> lazyPower: i'm rebooting the host machine see y after few seconds
[15:26] <rbasak> sinzui: pkg/linux_amd64/github.com/errgo/errgo.a
[15:29] <sinzui> rbasak, The cause is fixed, but was not fixed until a few weeks ago. The devs of golang changed something to cause that crack.
[15:29] <sinzui> rbasak, The fix was to rm pkg/* when the tarball is assembled. It is safe to do so
[15:30] <sinzui> rbasak, I also think it is safe to stab the golang person for doing that nonsense
[15:36] <rbasak> sinzui: thanks. Coordinated in #ubuntu-release, the conclusion is that I'll repack the tarball with the file missing for the Trusty SRU, as a one-off.
[15:37] <sinzui> rbasak, thank you +1
[15:43] <rbasak> sinzui: I can delete the pkg/ directory itself too, I take it
[15:43] <rbasak> ?
[15:50] <g0d_51gm4> lazyPower: the same error http://paste.ubuntu.com/7799040/
[15:51] <lazyPower> g0d_51gm4: i'm not sure what to recommend at this point. I would reach out via the juju mailing list and see if another member has run into this scenario before.
[15:54] <g0d_51gm4> if i remove the directories .juju/environments and .juju/ssh and run the commands "juju sync-tools -e maas" and "juju bootstrap -e maas --debug", it restarts the node and installs ubuntu
[16:31] <g0d_51gm4> lazypower: it's incredible, but for the third time i've received the same error!!! i don't know where the problem is!!!
[16:32] <lazyPower> g0d_51gm4: it really sounds like something is going on with your VMs that's wiping out the juju env, or replacing the ssh key
[16:32] <lazyPower> that's the only thing i can think of, but i don't know what it would be
[16:56] <rharper> is there any control over when juju bridges eth0 for containers ?  like a value in environments.yaml ?
[17:02] <marcoceppi> rharper: I believe so, but I think it's undocumented, let me take a look at the source code
[17:04] <rharper> marcoceppi: do you know why it wouldn't by default? I've got an openstack environment I'm deploying to, and it's not bridging eth0 by default, so containers come up on lxcbr0 (10.0.3.x) -- but I really want it to bridge eth0 so the public ip comes from the same network as the host
[17:05] <rharper> when I use a maas provider, it does this "automatically"
[18:03] <alexisb> rharper, I know they are not online right now but that is a good question for dimiter and jam
[18:04] <alexisb> fwereade, may be around and could potentially have a quick answer for you
[18:04] <rharper> alexisb: cool, thanks.
[18:05] <alexisb> rharper, feel free to send dimiter mail
[18:05] <rharper> ok
[18:38] <data> hey, I have googled high and low, but is it possible to do mixed deployments in juju? Because I have so far set up a dozen services "locally" in VMs (maas provisioning), but would like to add other, preexisting servers to it (manual provisioning).
[18:54] <sebas5384> juju-local can be installed into a debian machine?
[18:58] <sebas5384> it's been quiet around here lately hehe
[19:10] <sebas5384> jcastro: ping
[19:11] <jcastro> sebas5384, hi!
[19:11] <sebas5384> hey jcastro o/
[19:11] <lazyPower> data: manual provider will give you that level of mixed deployments you're looking for
[19:12] <sebas5384> today we are going to install juju-local in more than 10 pcs
[19:12] <sebas5384> :D
[19:12] <sebas5384> but!! there are some with debian
[19:12] <sebas5384> some tip about that?
[19:12] <sebas5384> jcastro: :)
[19:12] <sebas5384> hey lazyPower o/
[19:12] <schegi> jamespage?
[19:13] <lazyPower> hey sebas5384
[19:14] <schegi> jamespage, will redeploy ceph this evening, i have bzr-updated your ceph/ceph-osd network-split branches, but nothing new. can you commit pls
[19:16] <data> lazyPower: So if I add that environment, will the two be shared?
[19:16] <lazyPower> right, in the manual provider there are no real limitations. You can have machines in different datacenters entirely
[19:16] <lazyPower> lag will be a factor
[19:17] <lazyPower> but it's perfectly reasonable to deploy into, say, digital ocean and aws and your in-house maas cluster (assuming it has the proper networking to reach all these instances)
[19:17] <data> k. It's more about ease of deployment for now. It's just for development
[19:17] <lazyPower> be careful about adding existing nodes that are dirty to juju though
[19:17] <data> What do you mean by dirty?
[19:17] <lazyPower> charms always assume a clean cloud image, so if you go adding your corporate ERP server to juju, and deploy something on it, it may do something unintended
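For reference, a manual environment in juju 1.x is declared roughly like this (a sketch only; the environment name and host names are placeholders):

```yaml
# environments.yaml fragment -- manual provider sketch, placeholder hosts
preexisting:
  type: manual
  bootstrap-host: server1.example.com
```

Existing servers are then pulled in one at a time with `juju add-machine ssh:ubuntu@server2.example.com`, after which they can be targeted like any other machine.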
[19:17] <jcastro> sebas5384, I don't think anyone's tried it before
[19:18] <lazyPower> jcastro: back in 2012 there were some articles on it, i went looking
[19:18] <data> None of the servers are production servers. Some of them are not exactly virgin, but not too bad
[19:18] <lazyPower> i didn't find anything recent, post the go-port, on getting juju running.
[19:18] <lazyPower> (on debian)
[19:18] <jcastro> lazyPower, was it me posting on the debian-cloud list asking for someone to help on it? :)
[19:19] <lazyPower> uhm, nope, it was a couple blog posts, some comments on MS's blog, and debian bugs against a package that no longer exists
[19:19] <sebas5384> shouldn't it be like adding some repositories and that kind of stuff?
[19:19] <jcastro> oh, that was the pyju client in debian
[19:19] <jcastro> not supporting debian as a deployable OS, which is what he wants
[19:20] <sebas5384> jcastro: by deployable OS you are saying like, installing juju-local and running charms into lxc containers, right?
[19:20] <jcastro> right
[19:20] <sebas5384> jcastro: ahh ok :)
[19:21] <sebas5384> then it's exactly what i want
[19:21] <sebas5384> htht
[19:21] <sebas5384> hehe
[19:21] <lazyPower> i think the components are there, but its largely untested.
[19:21] <lazyPower> i would give it a go in a manual provider environment
[19:21] <sebas5384> yeah thats going to be a problem
[19:22] <sebas5384> troubleshooting is going to be a pain in the a**
[19:23] <data> while I am here: I had this problem yesterday where juju didn't continue with anything because it was waiting for a server to finish dying. problem was that the server didn't exist anymore
[19:24] <lazyPower> did you destroy the machine with a --force
[19:24] <lazyPower> and then destroy the service data?
[19:24] <lazyPower> often times, i find that hooks get trapped in a failed relationship cycle
[19:24] <lazyPower> and resolving the relationship bits, after having destroyed the machine with extreme prejudice, works pretty well
[19:25] <data> it was something like that, I believe. Are you talking about some kind of "service data" or was it another implication of my unfortunate nick :)
[19:25] <data> I could not figure it out and completely destroyed the environment
[19:26] <lazyPower> unfortunate implication of the nickname :)
[20:14] <jrwren> what is a good way to test a relation-broken hook?
[20:23] <jose> jrwren: I'd say deploying, relating and destroying the relation?
[20:24] <jrwren> that does a relation-departed, different from broken, AFAIK
[20:25] <jose> jrwren: I think it's just an alias, though I'm not entirely sure
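A dry-run sketch of that test sequence (service names are placeholders; a tiny stand-in function echoes the juju CLI calls):

```shell
#!/bin/sh
# Dry-run sketch: exercise relation-broken by building a relation and
# then removing it. relation-departed fires per departing unit, while
# relation-broken fires once the relation itself is gone.
run() { echo "juju $*"; }              # stand-in for the juju CLI

run deploy mysql
run deploy wordpress
run add-relation wordpress mysql
run destroy-relation wordpress mysql   # triggers -departed, then -broken
run debug-log                          # watch the hooks fire
```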
[20:40] <ahasenack> hi, which package has this command:
[20:40] <ahasenack> andreas@nsn7:~/charms/trusty/ntp$ make sync
[20:40] <ahasenack> make: charm-helper-sync: Command not found
[21:45] <schegi__> hey, if i bootstrap a maas environment and later deploy a service in an lxc container, and would like the container to use a predefined bridge (br0) on the server, how do i define the bridge device?
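For what it's worth, the related knob in juju 1.x environments.yaml is network-bridge, documented for the local provider; whether it is honoured for containers inside a maas environment is uncertain, so treat this as a sketch to experiment with:

```yaml
# environments.yaml fragment -- sketch only. network-bridge is a
# local-provider option; whether it applies to lxc containers under
# a maas provider is an open question.
local:
  type: local
  network-bridge: br0   # use a pre-existing host bridge instead of lxcbr0
```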
[21:52] <dergrunepunkt> Hi, using maas I run "juju bootstrap" and when the node boots it starts the ubuntu installation
[21:52] <dergrunepunkt> ideas?
[22:02] <thumper> dergrunepunkt: that is what maas does
[22:03] <dergrunepunkt> I know
[22:03] <thumper> I'm not sure what you are asking
[22:03] <dergrunepunkt> but it shouldn't do it over and over and over again
[22:03] <thumper> yeah, that sounds wrong :)
[22:03] <dergrunepunkt> maas should do it once
[22:04] <dergrunepunkt> once the node is installed it shouldn't install the operating system again
[22:05] <dergrunepunkt> thumper: I have 3 nodes in the "Ready" state, and when run "juju bootstrap" bloody maas wants to install the operating system again
[22:06]  * thumper doesn't know much about maas
[23:09] <schegi__> dergrunepunkt, before bootstrapping no system is installed, it just boots to a pxe image, and during commissioning only information about the server is collected and an IPMI user is created.
[23:09] <schegi__> ready state only means that the node is ready for deployment, if you mean ready in maas.