[09:32] <lonroth> sorry about that =D
[14:20] <skay> hi juju. I have managed to get my laptop into a crazy and exciting state
[14:22] <skay> I got into a state where juju calling lxc-create had this problem http://paste.ubuntu.com/9096762/
[14:22] <skay> agent-state-info: 'error executing "lxc-create": Container already exists'
[14:23] <skay> and I did call juju destroy-environment a gazillion times; lxc-ls did not list anything for the machines, so I then resorted to trying to clean things up by hand
[14:24] <skay> by going around to /var/lib/juju/containers and deleting the image directories
[14:24] <skay> etc
[14:24] <skay> now I get an exciting error when I try to bootstrap http://paste.ubuntu.com/9096721/
[14:24] <skay> now I just need to grind until I kill the big boss
[14:37] <mbruzek1> hello skay
[14:37] <mbruzek1> Are you using sudo with the lxc-ls  command?
[14:37] <skay> mbruzek1: yes
[14:38] <skay> mbruzek1: it's in the pastebin. I called: sudo lxc-ls --fancy --nesting
[14:38] <mbruzek1> looking
[14:39] <skay> mbruzek1: got two pastebins.
[14:39] <skay> mbruzek1: I think I've managed to royally screw things up after trying to do manual cleanup
[14:40] <mbruzek1> skay: it looks like it, still reading.
[14:40] <skay> mbruzek1: I'll probably need to figure out how to clean up everything. drastically.
[14:40] <mbruzek1> Definitely looks like an lxc related problem.  I have not seen where lxc-destroy fails.
[14:41] <mbruzek1> OK let's do this.
[14:41] <mbruzek1> juju destroy-environment -y local --force
[14:42] <mbruzek1> delete the images in /var/lib/juju/containers/*
[14:42] <skay> mbruzek1: I did try --force, I will try again
[14:43] <mbruzek1> skay: I am sure you did, I just want to get juju to stop talking to those images
[14:43] <skay> along with deleting the images in /var/lib/juju/containers/*
[14:43] <mbruzek1> Looks like you have problems destroying the images.
[14:43] <skay> mbruzek1: thanks, it does make sense to try all the steps because I must have missed something
[14:44] <mbruzek1> sudo lxc-ls --fancy
[14:45] <mbruzek1> do you see any containers running?
[14:45] <mbruzek1> skay also delete things in /var/lib/lxc/juju*
[14:45] <mbruzek1> if there is anything there
[14:45] <skay> mbruzek1: no, but it shows some as STOPPED. which I wouldn't expect. sanity check. http://paste.ubuntu.com/9097472/
[14:46] <mbruzek1> ok is there anything in /var/lib/lxc/juju*?
[14:47] <skay> mbruzek1: yes, and I deleted it. lxc-ls no longer shows anything. that is hopeful
[14:47] <mbruzek1> I think we are getting somewhere.
[14:47] <mbruzek1> Let me check if there are any other clean up bits I do
[14:48] <mbruzek1> Ok delete everything in /var/lib/juju/locks/*
[14:49] <skay> mbruzek1: done. and there were things in there
[14:49]  * mbruzek1 nods
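(The manual cleanup sequence mbruzek1 walks through above can be sketched as a small script. This is a sketch, not an official tool; the paths follow the channel discussion and the juju 1.x local provider layout, and the root is parameterized so the steps can be tried against a scratch directory first.)

```shell
#!/bin/sh
# Unofficial sketch of the cleanup steps discussed above. Pass a
# scratch directory as $1 to dry-run it; with no argument it targets
# the real system paths (which would need sudo).
cleanup_juju_local() {
    root="${1:-}"
    # 1. Ask juju to tear the environment down first; ignore failures,
    #    since a wedged environment is the reason we are here at all.
    juju destroy-environment -y local --force 2>/dev/null || true
    # 2. Remove leftover container images, lxc state, and lock files.
    rm -rf "$root/var/lib/juju/containers/"* \
           "$root/var/lib/lxc/juju"* \
           "$root/var/lib/juju/locks/"*
}
```

After this, `sudo lxc-ls --fancy` showing nothing is the signal that another bootstrap is worth trying.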
[14:49] <mbruzek1> OK if you sudo lxc-ls shows nothing more I think you should try another bootstrap.
[14:50] <mbruzek1> skay: juju bootstrap -v -e local --debug
[14:50] <skay> mbruzek1: thanks! sudo lxc-ls shows nothing, so here goes
[14:50] <skay> mbruzek1: debug starts tmux right? (I've not tried it yet.)
[14:50] <skay> mbruzek1: and I'm in tmux already. maybe I should get out
[14:50] <mbruzek1> no it just prints out an obnoxious amount of data
[14:51] <roadmr> obnoxiousness ftw
[14:51] <skay> not seeing any ERRORs... yet
[14:51] <skay> OH NOES
[14:51] <mbruzek1> ?
[14:52] <skay> let me pastebin it.
[14:53] <skay> last line shows the error, http://paste.ubuntu.com/9097595/
[14:56] <skay> mbruzek1: there is this blog post, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ and I didn't kill the mongod or jujud processes, so let me check that (earlier today I did look for a running juju process, but I didn't know to check for mongod)
[14:58] <skay> though, ps aux | grep mongo doesn't find anything
[14:58] <mbruzek1> skay: Yeah, I was looking at that kind of script; I have one on my own system. It is home-made, so nothing official. Let me pastebin something for you
[14:58] <skay> mbruzek1: thanks!
[14:58] <mbruzek1> http://pastebin.ubuntu.com/9097629/
[14:59] <mbruzek1> It started with Jorge's ask ubuntu post but I have added and removed from it
[15:00] <mbruzek1> skay: It looks like you had juju running before.  Did you change anything recently?
[15:01] <skay> I can't figure out if I did before I started having the problems. last night I was pretty frustrated and figured why not upgrade to utopic.
[15:01] <skay> so I did. similar things are happening today, so I don't know how much that would have changed things, except now my 0 is utopic
[15:03] <mbruzek1> OK.  So there are no juju or mongo processes running now right?
[15:03] <mbruzek1> Did you try the clean script?
[15:04] <skay> correct. I'm currently looking through the script to see what it does, and was listing the directories to see if they have anything in them before running the script, because I'm curious whether I had cleaned up everything
[15:05] <skay> and then I'll run the script for good measure
[15:06] <mbruzek1> skay: We tried the major parts of this script I would be surprised if it fixes your problem.  So you recently updated to utopic.  Do you have default-series:  set in ~/.juju/environments.yaml?
[15:06] <skay> mbruzek1: yes, to precise
[15:07] <mbruzek1> skay: run the script and let me know if you see anything clean up better.
[15:07] <skay> ok
[15:13] <skay> mbruzek1: it failed, http://paste.ubuntu.com/9097960/
[15:13] <skay> I notice that the script only deletes cloud-{precise,trusty}, and I see download and trusty in that dir. would it affect this?
[15:14] <skay> and, any reason not to delete /var/cache/lxc/cloud-*
[15:15] <mbruzek1> skay: Yes this script is pretty old and "unofficial" so updates for utopic
[15:15] <avoine> skay: do you have any mongodb in your /var/log/syslog?
[15:15] <avoine> *mongodb errors
[15:15] <mbruzek1> would be needed in your case
[15:19] <avoine> I found that cleaning up running lxc vms and /var/lib/juju/ is enough for me most of the time
[15:20] <skay> avoine: http://paste.ubuntu.com/9098077/
[15:22] <avoine> skay: do you have a local IP address in the 10.x.x.x range?
[15:23] <skay> avoine: ifconfig shows lxcbr0 with one
[15:23] <avoine> that's ok
[15:24] <skay> avoine: if lxc-ls doesn't show any containers, should lxcbr0 still show up?
[15:24] <avoine> I was suspecting a bug I had last week but it seems to be something else
[15:24] <avoine> skay: yes
[15:25] <avoine> skay: Have you tried to boot up an lxc node manually?
[15:26] <avoine> with something like: lxc-create -t ubuntu -n ubuntutest
[15:26] <skay> avoine: I can't remember if I've tried that today, I'll do so now. btw, juju --version gives me 1.20.11-utopic-amd64 in case there is any known issue with that
[15:26] <avoine> I'm at the same version
[15:27] <mbruzek1> I was searching for your problem skay and I found this bug https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1346815
[15:27] <mup> Bug #1346815: lxc-clone causes duplicate MAC address and IP address <amd64> <apparmor> <apport-bug> <utopic> <lxc (Ubuntu):Fix Released> <lxc (Ubuntu Trusty):Triaged> <https://launchpad.net/bugs/1346815>
[15:28] <avoine> this in your log looks suspicious: start: Job is already running: juju-agent-sheila-local
[15:28] <avoine> do you have any juju-* process running?
[15:28] <mbruzek1> avoine: That is the error message that I searched on
[15:28] <mbruzek1> to find the bug listed above
[15:29] <skay> avoine: I thought not, but will check again
[15:30] <skay> avoine: from my earlier pastebin, I showed ps aux | grep juju and it didn't show any processes other than the grep
[15:30] <skay> avoine: still nothing showing from that. is there a better way to check?
[15:31] <skay> avoine: lxc-create still running, btw
[15:32] <avoine> the "Job is already running" error must be "normal" then
[15:33] <avoine> I don't use lxc-clone or lxc-clone-aufs so mbruzek1's bug could be it
[15:33] <avoine> maybe you could try to put them both to false
[15:34] <mbruzek1> skay: The bug I listed had some pretty easy re-create steps
[15:34] <avoine> in your environments.yaml
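(avoine's suggestion above, written out as a sketch of the relevant stanza in ~/.juju/environments.yaml. The lxc-clone and lxc-clone-aufs keys are juju 1.x local provider options; the surrounding keys are assumptions filled in from the channel discussion.)

```yaml
environments:
  local:
    type: local
    default-series: precise   # what skay had set, per the discussion above
    lxc-clone: false          # disable lxc-clone, per avoine's suggestion
    lxc-clone-aufs: false     # disable the aufs-backed clone path too
```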
[15:34] <mbruzek1> skay when you get a chance can we try steps 1-4?
[15:34] <skay> avoine: lxc-create just finished, sudo lxc-attach -n ubuntutest gives me: lxc-attach: attach.c: lxc_attach: 635 failed to get the init pid
[15:34] <skay> mbruzek1: I'll try to recreate the bug now
[15:35] <mbruzek1> skay: I just ran the steps on my machine and I got the "correct" output (different macs)
[15:37] <skay> also, oops, forgot to lxc-start before attempting to attach to ubuntutest, that works as expected once I did that
[15:37] <avoine> ok
[15:38] <skay> mbruzek1: I followed steps 1 through 4, and sudo lxc-ls -f shows bar and foo have different ip addresses.
[15:38] <avoine> skay: what is your mongodb version? dpkg -l | grep mongo
[15:39] <avoine> skay: and could you paste what's in /var/log/juju-*-local/all-machines.log
[15:39] <skay> avoine: ii  juju-mongodb                                         2.4.10-0ubuntu1                                   amd64        MongoDB object/document-oriented database for Juju
[15:39] <mbruzek1> skay: then I suspect the bug is not our problem
[15:40] <skay> mbruzek1: which version of mongo do you have?
[15:41] <mbruzek1> 2.4.9-0ubuntu3
[15:41] <mbruzek1> I am on trusty
[15:41] <skay> avoine: nothing in /var/log/juju-*-local/
[15:41] <mbruzek1> skay: if you got different mac addresses then the bug I found is not the problem
[15:42] <skay> mbruzek1: true.
[15:42] <skay> avoine: which mongo version do you have?
[15:42] <avoine> same as yours
[15:42] <skay> avoine: are you on trusty or utopic?
[15:44] <avoine> skay: utopic
[15:45] <mbruzek1> skay: What is your version of lxc?   (Mine is 1.0.6-0ubuntu0.1)  dpkg -l | grep lxc
[15:46] <avoine> I have 1.1.0~alpha2-0ubuntu3
[15:47] <skay> avoine: I've got 1.1.0~alpha2+master~20141106-1929-0ubuntu1~utopic
[15:49] <skay> avoine: I'm using the ubuntu-lxc daily ppa
[15:49] <skay> avoine: perhaps I should not?
[15:49] <mbruzek1> skay is there a reason you are on the daily one?
[15:49] <skay> mbruzek1: not really
[15:50] <mbruzek1> skay: Comment #6 of the bug I listed states : This bug was fixed in the package lxc - 1.1.0~alpha2-0ubuntu2
[15:50] <skay> mbruzek1: I checked and the IPs were different... so probably that bug is fixed in daily as well?
[15:51] <mbruzek1> It looks like avoine has a later version, I don't know what yours is.  The date looks later
[15:51] <mbruzek1> yes but since we are having an LXC problem and you are on the daily build I would suspect some other lxc regression is causing this problem.
[15:52] <avoine> skay: that could be it, try removing it with ppa-purge
[15:52] <skay> mbruzek1: I'll remove the ppa and stop using daily
[15:52] <mbruzek1> skay: if there is no particular reason for the daily ppa could you go back to the package lxc?
[15:52] <skay> mbruzek1: I'll do so
[15:53] <skay> avoine: which package installs ppa-purge? I do not have that command
[15:53] <avoine> ppa-purge I think
[15:53] <skay> haha, go figure
[15:58] <avoine> it still troubles me that you don't have anything in /var/log/juju-*
[16:05] <mbruzek1> avoine: If the bootstrap node is not coming up that might be why we have no logs
[16:06] <skay> avoine: I cleaned up everything, and then after that ran bootstrap, which failed. so what mbruzek1 just said is likely the reason
[16:06]  * skay just joined a meeting, so not as chatty
[16:06] <skay> appreciate all the help. I just did a ppa-purge, and will try everything over again once the meeting is over
[16:09] <avoine> mbruzek1: that would make sense
[16:09] <avoine> maybe check in /var/log/upstart/juju-* instead
[16:19] <dosaboy> jamespage_: https://code.launchpad.net/~hopem/charms/trusty/nova-compute/rbd-imagebackend-support
[16:20] <dosaboy> jamespage_: as mentioned, not ready for review yet, but hopefully almost
[16:20] <dosaboy> jamespage_: needs ceph-broker to land first
[16:26] <darknet_> Can someone help me? I've run into a problem: I deployed all the services for OpenStack and made all the relations between nodes, but when I try to open Horizon, I just see a white page!! This lab was set up using a virtual MaaS server and 2 nodes. I followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:26] <darknet_> is there anyone who can help me?
[16:28] <darknet_> ops sorry I wrote a bad sentence!!
[16:32] <darknet_> I want to say that I deployed all the services and made all the relations between nodes, but when I try to open the dashboard I just see a white page. I've also tried to ping the VM from the host using its FQDN, and it works.
[16:41] <darknet_> anyone can help me?
[16:42] <avoine> darknet_: this is either a problem in the horizon templates or apache2 is returning you a masked error
[16:42] <avoine> darknet_: check the apache2 logs for any error
[16:43] <darknet_> I've also tried to connect to the node where juju deployed horizon and restart apache, but nothing
[16:43] <gQuigs> how do I customize the default deployment name?  instead of "juju-canonistack-machine-#"?
[16:44] <darknet_> this is a log of apache http://paste.ubuntu.com/8615952/
[16:46] <gQuigs> (I'm running into DNS conflicts as others have used the same name..)
[16:46] <darknet_> I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:47] <darknet_> avoine_: I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:47] <marcoceppi> darknet_: did you go to horizon-ip/horizon ?
[16:47] <darknet_> hi marco I've posted on your guide the same problem
[16:48] <darknet_> I'm sorry but I've to go now I'll connect back about 10 min.
[16:58] <marcoceppi> jose: charm review queue queue should be updating again
[17:12] <lazyPower> Great Success!
[17:12] <gQuigs> answered>  change your environment name.. oops
[17:21] <jcastro> hey lazyPower
[17:21] <jcastro> and aisrael
[17:21] <lazyPower> Whats up jcastro
[17:21] <jcastro> I noticed the vanilla vagrant boxes are 14.04, not 14.04.1
[17:21] <jcastro> any idea what's up with that?
[17:21] <lazyPower> I think the cpc build scripts haven't been updated with the latest base image
[17:22] <lazyPower> good catch - haven't been in vagrant land in over a month now
[17:23] <lazyPower> utlemming: ping
[17:23] <jcastro> lazyPower, hey so, where do we file vagrant box bugs that are not juju related?
[17:23] <jcastro> is my real question
[17:23] <jcastro> (I'll also ensure the juju ones are on the list)
[17:23] <utlemming> jcastro: what do you mean by they are not 14.04.1
[17:24] <utlemming> jcastro: this is a labeling thing?
[17:24] <jcastro> well initially it was 14.04
[17:24] <jcastro> and I upgraded it
[17:24] <jcastro> to 14.04.1
[17:24] <utlemming> jcastro: ack, file a bug and we'll get on it
[17:25] <jcastro> utlemming, we're unclear as to where
[17:25] <lazyPower> i'm sifting through old email threads looking for that link
[17:25] <lazyPower> i know we settled on one, but i forget which project
[17:25] <jcastro> I will also file a bug to add a bug link to the descriptions on vagrantcloud.com
[17:25] <jcastro> that should make it easier
[17:26] <lazyPower> adeuring: Abel, were we only tracking bugs based on the vagrant supporting files like the redirector / provisioning bits in the vagrantfile?
[17:28] <utlemming> jcastro: you can file a public bug against ubuntu and assign it to Odd_Bloke
[17:32] <lazyPower> utlemming: is that the path forward we want with public bugs against the vagrant boxes (i'm thinking vagrantcloud.com listing)? I'm still not finding the bugtracker we have for the boxes themselves - as there are several components to track, and we only settled on the redirector and other sub-components.
[18:34] <rick_h_> juju do we have any sort of 'recover your juju env' from this azure outage notes going on?
[18:34] <rick_h_> for instance, we had our CI environment in Juju, it seems to have come back but with new hostnames and juju is quite unhappy. I wonder if there's a standard "what to watch for, tips for recovering" we're putting together and getting out to the public on this?
[18:40] <darknet_> marcoceppi_: I'm so sorry for before, but I had to go out from office!!!
[18:40] <lazyPower> rick_h_: yes! i covered this last week
[18:40] <rick_h_> lazyPower: linky!
[18:40] <lazyPower> rick_h_: http://blog.dasroot.net/reconnecting-juju-connectivity/
[18:41] <rick_h_> lazyPower: might I suggest a giant twitter storm referencing the azure downtime and this then if we're sure it's the right way to go?
[18:41] <rick_h_> and we'll check it out for our env
[18:41] <darknet_> marcoceppi_: as url I've used http://IP_address/horizon
[18:41] <lazyPower> rick_h_: sounds good - ping me with what you discover and I'll lock and load some social media candy
[18:41] <rick_h_> lazyPower: maybe even a juju mailing list email post
[18:42] <rick_h_> lazyPower: I assume there's got to be > 1 juju on azure user doing :( today
[18:43] <lazyPower> yeah, global azure outage is going to be a fun run for a lot of users
[18:44] <rick_h_> lazyPower: yea, proactive canonical response ftw. bac is going to test it out on our env and see how it goes and then we can see about getting a great message out to users
[18:44] <rick_h_> ty for the link, nice timing :)
[18:44] <lazyPower> its almost like i knew
[18:44] <rick_h_> hah!
[18:44]  * lazyPower waves his arms like a mystic
[18:46] <skay> avoine: thanks for all the help, bootstrap works again, and things are looking okay. mbruzek1 isn't around to thank. oh well!
[18:46] <skay> avoine: I did end up rebooting since it didn't work right after ppa-purge and I figured, what the hell, why not reboot
[18:49] <lazyPower> skay: really happy to hear we got you sorted.
[18:49] <lazyPower> and i'll pass along your well wishes to mbruzek when he returns
[18:49] <avoine> skay: great news!
[18:49] <skay> lazyPower: I am very grateful. I was almost ready to resort to completely blowing away my laptop and starting over
[18:49] <lazyPower> ooo, tricky
[18:49] <lazyPower> glad you didn't have to resort to such extreme measures
[18:50] <skay> lazyPower: maybe I should see if I can reproduce the problem in a friendly way in case I uncovered something in a daily build
[18:50] <skay> but I don't have time for it right now
[18:50] <skay> and also I feel a bit antsy at the idea since I'd rather do that on a different computer
[18:50] <lazyPower> skay: i cant say that i blame you there :)
[18:51] <lazyPower> possibly a vagrant run/build would be in order to test that so it's isolated
[19:17] <bac> lazyPower: hey, thanks for the doc about reconnecting juju
[19:18] <lazyPower> bac: np, did that fix ya up?
[19:18] <bac> lazyPower: our problem seems a little more complicated. the machine that is supposed to be our state server was not brought back up
[19:18] <lazyPower> ah, yeah - if your state server isn't back online - you're hosed
[19:18] <bac> azure has it marked as created but it isn't running
[19:18] <lazyPower> until the state-server re-appears.
[19:19] <bac> lazyPower: yeah, it isn't going to just appear and i don't know how to bring it back
[19:19] <lazyPower> hmmm..  do you have a snapshot you can re-deploy?
[19:19] <bac> lazyPower: no, no snapshot
[19:19] <lazyPower> and/or was your state-server ha-enabled?
[19:19] <bac> nope
[19:19] <lazyPower> oh man :(
[19:20] <lazyPower> i have bad news
[19:20] <bac> i think we'll be recreating it.
[19:20] <lazyPower> you're going to need that database on the api server for things to normalize - otherwise you're registering units the state server knows nothing about.
[19:20] <bac> lazyPower: yeah, we'll just have to redeploy.
[19:28] <lazyPower> rick_h_: sorry to hear about the trouble - however social media candy has been deployed. Can I get some syndication lovin on that?
[19:28] <rick_h_> lazyPower: sure thing, will look for it
[19:52] <skay> pip question... I have a local directory with wheels in it, let's call it /path/to/dependencies. and I've hacked python-django to accept extra pip args in hook.py (versus ansible, which I'm not using at the moment). do I need to mount a shared folder where the dependencies should live? or will the charm "magically" be able to use my local folder?
[19:52] <skay> my pip_extra_args is "--no-index --find-links=/path/to/dependencies"
[19:53] <skay> and the python-django hack is http://bazaar.launchpad.net/~codersquid/+junk/pure-python-with-tgz/revision/70
[19:53] <skay> I'm not going to make a MR based off that, it's just a hack
[19:54] <avoine> skay: do you plan to share your wheel package cache with other instances?
[19:55] <skay> avoine: no
[19:56] <skay> avoine: I was about to say, currently pip is not finding the files
[19:56] <skay> I'm trying to dig up the log, I had it in a window a moment ago
[19:57] <skay> avoine: I get: ValueError: unknown url type: /path/to/dependencies
[19:58] <skay> pip can handle the path when I run it locally
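(One guess at what is going on here — an assumption, not confirmed in the channel: some pip code paths hand --find-links values straight to urllib, which raises "unknown url type" for bare local paths, while a file:// URL is accepted either way. A sketch, reusing skay's hypothetical /path/to/dependencies:)

```shell
# Possible workaround sketch (assumption: the failing pip build only
# accepts URLs for --find-links in this code path). The wheel
# directory is the hypothetical path from the discussion above.
deps="/path/to/dependencies"
pip_extra_args="--no-index --find-links=file://${deps}"
echo "$pip_extra_args"
# prints: --no-index --find-links=file:///path/to/dependencies
```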
[19:58] <avoine> skay: what is your complete pip command?
[19:58] <skay> avoine: will the juju log echo that? let me scroll back
[20:00] <skay> avoine: the juju log does not echo that, I will add something to echo the command. I know what I think is the complete command, but in reality I should print it out to see what juju thinks it is
[20:01] <avoine> skay: it might be that the version of pip in the vm is too old
[20:02] <avoine> skay: try to add:
[20:02] <avoine> pip_additional_packages: "pip"
[20:02] <avoine> in your juju config file
[20:02] <skay> avoine: okay
[20:06]  * skay is rerunning everything
[20:25] <darknet_> Can someone help me? I've run into a problem: I deployed the modules to get OpenStack on my infrastructure and made all the relations between nodes, but when I try to open Horizon, I just see a white page!! This lab has been set up using a virtual MaaS server and 2 VM nodes. I followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[20:31] <darknet_> anyone can help me?
[20:32] <sarnold> darknet_: how long have you 'waited' for everything to start?
[20:33] <sarnold> darknet_: sometimes a lot of work is hidden behind the 'juju relate ...' calls; I know a recent video I saw for deploying openstack took ~15 minutes or something..
[20:34] <darknet_> sarnold_: on juju-gui all the modules and relations are green.
[20:35] <darknet_> sarnold_: anyway, I waited, but nothing; the link http://hostname/horizon presents a white page!!!
[20:37] <lazyPower> darknet_: green relations dont necessarily mean the relationships have completed running
[20:37] <lazyPower> do you see any output from the units under relation when you run juju debug-log?
[20:39] <darknet_> lazypower_: but if I run the command "juju status -e maas" I see that everything is started!!!
[20:40] <lazyPower> darknet_: that just means the charm has reached the started hook - as juju is event driven, and relationships can be called after the started hook, it can be a bit misleading
[20:40] <lazyPower> darknet_: did you see any output from the units under relation when you ran juju debug-log?
[20:41] <lazyPower> darknet_: also sorry for the confusion there - we've had some discussions about this on the mailing list recently - about charms and hooks providing more accurate reporting
[20:41] <darknet_> I didn't try to run that.
[20:43] <darknet_> I promise you that tomorrow I'll post you the log
[20:43] <darknet_> i can do that now
[20:44] <lazyPower> darknet_: juju debug-log should give you immediate feedback on what's currently happening in the system. if you have the time, a quick check will show whether we need to start debugging or whether this is a time to be patient while juju finishes its housekeeping.
[20:44] <darknet_> will you be here tomorrow?
[20:44] <lazyPower> darknet_: i will be here from ~ 9am EDT to 5pm EDT m-f most weeks.
[20:44] <lazyPower> er, EST - sorry, timeshift happened and i keep forgetting to update my timestamp.
[20:45] <gnuoy> Hi, would somebody mind preventing my charmers membership from expiring please ?
[20:45] <darknet_> ok let's make that and tomorrow I'll contact you
[20:46] <lazyPower> sounds good darknet_
[20:46] <darknet_> in case I'll send you a private text
[20:46] <lazyPower> marcoceppi: gnuoy is running out of time, can you renew him for me please?
[20:46] <gnuoy> thanks lazyPower
[20:47] <lazyPower> my pleasure
[20:47]  * lazyPower hat tips
[20:47] <darknet_> lazyPower_: just one technical question!
[20:47] <lazyPower> darknet_: i'm all ears
[20:49] <darknet_> why do I have to register in MaaS the ssh keys of the host machine, of the region controller, and of a maas user created on the RC?
[20:49] <darknet_> lazyPower_: and also Juju
[20:51] <darknet_> lazyPower_: I'm telling you that because every time I want to run the whole (virtual) infrastructure I have to use the same network connection, otherwise the VMs won't run from MaaS
[20:52] <lazyPower> darknet_: i'm not understanding what you're asking me - let me try to ask what i think you're asking.
[20:52] <lazyPower> You're questioning why you have to register your ssh keys in the region controller of MAAS?
[20:53] <darknet_> yes! and why, to run the VM nodes allocated on MaaS, do I have to use the same connection?
[20:54] <lazyPower> darknet_: So long as you have a user on the MAAS region controller - and have the api credentials obtained from the RC - juju will automatically register ssh keys that it uses with any nodes spun up. This key exchange happens transparently.
[20:54] <lazyPower> darknet_: when you ask why are the VM's using the same connection - are you referring to the same network device? This is highly dependent on how you have your MAAS cluster setup, and if this is physical MAAS vs VirtualMAAS
[20:55] <lazyPower> i'm assuming it's vmaas - as you're only using 2 machines per marco's post right?
[20:55] <darknet_> perfect, but if the host where I've installed MaaS changes its IP address, I can't launch the nodes via MaaS
[20:55] <lazyPower> darknet_: if your machine has 2 network devices, that is the recommended path to use - 1 for public traffic access, and the second as the private network (or management network)
[20:56] <lazyPower> your public network bridge should be bridged into your VM Cluster, the private network can very well be a virtual network created inside of your KVM configuration
[20:56] <darknet_> ah here is my problem!!!!
[20:56] <darknet_> my RC has to have 2 interface
[20:56] <lazyPower> Networking and VMAAS is a very tricky thing - the reasoning being MAAS recommends you run the MAAS DHCP server and DNS - this is why you need a private network that exists only within the vlan of that cluster.
[20:57] <lazyPower> your public network wont have the same requirement, and you're safe to use whatever DHCP/DNS settings are incoming from your bridged network on that particular interface
[20:58] <lazyPower> it will be a bridged mode networking connection, and helping you get that set up is a bit beyond my scope of knowledge - i've done it a few times but it's highly dependent on how your network is set up. The best I can offer from where I'm sitting is encouragement and answers to very specific questions.
[20:58] <darknet_> Let me explain my lab... I have a host running Ubuntu 14.04 LTS with KVM and virt-manager, and with it I've created a VM (MaaS) with just one interface.
[20:59] <lazyPower> darknet_: the first step to doing any of this is creating a bridged interface - do you know how to do that?
[20:59] <darknet_> I've created a new virtual network (11.1.0.0/24)
[21:00] <darknet_> with virt-manager, and I've used that as the network for MaaS,
[21:03] <darknet_> lazyPower_: and for the 2 VMs,
[21:08] <lazyPower> darknet_: i just got pulled into a meeting - so far sounds good.
[21:09] <lazyPower> replies will be latent
[21:09] <darknet_> lazyPower_: thanks a lot for your support, see you tomorrow with the log!!!
[21:09] <lazyPower> best of luck darknet_, cheers
[22:21] <lazyPower> jose: Congrats on your first solo promulgation man. May the juju powers be with you.
[22:22] <jose> thanks! :)