/srv/irclogs.ubuntu.com/2014/11/19/#juju.txt

=== liam_ is now known as Guest50977
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== lonroth changed the topic of #juju to: /join #android-dev
=== lonroth changed the topic of #juju to: lonroth
=== lonroth changed the topic of #juju to: Juju
lonrothsorry about that =D09:32
=== kadams54 is now known as kadams54-away
skayhi juju. I have managed to get my laptop into a crazy and exciting state14:20
skayI got into a state where juju calling lxc-create had this problem http://paste.ubuntu.com/9096762/14:22
skayagent-state-info: 'error executing "lxc-create": Container already exists'14:22
skayand I called juju destroy-environment a gazillion times; lxc-ls did not list anything for the machines, so I then resorted to trying to clean things up by hand14:23
skayby going around to /var/lib/juju/containers and deleting the image directories14:24
skayetc14:24
skaynow I get an exciting error when I try to bootstrap http://paste.ubuntu.com/9096721/14:24
skaynow I just need to grind until I kill the big boss14:24
mbruzek1hello skay14:37
mbruzek1Are you using sudo with the lxc-ls  command?14:37
skaymbruzek1: yes14:37
skaymbruzek1: it's in the pastebin. I called: sudo lxc-ls --fancy --nesting14:38
mbruzek1looking14:38
skaymbruzek1: got two pastebins.14:39
skaymbruzek1: I think I've managed to royally screw things up after trying to do manual cleanup14:39
mbruzek1skay: it looks like it, still reading.14:40
skaymbruzek1: I'll probably need to figure out how to clean up everything. drastically.14:40
mbruzek1Definitely looks like an lxc related problem.  I have not seen where lxc-destroy fails.14:40
mbruzek1OK let's do this.14:41
mbruzek1juju destroy-environment -y local --force14:41
mbruzek1delete the images in /var/lib/juju/container/*14:42
skaymbruzek1: I did try --force, I will try again14:42
mbruzek1skay: I am sure you did, I just want to get juju to stop talking to those images14:43
skayalong with deleting the images in /var/lib/juju/container/*14:43
mbruzek1Looks like you have problems destroying the images.14:43
skaymbruzek1: thanks, it does make sense to try all the steps because I must have missed something14:43
mbruzek1sudo lxc-ls --fancy14:44
mbruzek1do you see any containers running?14:45
mbruzek1skay also delete things in /var/lib/lxc/juju*14:45
mbruzek1if there is anything there14:45
skaymbruzek1: no, but it shows some as STOPPED. which I wouldn't expect. sanity check. http://paste.ubuntu.com/9097472/14:45
mbruzek1ok is there anything in /var/lib/lxc/juju*?14:46
skaymbruzek1: yes, and I deleted it. lxc-ls no longer shows anything. that is hopeful14:47
mbruzek1I think we are getting somewhere.14:47
mbruzek1Let me check if there are any other clean up bits I do14:47
mbruzek1Ok delete everything in /var/lib/juju/locks/*14:48
skaymbruzek1: done. and there were things in there14:49
* mbruzek1 nods14:49
mbruzek1OK, if sudo lxc-ls shows nothing more, I think you should try another bootstrap.14:49
mbruzek1skay: juju bootstrap -v -e local --debug14:50
skaymbruzek1: thanks! sudo lxc-ls shows nothing, so here goes14:50
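[A consolidated sketch of the cleanup sequence mbruzek1 walks through above, assuming the local provider's default paths on a 14.04-era install; the chat mentions both /var/lib/juju/container and /var/lib/juju/containers, so verify which exists on your system before deleting anything:]

    juju destroy-environment -y local --force
    sudo rm -rf /var/lib/juju/containers/*   # container images juju created
    sudo rm -rf /var/lib/lxc/juju*           # leftover lxc container definitions
    sudo rm -rf /var/lib/juju/locks/*        # stale lock files
    sudo lxc-ls --fancy                      # should now list nothing
    juju bootstrap -v -e local --debug       # retry the bootstrap with verbose output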
skaymbruzek1: debug starts tmux right? (I've not tried it yet.)14:50
skaymbruzek1: and I'm in tmux already. maybe I should get out14:50
mbruzek1no it just prints out an obnoxious amount of data14:50
roadmrobnoxiousness ftw14:51
skaynot seeing any ERRORs... yet14:51
skayOH NOES14:51
mbruzek1?14:51
skaylet me pastebin it.14:52
skaylast line shows the error, http://paste.ubuntu.com/9097595/14:53
skaymbruzek1: there is this blog post, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ and I didn't kill the mongod or jujud processes, so let me check that (earlier today I did look for a running juju process, but I didn't know to check for mongod)14:56
skaythough, ps aux | grep mongo doesn't find anything14:58
mbruzek1skay: Yeah I was looking at that kind of script I have on my own system; it is home-made, so nothing official. Let me pastebin something for you14:58
skaymbruzek1: thanks!14:58
mbruzek1http://pastebin.ubuntu.com/9097629/14:58
mbruzek1It started with Jorge's ask ubuntu post but I have added and removed from it14:59
mbruzek1skay: It looks like you had juju running before.  Did you change anything recently?15:00
skayI can't figure out if I did before I started having the problems. last night I was pretty frustrated and figured why not upgrade to utopic.15:01
skayso I did. similar things are happening today, so I don't know how much that would have changed things, except now my 0 is utopic15:01
mbruzek1OK.  So there are no juju or mongo processes running now right?15:03
mbruzek1Did you try the clean script?15:03
skaycorrect. I'm currently looking through the script to see what it does, and was listing the directories to see if they have anything in them before running the script because I'm curious whether I had cleaned up everything15:04
skayand then I'll run the script for good measure15:05
mbruzek1skay: We tried the major parts of this script; I would be surprised if it fixes your problem.  So you recently updated to utopic.  Do you have default-series: set in ~/.juju/environments.yaml?15:06
skaymbruzek1: yes, to precise15:06
mbruzek1skay: run the script and let me know if you see anything clean up better.15:07
skayok15:07
skaymbruzek1: it failed, http://paste.ubuntu.com/9097960/15:13
skayI notice that the script only deletes cloud-{precise,trusty}, and I see download and trusty in that dir. would it affect this?15:13
skayand, any reason not to delete /var/cache/lxc/cloud-*15:14
mbruzek1skay: Yes this script is pretty old and "unofficial" so updates for utopic15:15
avoineskay: do you have any mongodb in your /var/log/syslog?15:15
avoine*mongodb errors15:15
mbruzek1would be needed in your case15:15
avoineI found that cleaning up running lxc vms and /var/lib/juju/ is enough for me most of the time15:19
skayavoine: http://paste.ubuntu.com/9098077/15:20
avoineskay: do you have a local IP address in the 10.x.x.x range?15:22
skayavoine: ifconfig shows lxcbr0 with one15:23
avoinethat's ok15:23
skayavoine: if lxc-ls doesn't show any containers, should lxcbr0 still show up?15:24
avoineI was suspecting a bug I had last week but it seems to be something else15:24
avoineskay: yes15:24
avoineskay: Have you tried to boot up an lxc node manually?15:25
avoinewith something like: lxc-create -t ubuntu -n ubuntutest15:26
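[A minimal LXC smoke test along the lines avoine suggests, independent of juju; note that lxc-attach only works on a running container, which comes up again a bit later in the log:]

    sudo lxc-create -t ubuntu -n ubuntutest   # build a plain Ubuntu container
    sudo lxc-start -d -n ubuntutest           # start it in the background
    sudo lxc-attach -n ubuntutest             # shell into it (fails if not started)
    sudo lxc-stop -n ubuntutest               # clean up afterwards
    sudo lxc-destroy -n ubuntutest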
skayavoine: I can't remember if I've tried that today, I'll do so now. btw, juju --version gives me 1.20.11-utopic-amd64 in case there is any known issue with that15:26
avoineI'm at the same version15:26
mbruzek1I was searching for your problem skay and I found this bug https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/134681515:27
mupBug #1346815: lxc-clone causes duplicate MAC address and IP address <amd64> <apparmor> <apport-bug> <utopic> <lxc (Ubuntu):Fix Released> <lxc (Ubuntu Trusty):Triaged> <https://launchpad.net/bugs/1346815>15:27
avoinethis in your log looks suspicious: start: Job is already running: juju-agent-sheila-local15:28
avoinedo you have any juju-* process running?15:28
mbruzek1avoine: That is the error message that I searched on15:28
mbruzek1to find the bug listed above15:28
skayavoine: I thought not, but will check again15:29
skayavoine: from my earlier pastebin, I showed ps aux | grep juju and it didn't show any processes other than the grep15:30
skayavoine: still nothing showing from that. is there a better way to check?15:30
skayavoine: lxc-create still running, btw15:31
avoinethe "Job is already running" error must be "normal" then15:32
avoineI don't use lxc-clone or lxc-clone-aufs so mbruzek1's bug could be it15:33
avoinemaybe you could try to put them both to false15:33
mbruzek1skay: The bug I listed had some pretty easy re-create steps15:34
avoinein your environments.yaml15:34
mbruzek1skay when you get a chance can we try steps 1-4?15:34
skayavoine: lxc-create just finished, sudo lxc-attach -n ubuntutest gives me: lxc-attach: attach.c: lxc_attach: 635 failed to get the init pid15:34
skaymbruzek1: I'll try to recreate the bug now15:34
mbruzek1skay: I just ran the steps on my machine and I got the "correct" output (different macs)15:35
skayalso, oops, I forgot to lxc-start before attempting to attach to ubuntutest; that works as expected once I did that15:37
avoineok15:37
skaymbruzek1: I followed steps 1 through 4, and sudo lxc-ls -f shows bar and foo have different ip addresses.15:38
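[A sketch of the duplicate-address check from bug #1346815, assuming its re-create steps amount to cloning a container and comparing addresses; the foo/bar names come from the conversation above:]

    sudo lxc-create -t ubuntu -n foo
    sudo lxc-clone foo bar                   # the clone step the bug implicates
    sudo lxc-start -d -n foo
    sudo lxc-start -d -n bar
    sudo lxc-ls --fancy                      # affected versions show identical MAC/IP for foo and bar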
avoineskay: what is you mongodb version? dpkg -l | grep mongo15:38
avoineskay: and could you paste what's in /var/log/juju-*-local/all-machines.log15:39
skayavoine: ii  juju-mongodb                                         2.4.10-0ubuntu1                                   amd64        MongoDB object/document-oriented database for Juju15:39
mbruzek1skay: then I suspect the bug is not our problem15:39
skaymbruzek1: which version of mongo do you have?15:40
mbruzek12.4.9-0ubuntu315:41
mbruzek1I am on trusty15:41
skayavoine: nothing in /var/log/juju-*-local/15:41
mbruzek1skay: if you got different mac addresses then the bug I found is not the problem15:41
skaymbruzek1: true.15:42
skayavoine: which mongo version do you have?15:42
avoinesame as yours15:42
skayavoine: are you on trusty or utopic?15:42
avoineskay: utopic15:44
mbruzek1skay: What is your version of lxc?   (Mine is 1.0.6-0ubuntu0.1)  dpkg -l | grep lxc15:45
avoineI have 1.1.0~alpha2-0ubuntu315:46
skayavoine: I've got 1.1.0~alpha2+master~20141106-1929-0ubuntu1~utopic15:47
skayavoine: I'm using the ubuntu-lxc daily ppa15:49
skayavoine: perhaps I should not?15:49
mbruzek1skay is there a reason you are on the daily one?15:49
skaymbruzek1: not really15:49
mbruzek1skay: Comment #6 of the bug I listed states : This bug was fixed in the package lxc - 1.1.0~alpha2-0ubuntu215:50
skaymbruzek1: I checked and the IPs were different... so probably that bug is fixed in daily as well?15:50
mbruzek1It looks like avoine has a later version, I don't know what yours is.  The date looks later15:51
mbruzek1yes but since we are having an LXC problem and you are on the daily build I would suspect some other lxc regression is causing this problem.15:51
avoineskay: that could be it, try removing it with ppa-purge15:52
skaymbruzek1: I'll remove the ppa and stop using daily15:52
mbruzek1skay: if there is no particular reason for the daily ppa could you go back to the package lxc?15:52
skaymbruzek1: I'll do so15:52
skayavoine: which package installs ppa-purge? I do not have that command15:53
avoineppa-purge I think15:53
skayhaha, go figure15:53
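[A hedged sketch of backing out the daily build; the PPA reference ppa:ubuntu-lxc/daily is an assumption based on "the ubuntu-lxc daily ppa" mentioned above:]

    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:ubuntu-lxc/daily      # downgrades lxc back to the archive packages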
avoineit still troubles me that you don't have anything in /var/log/juju-*15:58
=== liam_ is now known as Guest9691
mbruzek1avoine: If the bootstrap node is not coming up that might be why we have no logs16:05
skayavoine: I cleaned up everything, and then after that ran bootstrap, which failed. so what mbruzek1 just said is likely the reason16:06
* skay just joined a meeting, so not as chatty16:06
skayappreciate all the help. I just did a ppa-purge, and will try everything over again once the meeting is over16:06
avoinembruzek1: that would make sense16:09
avoinemaybe check in /var/log/upstart/juju-* instead16:09
dosaboyjamespage_: https://code.launchpad.net/~hopem/charms/trusty/nova-compute/rbd-imagebackend-support16:19
dosaboyjamespage_: as mentioned, not ready for review yet, but hopefully almost16:20
dosaboyjamespage_: needs ceph-broker to land first16:20
darknet_can someone help me? I've run into a problem: I deployed all services for OpenStack and made all relations between nodes, but if I try to open Horizon, I see just a white page!! This lab was set up using a virtual MaaS server and 2 nodes. I followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/16:26
darknet_is there anyone who can help me16:26
darknet_oops, sorry, I wrote a bad sentence!!16:28
darknet_I meant to say that I deployed all services and made all relations between nodes, but when I try to open the dashboard I see just a white page. I've also tried to ping the VM from the host using the FQDN, and it works.16:32
darknet_can anyone help me?16:41
avoinedarknet_: this is either a problem in horizon templates or apache2 is returning you a masked error16:42
avoinedarknet_: check the apache2 logs for any error16:42
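[A minimal sketch of that check; the unit name openstack-dashboard/0 is a guess, so confirm the real unit name against juju status first:]

    juju ssh openstack-dashboard/0 'sudo tail -n 50 /var/log/apache2/error.log'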
darknet_I've also tried to connect to the node where juju deployed horizon and restart apache, but nothing16:43
gQuigshow do I customize the default deployment name?  instead of "juju-canonistack-machine-#"?16:43
darknet_this is a log of apache http://paste.ubuntu.com/8615952/16:44
gQuigs(I'm running into DNS conflicts as others have used the same name..)16:46
darknet_I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/16:46
darknet_avoine_: I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/16:47
marcoceppidarknet_: did you go to horizon-ip/horizon ?16:47
darknet_hi marco, I've posted the same problem on your guide16:47
darknet_I'm sorry but I have to go now; I'll connect back in about 10 min.16:48
marcoceppijose: charm review queue should be updating again16:58
=== kadams54_ is now known as kadams54-away
lazyPowerGreat Success!17:12
gQuigsanswered>  change your environment name.. oops17:12
=== kadams54-away is now known as kadams54_
jcastrohey lazyPower17:21
jcastroand aisrael17:21
lazyPowerWhats up jcastro17:21
jcastroI noticed the vanilla vagrant boxes are 14.04, not 14.04.117:21
jcastroany idea what's up with that?17:21
lazyPowerI think the cpc build scripts haven't been updated with the latest base image17:21
lazyPowergood catch - haven't been in vagrant land in over a month now17:22
lazyPowerutlemming: ping17:23
jcastrolazyPower, hey so, where do we file vagrant box bugs that are not juju related?17:23
jcastrois my real question17:23
jcastro(I'll also ensure the juju ones are on the list)17:23
utlemmingjcastro: what do you mean by they are not 14.04.117:23
utlemmingjcastro: this is a labeling thing?17:24
jcastrowell initially it was 14.0417:24
jcastroand I upgraded it17:24
jcastroto 14.04.117:24
utlemmingjcastro: ack, file a bug and we'll get on it17:24
jcastroutlemming, we're unclear as to where17:25
lazyPoweri'm sifting through old email threads looking for that link17:25
lazyPoweri know we settled on one, but i forget which project17:25
jcastroI will also file a bug to add a bug link to the descriptions on vagrantcloud.com17:25
jcastrothat should make it easier17:25
lazyPoweradeuring: Abel, were we only tracking bugs based on the vagrant supporting files like the redirector / provisioning bits in the vagrantfile?17:26
utlemmingjcastro: you can file a public bug against ubuntu and assign it to Odd_Bloke17:28
lazyPowerutlemming: is that the path forward we want with public bugs against the vagrantboxes (i'm thinking vagrantcloud.com listing)? I'm still not finding the bugtracker we have for the boxes themselves - as it's several components to track, and we only settled on the redirector and other sub-components.17:32
=== kadams54_ is now known as kadams54-away
=== kadams54-away is now known as kadams54_
rick_h_juju: do we have any sort of 'recover your juju env' notes for this azure outage going on?18:34
rick_h_for instance, we had our CI environment in Juju, it seems to have come back but with new hostnames and juju is quite unhappy. I wonder if there's a standard "what to watch for, tips for recovering" we're putting together and getting out to the public on this?18:34
darknet_marcoceppi_: I'm so sorry for before, but I had to go out of the office!!!18:40
lazyPowerrick_h_: yes! i covered this last week18:40
rick_h_lazyPower: linky!18:40
lazyPowerrick_h_: http://blog.dasroot.net/reconnecting-juju-connectivity/18:40
rick_h_lazyPower: might I suggest a giant twitter storm referencing the azure downtime and this then if we're sure it's the right way to go?18:41
rick_h_and we'll check it out for our env18:41
darknet_marcoceppi_: as url I've used http://IP_address/horizon18:41
lazyPowerrick_h_: sounds good - ping me with what you discover and I'll lock and load some social media candy18:41
rick_h_lazyPower: maybe even a juju mailing list email post18:41
rick_h_lazyPower: I assume there's got to be > 1 juju on azure user doing :( today18:42
lazyPoweryeah, global azure outage is going to be a fun run for a lot of users18:43
rick_h_lazyPower: yea, proactive canonical response ftw. bac is going to test it out on our env and see how it goes and then we can see about getting a great message out to users18:44
rick_h_ty for the link, nice timing :)18:44
lazyPowerits almost like i knew18:44
rick_h_hah!18:44
* lazyPower waves his arms like a mystic18:44
skayavoine: thanks for all the help, bootstrap works again, and things are looking okay. mbruzek1 isn't around to thank. oh well!18:46
skayavoine: I did end up rebooting since it didn't work right after ppa-purge and I figured, what the hell, why not reboot18:46
lazyPowerskay: really happy to hear we got you sorted.18:49
lazyPowerand I'll pass along your well wishes to mbruzek when he returns18:49
avoineskay: great news!18:49
skaylazyPower: I am very grateful. I was almost ready to resort to completely blowing away my laptop and starting over18:49
lazyPowerooo, tricky18:49
lazyPowerglad you didn't have to resort to such extreme measures18:49
skaylazyPower: maybe I should see if I can reproduce the problem in a friendly way in case I uncovered something in a daily build18:50
skaybut I don't have time for it right now18:50
skayand also I feel a bit antsy at the idea since I'd rather do that on a different computer18:50
lazyPowerskay: i cant say that i blame you there :)18:50
lazyPowerpossibly a vagrant run/build would be in order to test that so it's isolated18:51
baclazyPower: hey, thanks for the doc about reconnecting juju19:17
lazyPowerbac: np, did that fix ya up?19:18
baclazyPower: our problem seems a little more complicated.  they machine that is supposed to be our state server was not brought back up19:18
bacs/they/the/19:18
lazyPowerah, yeah - if your state server isn't back online - you're hosed19:18
bacazure has it marked as created but it isn't running19:18
lazyPoweruntil the state-server re-appears.19:18
baclazyPower: yeah, it isn't going to just appear and i don't know how to bring it back19:19
lazyPowerhmmm..  do you have a snapshot you can re-deploy?19:19
baclazyPower: no, no snapshot19:19
lazyPowerand/or was your state-server ha-enabled?19:19
bacnope19:19
lazyPoweroh man :(19:19
lazyPoweri have bad news19:20
baci think we'll be recreating it.19:20
lazyPoweryou're going to need that database on the api server for things to normalize - otherwise you're registering units the state server knows nothing about.19:20
baclazyPower: yeah, we'll just have to redeploy.19:20
lazyPowerrick_h_: sorry to hear about the trouble - however social media candy has been deployed. Can I get some syndication lovin on that?19:28
rick_h_lazyPower: sure thing, will look for it19:28
=== CyberJacob|Away is now known as CyberJacob
=== kadams54_ is now known as kadams54-away
skaypip question... I have a local directory with wheels in it, let's call it /path/to/dependencies. and I've hacked python-django to accept extra pip args in hook.py (versus ansible, which I'm not using at the moment). do I need to mount a shared folder where dependencies should live? or will the charm "magically" be able to use my local folder?19:52
skaymy pip_extra_args is "--no-index --find-links=/path/to/dependencies"19:52
skayand the python-django hack is http://bazaar.launchpad.net/~codersquid/+junk/pure-python-with-tgz/revision/7019:53
skayI'm not going to make a MR based off that, it's just a hack19:53
avoineskay: do you plan to share your wheels package cache with other instances?19:54
skayavoine: no19:55
skayavoine: I was about to say, currently pip is not finding the files19:56
skayI'm trying to dig up the log, I had it in a window a moment ago19:56
skayavoine: I get: ValueError: unknown url type: /path/to/dependencies19:57
skaypip can handle the path when I run it locally19:58
avoineskay: what is your complet pip command?19:58
skayavoine: will the juju log echo that? let me scroll back19:58
skayavoine: the juju log does not echo that, I will add something to echo the command. I know what I think is the complete command, but in reality I should print it out to see what juju thinks it is20:00
avoineskay: it might be that the version of pip in the vm is too old20:01
avoineskay: try to add:20:02
avoinepip_additional_packages: "pip"20:02
avoinein your juju config file20:02
skayavoine: okay20:02
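[A hedged sketch of the workaround being circled here, not the charm's official interface: older pip releases are commonly reported to reject bare filesystem paths for --find-links (hence the "unknown url type" error above), so a file:// URL is one fix, and upgrading pip via pip_additional_packages is another; the path below is a placeholder:]

    pip install --no-index --find-links=file:///path/to/dependencies Django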
* skay is rerunning everything20:06
=== roadmr is now known as roadmr_afk
=== kadams54-away is now known as kadams54_
darknet_can someone help me? I've run into a problem deploying the modules to get OpenStack on my infrastructure. I've made all relations between nodes, but if I try to open Horizon, I see just a white page!! This lab has been set up using a virtual MaaS server and 2 VM nodes. I followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/20:25
=== kadams54_ is now known as kadams54-away
darknet_can anyone help me?20:31
sarnolddarknet_: how long have you 'waited' for everything to start?20:32
sarnolddarknet_: sometimes a lot of work is hidden behind the 'juju relate ...' calls; I know a recent video I saw for deploying openstack took ~15 minutes or something..20:33
darknet_sarnold_: on juju-gui all modules and relations are green.20:34
=== roadmr_afk is now known as roadmr
darknet_sarnold_: anyway, I waited but nothing; the link http://hostname/horizon presents a white page!!!20:35
lazyPowerdarknet_: green relations don't necessarily mean the relationships have completed running20:37
lazyPowerdo you see any output from the units under relation when you run juju debug-log?20:37
darknet_lazypower_: but if I run the command "juju status -e maas" I see that everything is started!!!20:39
lazyPowerdarknet_: that just means the charm has reached the started hook - as juju is event driven and relationships can be called after the started hook, it can be a bit misleading20:40
lazyPowerdarknet_: did you see any output from the units under relation when you ran juju debug-log?20:40
lazyPowerdarknet_: also sorry for the confusion there - we've had some discussions about this on the mailing list recently - about charms and hooks providing more accurate reporting20:41
darknet_I didn't try to run that.20:41
darknet_I promise you that tomorrow I'll post you the log20:43
darknet_i can do that now20:43
lazyPowerdarknet_: juju debug-log should give you immediate feedback of what's currently happening in the system. if you have the time, a quick check will tell us whether we need to start debugging or whether this is a time to be patient while juju finishes its housekeeping.20:44
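[A minimal sketch of the check lazyPower is suggesting, assuming the environment is named "maas" as in the juju status command quoted above:]

    juju debug-log -e maas                   # streams consolidated logs from all units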
darknet_will you be here tomorrow?20:44
lazyPowerdarknet_: i will be here from ~ 9am EDT to 5pm EDT m-f most weeks.20:44
lazyPowerer, EST - sorry, timeshift happened and i keep forgetting to update my timestamp.20:44
gnuoyHi, would somebody mind preventing my charmers membership from expiring please ?20:45
darknet_ok let's make that and tomorrow I'll contact you20:45
lazyPowersounds good darknet_20:46
darknet_if needed, I'll send you a private message20:46
lazyPowermarcoceppi: gnuoy is running out of time, can you renew him for me please?20:46
gnuoythanks lazyPower20:46
lazyPowermy pleasure20:47
* lazyPower hat tips20:47
darknet_lazyPower_: just one technical question!20:47
lazyPowerdarknet_: i'm all ears20:47
darknet_why in MaaS do I have to register the ssh keys of the host machine, of the Region Controller, and of a maas user created on the RC?20:49
=== mjs0 is now known as menn0
darknet_lazyPower_: and also Juju20:49
=== menn0_ is now known as menn0
darknet_lazyPower_: I ask that because every time I want to run the whole (virtual) infrastructure I have to use the same network connection; otherwise the VMs won't run from MaaS20:51
lazyPowerdarknet_: i'm not understanding what you're asking me - let me try to ask what i think you're asking.20:52
lazyPowerYou're questioning why you have to register your ssh keys in the region controller of MAAS?20:52
darknet_yes! and why do I have to use the same connection to run the VM nodes allocated in MaaS?20:53
lazyPowerdarknet_: So long as you have a user on the MAAS region controller - and have the api credentials obtained from the RC - juju will automatically register ssh keys that it uses with any nodes spun up. This key exchange happens transparently.20:54
lazyPowerdarknet_: when you ask why are the VM's using the same connection - are you referring to the same network device? This is highly dependent on how you have your MAAS cluster setup, and if this is physical MAAS vs VirtualMAAS20:54
lazyPowerI'm assuming it's vmaas - as you're only using 2 machines per marco's post right?20:55
darknet_perfect, but if the host where I've installed MaaS changes its IP address, I can't launch the nodes via MaaS20:55
lazyPowerdarknet_: if your machine has 2 network devices, that is the recommended path to use - 1 for public traffic access, and the second as the private network (or management network)20:55
lazyPoweryour public network bridge should be bridged into your VM Cluster, the private network can very well be a virtual network created inside of your KVM configuration20:56
darknet_ah here is my problem!!!!20:56
darknet_my RC has to have 2 interfaces20:56
lazyPowerNetworking and VMAAS is a very tricky thing - the reasoning being that MAAS recommends you run the MAAS DHCP server and DNS - hence the necessity for a private network that exists only within the vlan of that cluster.20:56
lazyPoweryour public network won't have the same requirement, and you're safe to use whatever DHCP/DNS settings are incoming from your bridged network on that particular interface20:57
lazyPowerit will be a bridged mode networking connection, and helping you get that set up is a bit beyond my scope of knowledge - I've done it a few times but it's highly dependent on how your network is set up. The best I can offer from where I'm sitting is encouragement and answers to very specific questions.20:58
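[A minimal sketch of a host bridge in /etc/network/interfaces (14.04-era ifupdown syntax, requires the bridge-utils package), assuming eth0 is the physical NIC; names, addressing, and whether you bridge on the host or inside virt-manager will vary with your setup:]

    auto br0
    iface br0 inet dhcp
        bridge_ports eth0                    # enslave the physical NIC to the bridge
        bridge_stp off
        bridge_fd 0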
darknet_let me explain my lab... I have a host running Ubuntu 14.04 LTS with KVM and virt-manager, and with it I've created a VM (MaaS) with just one interface.20:58
lazyPowerdarknet_: the first step to doing any of this is creating a bridged interface - do you know how to do that?20:59
darknet_I've created a new virtual network (120:59
darknet_1.1.0.0/2421:00
darknet_with virt-manager, and I've used that as the network for MaaS,21:00
darknet_lazyPower_: and for the 2 VMs,21:03
lazyPowerdarknet_: i just got pulled into a meeting - so far sounds good.21:08
lazyPowerreplies will be latent21:09
darknet_lazyPower_: thanks a lot for your support, see you tomorrow with the log!!!21:09
lazyPowerbest of luck darknet_, cheers21:09
=== jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || News and stuff: http://reddit.com/r/juju
=== menn0_ is now known as menn0
lazyPowerjose: Congrats on your first solo promulgation man. May the juju powers be with you.22:21
josethanks! :)22:22
