=== liam_ is now known as Guest50977
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== lonroth changed the topic of #juju to: /join #android-dev
=== lonroth changed the topic of #juju to: lonroth
=== lonroth changed the topic of #juju to: Juju
[09:32] sorry about that =D
=== kadams54 is now known as kadams54-away
[14:20] hi juju. I have managed to get my laptop into a crazy and exciting state
[14:22] I got into a state where juju calling lxc-create had this problem: http://paste.ubuntu.com/9096762/
[14:22] agent-state-info: 'error executing "lxc-create": Container already exists'
[14:23] and did call juju destroy-environment a gazillion times, lxc-ls did not list anything for the machines, so then resorted to trying to clean things up by hand
[14:24] by going around to /var/lib/juju/containers and deleting the image directories
[14:24] etc
[14:24] now I get an exciting error when I try to bootstrap: http://paste.ubuntu.com/9096721/
[14:24] now I just need to grind until I kill the big boss
[14:37] hello skay
[14:37] Are you using sudo with the lxc-ls command?
[14:37] mbruzek1: yes
[14:38] mbruzek1: it's in the pastebin. I called: sudo lxc-ls --fancy --nesting
[14:38] looking
[14:39] mbruzek1: got two pastebins.
[14:39] mbruzek1: I think I've managed to royally screw things up after trying to do manual cleanup
[14:40] skay: it looks like it, still reading.
[14:40] mbruzek1: I'll probably need to figure out how to clean up everything. drastically.
[14:40] Definitely looks like an lxc-related problem. I have not seen where lxc-destroy fails.
[14:41] OK, let's do this.
[14:41] juju destroy-environment -y local --force
[14:42] delete the images in /var/lib/juju/container/*
[14:42] mbruzek1: I did try --force, I will try again
[14:43] skay: I am sure you did, I just want to get juju to stop talking to those images
[14:43] along with deleting the images in /var/lib/juju/container/*
[14:43] Looks like you have problems destroying the images.
[14:43] mbruzek1: thanks, it does make sense to try all the steps because I must have missed something
[14:44] sudo lxc-ls --fancy
[14:45] do you see any containers running?
[14:45] skay also delete things in /var/lib/lxc/juju*
[14:45] if there is anything there
[14:45] mbruzek1: no, but it shows some as STOPPED. which I wouldn't expect. sanity check: http://paste.ubuntu.com/9097472/
[14:46] ok, is there anything in /var/lib/lxc/juju*?
[14:47] mbruzek1: yes, and I deleted it. lxc-ls no longer shows anything. that is hopeful
[14:47] I think we are getting somewhere.
[14:47] Let me check if there are any other clean-up bits I do
[14:48] OK, delete everything in /var/lib/juju/locks/*
[14:49] mbruzek1: done. and there were things in there
[14:49] * mbruzek1 nods
[14:49] OK, if sudo lxc-ls shows nothing more, I think you should try another bootstrap.
[14:50] skay: juju bootstrap -v -e local --debug
[14:50] mbruzek1: thanks! sudo lxc-ls shows nothing, so here goes
[14:50] mbruzek1: debug starts tmux, right? (I've not tried it yet.)
[14:50] mbruzek1: and I'm in tmux already. maybe I should get out
[14:50] no, it just prints out an obnoxious amount of data
[14:51] obnoxiousness ftw
[14:51] not seeing any ERRORs... yet
[14:51] OH NOES
[14:51] ?
[14:52] let me pastebin it.
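A consolidated sketch of the local-provider cleanup steps mbruzek1 walks skay through above, in the order discussed. The paths are the ones mentioned in the conversation (note the chat says /var/lib/juju/container; on most installs the directory is /var/lib/juju/containers), so verify them on your own machine before running anything with rm -rf:

    # stop juju from tracking the broken containers
    juju destroy-environment -y local --force
    # remove the container images/metadata juju left behind
    sudo rm -rf /var/lib/juju/containers/*
    sudo rm -rf /var/lib/lxc/juju-*
    sudo rm -rf /var/lib/juju/locks/*
    # confirm nothing juju-related is left, then try again
    sudo lxc-ls --fancy
    juju bootstrap -v -e local --debug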
[14:53] last line shows the error: http://paste.ubuntu.com/9097595/
[14:56] mbruzek1: there is this blog post, http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ and I didn't kill the mongod or jujud processes, so let me check that (earlier today I did look for a running juju process, but I didn't know to check for mongod)
[14:58] though, ps aux | grep mongo doesn't find anything
[14:58] skay: Yeah, I was looking at that kind of script I have on my own system; it is home-made so nothing official. let me pastebin something for you
[14:58] mbruzek1: thanks!
[14:58] http://pastebin.ubuntu.com/9097629/
[14:59] It started with Jorge's Ask Ubuntu post but I have added and removed from it
[15:00] skay: It looks like you had juju running before. Did you change anything recently?
[15:01] I can't figure out if I did before I started having the problems. last night I was pretty frustrated and figured why not upgrade to utopic.
[15:01] so I did. similar things are happening today, so I don't know how much that would have changed things, except now my 0 is utopic
[15:03] OK. So there are no juju or mongo processes running now, right?
[15:03] Did you try the clean script?
[15:04] correct. I'm currently looking through the script to see what it does, and was listing the directories to see if they have anything in them before running the script, because I'm curious whether I had cleaned up everything
[15:05] and then I'll run the script for good measure
[15:06] skay: We tried the major parts of this script; I would be surprised if it fixes your problem. So you recently updated to utopic. Do you have default-series: set in ~/.juju/environments.yaml?
[15:06] mbruzek1: yes, to precise
[15:07] skay: run the script and let me know if you see anything clean up better.
[15:07] ok
[15:13] mbruzek1: it failed, http://paste.ubuntu.com/9097960/
[15:13] I notice that the script only deletes cloud-{precise,trusty}, and I see download and trusty in that dir. would it affect this?
[15:14] and, any reason not to delete /var/cache/lxc/cloud-*?
[15:15] skay: Yes, this script is pretty old and "unofficial", so updates for utopic
[15:15] skay: do you have any mongodb in your /var/log/syslog?
[15:15] *mongodb errors
[15:15] would be needed in your case
[15:19] I found that cleaning up running lxc VMs and /var/lib/juju/ is enough for me most of the time
[15:20] avoine: http://paste.ubuntu.com/9098077/
[15:22] skay: do you have a local IP address in the 10.x.x.x range?
[15:23] avoine: ifconfig shows lxcbr0 with one
[15:23] that's ok
[15:24] avoine: if lxc-ls doesn't show any containers, should lxcbr0 still show up?
[15:24] I was suspecting a bug I had last week but it seems to be something else
[15:24] skay: yes
[15:25] skay: Have you tried to boot up an lxc node manually?
[15:26] with something like: lxc-create -t ubuntu -n ubuntutest
[15:26] avoine: I can't remember if I've tried that today, I'll do so now. btw, juju --version gives me 1.20.11-utopic-amd64 in case there is any known issue with that
[15:26] I'm at the same version
[15:27] I was searching for your problem skay and I found this bug: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1346815
[15:27] Bug #1346815: lxc-clone causes duplicate MAC address and IP address
[15:28] this in your log looks suspicious: start: Job is already running: juju-agent-sheila-local
[15:28] do you have any juju-* processes running?
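Before re-bootstrapping, the quick checks being discussed here (stale agent/mongo processes and mongodb errors in syslog) can be run roughly like this; the juju-agent-<user>-local job name follows the pattern in skay's log and will differ per machine:

    # any leftover juju or mongo processes? (the [] trick keeps grep out of its own results)
    ps aux | grep -E '[j]ujud|[m]ongod'
    # upstart jobs such as juju-agent-<user>-local ("Job is already running" comes from upstart)
    sudo initctl list | grep juju
    # any mongodb complaints in syslog?
    grep -i mongo /var/log/syslog | tail -n 50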
[15:28] avoine: That is the error message that I searched on
[15:28] to find the bug listed above
[15:29] avoine: I thought not, but will check again
[15:30] avoine: from my earlier pastebin, I showed ps aux | grep juju and it didn't show any processes other than the grep
[15:30] avoine: still nothing showing from that. is there a better way to check?
[15:31] avoine: lxc-create still running, btw
[15:32] the "Job is already running" error must be "normal" then
[15:33] I don't use lxc-clone or lxc-clone-aufs so mbruzek1's bug could be it
[15:33] maybe you could try to set them both to false
[15:34] skay: The bug I listed had some pretty easy re-create steps
[15:34] in your environments.yaml
[15:34] skay when you get a chance can we try steps 1-4?
[15:34] avoine: lxc-create just finished, sudo lxc-attach -n ubuntutest gives me: lxc-attach: attach.c: lxc_attach: 635 failed to get the init pid
[15:34] mbruzek1: I'll try to recreate the bug now
[15:35] skay: I just ran the steps on my machine and I got the "correct" output (different MACs)
[15:37] also, oops, forgot to lxc-start before attempting to attach to ubuntutest, that works as expected once I did that
[15:37] ok
[15:38] mbruzek1: I followed steps 1 through 4, and sudo lxc-ls -f shows bar and foo have different IP addresses.
[15:38] skay: what is your mongodb version? dpkg -l | grep mongo
[15:39] skay: and could you paste what's in /var/log/juju-*-local/all-machines.log
[15:39] avoine: ii juju-mongodb 2.4.10-0ubuntu1 amd64 MongoDB object/document-oriented database for Juju
[15:39] skay: then I suspect the bug is not our problem
[15:40] mbruzek1: which version of mongo do you have?
[15:41] 2.4.9-0ubuntu3
[15:41] I am on trusty
[15:41] avoine: nothing in /var/log/juju-*-local/
[15:41] skay: if you got different MAC addresses then the bug I found is not the problem
[15:42] mbruzek1: true.
[15:42] avoine: which mongo version do you have?
[15:42] same as yours
[15:42] avoine: are you on trusty or utopic?
[15:44] skay: utopic
[15:45] skay: What is your version of lxc? (Mine is 1.0.6-0ubuntu0.1) dpkg -l | grep lxc
[15:46] I have 1.1.0~alpha2-0ubuntu3
[15:47] avoine: I've got 1.1.0~alpha2+master~20141106-1929-0ubuntu1~utopic
[15:49] avoine: I'm using the ubuntu-lxc daily ppa
[15:49] avoine: perhaps I should not?
[15:49] skay is there a reason you are on the daily one?
[15:49] mbruzek1: not really
[15:50] skay: Comment #6 of the bug I listed states: This bug was fixed in the package lxc - 1.1.0~alpha2-0ubuntu2
[15:50] mbruzek1: I checked and the IPs were different... so probably that bug is fixed in daily as well?
[15:51] It looks like avoine has a later version, I don't know what yours is. The date looks later
[15:51] yes, but since we are having an LXC problem and you are on the daily build I would suspect some other lxc regression is causing this problem.
[15:52] skay: that could be it, try removing it with ppa-purge
[15:52] mbruzek1: I'll remove the ppa and stop using daily
[15:52] skay: if there is no particular reason for the daily ppa could you go back to the packaged lxc?
[15:52] mbruzek1: I'll try so
[15:53] avoine: which package installs ppa-purge? I do not have that command
[15:53] ppa-purge I think
[15:53] haha, go figure
[15:58] it still troubles me that you don't have anything in /var/log/juju-*
=== liam_ is now known as Guest9691
[16:05] avoine: If the bootstrap node is not coming up that might be why we have no logs
[16:06] avoine: I cleaned up everything, and then after that ran bootstrap, which failed. so what mbruzek1 just said is likely the reason
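For reference, a rough approximation of the duplicate-MAC check from bug #1346815 that skay ran above (not the literal steps 1-4 from the bug report, just the gist), plus the ppa-purge step, assuming the daily PPA in question is ppa:ubuntu-lxc/daily:

    sudo lxc-create -t ubuntu -n foo
    sudo lxc-clone -o foo -n bar
    sudo lxc-start -n foo -d
    sudo lxc-start -n bar -d
    sudo lxc-ls --fancy                                     # IPs should differ
    grep lxc.network.hwaddr /var/lib/lxc/{foo,bar}/config   # MACs should differ too
    # drop back to the archive lxc packages
    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:ubuntu-lxc/daily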
[16:06] * skay just joined a meeting, so not as chatty
[16:06] appreciate all the help. I just did a ppa-purge, and will try everything over again once the meeting is over
[16:09] mbruzek1: that would make sense
[16:09] maybe check in /var/log/upstart/juju-* instead
[16:19] jamespage_: https://code.launchpad.net/~hopem/charms/trusty/nova-compute/rbd-imagebackend-support
[16:20] jamespage_: as mentioned, not ready for review yet, but hopefully almost
[16:20] jamespage_: needs ceph-broker to land first
[16:26] can someone help me? I have a problem: I deployed all the services for OpenStack and made all the relations between nodes, but if I try to open Horizon I see just a white page!! This lab has been realized using a virtual MAAS server and 2 nodes. I followed this guide: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:26] is there anyone who can help me?
[16:28] oops, sorry, I wrote a bad sentence!!
[16:32] I want to say that I deployed all the services and made all the relations between nodes, but when I try to open the dashboard I see just a white page. I've also tried to ping the VM from the host using the FQDN and it works.
[16:41] can anyone help me?
[16:42] darknet_: this is either a problem in the horizon templates or apache2 is returning you a masked error
[16:42] darknet_: check the apache2 logs for any error
[16:43] I've also tried to connect to the node where juju has deployed horizon and restarted apache, but nothing
[16:43] how do I customize the default deployment name? instead of "juju-canonistack-machine-#"?
[16:44] this is a log of apache: http://paste.ubuntu.com/8615952/
[16:46] (I'm running into DNS conflicts as others have used the same name..)
[16:46] I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:47] avoine_: I've followed this guide http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[16:47] darknet_: did you go to horizon-ip/horizon ?
[16:47] hi marco, I've posted the same problem on your guide
[16:48] I'm sorry but I have to go now, I'll connect back in about 10 min.
[16:58] jose: charm review queue should be updating again
=== kadams54_ is now known as kadams54-away
[17:12] Great Success!
[17:12] answered> change your environment name.. oops
=== kadams54-away is now known as kadams54_
[17:21] hey lazyPower
[17:21] and aisrael
[17:21] What's up jcastro
[17:21] I noticed the vanilla vagrant boxes are 14.04, not 14.04.1
[17:21] any idea what's up with that?
[17:21] I think the cpc build scripts haven't been updated with the latest base image
[17:22] good catch - haven't been in vagrant land in over a month now
[17:23] utlemming: ping
[17:23] lazyPower, hey so, where do we file vagrant box bugs that are not juju related?
[17:23] is my real question
[17:23] (I'll also ensure the juju ones are on the list)
[17:23] jcastro: what do you mean by they are not 14.04.1?
[17:24] jcastro: this is a labeling thing?
[17:24] well initially it was 14.04
[17:24] and I upgraded it
[17:24] to 14.04.1
[17:24] jcastro: ack, file a bug and we'll get on it
[17:25] utlemming, we're unclear as to where
[17:25] i'm sifting through old email threads looking for that link
[17:25] i know we settled on one, but i forget which project
[17:25] I will also file a bug to add a bug link to the descriptions on vagrantcloud.com
[17:25] that should make it easier
[17:26] adeuring: Abel, were we only tracking bugs based on the vagrant supporting files, like the redirector / provisioning bits in the vagrantfile?
[17:28] jcastro: you can file a public bug against ubuntu and assign it to Odd_Bloke
[17:32] utlemming: is that the path forward we want with public bugs against the vagrant boxes (i'm thinking the vagrantcloud.com listing)? I'm still not finding the bug tracker we have for the boxes themselves - as there are several components to track, and we only settled on the redirector and other sub-components.
=== kadams54_ is now known as kadams54-away
=== kadams54-away is now known as kadams54_
[18:34] juju: do we have any sort of 'recover your juju env from this azure outage' notes going on?
[18:34] for instance, we had our CI environment in Juju, it seems to have come back but with new hostnames and juju is quite unhappy. I wonder if there's a standard "what to watch for, tips for recovering" we're putting together and getting out to the public on this?
[18:40] marcoceppi_: I'm so sorry for before, but I had to go out of the office!!!
[18:40] rick_h_: yes! i covered this last week
[18:40] lazyPower: linky!
[18:40] rick_h_: http://blog.dasroot.net/reconnecting-juju-connectivity/
[18:41] lazyPower: might I suggest a giant twitter storm referencing the azure downtime and this, then, if we're sure it's the right way to go?
[18:41] and we'll check it out for our env
[18:41] marcoceppi_: as the url I've used http://IP_address/horizon
[18:41] rick_h_: sounds good - ping me with what you discover and I'll lock and load some social media candy
[18:41] lazyPower: maybe even a juju mailing list email post
[18:42] lazyPower: I assume there's got to be > 1 juju-on-azure user doing :( today
[18:43] yeah, a global azure outage is going to be a fun run for a lot of users
[18:44] lazyPower: yea, proactive canonical response ftw. bac is going to test it out on our env and see how it goes and then we can see about getting a great message out to users
[18:44] ty for the link, nice timing :)
[18:44] its almost like i knew
[18:44] hah!
[18:44] * lazyPower waves his arms like a mystic
[18:46] avoine: thanks for all the help, bootstrap works again, and things are looking okay. mbruzek1 isn't around to thank. oh well!
[18:46] avoine: I did end up rebooting since it didn't work right after ppa-purge and I figured, what the hell, why not reboot
[18:49] skay: really happy to hear we got you sorted.
[18:49] and I'll pass along your well wishes to mbruzek when he returns
[18:49] skay: great news!
[18:49] lazyPower: I am very grateful. I was almost ready to resort to completely blowing away my laptop and starting over
[18:49] ooo, tricky
[18:49] glad you didn't have to resort to such extreme measures
[18:50] lazyPower: maybe I should see if I can reproduce the problem in a friendly way in case I uncovered something in a daily build
[18:50] but I don't have time for it right now
[18:50] and also I feel a bit antsy at the idea since I'd rather do that on a different computer
[18:50] skay: i can't say that i blame you there :)
[18:51] possibly a vagrant run/build would be in order to test that so it's isolated
[19:17] lazyPower: hey, thanks for the doc about reconnecting juju
[19:18] bac: np, did that fix ya up?
[19:18] lazyPower: our problem seems a little more complicated. they machine that is supposed to be our state server was not brought back up
[19:18] s/they/the/
[19:18] ah, yeah - if your state server isn't back online - you're hosed
[19:18] azure has it marked as created but it isn't running
[19:18] until the state-server re-appears.
[19:19] lazyPower: yeah, it isn't going to just appear and i don't know how to bring it back
[19:19] hmmm.. do you have a snapshot you can re-deploy?
[19:19] lazyPower: no, no snapshot
[19:19] and/or was your state-server HA-enabled?
[19:19] nope
[19:19] oh man :(
[19:20] i have bad news
[19:20] i think we'll be recreating it.
[19:20] you're going to need that database on the api server for things to normalize - otherwise you're registering units the state server knows nothing about.
[19:20] lazyPower: yeah, we'll just have to redeploy.
[19:28] rick_h_: sorry to hear about the trouble - however social media candy has been deployed. Can I get some syndication lovin on that?
[19:28] lazyPower: sure thing, will look for it
=== CyberJacob|Away is now known as CyberJacob
=== kadams54_ is now known as kadams54-away
[19:52] pip question... I have a local directory with wheels in it, let's call it /path/to/dependencies. and I've hacked python-django to accept extra pip args in hook.py (versus ansible, which I'm not using at the moment). do I need to mount a shared folder where the dependencies should live? or will the charm "magically" be able to use my local folder?
[19:52] my pip_extra_args is "--no-index --find-links=/path/to/dependencies"
[19:53] and the python-django hack is http://bazaar.launchpad.net/~codersquid/+junk/pure-python-with-tgz/revision/70
[19:53] I'm not going to make a MR based off that, it's just a hack
[19:54] skay: do you plan to share your wheels package cache with other instances?
[19:55] avoine: no
[19:56] avoine: I was about to say, currently pip is not finding the files
[19:56] I'm trying to dig up the log, I had it in a window a moment ago
[19:57] avoine: I get: ValueError: unknown url type: /path/to/dependencies
[19:58] pip can handle the path when I run it locally
[19:58] skay: what is your complete pip command?
[19:58] avoine: will the juju log echo that? let me scroll back
[20:00] avoine: the juju log does not echo that, I will add something to echo the command. I know what I think is the complete command, but in reality I should print it out to see what juju thinks it is
[20:01] skay: it might be that the version of pip in the vm is too old
[20:02] skay: try to add:
[20:02] pip_additional_packages: "pip"
[20:02] in your juju config file
[20:02] avoine: okay
[20:06] * skay is rerunning everything
=== roadmr is now known as roadmr_afk
=== kadams54-away is now known as kadams54_
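A rough sketch of setting the pip options discussed above on a deployed python-django service. pip_extra_args only exists in skay's hacked branch (it is not a stock charm option), pip_additional_packages is the option avoine suggests, and /path/to/dependencies is a placeholder; keep in mind the wheel directory has to be reachable from inside the unit, not just on the host:

    # upgrade pip inside the unit first, then point it at the local wheel dir
    juju set python-django pip_additional_packages="pip" \
        pip_extra_args="--no-index --find-links=/path/to/dependencies"
    # if a hook already failed, re-run it after changing the config
    juju resolved --retry python-django/0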
[20:25] can someone help me? I have a problem: I deployed the modules to get OpenStack on my infrastructure. I've made all the relations between nodes, but if I try to open Horizon I see just a white page!! This lab has been realized using a virtual MAAS server and 2 VM nodes. I followed this guide: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
=== kadams54_ is now known as kadams54-away
[20:31] can anyone help me?
[20:32] darknet_: how long have you 'waited' for everything to start?
[20:33] darknet_: sometimes a lot of work is hidden behind the 'juju relate ...' calls; I know a recent video I saw for deploying openstack took ~15 minutes or something..
[20:34] sarnold_: on juju-gui all modules and relations are green.
=== roadmr_afk is now known as roadmr
[20:35] sarnold_: anyway I waited but nothing, the link http://hostname/horizon presents a white page!!!
[20:37] darknet_: green relations don't necessarily mean the relationships have completed running
[20:37] do you see any output from the units under relation when you run juju debug-log?
[20:39] lazypower_: but if I run the command "juju status -e maas" I see that everything is started!!!
[20:40] darknet_: that just means the charm has reached the started hook - as juju is event driven, and relationships can be called after the started hook, it can be a bit misleading
[20:40] darknet_: did you see any output from the units under relation when you ran juju debug-log?
[20:41] darknet_: also sorry for the confusion there - we've had some discussions about this on the mailing list recently - about charms and hooks providing more accurate reporting
[20:41] I didn't try to run that.
[20:43] I promise you that tomorrow I'll post the log for you
[20:43] i can do that now
[20:44] darknet_: juju debug-log should give you immediate feedback on what's currently happening in the system. if you have the time, a quick check will show whether we need to start debugging or if this is a time to be patient while juju finishes its housekeeping.
[20:44] will you be here tomorrow?
[20:44] darknet_: i will be here from ~9am EDT to 5pm EDT, M-F, most weeks.
[20:44] er, EST - sorry, timeshift happened and i keep forgetting to update my timestamp.
[20:45] Hi, would somebody mind preventing my charmers membership from expiring, please?
[20:45] ok, let's do that, and tomorrow I'll contact you
[20:46] sounds good darknet_
[20:46] just in case, I'll send you a private message
[20:46] marcoceppi: gnuoy is running out of time, can you renew him for me please?
[20:46] thanks lazyPower
[20:47] my pleasure
[20:47] * lazyPower hat tips
[20:47] lazyPower_: just one technical question!
[20:47] darknet_: i'm all ears
[20:49] why in MaaS do I have to provide the ssh keys of the host machine, of the region controller, and of a maas user created on the RC?
=== mjs0 is now known as menn0
[20:49] lazyPower_: and also Juju
=== menn0_ is now known as menn0
[20:51] lazyPower_: I'm asking that because every time I want to run the whole (virtual) infrastructure I have to use the same network connection, otherwise the VMs from MaaS don't run
[20:52] darknet_: i'm not understanding what you're asking me - let me try to ask what i think you're asking.
[20:52] You're questioning why you have to register your ssh keys in the region controller of MAAS?
[20:53] yes! and why, to run the VM nodes allocated on MaaS, do I have to use the same connection?
[20:54] darknet_: So long as you have a user on the MAAS region controller - and have the api credentials obtained from the RC - juju will automatically register the ssh keys that it uses with any nodes spun up. This key exchange happens transparently.
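Pulling together the checks lazyPower suggested to darknet_ above, roughly in order; openstack-dashboard/0 is a guess at the unit name serving horizon, so substitute whatever juju status actually shows for your deployment:

    juju status -e maas          # "started" only means the start hook has run
    juju debug-log -e maas       # watch the relation hooks actually completing
    # then look at apache on the unit serving horizon
    juju ssh -e maas openstack-dashboard/0 'sudo tail -n 50 /var/log/apache2/error.log'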
[20:54] darknet_: when you ask why the VMs are using the same connection - are you referring to the same network device? This is highly dependent on how you have your MAAS cluster set up, and whether this is physical MAAS vs virtual MAAS
[20:55] i'm assuming it's vmaas - as you're only using 2 machines per marco's post, right?
[20:55] perfect, but if the host where I've installed MaaS changes its IP address, I can't launch the nodes via MaaS
[20:55] darknet_: if your machine has 2 network devices, that is the recommended path to use - 1 for public traffic access, and the second as the private network (or management network)
[20:56] your public network bridge should be bridged into your VM cluster; the private network can very well be a virtual network created inside of your KVM configuration
[20:56] ah, here is my problem!!!!
[20:56] my RC has to have 2 interfaces
[20:56] Networking and VMAAS is a very tricky thing - the reasoning being MAAS recommends you run the MAAS DHCP server and DNS - this is the necessity for a private network that exists only within the vlan of that cluster.
[20:57] your public network won't have the same requirement, and you're safe to use whatever DHCP/DNS settings are incoming from your bridged network on that particular interface
[20:58] it will be a bridged-mode networking connection, and helping you get that set up is a bit beyond my scope of knowledge - i've done it a few times but it's highly dependent on how your network is set up. The best I can offer from where I'm sitting is encouragement and answers to very specific questions.
[20:58] let me explain my lab to you.... I have a host with ubuntu 14.04 LTS with kvm and virt-manager, and with it I've created a VM (MaaS) with just one interface.
[20:59] darknet_: the first step to doing any of this is creating a bridged interface - do you know how to do that?
[20:59] I've created a new virtual network
[21:00] (1.1.0.0/24)
[21:00] with virt-manager, and I've used that as the network for MaaS,
[21:03] lazyPower_: and for the 2 VMs,
[21:08] darknet_: i just got pulled into a meeting - so far sounds good.
[21:09] replies will be latent
[21:09] lazyPower_: thanks a lot for your support, see you tomorrow with the log!!!
[21:09] best of luck darknet_, cheers
=== jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || News and stuff: http://reddit.com/r/juju
=== menn0_ is now known as menn0
[22:21] jose: Congrats on your first solo promulgation man. May the juju powers be with you.
[22:22] thanks! :)
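Circling back to the bridged interface lazyPower describes above for the VMAAS public network: a minimal example for Ubuntu 14.04 with ifupdown and bridge-utils. Interface names are placeholders and the existing eth0 stanza needs switching to manual, so treat this as a starting point rather than a drop-in config:

    sudo apt-get install bridge-utils
    # edit /etc/network/interfaces: set the existing NIC to manual and add a bridge on top of it:
    #   iface eth0 inet manual
    #
    #   auto br0
    #   iface br0 inet dhcp
    #       bridge_ports eth0
    #       bridge_stp off
    #       bridge_fd 0
    sudo ifdown eth0 && sudo ifup br0   # or simply reboot after editing
    # then point the VMs' public NIC at br0 in virt-manager, and keep the
    # 1.1.0.0/24 virtual network as the MAAS-managed private side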