[02:46] So I had an instance running on AWS which I started simply to test the constraints and make sure it was starting the right instance type. Once I had verified that it was, I immediately used destroy-machine and left it. A few hours later I noticed the machine was still running, and after repeated destroy-machine commands it didn't go away. I eventually terminated it from the AWS console, but it still shows up in juju status. Is there a way I can tell juju to forget about the machine?
[02:52] kenn: nope, sorry
[02:52] not if the delete failed
[02:52] sorry
[02:52] destroy failed
[02:52] can you give some more information
[02:52] like the output of juju status
[02:52] hang on
[02:52] when you were trying to destroy the machine
[02:53] current output of juju status: http://pastebin.com/rj0g0UAi
[02:54] before I terminated the machine on AWS, instance-state on machine 1 said something other than missing. Sorry I didn't note down any of the information, but if it happens again I'll pick up the logs and such as well
[02:54] kenn: was a machine created?
[02:55] hmm, I see dying, it probably was created
[02:55] yeah, it has an instance
[02:55] I created machine 1 and requested it be destroyed very soon after that. The instance was created in Amazon, and I could also SSH to it
[02:55] it said dying when I realised it hadn't shut down 2.5 hours later
[02:56] that's when I killed it in AWS
[02:56] ok, there is currently no way to remove the record from juju
[02:56] sorry
[02:56] ok, cool. I'll leave it around for when I remake the environment
[02:56] thanks for your help.
Next time I will grab more info
[02:58] np, sorry I wasn't able to do more
[02:58] oh actually, just noticed one of my terminals still has the output of a tail -f on machine 1 for /var/log/juju/machine-1.log: http://pastebin.com/wuW91Eur
[02:59] 2013-09-13 02:34:33 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://ec2-23-23-45-19.compute-1.amazonaws.com:17070/: lookup ec2-23-23-45-19.compute-1.amazonaws.com.: no such host
[02:59] wow
[02:59] impossible
[03:00] best diagnostic ;)
[03:00] that machine failed to boot properly
[03:00] no idea what happened to it
[03:00] ec2 plays the law of large numbers
[03:01] oh wow it's actually failing to connect to itself?
[03:01] a certain % of machines spawned are duds
[03:02] yeah, strange things happen occasionally
[03:02] thanks for the help davecheney, I'll leave the entry in juju alone until I rebootstrap, which I will at some point anyway
[03:03] lol I missed the "no such host" bit, that's funny, yeah, law of large numbers
[17:55] Hi guys - I put this request already to Ask Ubuntu yesterday, but is there any way to track progress with add-relation in debug-hooks? Or is it just as easy to look straight at the debug-log?
[18:16] kurt_: Okay
[18:16] kurt_: I've seen you ask this a few times, but I've been too busy to respond
[18:16] kurt_: I'm making time to get you resolved, because the answer is yes
[18:16] marcoceppi: thanks!
[18:17] kurt_: debug-hooks will stop at every hook, so yes, you should be able to just trap the hook when you add the relation. But I feel like you're experiencing an issue stopping that from happening
[18:18] marcoceppi: I'm not seeing any output in the debug-hooks window
[18:18] kurt_: you should see a byobu/tmux window with numbers at the bottom
[18:18] kurt_: right?
[18:18] marcoceppi: yup, got that
[18:19] kurt_: have you already run juju add-relation from your machine?
[18:19] marcoceppi: yes, which gives the error agent-state-info: 'hook failed: "relation-changed"'
[18:19] which I believe is something to do with ssh keys
[18:19] kurt_: so right now you have the relation-changed error and debug-hooks open?
[18:20] marcoceppi: no, just a general window
[18:20] kurt_: Right, but the unit is currently in an error state, right?
[18:21] if you run juju status in another terminal window from your machine it shows hook failed... right?
[18:22] yes
[18:22] kurt_: PERFECT! You're just one step away from making this rock
[18:22] I do "watch juju status" which gives me the error above
[18:22] kurt_: in a terminal window other than the debug-hooks window, run `juju resolved --retry `
[18:23] should I have the debug-hooks window open yet?
[18:23] kurt_: yes, you should
[18:23] k, hang on
[18:23] kurt_: cool
[18:24] oh actually I do, it was hidden
[18:24] What happens: you've got debug-hooks open, then you run `resolved --retry`. It will attempt to run that hook again, but debug-hooks will catch it and put you in window 1 on the bottom, which should be X-relation-changed
[18:25] kurt_: at that point you can run hooks/X-relation-changed or any hook and watch the output live
[18:25] kurt_: you can even edit the hooks on the machine and run them over and over again
[18:25] until you resolve the issue, just make sure you copy your changes to your local repo :)
[18:25] it doesn't do anything
[18:25] all I see is a root prompt
[18:25] root@amcet:~#
[18:25] kurt_: are you debug-hooks'd into the right unit?
[18:26] juju debug-hooks nova-cloud-controller/0
[18:26] have you run `juju resolved --retry nova-cloud-controller/0`?
[18:27] yep
[18:40] kurt_: Okay, debug-hooks was added in 1.13.1
[18:41] kurt_: which is why it's not working on your 1.12.0 deployed nodes
[18:42] kurt_: if you're willing to give it a shot, there's a juju upgrade-tools command, which will upgrade the version of juju on all your nodes. I don't know if there's an upgrade path between 1.12 -> 1.13; I know they try to do them between stable versions, i.e. 1.10 -> 1.12 -> 1.14
[18:42] kurt_: At worst, you'll have to destroy and try again
[18:42] kurt_: the command I think you'll want to use is `juju upgrade-juju --dev --upload-tools`
[18:43] which will upload the latest tools to your maas bucket and select the latest dev release, 1.13.3
[18:43] kurt_: I've not used the command before, so I'm not sure how long this will take or what success looks like (other than agent-version being updated)
[18:44] I'll try that....
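Pulled together, the two-terminal workflow marcoceppi walks kurt_ through looks roughly like this. This is a sketch, not verified output: the unit name and flags are the ones from the chat, it assumes agents at 1.13.1 or newer, and it needs a live bootstrapped environment to run.

```shell
# Terminal 1: attach to the failing unit; this opens the byobu/tmux
# session that will trap each hook as it fires.
juju debug-hooks nova-cloud-controller/0

# Terminal 2: re-queue the failed hook so the debug-hooks session traps it.
juju resolved --retry nova-cloud-controller/0

# If the agents are too old for debug-hooks (< 1.13.1), upgrade them first,
# then watch `juju status` until agent-version updates (1.13.3.1 in the chat):
juju upgrade-juju --dev --upload-tools
```

Note that while the agents restart during the upgrade, `juju status` and most other commands will hang, since nearly all of them talk to the state server.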
[18:44] kurt_: I'd watch juju status like you are until the agent-versions are >= 1.13.3 (could be 1.13.3.1, not sure of the exact version)
[18:44] ok, give me a few minutes...
[18:44] once that's done, do the same steps. Launch debug-hooks, run resolved --retry, wait for it to trap the hook
[18:44] kurt_: sure, np
[18:45] cheers
[18:47] kurt_: feel free to ping me if something unexpected pops up
[18:47] thanks marcoceppi
[18:51] marcoceppi: juju status is now dead
[18:52] well, returns nothing
[18:52] just spinning
[18:52] kurt_: it might be in the process of restarting the state server
[18:52] ok, same thing with debug-log
[18:52] kurt_: once juju status is broken, most other juju commands will be
[18:53] ah, here we go... unfortunately, still at 1.12
[18:53] kurt_: almost all of them rely on connecting to the state server
[18:53] kurt_: are any of the other nodes updated, or are all of them still at 1.12?
[18:53] all still at 1.12
[18:54] kurt_: :\ well, it was worth a shot
[18:54] but I see stuff going on in debug-log
[18:54] lots of stuff
[18:54] kurt_: oh, maybe it's still in the process of updating then
[18:54] yes
[18:54] kurt_: there is still hope
[18:54] let's give it a few minutes, I will report back
[18:54] kurt_: awesome!
[18:57] marcoceppi: WOOT! WOOT! agent-version: 1.13.3.1
[18:57] kurt_: AWESOME!
[18:58] kurt_: You should be able to play with debug-hooks now
[18:58] on it
[18:58] stuck here...
[18:58] nova-cloud-controller/0:cloud-compute-relation-changed % [18:58] kurt_: that's not stuck [18:58] that's the hook [18:58] ah..ok [18:58] if you look at the bottom [18:59] you'll see you're in tab 1 [18:59] yup [18:59] so if you do an `ls`, you'll see you're in your root charm [18:59] yep [18:59] kurt_: you should be able to run `hooks/cloud-compute-relation-changed` and watch the output and change stuff [18:59] kurt_: this is also an environment in which you can run the special juju commands, like relation-get, config-get, open-port, etc [19:00] Just like with tab 0, when you're done playing with the hook just type `exit` [19:00] and it'll proceed on with the rest of the queued events [19:00] trapping each one in succesion [19:01] ok, I saw some of that in the documentation. this is helpful. what is tmux command for opening new win? guess its time to RTFM [19:01] :) [19:01] kurt_: Ctrl + A, c [19:01] right - been a while since I've worked with tmux [19:01] Ctrl + A is the escape sequence, type it, thenthe command which is 'c' [19:01] ok, back to the core problem which I saw before [19:01] kurt_: cool, yeah, c creates space moves between [19:02] its doing a getaddrinfo - but it should know the address already since it should be querying the name server which should be the root node [19:02] natefinch: seems upgrade-juju works as advertised [19:03] kurt_: getaddrinfo for itself or for the other unit it's connected to? [19:03] * marcoceppi reads the hook [19:03] are you still in my screenshare? 
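Condensed, the session marcoceppi describes inside the trapped hook window looks something like the sketch below. The unit and hook names are the ones from the chat; the commands only exist inside a juju hook environment, so treat this as illustrative rather than exact output.

```shell
# The prompt inside the trapped hook's tmux window (tab 1) looks like:
#   nova-cloud-controller/0:cloud-compute-relation-changed %

ls                                      # you start in the charm's root directory
hooks/cloud-compute-relation-changed    # run the failing hook by hand, watch it live

# The hook environment also exposes the special juju tools, e.g.:
relation-get                            # data the remote unit set on this relation
config-get                              # the service's current configuration

exit                                    # done: juju proceeds to the next queued
                                        # event, trapping each one in succession
```

Because the hooks live on the unit's filesystem, you can edit and rerun them repeatedly until the error is resolved; just remember to copy any fixes back to your local charm repo.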
[19:03] kurt_: oh, doh
[19:03] kurt_: let me join again
[19:03] ok
[19:12] marcoceppi: great :)
[19:47] kurt_: https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1225160
[19:47] <_mup_> Bug #1225160: cloud-compute-relation-changed fails, getaddrinfo error
[19:47] Launchpad bug 1225160 in nova-cloud-controller (Juju Charms Collection) "cloud-compute-relation-changed fails, getaddrinfo error" [Critical,Confirmed]
[19:47] holy cow, we don't need that many bots
[19:48] marcoceppi: haha... I bet they could easily get into an infinite loop....
[19:49] natefinch: I was really worried that ubot5` was going to start up again.
[19:49] natefinch: I wonder if you pasted a URL to another bug in the title of a bug if it would set them off
[19:51] natefinch: I was thinking about trolling them that way.... but I'd feel bad if they just spammed the channel forever
[20:08] Hi, I'm trying Juju with DevStack and I'm getting this when I run juju bootstrap:
[20:08] error: cannot query old bootstrap state: failed to GET object provider-state from container juju-674f89e24929d54fb4376c9bfad71193 caused by: cannot create service URLs caused by: the configured region "RegionOne" does not allow access to all required services, namely: compute, object-store access to these services is missing: object-store
[20:09] Do I need to run another type than "openstack" for devstack?
[20:09] frakt: I've not tried devstack and juju yet
[20:09] frakt: what does `juju version` say? just for reference
[20:09] 1.12.0-precise-amd64
[20:09] frakt: okay, cool
[20:10] frakt: if you're just looking to play with juju, and you have an Ubuntu machine already, you could use the local provider. It uses LXC to create a cloud environment on your machine.
Since charms work on all the different providers, you'll get the same "juju" experience without having a cloud available to you yet
[20:11] Ok I'll try
[20:13] marcoceppi: thanks
[20:14] frakt: https://juju.ubuntu.com/docs/config-local.html
[20:15] frakt: you'll just need to install the `juju-local` package, iirc
[20:15] hi there. I'm trying to read up on deploying openstack on ubuntu and trying to wrap my head around juju and maas etc. Is there a guide that explains the relationships and the process better? thanks
[20:15] Thanks,
[20:17] nosleep77: we're in the process of getting our docs super stellar to explain the whole juju, openstack, maas relationship, but they're still a bit behind. There are a few blog posts out there, and I'd be happy to answer any specific questions you have
[20:18] marcoceppi: thanks, I'm still doing some initial reading, so if you can tell me about those blogs that would be awesome. I basically wanted to do a simple one or two node openstack deployment to get a taste of things
[20:20] nosleep77: well, there's no real "simple" openstack deployment, at least not with one or two nodes. Juju's default mode of operation is one unit of a service per machine. So following that, and the openstack services we have "charmed", you'll need a minimum of 7 pieces of hardware to stand up openstack at this time. There are things like containerization and such that will allow you to co-locate services on one physical machine, but they're not fully implemented yet.
[20:20] nosleep77: let me find you some further reading
[20:22] marcoceppi: thank you; however RDO has packstack which can be done on a single node, then there's also devstack and stackops and I'm sure others that I can use as well...
I thought maybe something like that existed for ubuntu server, since I generally like ubuntu
[20:22] nosleep77: so, with juju you /can/ do openstack all on one machine, it's just not really tested or recommended at the moment
[20:23] nosleep77: here's a video of deploying openstack with the Juju GUI: http://www.youtube.com/watch?v=mspwQfoYQks
[20:24] https://javacruft.wordpress.com/2013/06/25/virtme/
[20:24] marcoceppi: that's not a problem at all.. I can still try to do it and see how it goes... I do have resources to bring up 3-4 VMs and my hypervisor passes the virt cpu flags, so I should be good.. not real hardware but it should work, I think
[20:24] marcoceppi: thank you
[20:24] nosleep77: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[20:25] nosleep77: feel free to ask any questions you may have here, on askubuntu.com with the "juju" tag, or on our mailing list juju@lists.ubuntu.com
[20:25] oh fantastic... thanks! for some reason this link wasn't coming up in the google search.. thank you
[20:26] nosleep77: finally, if you haven't already, here are our user docs: http://juju.ubuntu.com/docs
[20:27] There's also a whole host of videos in the video section of the site, nosleep77: https://juju.ubuntu.com/resources/videos/
[20:27] thanks, that gives me enough for a couple of days :)
[20:28] nosleep77: cheers!
[20:28] marcoceppi: cheers!
[20:33] frakt@ubuntu:~$ sudo juju bootstrap
[20:33] error: no reachable servers
[20:33] k? :)
[20:38] frakt: run `sudo juju destroy-environment`, then `sudo juju bootstrap -v --debug`, and paste the output to http://paste.ubuntu.com
[20:39] http://paste.ubuntu.com/6103261/
[20:40] frakt: what version of ubuntu are you on? 12.04?
[20:40] yes
[20:40] frakt: did you add the ppa:juju/stable ppa?
[20:41] yes
[20:41] gonna try on my other machine, sec
[20:41] frakt: run sudo juju destroy-environment again, then sudo apt-get update && sudo apt-get install mongodb-server
[20:41] ok
[20:41] frakt: You just need a more recent version of mongodb; the precise version isn't compiled with ssl support. The mongodb in the stable ppa will fix that
[20:42] ok, yeah, I did things in the wrong order I guess :)
[20:47] so there's supposed to be a web ui for juju? https://juju.ubuntu.com/resources/the-juju-gui/
[20:47] frakt: yup, it's actually packaged as a charm, so it's optional.
[20:47] ah I see!
[20:47] frakt: if you've got juju status saying that there's a machine 0 ready to go
[20:47] frakt: you can add it with `juju deploy juju-gui`, and once that's in a started state, you can navigate to the address and use it from there on out
[20:48] cool, I'll give it a try
[20:49] frakt: the GUI's great and does almost everything the command line does. It does tend to lag a little behind new features, but they're always working to close the gap
[21:05] http://i.imgur.com/zB59lqB.png
[21:05] I guess it's a success! :)
[21:06] frakt: Yup! nice!
[21:06] frakt: just need to fill out the last bit of stuff on that setup page and you'll have a running WordPress install
[21:06] yeah
[21:06] frakt: the local provider doesn't have a firewaller, so in the future you'll need to expose the wordpress service before being able to access it
[21:07] frakt: there are a few minor caveats between the local provider and the other cloud providers, but they're minor in nature
[21:07] I did juju expose wordpress but it's still on a different LAN than my desktop computer, so I use an http tunnel to access it
[21:07] is there an easy way to expose it to my 'real' lan?
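The local-provider setup and GUI deployment frakt works through above condenses to a few commands. This is a sketch for the juju 1.12-era tooling on Ubuntu 12.04 discussed in the chat; the `juju-core` package name is an assumption for that era, while the PPA, `juju-local`, `mongodb-server`, and `juju-gui` names come from the conversation.

```shell
# On Ubuntu 12.04: the stable PPA carries an SSL-enabled mongodb,
# which the precise archive version lacks.
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-local mongodb-server

# Bootstrap the LXC-backed local environment (the local provider needs sudo).
sudo juju bootstrap

# Once `juju status` shows machine 0 started, add the optional web UI:
juju deploy juju-gui
juju expose juju-gui   # no-op locally (no firewaller), required on real clouds
juju status            # find the juju-gui unit's address and browse to it
```

Installing mongodb from the PPA before bootstrapping avoids the "no reachable servers" error frakt hit; if you bootstrapped first, run `sudo juju destroy-environment` and bootstrap again after the upgrade.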
[21:08] frakt: not without tweaking the lxc configuration; it's only available to the host machine running the juju commands
[21:08] frakt: let me see if there's a blog post on it, if not I'll look into it this weekend
[21:09] frakt: here's a bug report and a very brief answer on askubuntu
[21:09] https://bugs.launchpad.net/juju-core/+bug/1064263
[21:09] <_mup_> Bug #1064263: Allow local containers to be exposed on the network
[21:09] Launchpad bug 1064263 in juju-core "Allow local containers to be exposed on the network" [Undecided,New]
[21:10] http://askubuntu.com/questions/139208/how-can-i-access-local-juju-instances-from-other-network-hosts
[21:10] calm down, ubuntu bots!
[21:11] kurt_: I'm about to head out for the night; it's a Friday here in the US, so most of us have left for the weekend. If you have questions and no one's around to answer, http://askubuntu.com is a great place to stick them, or you can mail the juju mailing list: juju@lists.ubuntu.com
[21:11] as well as leave them here, I'll be back in a few hours
[21:23] marcoceppi: cool. Thanks again for your help.
[21:24] does anyone have an idea on how to force the juju provisioner to start in 1.13.3? I'm running into the bug where the api starts first and I can't provision new machines
[22:42] does anyone have a fix for the 17070 provision errors?
[22:42] in the bug report it states that it should eventually resolve itself, but it still hasn't come online