=== bradm_ is now known as bradm [00:55] hey all [00:55] per http://www.dreamhost.com/newsletter/1113.html#a2 [00:55] dreamcompute is now "officially in beta" [00:55] so you could sign up at http://www.dreamhost.com/cloud/dreamcompute/ [00:55] and presumably get an account and then point juju at it [00:56] (i say presumably because i have not gotten a response to my signup from earlier this morning) [00:56] o/ smoser [01:01] That's good news! [01:50] smoser: sweet, I'll give it a go as soon as I get creds [02:00] marcoceppi hey wb :) [02:01] o/ [02:01] now go review my Ghost charm :P [02:02] * hatch snickers and runs away [02:03] hatch: it's not in the queue, go put it in the queue [02:03] marcoceppi: I am back now. So in regards to network infrastructure, could you point me to a document that explains how networks are set up using charms? [02:05] hatch: If it's not listed here: http://manage.jujucharms.com/tools/review-queue then I'm not going to review it [02:05] hmm [02:06] marcoceppi so https://juju.ubuntu.com/docs/authors-charm-store.html#submitting is not a complete list of steps then? [02:06] hatch: it is, link to the bug [02:07] I'll see what's missing [02:07] * marcoceppi is working on a juju charm submit command to make this process a little more automated [02:07] https://bugs.launchpad.net/charms/+bug/1229377 [02:07] <_mup_> Bug #1229377: Charm needed: Ghost blogging platform [02:07] I don't see where it says to add it to the review queue....or how for that matter :) [02:08] hatch: Did you add the "charmers" group to the bug? [02:08] hatch: huh, this should be in the queue. Charmers is watching the bug and it's in a new or fix committed status. [02:08] lazypower yup [02:08] hatch: I'll open a bug and make a note to review this first tomorrow [02:08] marcoceppi cool, so I did everything correctly then?
[02:09] I'm not really too concerned I just want to make sure that the documentation is up to date so outsiders don't run into issues [02:09] hatch: looks like it, I'll ping charmworld and figure out why this didn't get scraped [02:09] great thanks [02:10] marcoceppi in that list it says "store error on ghost" [02:11] hatch: that's something different, but equally interesting [02:11] oh :) cool well I was just joking about rushing to get it approved, take your time [02:12] hatch: too late, rush order already processed and your card has been debited $125 [02:12] awwww man [02:13] there's a cancellation fee of $200 [02:13] man you must be taking business lessons from the government [02:13] haha [02:17] well thanks in advance, lemme know if you need any help with anything [02:19] hatch: hey, I got it to show in the queue [02:21] hatch: you need to assign the bug to someone, preferably yourself. I'll update the docs. Hopefully `juju charm submit` will be easy enough that the majority of instructions can be condensed to that one command [02:21] * marcoceppi searches for other charms that may have fallen through the cracks [03:02] marcoceppi ahh interesting thanks === jam1 is now known as jam [05:05] Hi there [05:05] does anybody know how to reuse a machine once issued a juju destroy-machine command? [05:06] it seems even though this machine goes back to ready status in MAAS, once picked up again for a different service provision it just gets stuck and does not do anything [05:07] Luca__: that is weird [05:07] it shouldn't do that [05:07] after destroy-machine [05:07] can you confirm that the machine is no longer listed in juju status ?
[05:09] davecheney: I confirm it is no longer in the juju status [05:09] my understanding is that it should be reused and go through the OS reinstallation [05:09] yup [05:09] the only thing which could prevent that is constraints [05:10] /var/log/juju/machine-0.log on machine-0 will have the details of what is going on [05:10] I am checking the machine now, status is pending, however not doing any installation [05:10] let me check [05:12] what exactly should I look for? there is a bunch of logs [05:13] 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0 [05:14] it will say worker/provisioner [05:17] nothing about worker/provisioner [05:17] the messages I can find are worker/firewaller [05:17] 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/0 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/1 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0 2013-12-02 05:02:03 DEBUG juju firewaller.go:495 worker/firewaller: started watching machine 13 2013-12-02 05:02:03 D [05:20] Luca__: nothing from the provisioner ? [05:20] in the whole file ?
[05:21] yes, nothing in the whole file [05:21] but I could deploy mysql and ceph [05:22] hold on [05:22] filtering for provisioner returns some [05:22] 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "13" pending provisioning [05:23] 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "14" pending provisioning [05:23] 2013-12-02 05:02:03 DEBUG juju.provisioner provisioner_task.go:303 Stopping instances: [nfqwq.master:/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ ac69q.master:/MAAS/api/1.0/nodes/node-ee42a908-59d4-11e3-9087-525400378304/ pwq4h.master:/MAAS/api/1.0/nodes/node-472158c0-59d6-11e3-9087-525400378304/] [05:23] 2013-12-02 05:02:05 INFO juju.provisioner provisioner_task.go:367 started machine 13 as instance /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ with hardware [05:24] but it does not seem to be doing anything [05:27] Luca__: ok, time to check the maas log [05:27] if maas says that machine is in ready state [05:28] then it will have a record of why it refused to start a machine [05:28] from the MAAS web UI the machine was ready [05:28] i think what has happened is things happened too fast [05:28] 3 seconds between maas stopping the machine then reusing it [05:29] not a lot of time for all the bookkeeping maas has to do [05:29] you could try deleting the service, unit and machine [05:29] or actually just delete the unit [05:29] davecheney: maas shouldn't have 3s of work to do [05:29] juju remove-unit $UNIT [05:29] ok [05:29] juju destroy-machine 14 [05:29] then watching the log [05:29] ok [05:29] waiting til you see maas remove the unit [05:30] then juju add-unit $SERVICE [05:30] MAAS logs or JUJU logs?
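The log-filtering step being discussed can be sketched as a grep over the provisioner's entries. The sample file below is a hypothetical stand-in for /var/log/juju/machine-0.log, built from the lines quoted in this conversation, since the real file only exists on a bootstrapped machine-0:

```shell
# Hypothetical stand-in for /var/log/juju/machine-0.log, built from the
# lines quoted above; on a real deployment you would grep the file on
# machine-0 directly.
cat > /tmp/machine-0.log <<'EOF'
2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0
2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "13" pending provisioning
2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "14" pending provisioning
2013-12-02 05:02:05 INFO juju.provisioner provisioner_task.go:367 started machine 13 as instance /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ with hardware
EOF

# Drop the noisy firewaller chatter and keep the provisioner's view,
# which is where a machine stuck in "pending" shows up.
grep 'juju.provisioner' /tmp/machine-0.log
```

The same filter applied to the real log is what surfaces the "found machine ... pending provisioning" lines above.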
[05:32] 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/0 [05:32] 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/1 [05:33] the juju machine-0 log for a start [05:33] but watching the maas logs is recommended as it can be very terse otherwise [05:36] it returns some errors but not sure whether it is related [05:37] 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3057,"Type":"Provisioner","Request":"Life","Params":{"Entities":[{"Tag":"machine-14"}]}} [05:37] 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3057,"Response":{"Results":[{"Life":"dying","Error":null}]}} [05:37] 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3058,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-14"}]}} [05:38] machine 13 and 14's life is dying [05:38] ok, those aren't errors [05:39] have machine 13 and 14 returned to maas ready state ? [05:39] 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}} [05:39] nope [05:39] it's possible they haven't been properly released back to maas so now nobody owns them [05:39] ok, this isn't good [05:39] maas has lost track of the machine [05:39] state is still dying [05:39] which version of juju are you using ? [05:39] which version of maas are you using ? [05:40] maas: 1.4+bzr1701+dfsg-0+1718+212~ppa0~ubuntu12.04.1 [05:41] juju-core 1.16.3-0ubuntu1~ubuntu12.04.1~juju1 [05:41] on 12.04.03 [05:41] hmm, i think that maas install is quite old [05:43] hmm, actually, error: null is fine [05:44] i just checked the code [05:44] ok [05:44] so, maas gave you back the same machine you deleted [05:44] what state is maas saying /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304 is in ?
[05:45] life: dying [05:47] in maas or in juju ? [05:47] juju status [05:49] what about maas ? [05:49] I am looking under /var/log/maas/ [05:52] is there any specific file I should check? maas.log is empty and others do not seem to have any information related to this node [05:58] so there is no record that maas assigned that node to a machine [05:58] 16:39 < Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}} [05:58] this is what I get from the bootstrap node [05:59] but if I check the maas node I can't find any reference to those under /var/log/maas/ [06:02] what is the request that matches that response ? === axw_ is now known as axw [06:07] 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3064,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-13"}]}} [06:09] ok, that is getting weirder and weirder [06:09] how many machines did you delete [06:09] it looks like /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ has been assigned as machine/14 === freeflying_away is now known as freeflying [06:12] Yes I agree. I have never been able to reclaim a destroyed machine and what I was doing was replacing the virtual HDD with a new one, going through the discover process in maas and making the machine available once again [06:21] davecheney: any thought about that? [06:29] I did delete 2 machines [06:30] #13 and #14 [06:30] which were the machines where I tried to install rabbitmq === CyberJacob|Away is now known as CyberJacob [07:49] getting this 2013-12-02 05:02:15 ERROR juju runner.go:200 worker: fatal "api": agent should be terminated [07:50] From here the machine is in dying status [07:50] hi. Juju is stuck on "juju status".
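Matching a jsoncodec response to its request, as davecheney does above, keys off the shared RequestId. A minimal sketch using the quoted log lines (the file path is a hypothetical stand-in for the machine-0 log):

```shell
# Hypothetical excerpt of machine-0.log's juju.rpc.jsoncodec lines;
# "<-" marks an incoming request, "->" the response with the same RequestId.
cat > /tmp/jsoncodec.log <<'EOF'
2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3064,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-13"}]}}
2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}}
EOF

# Pull both halves of the conversation for a single RequestId, showing
# which machine tag the InstanceId query was actually about.
grep '"RequestId":3064' /tmp/jsoncodec.log
```

Running this against the real log is how the response above was traced back to a machine-13 InstanceId request.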
I've destroyed environments, set up .bashrc to handle keys, and then "juju init; juju bootstrap; juju status" without any other customizations [07:50] what's going on? [07:53] what's the difference between 'juju init' and 'juju generate-config' ? [08:04] makara: nothing. generate-config is an alias for init [08:05] makara: What do you mean juju is stuck on juju status? Can you provide logs? [08:12] i type in the command and it hangs [08:12] sorry, I'm too tired to send logs [08:13] i was hoping somebody else had the same problem :) [08:37] Is there anybody that could deploy openstack with charms here? === CyberJacob is now known as CyberJacob|Away === axw__ is now known as axw [09:46] makara: are you sourcing your .bashrc? [09:48] makara: run juju bootstrap with --debug and --show-log then paste the output http://paste.ubuntu.com [09:49] marcoceppi, by sourcing...i've added environment variables to .bashrc [09:50] makara: cool, when you have a moment run bootstrap with the extra flags and let us know the output [09:58] marcoceppi, http://pastebin.com/x1QCGGnD [10:01] makara: give it a few mins and run juju status --debug --show-log [10:05] marcoceppi, do I need any special ports open? I'm behind a fairly restrictive firewall [10:05] 2013-12-02 10:04:21 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 54.226.118.252:37017: connection timed out [10:06] makara: yes, you need 37017 and 27017 [10:06] you need access to 37017 or a proxy server outside of your firewall [10:07] marcoceppi: um, i don't think the juju cli can use a proxy [10:07] although you could deploy the gui [10:07] davecheney: :( [10:07] davecheney: no you can't, he can't reach the bootstrap node [10:09] davecheney: i thought you guys respected proxies. maybe not. could use sshuttle as a proxy i guess.
either way we should find and document a workaround [10:09] marcoceppi: it's complicated [10:09] we respect the apt proxy [10:09] but the mongo traffic isn't proxyable [10:09] and that sort of carried through into the api [10:09] * davecheney raises a bug [10:11] i wonder if quickstart would fix this. you could get a gui on the bootstrap node directly without using the deploy command [10:11] jamespage, hi, got an MP for the glance charm, any chance of having a look? thanks https://code.launchpad.net/~teknico/charms/precise/glance/preload-ubuntu-images/+merge/196341 [10:11] https://bugs.launchpad.net/juju-core/+bug/1256849 [10:11] <_mup_> Bug #1256849: cmd/juju: no support for proxies [10:12] marcoceppi: maybe we should just always deploy the gui on the bootstrap node [10:12] +10000000000000000000000 [10:13] or have a flag --without-gui [10:13] teknico, ah [10:13] for those that don't want it [10:13] * davecheney goes to raise another bug [10:16] https://bugs.launchpad.net/juju-core/+bug/1256852 [10:16] <_mup_> Bug #1256852: cmd/juju: juju should always deploy the gui when bootstrapping [10:16] teknico, ok - we need to discuss how we integrate simplestreams sync with glance; I'm not convinced this is the right way to do it [10:16] jamespage, we do. I'm not convinced either, it couples the two of them too much [10:17] teknico, agreed [10:17] I'd envisaged a simplestreams-mirror charm [10:17] that you deploy (probably in a container) [10:17] and then relate to keystone [10:17] so it can register a product-streams endpoint and find out where things like glance are [10:19] a separate charm altogether? [10:19] I was considering just calling the tools/sstream-mirror-glance script, but it's not packaged (yet) [10:20] (that script is in simplestreams) [10:48] hi. I've set up a juju control instance so I shouldn't have problems with ports. How can I use a specific SSH key when I deploy with juju?
[10:50] i also want to install instances into a VPC [11:01] makara: we don't support VPC at this time, and you can specify ssh keys in your environments.yaml file (~/.juju/) using the authorized-keys option [11:26] marcoceppi, where can I find a complete list of options for the .yaml? [11:28] makara: if you do juju init --show [11:28] then all the options are presented [11:29] commented out (ie, set to default) in the sample config [11:30] davecheney: yeah, but we don't have /all/ of the options in there [11:30] like authorized-keys [11:32] and it's not in https://juju.ubuntu.com/docs/config-aws.html [11:34] what exactly does bootstrap do? [11:35] i've had to create an instance with juju on EC2 to get past company firewall issues. Now I just want to start deploying [11:38] can I just bootstrap to the instance with juju installed on it? [11:39] my pc > dmz instance with juju > bootstrap > wordpress [11:40] is a little verbose [11:42] teknico, yeah - a separate charm [11:43] jamespage, invoking a simplestreams script would decouple the glance charm from simplestreams internals, would that be enough? [11:44] teknico, I'm not worried about that so much; but the sync process needs to be able to deal with [11:44] a) multiple instances of the glance charm in different service units for scale-out/ha [11:44] b) periodic re-syncing of images [11:45] c) endpoint registration for things like juju to use [11:45] if we do it right, we should also be able to sync juju tools into an openstack cloud in the same way [11:46] wow, this expands the scope of the thing quite a bit :-) [11:46] by splitting it into its own charm, we don't have to worry about scale out (it's an async, periodic process) [11:46] teknico, I don't really want to have a feature which conflicts/confuses those objectives in the glance charm itself [11:47] yeah, I understand it being a separate and extrinsic concern for the charm [11:49] jamespage, can you expand a little on "register a product-streams endpoint" please?
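The authorized-keys option marcoceppi mentions lives under an environment's entry in ~/.juju/environments.yaml. A minimal sketch, where the environment name and the key value are placeholders, not taken from the conversation:

```yaml
# ~/.juju/environments.yaml (excerpt) -- environment name and key are placeholders
environments:
  amazon:
    type: ec2
    # public key material installed on every machine juju provisions
    authorized-keys: ssh-rsa AAAAB3Nza... user@host
```

After editing the file, the key takes effect on machines provisioned from then on.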
[11:49] (which I believe is what your c) point refers to) [11:49] teknico, juju uses the keystone service catalog to lookup both image information and juju tool information [11:50] oh right, the service catalog in keystone [11:50] the metadata for these are normally stored in swift, with the images in glance for ubuntu images, and in swift for juju tools [11:50] the sync charm is something that communicates with a number of services in openstack, not just glance :-) [11:51] when you say "sync charm is" you mean "will be", right? you're not referring to something that already exists [11:54] jamespage, and yeah, the current MP talks to both keystone and swift via simplestreams === gary_pos` is now known as gary_poster [13:44] hey evilnickveitch [13:44] http://discourse.ubuntu.com/t/how-to-translate-juju-docs/1284 [13:45] jcastro, ok, cool [13:53] jamespage, how familiar are you with eclipse virgo? [13:54] jcastro, still using juno right now [13:55] hey so they're writing a charm and have it all mostly working, but are having some upstart problems, I was thinking of pinging you when they hit the review queue to have you take a look? [13:56] It won't be soon [13:57] https://github.com/glyn/virgo-charm/ if you are bored though. :) [15:00] marcoceppi: are u there? [15:00] X-warrior: yup [15:01] oh I thought you were one of the creators of elasticsearch charm, but looking on git commit tree you just did the latest commit === gary_poster is now known as gary_poster|away === gary_poster|away is now known as gary_poster [15:30] hi jamespage, pushed some updates to heat charm [15:36] yolanda, cool - I'll take a peek later! [15:36] thx === freeflying is now known as freeflying_away === rogpeppe3 is now known as rogpeppe [16:34] Is instance-type not working on 1.16.3? 
[16:34] X-warrior: correct, it hasn't been fully ported from juju 0.7 [16:35] X-warrior: https://juju.ubuntu.com/docs/reference-constraints.html [16:41] marcoceppi: I'm ready when you are: https://plus.google.com/hangouts/_/76cpit9hbj5c3kr1burqftqmk0?hl=en [17:31] Good day everyone [18:26] If I deploy something with juju, it will save it in the charm cache and in the future if I deploy again, it will deploy the same version. Right? [18:29] X-warrior: so long as the environment is alive. If you destroy-environment the cache goes away [18:29] marcoceppi: so if I destroy the machine where that one service is deployed [18:30] it will still be in the cache... [18:31] Based on that, I think that upgrade-charm should work even if we don't have any instance with that service running... [18:31] if that service exists in the cache, it should be updated imo === BradCrittenden is now known as ba === ba is now known as bac [18:44] window 3 === CyberJacob|Away is now known as CyberJacob === Guest42390 is now known as Kyle === Guest11270 is now known as Kyle [20:44] Hello fellas i need your help , i just installed juju locally on my pc and when i try to deploy the juju-gui, in the "watch juju status" i see that it creates a second machine that is pending. Is this normal? [20:48] anyone ? [20:52] zzecool: yes, that's expected [20:52] zzecool: if you wanted to not have it create another machine, you can tell juju-gui to deploy to the bootstrap node using this line instead: [20:53] juju deploy --to 0 juju-gui [20:53] ohh thank you :) [20:53] zzecool: since you've "deployed" the juju-gui already, you'll need to first destroy it, before deploying again. [20:53] juju destroy-service juju-gui [20:53] juju deploy --to 0 juju-gui [20:53] how long does it take to deploy for the first time?
[20:53] juju terminate-machine 1 [20:54] zzecool: it depends on the cloud provider [20:54] it's local [20:54] and the charm, some charms can take a long time, others are pretty quick [20:54] zzecool: ah, for local deployments it needs to download the ubuntu cloud image, it's about 250MB so it can take some time [20:54] i see [20:54] really thank you [20:54] :) [20:54] zzecool: also, you can't use deploy --to 0 on the local provider, so don't worry about it creating a new machine [20:55] ohh [20:55] zzecool: the 0 machine (bootstrap node) is technically your computer for the local provider [20:55] you wouldn't want to put the gui directly on your machine! [20:55] since you're not "paying" for the machines in the cloud provider, having an extra machine for the gui won't hurt [20:56] zzecool: what version of juju are you using? (juju version) [20:56] the latest [20:56] i used the ubuntu ppa [20:56] zzecool: cool, perfect [20:57] i think i misunderstood the juju thing [20:57] so the whole deployment thing doesn't install the actual applications on my pc [20:57] is it something like a sandbox? [20:58] zzecool: no no, it's using LXC to create containers that simulate a cloud on your machine [20:58] except, with the local provider, machine 0 is technically your laptop/desktop. The rest of the machines juju deploys are LXC machines running on your laptop/desktop [20:59] LXC is like virtual machines right? [20:59] so i guess i can't use machine 0 to deploy things for security reasons ? [21:00] marcoceppi: Thank you for your help :) [21:01] zzecool: right, only with the local provider, machine 0 is off limits [21:02] i see [21:02] LXC is conceptually the same thing as virtual machines, just faster and lighter weight [21:03] then my whole concept is a fail. The reason i installed juju was to easily install observium locally to monitor my pc [21:04] so i guess if i install observium i will monitor the LXC machine (1) and not my real pc...
[21:09] zzecool: correct [21:09] zzecool: juju is a service orchestration tool, meant to drive bare metal and cloud machines [21:09] the local provider is a way to develop and hack on charms, what juju deploys, without needing lots of servers or spending money on a cloud [21:10] It is so nice though , it should be used for local installs as well instead of synaptic or apt-get [21:34] marcoceppi: you there? [21:34] marcoceppi: FWIW, you can't deploy units to machine 0 with the local provider [21:34] as the host machine IS machine-0 [21:35] zzecool: if this is the first time with the local provider, it is probably syncing the underlying ubuntu cloud images [21:36] marcoceppi: with the local provider, machine-0 isn't able to host units (and should be constrained in the data model) [21:36] if it thinks it can, it is a bug [21:36] thumper: yeap he already told me this [21:37] zzecool: cool, just flicked through the chatter without reading in depth :) [21:37] zzecool: also, you can't use deploy --to 0 on the local provider, so don't worry about it creating a new machine [21:37] :) [21:37] thumper: np thanks though [21:39] If I'm on a juju deployed unit, is there any way for me to signal that I'm leaving the service and shutting down? [21:47] dpb1: not yet [21:48] dpb1: I'm assuming you mean the unit itself wants to leave? [21:48] Hey thumper, I just wrote to juju@ as well. Yes, right. [21:48] what is the use case for this? [21:49] ah... [21:49] I guess the same use case that shutdown on the command line of an instance presents [21:49] just read the email [21:49] ok, I think that makes it more clear [21:49] I think we added 'destroy-machine --force' [21:49] which works in this case (I think) [21:50] not sure which release that is coming in though [21:50] ok... I can check that [21:50] thx for the pointer. [21:57] marcoceppi: really ? that is le suck [22:03] davecheney: which is?
[22:07] marcoceppi: the fact we don't document all the options in juju init > sample.yaml [22:07] davecheney: oh, yeah. It is [22:07] we don't document it anywhere [22:07] you have to crawl through the code [22:08] i'm sure evilnickveitch won't appreciate us dropping that one in his lap [22:08] at least, we don't document all the fringe options [22:08] * marcoceppi makes a bug [22:08] I might do it, well I might gather all the config options, I'll let someone smarter figure out what they mean [22:11] `juju is` [22:14] <3 easter eggs === gary_poster is now known as gary_poster|away === CyberJacob is now known as CyberJacob|Away
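One rough way to gather the option names marcoceppi wants to collect is to scrape the keys out of the sample config that `juju init --show` emits. The excerpt below is a hypothetical stand-in for that output, since the real sample config is longer and varies by juju release:

```shell
# Hypothetical stand-in for `juju init --show` output; the real sample
# config is longer and varies by juju release.
cat > /tmp/sample-environments.yaml <<'EOF'
environments:
  amazon:
    type: ec2
    # region: us-east-1
    # access-key: <secret>
EOF

# List option keys, whether commented out (defaults) or set explicitly.
grep -oE '^[[:space:]]*#?[[:space:]]*[a-z-]+:' /tmp/sample-environments.yaml | tr -d ' #:'
```

This only finds the options the sample config mentions; as noted above, the fringe options like authorized-keys would still have to be dug out of the code.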