=== bradm_ is now known as bradm | ||
smoser | hey all | 00:55 |
---|---|---|
smoser | per http://www.dreamhost.com/newsletter/1113.html#a2 | 00:55 |
smoser | dreamcompute is now "officially in beta" | 00:55 |
smoser | so you could sign up at http://www.dreamhost.com/cloud/dreamcompute/ | 00:55 |
smoser | and presumably get an account and then point juju at it | 00:55 |
smoser | (I say presumably because I have not gotten a response to my signup from earlier this morning) | 00:56 |
thumper | o/ smoser | 00:56 |
lazypower | Thats good news! | 01:01 |
marcoceppi | smoser: sweet, I'll give it a go as soon as I get creds | 01:50 |
hatch | marcoceppi hey wb :) | 02:00 |
marcoceppi | o/ | 02:01 |
hatch | now go review my Ghost charm :P | 02:01 |
* hatch snickers and runs away | 02:02 | |
marcoceppi | hatch: it's not in the queue, go put it in the queue | 02:03 |
Luca__ | marcoceppi: I am back now. So with regard to network infrastructure, could you point me to a document that explains how networks are set up using charms? | 02:03 |
marcoceppi | hatch: If it's not listed here: http://manage.jujucharms.com/tools/review-queue then I'm not going to review it | 02:05 |
hatch | hmm | 02:05 |
hatch | marcoceppi so https://juju.ubuntu.com/docs/authors-charm-store.html#submitting is not a complete list of steps then? | 02:06 |
marcoceppi | hatch: it is, link to the bug | 02:06 |
marcoceppi | I'll see what's missing | 02:07 |
* marcoceppi is working on a juju charm submit command to make this process a little more automated | 02:07 | |
hatch | https://bugs.launchpad.net/charms/+bug/1229377 | 02:07 |
_mup_ | Bug #1229377: Charm needed: Ghost blogging platform <Juju Charms Collection:New> <https://launchpad.net/bugs/1229377> | 02:07 |
hatch | I don't see where it says to add it to the review queue....or how for that matter :) | 02:07 |
lazypower | hatch: Did you add the "charmers" group to the bug? | 02:08 |
marcoceppi | hatch: huh, this should be in the queue. Charmers is watching the bug and it's in a new or fix committed status. | 02:08 |
hatch | lazypower yup | 02:08 |
marcoceppi | hatch: I'll open a bug and make a note to review this first tomorrow | 02:08 |
hatch | marcoceppi cool, so I did everything correctly then? | 02:08 |
hatch | I'm not really too concerned, I just want to make sure that the documentation is up to date so outsiders don't run into issues | 02:09 |
marcoceppi | hatch: looks like it, I'll ping charmworld and figure out why this didn't get scraped | 02:09 |
hatch | great thanks | 02:09 |
hatch | marcoceppi in that list it says "store error on ghost" | 02:10 |
marcoceppi | hatch: that's something different, but equally interesting | 02:11 |
hatch | oh :) cool well I was just joking about rushing to get it approved, take your time | 02:11 |
marcoceppi | hatch: too late, rush order already processed and your card has been debited $125 | 02:12 |
hatch | awwww man | 02:12 |
marcoceppi | there's a cancellation fee of $200 | 02:13 |
hatch | man you must be taking business lessons from the government | 02:13 |
hatch | haha | 02:13 |
hatch | well thanks in advance, lemme know if you need any help with anything | 02:17 |
marcoceppi | hatch: hey, I got it to show in the queue | 02:19 |
marcoceppi | hatch: you need to assign the bug to someone, preferably yourself. I'll update the docs. Hopefully `juju charm submit` will be easy enough that the majority of instructions can be condensed to that one command | 02:21 |
* marcoceppi searches for other charms that may have fallen through the cracks | 02:21 | |
hatch | marcoceppi ahh interesting thanks | 03:02 |
=== jam1 is now known as jam | ||
Luca__ | Hi there | 05:05 |
Luca__ | does anybody know how to reuse a machine once issued a juju destroy-machine command? | 05:05 |
Luca__ | it seems even though this machine goes back to ready status in MAAS, once picked up again for a different service provision it just gets stuck and does not do anything | 05:06 |
davecheney | Luca__: that is weird | 05:07 |
davecheney | it shouldn't do that | 05:07 |
davecheney | after destroy-machine | 05:07 |
davecheney | can you confirm that the machine is no longer listed in juju status ? | 05:07 |
Luca__ | davecheney: I confirm it is no longer in the juju status | 05:09 |
Luca__ | my understanding is that it should be reused and go through the OS reinstallation | 05:09 |
davecheney | yup | 05:09 |
davecheney | the only thing which could prevent that is constraints | 05:09 |
davecheney | /var/log/juju/machine-0.log on machine-0 will have the details of what is going on | 05:10 |
Luca__ | I am checking now the machine, status is pending, however not doing any installation | 05:10 |
Luca__ | let me check | 05:10 |
Luca__ | what should I exactly look for? there is a bunch of logs | 05:12 |
Luca__ | 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0 | 05:13 |
davecheney | it will say worker/provisioner | 05:14 |
Luca__ | nothing about worker/provisioner | 05:17 |
Luca__ | the messages I can find are worker/firewaller | 05:17 |
Luca__ | 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/0 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/1 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0 2013-12-02 05:02:03 DEBUG juju firewaller.go:495 worker/firewaller: started watching machine 13 2013-12-02 05:02:03 D | 05:17 |
davecheney | Luca__: nothing from the provisioner ? | 05:20 |
davecheney | in the whole file ? | 05:20 |
Luca__ | yes, nothing in the whole file | 05:21 |
Luca__ | but I could deploy mysql and ceph | 05:21 |
Luca__ | hold on | 05:22 |
Luca__ | filtering for provisioner returns some | 05:22 |
Luca__ | 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "13" pending provisioning | 05:22 |
Luca__ | 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "14" pending provisioning | 05:23 |
Luca__ | 2013-12-02 05:02:03 DEBUG juju.provisioner provisioner_task.go:303 Stopping instances: [nfqwq.master:/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ ac69q.master:/MAAS/api/1.0/nodes/node-ee42a908-59d4-11e3-9087-525400378304/ pwq4h.master:/MAAS/api/1.0/nodes/node-472158c0-59d6-11e3-9087-525400378304/] | 05:23 |
Luca__ | 2013-12-02 05:02:05 INFO juju.provisioner provisioner_task.go:367 started machine 13 as instance /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ with hardware <nil> | 05:23 |
Luca__ | but it does not seem to be doing anything | 05:24 |
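The log filtering Luca does here can be reproduced with a plain grep. This is a sketch: the heredoc below stands in for /var/log/juju/machine-0.log, using sample lines taken from the excerpts pasted above.

```shell
# Recreate a miniature machine-0.log from the log lines pasted in the chat.
cat > machine-0.log <<'EOF'
2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/0
2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "13" pending provisioning
2013-12-02 05:02:05 INFO juju.provisioner provisioner_task.go:367 started machine 13 as instance /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ with hardware <nil>
EOF

# Keep only the provisioner's messages; firewaller noise is dropped.
grep 'provisioner' machine-0.log
```

On a real deployment the grep would be run directly against /var/log/juju/machine-0.log on machine 0.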
davecheney | Luca__: ok, time to check the maas log | 05:27 |
davecheney | if maas says that machine is in ready state | 05:27 |
davecheney | then it will have a record of why it refused to start a machine | 05:28 |
Luca__ | from MAAS Web machine was ready | 05:28 |
davecheney | i think what has happened is things happened too fast | 05:28 |
davecheney | 3 seconds between maas stopping the machine then reusing it | 05:28 |
davecheney | not a lot of time for all the bookkeeping maas has to do | 05:29 |
davecheney | you could try deleting the service, unit and machine | 05:29 |
davecheney | or actually just delete the unit | 05:29 |
lifeless | davecheney: maas shouldn't have 3s of work to do | 05:29 |
davecheney | juju remove-unit $UNIT | 05:29 |
Luca__ | ok | 05:29 |
davecheney | juju destroy-machine 14 | 05:29 |
davecheney | then watching the log | 05:29 |
Luca__ | ok | 05:29 |
davecheney | waiting til you see maas remove the unit | 05:29 |
davecheney | then juju add-unit $SERVICE | 05:30 |
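davecheney's recovery sequence, collected in one place as a sketch. The unit name and machine number come from the conversation above, and the `juju` function is a dry-run shim (it only echoes the commands) so the script can be pasted safely; remove it to run against a real environment.

```shell
# Dry-run shim: echo each juju command instead of executing it.
juju() { echo "juju $*"; }

UNIT="rabbitmq-server/0"   # the stuck unit from the log above
SERVICE="${UNIT%%/*}"      # its service name: rabbitmq-server

juju remove-unit "$UNIT"   # delete the unit first
juju destroy-machine 14    # then the machine it was on
# ...watch /var/log/juju/machine-0.log and the MAAS logs until the
# machine has been released back to the Ready state, then redeploy:
juju add-unit "$SERVICE"
```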
Luca__ | MAAS logs or JUJU logs? | 05:30 |
Luca__ | 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/0 | 05:32 |
Luca__ | 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/1 | 05:32 |
davecheney | the juju machine-0 log for a start | 05:33 |
davecheney | but watching the maas logs is recommended as it can be very terse otherwise | 05:33 |
Luca__ | it returns some errors but not sure whether is related | 05:36 |
Luca__ | 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3057,"Type":"Provisioner","Request":"Life","Params":{"Entities":[{"Tag":"machine-14"}]}} | 05:37 |
Luca__ | 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3057,"Response":{"Results":[{"Life":"dying","Error":null}]}} | 05:37 |
Luca__ | 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3058,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-14"}]}} | 05:37 |
Luca__ | machine 13 and 14's life is dying | 05:38 |
davecheney | ok, those aren't errors | 05:38 |
davecheney | have machine 13 and 14 returned to maas ready state ? | 05:39 |
Luca__ | 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}} | 05:39 |
Luca__ | nope | 05:39 |
davecheney | it's possible they haven't been properly released back to maas so now nobody owns them | 05:39 |
davecheney | ok, this isn't good | 05:39 |
davecheney | maas has lost track of the machine | 05:39 |
Luca__ | state is still dying | 05:39 |
davecheney | which version of juju are you using ? | 05:39 |
davecheney | which version of maas are you using ? | 05:39 |
Luca__ | maas: 1.4+bzr1701+dfsg-0+1718+212~ppa0~ubuntu12.04.1 | 05:40 |
Luca__ | juju-core 1.16.3-0ubuntu1~ubuntu12.04.1~juju1 | 05:41 |
Luca__ | on 12.04.03 | 05:41 |
davecheney | hmm, i think that maas install is quite old | 05:41 |
davecheney | hmm, actually, error: null is fine | 05:43 |
davecheney | i just checked the code | 05:44 |
Luca__ | ok | 05:44 |
davecheney | so, maas gave you back the same machine you deleted | 05:44 |
davecheney | what state is maas saying /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304 is in ? | 05:44 |
Luca__ | life: dying | 05:45 |
davecheney | in maas or in juju ? | 05:47 |
Luca__ | juju status | 05:47 |
davecheney | what about maas ? | 05:49 |
Luca__ | I am looking under /var/log/maas/ | 05:49 |
Luca__ | is there any specific file I should check? maas.log is empty and the others do not seem to have any information related to this node | 05:52 |
davecheney | so there is no record that maas assigned that node to a machine | 05:58 |
davecheney | 16:39 < Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}} | 05:58 |
Luca__ | this is what I get from the bootstrap node | 05:58 |
Luca__ | but if I check the maas node I can't find any reference to those under /var/log/maas/ | 05:59 |
davecheney | what is the request that matches that response ? | 06:02 |
=== axw_ is now known as axw | ||
Luca__ | 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3064,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-13"}]}} | 06:07 |
davecheney | ok, that is getting weirder and weirder | 06:09 |
davecheney | how many machines did you delete? | 06:09 |
davecheney | it looks like /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ has been assigned as machine/14 | 06:09 |
=== freeflying_away is now known as freeflying | ||
Luca__ | Yes I agree. I have never been able to reclaim a destroyed machine; what I was doing was replacing the virtual HDD with a new one, going through the discovery process in maas, and making the machine available once again | 06:12 |
Luca__ | davecheney: any thought about that? | 06:21 |
Luca__ | I did delete 2 machines | 06:29 |
Luca__ | #13 and #14 | 06:30 |
Luca__ | which were the machines where I tried to install rabbitmq | 06:30 |
=== CyberJacob|Away is now known as CyberJacob | ||
Luca__ | getting this: 2013-12-02 05:02:15 ERROR juju runner.go:200 worker: fatal "api": agent should be terminated | 07:49 |
Luca__ | From here the machine is in dying status | 07:50 |
makara | hi. Juju is stuck on "juju status". I've destroyed environments, setup .bashrc to handle keys, and then "juju init; juju bootstrap; juju status" without any other customizations | 07:50 |
makara | what's going on? | 07:50 |
makara | what's the difference between 'juju init' and 'juju generate-config' ? | 07:53 |
lazypower | makara: nothing. generate-config is an alias for init | 08:04 |
lazypower | makara: What do you mean juju is stuck on juju status? Can you provide logs? | 08:05 |
makara | i type in the command and it hangs | 08:12 |
makara | sorry, I'm too tired to send logs | 08:12 |
makara | i was hoping somebody else had the same problem :) | 08:13 |
Luca__ | Is there anybody that could deploy openstack with charms here? | 08:37 |
=== CyberJacob is now known as CyberJacob|Away | ||
=== axw__ is now known as axw | ||
marcoceppi | makara: are you sourcing your .bashrc? | 09:46 |
marcoceppi | makara: run juju bootstrap with --debug and --show-log then paste the output to http://paste.ubuntu.com | 09:48 |
makara | marcoceppi, by sourcing...i've added environment variables to .bashrc | 09:49 |
marcoceppi | makara: cool, when you have a moment run bootstrap with the extra flags and let us know the output | 09:50 |
makara | marcoceppi, http://pastebin.com/x1QCGGnD | 09:58 |
marcoceppi | makara: give it a few mins and run juju status --debug --show-log | 10:01 |
makara | marcoceppi, do I need any special ports open? I'm behind a fairly restrictive firewall | 10:05 |
makara | 2013-12-02 10:04:21 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 54.226.118.252:37017: connection timed out | 10:05 |
davecheney | makara: yes, you need 37017 and 27017 | 10:06 |
marcoceppi | you need access to 37017 or a proxy server outside of your firewall | 10:06 |
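A quick way to check from behind the firewall whether those ports are reachable. This is a sketch: the IP is the bootstrap address from makara's paste above, `/dev/tcp` is a bash-specific feature, and on a blocked network both probes will simply report blocked.

```shell
# Bootstrap node address taken from makara's log excerpt above.
HOST=54.226.118.252

# Probe the juju state/API ports; /dev/tcp opens a TCP connection,
# and timeout bounds how long we wait for a blocked port.
for port in 37017 27017; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
    echo "$port open"
  else
    echo "$port blocked"
  fi
done
```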
davecheney | marcoceppi: um, i don't think juju cli can use a proxy | 10:07 |
davecheney | although you could deploy the gui | 10:07 |
marcoceppi | davecheney: :( | 10:07 |
marcoceppi | davecheney: no you can't, he can't reach the bootstrap node | 10:07 |
marcoceppi | davecheney: i thought you guys respected proxies. maybe not. could use sshuttle as a proxy i guess. either way we should find and document a workaround | 10:09 |
davecheney | marcoceppi: it's complicated | 10:09 |
davecheney | we respect the apt proxy | 10:09 |
davecheney | but the mongo traffic isn't proxyable | 10:09 |
davecheney | and that sort of carried through into the api | 10:09 |
* davecheney raises a bug | 10:09 | |
marcoceppi | i wonder if quickstart would fix this. you could get a gui on the bootstrap node directly without using the deploy command | 10:11 |
teknico | jamespage, hi, got an MP for the glance charm, any chance having a look? thanks https://code.launchpad.net/~teknico/charms/precise/glance/preload-ubuntu-images/+merge/196341 | 10:11 |
davecheney | https://bugs.launchpad.net/juju-core/+bug/1256849 | 10:11 |
_mup_ | Bug #1256849: cmd/juju: no support for proxies <juju-core:New> <https://launchpad.net/bugs/1256849> | 10:11 |
davecheney | marcoceppi: maybe we should just always deploy the gui on the bootstrap node | 10:12 |
marcoceppi | +10000000000000000000000 | 10:12 |
marcoceppi | or have a flag --without-gui | 10:13 |
jamespage | teknico, ah | 10:13 |
marcoceppi | for those that dont want it | 10:13 |
* davecheney goes to raise another bug | 10:13 | |
davecheney | https://bugs.launchpad.net/juju-core/+bug/1256852 | 10:16 |
_mup_ | Bug #1256852: cmd/juju: juju should always deploy the gui when bootstrapping <juju-core:New> <https://launchpad.net/bugs/1256852> | 10:16 |
jamespage | teknico, ok - we need to discuss how we integrate simplestreams sync with glance; I'm not convinced this is the right way todo it | 10:16 |
teknico | jamespage, we do. I'm not convinced either, it couples the two of them too much | 10:16 |
jamespage | teknico, agreed | 10:17 |
jamespage | I'd envisaged a simplestreams-mirror charm | 10:17 |
jamespage | that you deploy (probably in a container) | 10:17 |
jamespage | and then relate to keystone | 10:17 |
jamespage | so it can register a product-streams endpoint and find out where things like glance are | 10:17 |
teknico | a separate charm altogether? | 10:19 |
teknico | I was considering just calling the tools/sstream-mirror-glance script, but it's not packaged (yet) | 10:19 |
teknico | (that script is in simplestreams) | 10:20 |
makara | hi. I've setup a juju control instance so I shouldn't have problems with ports. How can I use a specific SSH key when I deploy with juju? | 10:48 |
makara | i also want to install instances into a VPC | 10:50 |
marcoceppi | makara: we don't support VPC at this time, and you can specify ssh keys in your environments.yaml file (~/.juju/) using the authorized-keys option | 11:01 |
makara | marcoceppi, where can I find a complete list of options for the .yaml? | 11:26 |
davecheney | makara: if you do juju init --show | 11:28 |
davecheney | then all the options are presented | 11:28 |
davecheney | commented out (ie, set to default) in the sample config | 11:29 |
marcoceppi | davecheney: yeah, but we don't have /all/ of the options in there | 11:30 |
marcoceppi | like authorized-keys | 11:30 |
makara | and it's not in https://juju.ubuntu.com/docs/config-aws.html | 11:32 |
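For reference, the option marcoceppi mentions looks roughly like this in ~/.juju/environments.yaml. This is a sketch: the environment name and key material are placeholders, and `authorized-keys-path` is an assumption about the era's juju-core config surface rather than something shown in this log.

```yaml
environments:
  amazon:
    type: ec2
    # Not emitted by `juju init --show`, but accepted: paste keys inline...
    authorized-keys: ssh-rsa AAAA... user@host
    # ...or (assumed variant) point at a public key file instead:
    # authorized-keys-path: ~/.ssh/id_rsa.pub
```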
makara | what exactly does bootstrap do? | 11:34 |
makara | i've had to create an instance with juju on EC2 to get past company firewall issues. Now I just want to start deploying | 11:35 |
makara | can I just bootstrap to the instance with juju installed on it? | 11:38 |
makara | my pc > dmz instance with juju > bootstrap > wordpress | 11:39 |
makara | is a little verbose | 11:40 |
jamespage | teknico, yeah - a separate charm | 11:42 |
teknico | jamespage, invoking a simplestreams script would decouple the glance charm from simplestreams internals, would that be enough? | 11:43 |
jamespage | teknico, I'm not worried about that so much; but the sync process needs to be able to deal with | 11:44 |
jamespage | a) multiple instances of the glance charm in different service units for scale-out/ha | 11:44 |
jamespage | b) periodic re-syncing of images | 11:44 |
jamespage | c) endpoint registration for things like juju to use | 11:45 |
jamespage | if we do it right, we should also be able to sync juju tools into a openstack cloud in the same way | 11:45 |
teknico | wow, this expands the scope of the thing quite a bit :-) | 11:46 |
jamespage | by splitting it into its own charm, we don't have to worry about scale out (it's an async, periodic process) | 11:46 |
jamespage | teknico, I don't really want to have a feature which conflicts/confuses those objectives in the glance charm itself | 11:46 |
teknico | yeah, I understand it being a separate and extrinsic concern for the charm | 11:47 |
teknico | jamespage, can you expand a little on "register a product-streams endpoint" please? | 11:49 |
teknico | (which I believe is what your c) point refers to) | 11:49 |
jamespage | teknico, juju uses the keystone service catalog to lookup both image information and juju tool information | 11:49 |
teknico | oh right, the service catalog in keystone | 11:50 |
jamespage | the metadata for these are normally stored in swift, with the images in glance for ubuntu images, and in swift for juju tools | 11:50 |
jamespage | the sync charm is something that communicates with a number of services in openstack, not just glance :-) | 11:50 |
teknico | when you say "sync charm is" you mean "will be", right? you're not referring to something that already exists | 11:51 |
teknico | jamespage, and yeah, the current MP talks to both keystone and swift via simplestreams | 11:54 |
=== gary_pos` is now known as gary_poster | ||
jcastro | hey evilnickveitch | 13:44 |
jcastro | http://discourse.ubuntu.com/t/how-to-translate-juju-docs/1284 | 13:44 |
evilnickveitch | jcastro, ok, cool | 13:45 |
jcastro | jamespage, how familiar are you with eclipse virgo? | 13:53 |
jamespage | jcastro, still using juno right now | 13:54 |
jcastro | hey so they're writing a charm and have it all mostly working, but are having some upstart problems, I was thinking of pinging you when they hit the review queue to have you take a look? | 13:55 |
jcastro | It won't be soon | 13:56 |
jcastro | https://github.com/glyn/virgo-charm/ if you are bored though. :) | 13:57 |
X-warrior | marcoceppi: are u there? | 15:00 |
marcoceppi | X-warrior: yup | 15:00 |
X-warrior | oh I thought you were one of the creators of elasticsearch charm, but looking on git commit tree you just did the latest commit | 15:01 |
=== gary_poster is now known as gary_poster|away | ||
=== gary_poster|away is now known as gary_poster | ||
yolanda | hi jamespage, pushed some updates to heat charm | 15:30 |
jamespage | yolanda, cool - I'll take a peek later! | 15:36 |
yolanda | thx | 15:36 |
=== freeflying is now known as freeflying_away | ||
=== rogpeppe3 is now known as rogpeppe | ||
X-warrior | Is instance-type not working on 1.16.3? | 16:34 |
marcoceppi | X-warrior: correct, it hasn't been fully ported from juju 0.7 | 16:34 |
marcoceppi | X-warrior: https://juju.ubuntu.com/docs/reference-constraints.html | 16:35 |
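Until instance-type returns, the linked constraints page suggests the same effect via generic resource constraints. A sketch: `mem` and `cpu-cores` are the constraints documented for juju 1.16, the charm name is just an example, and the `juju` function is a dry-run shim that echoes instead of executing.

```shell
# Dry-run shim so this can be pasted safely; drop it to run for real.
juju() { echo "juju $*"; }

# Instead of naming an EC2 instance type, describe the resources you
# need and let the provider pick a matching instance:
juju deploy mysql --constraints "mem=4G cpu-cores=2"
```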
benji | marcoceppi: I'm ready when you are: https://plus.google.com/hangouts/_/76cpit9hbj5c3kr1burqftqmk0?hl=en | 16:41 |
lazypower | Buenos Dias everyone | 17:31 |
X-warrior | If I deploy something with juju, it will save it in the charm cache and in future if I deploy again, it will deploy the same version. Right? | 18:26 |
marcoceppi | X-warrior: so long as the environment is alive. If you destroy-environment the cache goes away | 18:29 |
X-warrior | marcoceppi: so if I destroy the machine where that one service is deployed | 18:29 |
X-warrior | it will still be in cache... | 18:30 |
X-warrior | Based on that, I think that upgrade-charm should work even if we don't have any instance with that service running... | 18:31 |
X-warrior | if that service exists in cache, it should be updated imo | 18:31 |
=== BradCrittenden is now known as ba | ||
=== ba is now known as bac | ||
lazypower | window 3 | 18:44 |
=== CyberJacob|Away is now known as CyberJacob | ||
=== Guest42390 is now known as Kyle | ||
=== Guest11270 is now known as Kyle | ||
zzecool | Hello fellas, i need your help. I just installed juju locally on my pc and when i try to deploy the juju-gui, in "watch juju status" i see that it creates a second machine that is pending. Is this normal? | 20:44 |
zzecool | anyone ? | 20:48 |
marcoceppi | zzecool: yes, that's expected | 20:52 |
marcoceppi | zzecool: if you wanted to not have it create another machine, you can tell juju-gui to deploy to the bootstrap node using this line instead: | 20:52 |
marcoceppi | juju deploy --to 0 juju-gui | 20:53 |
zzecool | ohh thank you :) | 20:53 |
marcoceppi | zzecool: since you've "deployed" the juju-gui already, you'll need to first destroy it, before deploying again. | 20:53 |
marcoceppi | juju destroy-service juju-gui | 20:53 |
marcoceppi | juju deploy --to 0 juju-gui | 20:53 |
zzecool | how long does it take to deploy for the first time? | 20:53 |
marcoceppi | juju terminate-machine 1 | 20:53 |
marcoceppi | zzecool: it depends on the cloud provider | 20:54 |
zzecool | it's local | 20:54 |
marcoceppi | and the charm, some charms can take a long time, others are pretty quick | 20:54 |
marcoceppi | zzecool: ah, for local deployments it needs to download the ubuntu cloud image, it's about 250MB so it can take some time | 20:54 |
zzecool | i see | 20:54 |
zzecool | really thank you | 20:54 |
zzecool | :) | 20:54 |
marcoceppi | zzecool: also, you can't use deploy --to 0 on local provider, so don't worry about it creating a new machine | 20:54 |
zzecool | ohh | 20:55 |
marcoceppi | zzecool: the 0 machine (bootstrap node) is technically your computer for the local provider | 20:55 |
marcoceppi | you wouldn't want to put the gui directly on your machine! | 20:55 |
marcoceppi | since you're not "paying" for the machines in the cloud provider, having an extra machine for the gui won't hurt | 20:55 |
marcoceppi | zzecool: what version of juju are you using? (juju version) | 20:56 |
zzecool | the latest | 20:56 |
zzecool | i used the ubuntu ppa | 20:56 |
marcoceppi | zzecool: cool, perfect | 20:56 |
zzecool | i think i misunderstood the juju thing | 20:57 |
zzecool | so the whole deployment thing doesn't install the actual applications on my pc | 20:57 |
zzecool | is it something like a sandbox? | 20:57 |
marcoceppi | zzecool: no no, it's using LXC to create containers that simulate a cloud on your machine | 20:58 |
marcoceppi | except, with the local provider, machine 0 is technically your laptop/desktop. The rest of the machines juju deploys are LXC machines running on your laptop/desktop | 20:58 |
zzecool | LXC is like virtual machines right? | 20:59 |
zzecool | so i guess i cant use machine 0 to deploy things for security reasons ? | 20:59 |
zzecool | marcoceppi: Thank you for you help :) | 21:00 |
marcoceppi | zzecool: right, only with local provider, machine 0 is off limits | 21:01 |
zzecool | i see | 21:02 |
marcoceppi | LXC is conceptually the same thing as virtual machines, just faster and lighter weight | 21:02 |
zzecool | then my whole concept is a fail . The reason i installed juju was to easily install observium locally to monitor my pc | 21:03 |
zzecool | so i guess if i install observium i will monitor the LXC machine (1) and not my real pc... | 21:04 |
marcoceppi | zzecool: correct | 21:09 |
marcoceppi | zzecool: juju is a service orchestration tool, meant to drive bare metal and cloud machines | 21:09 |
marcoceppi | the local provider is a way to develop and hack on charms, what juju deploys, without needing lots of servers or spending money on a cloud | 21:09 |
zzecool | It is so nice though, it should be used for local installs as well, instead of synaptic or apt-get | 21:10 |
thumper | marcoceppi: you there? | 21:34 |
thumper | marcoceppi: FWIW, you can't deploy units to machine 0 with the local provider | 21:34 |
thumper | as the host machine IS machine-0 | 21:34 |
thumper | zzecool: if this is the first time with the local provider, it is probably syncing the underlying ubuntu cloud images | 21:35 |
thumper | marcoceppi: with the local provider, machine-0 isn't able to host units (and should be constrained in the data model) | 21:36 |
thumper | if it thinks it can, it is a bug | 21:36 |
zzecool | thumper: yeah, he already said this | 21:36 |
thumper | zzecool: cool, just flicked through the chatter without reading in depth :) | 21:37 |
zzecool | <marcoceppi> zzecool: also, you can't use deploy --to 0 on local provider, so don't worry about it creating a new machine | 21:37 |
zzecool | :) | 21:37 |
zzecool | thumper: np thanks though | 21:37 |
dpb1 | If I'm on a juju deployed unit, is there any way for me to signal that I'm leaving the service and shutting down? | 21:39 |
thumper | dpb1: not yet | 21:47 |
thumper | dpb1: I'm assuming you mean the unit itself wants to leave? | 21:48 |
dpb1 | Hey thumper, I just wrote to juju@ as well. Yes, right. | 21:48 |
thumper | what is the use case for this? | 21:48 |
thumper | ah... | 21:49 |
dpb1 | I guess the same use case that shutdown on the command line of an instance presents | 21:49 |
thumper | just read the email | 21:49 |
dpb1 | ok, I think that makes it more clear | 21:49 |
thumper | I think we added 'destroy-machine --force' | 21:49 |
thumper | which works in this case (I think) | 21:49 |
thumper | not sure which release that is coming in though | 21:50 |
dpb1 | ok... I can check that | 21:50 |
dpb1 | thx for the pointer. | 21:50 |
davecheney | marcoceppi: really ? that is le suck | 21:57 |
marcoceppi | davecheney: which is? | 22:03 |
davecheney | marcoceppi: the fact we don't document all the options in juju init > sample.yaml | 22:07 |
marcoceppi | davecheney: oh, yeah. It is | 22:07 |
marcoceppi | we don't document it anywhere | 22:07 |
marcoceppi | you have to crawl through the code | 22:07 |
davecheney | i'm sure evilnickveitch won't appreciate us dropping that one in his lap | 22:08 |
marcoceppi | at least, we don't document all the fringe options | 22:08 |
* marcoceppi makes a bug | 22:08 | |
marcoceppi | I might do it, well I might gather all the config options, I'll let someone smarter figure out what they mean | 22:08 |
davecheney | `juju is` | 22:11 |
marcoceppi | <3 easter eggs | 22:14 |
=== gary_poster is now known as gary_poster|away | ||
=== CyberJacob is now known as CyberJacob|Away |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!