=== fuzzy__ is now known as Poyo
=== Poyo is now known as Ponyo
=== kadams54 is now known as kadams54-away
[02:48] how does one define a custom interface in metadata.yaml? I see interfaces like mysql and http, but don't know where these are defined.
[03:03] SimplySeth: interfaces are kind of arbitrary - think of them as loosely coupled contracts
[03:03] i gave a talk on this, let me find the slides
=== kadams54 is now known as kadams54-away
[03:04] https://speakerdeck.com/chuckbutler/service-orchestration-with-juju?slide=24
[03:05] SimplySeth: an important thing to remember about your interfaces as you start to design your relationship models - it's a bi-directional communication cycle between the two services. you can have any number of different relations that consume the same interface but behave differently, and if you send, for example, 3 data points on one of the relationships over that interface, every time the interface is used the same 3 data points should be exchanged.
[03:05] if you're changing the data handoff, the interface should change.
[03:05] ergo: you'll see interfaces like db, and db-admin
[03:10] * goes to read
[03:12] mesos communicates on 5051 and 5050 and 2181
[03:13] that very well could be modeled as 3 independent relationships
[03:13] or, if all of those have the same concern, you can encapsulate that as a single relationship - i'm not really familiar with how mesos talks to its minions
[03:14] what i typically do when i'm speccing out a charm is break down the concerns, get a mental data model on paper and start to sketch how that looks between services - pencil and paper are good tools for this, or a whiteboard if you have one
[03:14] once you've got a decomposed service diagram, you then name them, define your data exchange between the services, and have a good representation to stuff in your metadata.yaml
=== axisys is now known as axisys_away
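To make the answer to SimplySeth's question concrete: interface names are not registered anywhere central - the name itself is the contract, and both ends of a relation just have to agree on it and on the data exchanged over it. A minimal sketch of declaring a custom interface, written as a shell heredoc; the charm name, relation names and the "mesos" interface are hypothetical, chosen to match the mesos example above:

    # Hypothetical charm metadata declaring a custom interface (illustrative only).
    # The interface string is arbitrary; any charm that requires the same string
    # can be related to this one, and hooks are named after the relation
    # (e.g. master-relation-joined, master-relation-changed).
    cat > metadata.yaml <<'EOF'
    name: mesos-master
    summary: illustrative mesos master charm
    description: sketch of custom interfaces in metadata.yaml
    provides:
      master:
        interface: mesos
    requires:
      zookeeper:
        interface: zookeeper
    EOF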
[08:08] bodie_: re: iptables, you could probably achieve a similar effect with an iptables subordinate
[08:45] good morning to everyone!
[08:48] having problems with bootstrap... can anyone help me?
[08:49] I'm trying to use juju with an openstack in a private cloud
[08:49] when I try to bootstrap
[08:49] Juju successfully creates the VM on the openstack
[08:50] seems to be able to communicate with this instance
[08:50] but... at a certain point, bootstrap fails
[08:50] with this log:
[08:52] http://paste.ubuntu.com/10139228/
[08:57] hi dimitern
[08:57] hi Muntaner
[08:58] I'm having some bootstrap problems... can you help me?
[08:58] Muntaner, can you remind me - were you having issues behind a proxy?
[08:59] no dimitern, no proxy issues :)
[08:59] well, simply I can't bootstrap juju over a private OpenStack cloud
[08:59] logs: http://paste.ubuntu.com/10139228/
[09:02] ah ok
[09:04] Muntaner, it seems you can't fetch the tools
[09:05] dimitern: need environments.yaml?
[09:05] Muntaner, nope, let me check something first
[09:05] Muntaner, have you tried running juju sync-tools before bootstrap?
[09:06] I'll try now
[09:06] Muntaner, wait a sec
[09:07] ok dimitern, tell me what to do :)
[09:07] hey dimitern, aren't you off today?
[09:07] Muntaner, try following this guide https://juju.ubuntu.com/docs/howto-privatecloud.html
[09:08] jam, hey, yes - officially, just checking how things are going
[09:09] dimitern: maybe I need to create this metadata on the server?
[09:09] 'cos actually, I'm trying to use juju with my laptop on the openstack server via LAN (10.0.0.0/24)
[09:09] Muntaner, yes, it seems bootstrapping finds the correct image to start, but the juju tools metadata is missing
[09:10] aw ok dimitern, I don't quite get the workaround btw :(
[09:10] Muntaner, so try juju metadata generate-tools -d $HOME/juju-tools (create that dir as well beforehand); then juju bootstrap --metadata-source $HOME/juju-tools
[09:11] dimitern: all of this on my laptop, right?
[09:12] dimitern: getting some strange debug messages from this command, want to check this?
[09:13] mike@mike-PC:~$ juju metadata generate-tools -d /home/mike/juju-tools --debug
[09:13] -> http://paste.ubuntu.com/10139486/
[09:13] but, in the folder, it created some jsons
[09:14] com.ubuntu.juju:released:tools.json, index.json, index2.json
[09:15] Muntaner, that's fine, try bootstrapping now with --metadata-source
[09:15] dimitern: I get an earlier error :(
[09:16] http://paste.ubuntu.com/10139519/
[09:17] maybe... I need to upload tools?
[09:18] Muntaner, no, just --metadata-source
[09:19] Muntaner, ha!, ok
[09:20] dimitern: what should I try now?
[09:20] Muntaner, you'll need juju metadata generate-image -d -i
[09:21] ok, gonna do that and give feedback
[09:21] Muntaner, you can run this multiple times - one per image (e.g. trusty amd64, i386, etc.); then run juju metadata validate-images (check that page I sent and the commands' --help)
[09:22] Muntaner, the idea is to have a bunch of image and tools metadata in that metadata dir so that at bootstrap juju can find your images and tools
[09:23] dimitern: all of this should happen on my laptop, right?
[09:28] dimitern: got a new error :)
[09:29] http://paste.ubuntu.com/10139639/
[09:30] Muntaner, did you try validate-images and validate-tools successfully before trying to bootstrap? please do that
[09:31] dimitern: I did validate-images - not validate-tools, will try it now
[09:32] Muntaner, yeah, if both are successful you've a better chance of bootstrapping ok
[09:35] dimitern: same errors
[09:35] Muntaner, with validate-tools?
[09:35] seems like juju can't find /home/mike/juju-tools/tools/releases/juju-1.21.1-trusty-amd64.tgz
[09:36] in fact, I don't have that file
[09:36] Muntaner, hmm ok, then let me ask someone what we're missing here
[09:36] yep - with both validate-tools and validate-images
[09:38] ok dimitern, thanks
=== Tribaal_ is now known as Tribaal
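The sequence dimitern describes, gathered into one rough sketch for readability. The commands are the ones used above; the glance image id is a placeholder, and the extra generate-image options are left to --help as dimitern suggests rather than guessed at here:

    # Build local simplestreams metadata for a private OpenStack, then bootstrap from it.
    mkdir -p $HOME/juju-tools
    # tools metadata
    juju metadata generate-tools -d $HOME/juju-tools
    # image metadata - run once per image/series/arch; <glance-image-id> is a placeholder
    # (see 'juju metadata generate-image --help' for the series/region/endpoint options)
    juju metadata generate-image -d $HOME/juju-tools -i <glance-image-id>
    # sanity checks before bootstrapping
    juju metadata validate-images
    juju metadata validate-tools
    # note: validate-tools also expects the tools tarballs themselves under
    # $HOME/juju-tools/tools/releases/ (e.g. juju-1.21.1-trusty-amd64.tgz) -
    # that was the missing piece at the end of the exchange above
    juju bootstrap --metadata-source $HOME/juju-tools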
[11:59] Hi, guys and gals.
[11:59] How do I check which charm version my services/units were deployed from? Is that info displayed in 'juju status'?
=== perrito667 is now known as perrito666
[12:10] Mmike, it should be, yes
[12:11] jamespage: found this: http://juju-docs.readthedocs.org/en/latest/internals/charm-store.html#charm-revisions
[12:12] but not sure how to link charm revisions with bzr revisions
[12:12] Mmike, you can't
[12:13] Mmike, charmstore revisions are created when the bzr source branch is automatically imported
[12:13] cs:trusty/swift-storage-92 for example
[12:13] but there is no direct correlation with bzr versions
[12:14] it's an intentional decoupling to avoid tying charm authors to a single vcs
[12:16] those docs should get taken down, they are the ancient pyjuju impl docs
[12:16] hazmat: heh... but they reveal some of the internals
[12:16] which is a good thing
[12:17] Mmike: we have internal docs for the current version in the source tree as well
[12:17] covering different topics.. but also useful for lighting up internals.. https://github.com/juju/juju/tree/master/doc
[12:18] the old docs are hit or miss, especially the internals which may no longer apply
=== SaMnCo is now known as SaMnCo-desktop
[12:20] actually almost none of those docs on internals apply anymore
[12:20] from the juju-docs.readthedocs.org site
[12:39] hazmat: how can I get the internal docs, are they on launchpad somewhere?
[12:40] actually what I need is to test the upgrade path for some charm - so I see the bzr commit that was merged into 'trunk' (main?) and I'd like to deploy that particular 'commit' - how do I figure out what charm revision that was?
[12:49] Mmike: there's no correlation between bzr commit and charm revision. You could probably figure that out with some API queries but it's not apparent
[12:49] hazmat: I agree, any idea on who set up that site?
[12:53] marcoceppi: ack, thnx. I'll try to figure it out manually.
[12:54] marcoceppi: not sure, maybe brendan?
[12:54] er.. i mean brandon
[12:55] Mmike: that github link above has the current internal docs
[12:56] hazmat: missed that one, thnx! :D
[12:56] Mmike: in a nutshell, you can query the bzr version of a particular charm store revision, but there's no mapping from bzr rev to charm store rev. effectively the charm store rev is a monotonically increasing integer managed by the store, in response to pushes/puts of a charm.
[12:58] ie. something like https://api.jujucharms.com/v4/mongodb/meta/extra-info
[12:58] i see
[12:59] thnx, lads
[12:59] Mmike: actually this is a bit better https://api.jujucharms.com/v4/mongodb/meta/revision-info
[13:00] rick_h_: are there public docs on the store api?
[13:01] Mmike: fwiw, docs on the store api https://github.com/juju/charmstore/blob/v4/docs/API.md
[13:04] hazmat: rick_h_ is out today, swap days. But yes, those are the store API docs.
[13:04] urulama: danke
[13:05] Mmike: there is a /meta/extra-info endpoint in CS that holds BZR digest information for a given revision. You can always check the correlation between charm revision and BZR revision.
[13:05] Mmike: if you need a hand, let me know which charm you're interested in
[13:32] urulama: it's percona-cluster... current revision is -15, and I'd like to know what charm revision was at branch revno 40
[13:36] Mmike: ok, revision 15 has bzr revision 44, looks like percona-cluster-12 had bzr revision 40, but there is no such revision in the system.
[13:37] Mmike: have to check a bit what happened to all revisions < 13
[13:37] Mmike: you can just cook up a new revision if you want to deploy rev 12 (bzr 40) of the charm
[13:37] branch it, reset to revision 40, deploy charm
[13:37] urulama: that's fine, thnx, don't bother. I'll try both 12 and 13 (that is, upgrade 12->15 and 13->15)
[13:37] marcoceppi: or that, indeed! thnx!
[13:38] I did all those tests before 15 got public, I just want to double-check now that it's merged.
=== fuzzy__ is now known as Ponyo
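A rough sketch of the two approaches mentioned above, for the percona-cluster case. The charm store endpoints are the ones linked in the conversation; the lp: branch path and the local repository layout are assumptions from memory, so treat them as illustrative rather than exact:

    # Ask the v4 charm store API about revisions (endpoints as linked above);
    # extra-info carries the bzr digest for a given charm store revision.
    curl -s https://api.jujucharms.com/v4/percona-cluster/meta/revision-info
    curl -s https://api.jujucharms.com/v4/trusty/percona-cluster-15/meta/extra-info
    # Or follow marcoceppi's suggestion: branch at a known bzr revno and deploy it
    # as a local charm (branch path assumed; adjust to the charm's actual branch).
    mkdir -p ~/charms/trusty
    bzr branch -r 40 lp:charms/trusty/percona-cluster ~/charms/trusty/percona-cluster
    juju deploy --repository=~/charms local:trusty/percona-cluster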
[14:22] hatch: hatched, is you?
[14:29] ayr-ton: it is
[14:29] assuming you're talking about github :)
[14:30] hatch: Ahahaha. So sorry about forgetting your issue. Working on it right now /o\
[14:33] ayr-ton: haha no problem at all :) I was just going through the issues and adding some features the other day and I thought I'd add a note to that bug
[14:33] s/bug/enhancement
[14:33] hatch: Do you want me to use JUJU_DEV_FEATURE_FLAG=actions in the charm for early adoption?
[14:34] ayr-ton: yeah I'd be ok with using new features for this functionality - using a config option feels pretty hacky to do this
[14:37] hatch: Okay. Just a sec.
[14:41] hi :)
=== negronjl_ is now known as negronjl_afk
=== scuttle|afk is now known as scuttlemonkey
[15:09] marcoceppi: https://github.com/ayr-ton/zabbix-charm | https://code.launchpad.net/~ayrton/charms/trusty/zabbix/trunk
[15:09] it is working
[15:10] just some bugs when departing a db relation, but I'm already working on fixing them.
=== medberry is now known as med_
[16:24] could anyone help me with the ubuntu openstack installer?
[16:24] no matter what install type i choose (Autopilot, Multi, Single) I can't get past "Bootstrapping Juju"
[16:25] MAAS starts the installer, etc, etc, seems to create the network bridge for Juju, all seems to be okay, but then it just sits there at the login prompt
[16:25] the openstack installer just continues to say "Bootstrapping Juju"
[16:25] no idea what I'm doing wrong
[16:27] I've been following this guide here http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
[16:39] gadago: what architecture/hardware specs are the machines in maas, and how many machines do you have? do any machines ever appear allocated in the maas UI?
[16:40] mbruzek: you can file bugs against jujucharms.com here => https://github.com/CanonicalLtd/jujucharms.com
[16:41] marcoceppi, I have two virtual machines (as nodes in maas), plus a third physical node
[16:41] the virtual machines currently only have one NIC each (I can add more of course), the physical node has six
[16:42] CPU/memory?
[16:42] the virtuals have 2 cpus, 4gb ram
[16:42] the physical is 24 cpus, 48gb ram
[16:43] that should be enough, are they x86 arch?
[16:44] I know at one point the minimum server requirement was like 8 machines, but that was a while ago. I'm not versed in all the new offerings
[16:44] x86 (amd64)
[16:44] you're unable to bootstrap a machine, which means there is a juju -> maas issue
[16:44] So, this typically boils down to one of two things
[16:45] 1) credentials issue
[16:45] Either you have the wrong credentials, or the wrong server hostname for maas (or you can't actually talk to maas from where you're running the installers)
[16:46] that last part is technically not a credentials issue, but we're just going to lump it in there
[16:46] marcoceppi, could be the second, I feel
[16:46] my maas server is on 192.168.255.10 with my deployment network being 10.0.14.0/24
[16:47] default gateway is 10.0.14.10 (other IP on maas server)
[16:47] 2) You don't have a machine that juju expects to find. By default juju uses a set of constraints to request a machine: 1 CPU, 700MB RAM, x86_64. Juju asks MAAS for a machine that meets this minimum requirement and MAAS says "OKIE DAY! here you go" or 409 CONFLICT
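For the credentials side of that checklist: with juju 1.x the MAAS endpoint and API key live in ~/.juju/environments.yaml. A minimal sketch of what that stanza looks like, written to an example file so nothing real gets clobbered; the server address matches the setup described above, the oauth value is the MAAS API key from the MAAS web UI user preferences page, and default-series is an assumption:

    # Illustrative juju 1.x environments.yaml stanza for a MAAS environment.
    cat > ~/.juju/environments.yaml.example <<'EOF'
    default: maas
    environments:
      maas:
        type: maas
        maas-server: 'http://192.168.255.10/MAAS/'
        maas-oauth: '<MAAS-API-key>'
        default-series: trusty
    EOF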
[16:47] gadago: so, from where you're running these installer commands, can you ping the MAAS server?
[16:48] I'm running the openstack-install command on the maas server
[16:48] I'm using this guide http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
[16:48] gadago: well, that's not going to be much of an issue then
[16:48] gadago: in the maas web ui, can you see a machine allocated after running the installer?
[16:49] marcoceppi, under nodes, it shows as "Allocated to root"
[16:49] gadago: do all of them say that?
[16:50] no, just the one at the moment
[16:51] gadago: interesting
[16:51] gadago: that machine that's allocated to root
[16:51] can you ping it? does it have an address?
[16:51] that one has PXE booted, rebooted, then ran the cloud-init script and set up a juju-br0 device on the 10.0.14.0/23 network
[16:51] I can ssh to it
[16:52] but that's as far as it goes
[16:52] yeah, so it bootstrapped
[16:52] gadago: do you have a jujud process running?
[16:53] no jujud, but there is a curl process of: curl -sSfw tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s --retry 10 -o /var/lib/juju/tools/1.21.1-trusty-amd64/tools.tar.gz https://streams.canonical.com/juju/tools/releases/juju-1.21.1-trusty-amd64.tgz
[16:54] gadago: ah, does this maas install have outside access?
[16:54] I wonder if the http proxy isn't set up correctly
[16:54] which would cause it to fail to download the juju binary, which would result in a hung bootstrap
[16:55] marcoceppi, the one odd thing I have had with this test maas setup is that I have to set the gateway on the network in the maas config to the maas server itself; if I actually set the gateway to the network gateway, dns fails and the server won't pxe properly
[16:55] so in short, if I try to ping google.com, no reply
[16:55] gadago: yeah, maas wants to run both DNS and networking
[16:55] well, DNS and DHCP/networking
=== kadams54 is now known as kadams54-away
[16:57] so what you could/should do: create an internal network (sounds like you did, 10.0.14.0/23), then set up iptables to forward that br device to the main eth device on the maas server
[16:57] my maas server is of course not a gateway
[16:57] then set up dns forwarding from the maas server to an external dns server
[16:57] that's the only way I've run MAAS, I'm not sure how to do it so it just runs DNS
[16:57] I already have dns forwarding sorted (google.com resolves on the node)
[16:57] there's a drop-down in the maas admin panel to switch it to DNS only
[16:58] but you'd want to ask in #maas about dns-only (non-dhcp) maas setups
[16:58] once that's all sorted, and you can wget/curl from within a node, bootstrap should work
[16:59] thanks marcoceppi, at least that's a bit of progress :)
[16:59] #maas has for some reason become an invite-only channel
[17:00] so no help there
[17:00] gadago: yeah, MAAS is pure awesome, but setting it up on less than 5 nodes in a network it doesn't own can take a bit of jumping, since it's designed to manage multiple datacenters and racks of servers :)
[17:00] gadago: that's, what... let me find someone to fix that
[17:03] marcoceppi, any ideas why maas does not work when not using it as the gateway? I know it's a dns issue, but I find that quite bizarre
[17:04] gadago: I don't have enough experience. When I do MAAS testing my setup has an Intel NUC on its own gigabit switch and the maas master works as the gateway
[17:04] I'm not home at the moment so I can't try it out with a DNS-only setup
[17:05] marcoceppi, np
=== dooferlad is now known as jamestunnicliffe
=== jamestunnicliffe is now known as dooferlad
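A rough sketch of the forwarding marcoceppi describes, run on the MAAS server: NAT the internal deployment network out through the external interface, and forward DNS upstream. The interface names (eth0 external, eth1 internal), the /24 mask and the upstream resolver are assumptions for illustration - adjust to the actual bridge/NIC names and addressing:

    # Enable routing and NAT the internal MAAS network out via the external NIC.
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -s 10.0.14.0/24 -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -s 10.0.14.0/24 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
    # DNS forwarding: point the MAAS-managed bind at an upstream resolver, e.g. in
    # /etc/bind/named.conf.options:  forwarders { 8.8.8.8; };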
[18:17] marcoceppi, finally got the nodes routing through the maas server (with iptables forwarding), but still no luck :(
[18:18] gadago: can you curl a payload when ssh'd onto the node?
[18:19] not tried curl, but internet access on the node is now fine, and I can ping the maas controller on the other subnet just fine
[18:20] even getting a /var/lib/juju/tools/1.21.1-trusty-amd64/jujud process now
[18:20] mongod seems to be doing something too
[18:20] should this be a long process?
[18:23] "Install OpenStack in minutes" lol
[18:24] the openstack installer on the maas server has hung as well
[18:25] maybe not enough grunt?
[18:27] gadago: well, is maas set to use the fast installer?
[18:27] if not, it's going to take like 10 mins to get the node up alone
[18:27] then about 2-3 mins for juju to bootstrap itself
[18:27] but if you've got jujud you're on your way
[18:27] do you have anything in /var/log/juju ?
[18:27] using the fast installer
[18:27] maybe I'm just being impatient then
[18:28] the elapsed time on the openstack installer stops counting up, that's all, and the little blue/purple graph stops moving
[18:31] I'll check the log in a moment
[18:35] gadago: well, the clock stopping is never a good sign
[18:36] :)
[18:36] WARNING juju.cmd.jujud machine.go:602 determining kvm support: INFO: Your CPU does not support KVM extensions
[18:36] this one is a VM
[18:38] maybe I should try the Multi option for the installer
=== kadams54-away is now known as kadams54
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== kadams54 is now known as kadams54-away
[20:49] I've a concern regarding the ceph charm. It seems that when ceph-cluster-network is specified in the ceph charm's config, and is different from the specified ceph-public-network, the charm will fail deployment because a second network interface is not brought up on the ceph-cluster-network.
[20:50] hi guys... one question... how can i destroy my local lxc environment, without using 'juju destroy-environment local' (it is not responding, as i ran out of disk space :S)
[20:50] Does anyone know a way around this?
[20:51] hi all
[20:51] i am suffering here with a maas environment, trying to get juju to bootstrap
[20:52] i keep getting this curl failure
[20:52] Attempt 1 to download tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz...
[20:52] curl: (7) Failed to connect to streams.canonical.com port 443: No route to host
[20:52] tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz downloaded: HTTP 000; time 2.711s; size 0 bytes; speed 0.000 bytes/s Download failed..... wait 15s
[20:52] though i can ping streams.canonical.com from the maas box
[20:53] any thoughts
[20:54] hi all, i am suffering here with a maas environment, trying to get juju to bootstrap. i keep getting this curl failure: Attempt 1 to download tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz... curl: (7) Failed to connect to streams.canonical.com port 443: No route to host; tools from https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz downloaded: HTTP 000; time 2.711s; size 0 bytes; speed 0.000 bytes/s Download failed..... wait 15s. though i can ping streams.canonical.com from the maas box. any thoughts
=== kadams54-away is now known as kadams54
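For the "No route to host" failure above: ping succeeding from the MAAS box doesn't show that port 443 is reachable from the node doing the bootstrap, which is where the download actually runs. A quick check from that node, using the same URL the bootstrap script uses; the proxy note at the end only applies if the network requires one:

    # Run on the bootstrap node itself (ssh in via the address MAAS allocated).
    curl -v -o /dev/null https://streams.canonical.com/juju/tools/releases/juju-1.21.1-precise-amd64.tgz
    # If it fails, check routing and DNS on the node:
    ip route
    cat /etc/resolv.conf
    # If the network requires a proxy, set http-proxy / https-proxy in the
    # environment's configuration (environments.yaml) - a mis-set proxy was
    # marcoceppi's first suspicion for this kind of hung tools download above.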
[21:03] nicopace: have you tried juju destroy-environment local -y --force ?
[21:04] lazyPower: let's see what happens with that command...
[21:04] nicopace: if that doesn't work, the next step would be to lxc-destroy a container or two - and then triage the environment accordingly (juju won't be aware that you've pulled the rug out on those containers)
[21:05] lazyPower: nothing happens
[21:06] nicopace: sudo lxc-ls --fancy, find a running machine or two that you can destroy
[21:06] so... you say i have to destroy some of the lxc containers by hand?
[21:06] the destroy-environment --force should have nuked it from orbit
[21:06] but if the containers are still running and you have no disk space left - let's try removing one and seeing if we can get the env to respond
[21:06] there is no output
[21:06] there won't be
[21:06] they are all stopped
[21:06] and juju status should say the environment is no longer bootstrapped
[21:06] sudo lxc-ls --fancy
[21:06] NAME                       STATE    IPV4  IPV6  AUTOSTART
[21:06] ---------------------------------------------------------
[21:06] juju-precise-lxc-template  STOPPED  -     -     NO
[21:06] juju-trusty-lxc-template   STOPPED  -     -     NO
[21:06] ubuntu-local-machine-48    STOPPED  -     -     YES
[21:07] interesting... you have a local machine with id 48 that is hanging around.
[21:07] juju status
[21:07] ERROR Unable to connect to environment "local".
[21:07] Please check your credentials or use 'juju bootstrap' to create a new environment.
[21:07] Error details:
[21:07] cannot connect to API servers without admin-secret
[21:07] * lazyPower blinks
[21:07] well, that's a new error message on me re: admin-secret
[21:07] do you have a local.jenv file in ~/.juju/environments/ ?
[21:08] i just want to nuke everything
[21:08] no
[21:08] but i had one some minutes ago
[21:08] !
[21:08] ok, so it's supposedly gone then; without a jenv, there is no environment as far as juju is concerned
[21:08] but juju doesn't respond
[21:09] ok... i can bootstrap a new env
[21:09] sudo initctl list | grep juju
[21:09] do you see any juju services running?
[21:09] nothing
[21:09] no
[21:09] ok, good - your state server and api are gone
[21:09] (i've destroyed the recently created env first)
[21:09] you have one hanging machine that i suggest you remove, unless that machine is important to you
[21:10] no
[21:10] sudo lxc-destroy -n ubuntu-local-machine-48
[21:10] so how do i destroy it?
[21:10] after that you should be g2g to run `juju bootstrap`
[21:10] ok
[21:10] and what about the other two?
[21:11] those are just templates lazyPower
[21:11] ?
[21:11] nicopace: you can leave the templates around, unless you want to fetch the 200mb cloud image and wait for them to be recreated again
[21:11] no
[21:11] ok
[21:11] those are created when you request a deployment for the series on the local provider.
[21:11] so... now i should be ok to start again
[21:11] thanks lazyPower
[21:11] !
[21:11] correct
[21:12] np nicopace, lmk if you have any further issues
[21:12] (y)
[21:12] keep up the good work and reports on the list :)
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[22:05] wwitzel3, how goes bug 1417875? I think I will postpone it to 1.21.3 as I don't think a fix will be available in a few hours
[22:05] Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority
[22:12] sinzui: sadly I don't think I'll have a fix in the next few hours, no
[22:12] okay, thank you wwitzel3
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== mwenning is now known as mwenning-rr5
=== kadams54 is now known as kadams54-away
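For reference, the recovery sequence lazyPower walked nicopace through, collected as one rough script. The container name is specific to that session and will differ elsewhere, and the commands assume the juju 1.x local (LXC) provider on an upstart-based Ubuntu:

    # Force-destroy a wedged local environment, then clean up what's left by hand.
    juju destroy-environment local -y --force
    # See which containers the local provider left behind
    sudo lxc-ls --fancy
    # Remove leftover machine containers (name taken from the session above);
    # the juju-*-lxc-template containers are just cached templates and can stay.
    sudo lxc-destroy -n ubuntu-local-machine-48
    # Confirm no juju agents or state server are still registered with upstart
    sudo initctl list | grep juju
    # With no ~/.juju/environments/local.jenv and no containers left, start fresh
    juju bootstrap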