=== mwhudson is now known as zz_mwhudson
=== zz_mwhudson is now known as mwhudson
=== mwhudson is now known as zz_mwhudson
=== CyberJacob|Away is now known as CyberJacob
=== zz_frobware is now known as frobware
=== CyberJacob is now known as CyberJacob|Away
=== psivaa-afk is now known as psivaa
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== gary_poster|away is now known as gary_poster
[14:07] if a charm has a database relation, how should it handle being in relation to multiple databases at once? should it reject the second join? should it ignore it?
[14:19] cargill, well as a client to a database, each of the db conns would have different relation names
[14:20] cargill, ie.. mediawiki does this.. it can have multiple mysql relations.. one for read slave and one for db.. it distinguishes the usage based on the relation name (which also means different rel hooks)
[14:20] cargill, client/require deps are only satisfied once per service.
[14:21] hazmat: but if you have a database relation, the user can still join it with multiple charms, right?
[14:21] (database charms)
[14:22] cargill, right.. the server/provider side can have many instances of the relation
[14:23] cargill, in terms of distinguishing between those, you can use relation-ids to list the different instances of that named relation on the server
[14:24] cargill, it's not clear what your question/use case is.. could you elaborate?
[14:24] you say the provider can have multiple instances of a relation, but the other side cannot?
[14:26] designing a db-relation-joined/departed, I wonder if I have to handle a user setting up a relation to multiple database charms (where the application can only connect to a single database)
[14:27] cargill, well... technically it can, it's just not common (and certain tools like the gui don't support it)
[14:27] cargill, you mean like they can connect to postgres or mysql?
[14:29] cargill, maybe this example clarifies http://pastebin.ubuntu.com/6949028/
[14:31] actually that simplifies it too much.. here's a better example http://pastebin.ubuntu.com/6949032/
[14:33] so again, the question is, if someone tries to do that (add a second db relation, where one is already active), what's the right response?
[14:34] (from the *joined/departed hooks)
[14:56] cargill, i'd error so it draws attention from the admin
[14:56] thanks
[14:56] cargill, and log an appropriate error msg
[14:56] sure :)
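(A minimal sketch of the guard hazmat suggests, written as a hypothetical Python db-relation-joined hook. The relation name "db" and the error message are placeholders; relation-ids and juju-log are the hook tools juju puts in a hook's environment.)

    #!/usr/bin/env python
    # Hypothetical db-relation-joined hook: refuse a second database
    # relation, per the advice above (error so the admin notices it,
    # and log why).
    import subprocess
    import sys

    def hook_tool(*args):
        # Hook tools such as relation-ids and juju-log are on PATH
        # inside a juju hook's environment.
        return subprocess.check_output(args).decode()

    # relation-ids prints one id per line, one per instance of the
    # named relation attached to this service.
    rel_ids = hook_tool('relation-ids', 'db').split()

    if len(rel_ids) > 1:
        hook_tool('juju-log',
                  'ERROR: multiple db relations (%s); this charm can '
                  'only use one database' % ', '.join(rel_ids))
        sys.exit(1)  # a non-zero exit puts the unit in an error state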
=== freeflying is now known as freeflying_away
=== frobware is now known as zz_frobware
[15:43] hi, is it bad if it says "instance-state: missing" after deploying a charm?
[15:43] agent-state is "started" :-)
[16:17] tomixxx3: is this on local provider?
[16:17] hi marcoceppi
[16:17] what do u mean by "local provider"?
[16:18] tomixxx3: the instance-state: missing, what provider are you using? Local, amazon, hp cloud, etc
[16:18] openstack
[16:18] tomixxx3: interesting, does it still say missing?
[16:18] yep, btw nodes have internet access now :-)
[16:19] i had to set "router ip" in the maas dashboard to the same ip as the MaaS server
[16:19] figured this out with jtv in #maas
[16:19] tomixxx3: ah, good to know
[16:20] marcoceppi: right now, i have deployed a bunch of charms and i'm waiting until they all have "started"
[16:20] tomixxx3: well that means it simply can't figure out if the instance is running or not. missing could mean the instance is gone or it can't get a status
[16:21] marcoceppi: oh no, sounds not good
[16:21] but let's see
[16:21] tomixxx3: could you show me your juju status?
[16:21] tomixxx3: also, in the horizon dashboard do you see instances launched?
[16:21] i mean, i have deployed multiple charms on a single node, because i don't have that many nodes
[16:22] with lxc-create if u remember
[16:22] so, are you using openstack or maas?
[16:23] both ? ^^
[16:23] https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[16:23] tomixxx3: can you pastebin your juju status please
[16:24] one sec
[16:24] http://pastebin.ubuntu.com/6949593
[16:25] as u can see, cloud2.master is still booting
[16:25] (ok i can see the node is booting ^^)
[16:25] tomixxx3: Okay, so this is on the maas environment
[16:25] however, nova-volume failed
[16:25] instance-state missing is probably a known issue with lxc containers, the agent-state is started and that's all that matters
[16:25] yep
[16:26] cloud2.master probably needs to be power cycled depending on how long ago you commissioned it
[16:26] nova-volume is in error, so try running juju resolved --retry nova-volume/0 and see if that helps
[16:26] cloud2.master is installing ubuntu right now
[16:26] i have it in front of me
[16:26] tomixxx3: gotchya
[16:27] tomixxx3: also, could you pastebin the log from nova-volume/0
[16:27] kk
[16:27] it'll be in /var/log/juju/unit-nova-volume-0.log
[16:28] on nova-volume/0
[16:28] i have to login on nova-volume/0 for this i guess?
[16:29] tomixxx3: if you recall, co-locating most all services to LXC /might/ work but isn't recommended. You might need to do some re-jiggering to get it to work
[16:29] tomixxx3: yes, run juju ssh nova-volume/0
[16:29] "re-jiggering" ?
[16:30] tomixxx3: you might have to massage the node a little bit to get it set up
[16:30] at home, i have two physical nodes lying around, maybe i attach them to the cloud
[16:30] tomixxx3: it might not be needed
[16:30] kk
[16:30] it depends on why nova-volume errored out
[16:33] do u know how i can Ctrl+A (select all) the content of a file opened with vi
[16:33] ?
[16:33] tomixxx3: you can install pastebinit
[16:33] then run cat /var/log/juju/unit-nova-volume-0.log | pastebinit
[16:33] and it'll give you a pastebin url
[16:34] ah nice ^^
[16:36] here it is: http://pastebin.ubuntu.com/6949659
[16:37] tomixxx3: okay, so this is the error
[16:37] nova-volume ERROR: /dev/xvdb is not a valid block device
[16:37] nova-volume needs a block device to take over
[16:37] like ceph
[16:38] I don't know if you actually need nova-volume
[16:39] jamespage: do you actually need cinder or nova-volume to deploy openstack?
[16:39] marcoceppi, you can elect to not have block storage and drop it
[16:39] jamespage: cool, thanks
[16:39] also nova-volume is < folsom btw
[16:39] btw, all other charms are started now :-)
[16:39] jamespage: right, cinder is recommended for folsom right?
[16:40] and should not be carried through to 14.04
[16:40] marcoceppi, that's correct yes
[16:40] jamespage: cool, thanks!
[16:40] tomixxx3: what you can do, for the sake of getting your openstack demo running, is remove nova-volume and continue on with the deployment
[16:41] nova-volume needs its own machine, i guess? (i have read sth like this a few weeks ago, if i remember correctly)
[16:41] in tests, when I've changed a config value, how do I find out when the change has been carried out so that I can test the result?
[16:41] tomixxx3: yeah, though in future deployments you'll want to use cinder instead
[16:41] cargill: are you using amulet?
[16:41] not yet
[16:41] cargill: then there really isn't a way at the moment
[16:42] but can do if it makes things like that possible
[16:42] cargill: well, it's not perfect, but it strives to resolve that problem by monitoring the hook queue for all the services to know when the environment is idle
[16:43] cargill: otherwise you'll just have to put a sleep or something in your test for X seconds you think it takes on average for the config-change to occur
[16:43] marcoceppi: More abstractly, later on, i want to upload sth to my cloud, process sth on my cloud and download sth from my cloud. so, is nova-volume not a kind of cloud-storage which i need?
[16:44] (for now, i will remove nova-volume)
[16:44] marcoceppi: well, a config change can be a change in the deployed version, that means a redownload, there's no telling really, then
[16:44] tomixxx3: you'll probably use an object store, nova-volume is for attaching drives and blocks to your servers
[16:44] kk
[16:44] whereas the object store can be used to upload stuff, have your servers process stuff, then place the results there
[16:44] swift is the object store used in OpenStack
[16:44] cargill: exactly
[16:45] cargill: that's why I started amulet, to be able to intercept relation values and validate those values and to know when an environment was idle
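(A minimal sketch of the amulet pattern marcoceppi describes for cargill's config-change question, assuming amulet's Deployment/sentry API as covered in the docs linked just below; the charms, config key, and assertion are placeholders.)

    #!/usr/bin/env python
    # Sketch of an amulet test: deploy, change a config value, then wait
    # for the hook queue to drain before checking the result.
    import amulet

    d = amulet.Deployment(series='precise')
    d.add('mysql')           # placeholder charms
    d.add('mediawiki')
    d.relate('mysql:db', 'mediawiki:db')
    d.setup(timeout=900)     # deploy and wait for the units to come up

    d.configure('mysql', {'tuning-level': 'safest'})  # the config change
    d.sentry.wait()          # block until the environment is idle again

    # Only now is it safe to look inside the unit and validate the change.
    output, code = d.sentry.unit['mysql/0'].run('cat /etc/mysql/my.cnf')
    assert code == 0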
[16:48] where's the docs for amulet? can't find it in the juju docs
[16:48] hmm i have executed "juju destroy-service nova-volume" but it does not disappear when i call "juju status"
[16:48] cargill: https://juju.ubuntu.com/docs/tools-amulet.html
[16:49] tomixxx3: because it's in an error state
[16:49] tomixxx3: just keep running juju resolved nova-volume/0
[16:49] kk
[16:51] if i do "juju add-relation nova-compute rabbitmq-server" i get an ambiguous relation error
[16:52] tomixxx3: what's the ambiguous relation output?
[16:52] http://pastebin.ubunut.com/6949742
[16:52] sorry
[16:52] no worries
[16:52] http://pastebin.ubuntu.com/6949742
[16:53] tomixxx3: nova-compute:amqp rabbitmq-server:amqp
[16:53] tomixxx3: use `juju add-relation nova-compute:amqp rabbitmq-server:amqp`
[16:53] kk
[16:58] ok, all relations added
[16:59] (except those with nova-volume)
[16:59] now, i should point to http://node-address/horizon
[16:59] i got an "Internal Server Error" when calling 10.0.0.109/horizon
[17:00] tomixxx3: you may have to wait for a few mins
[17:00] kk
[17:00] this lightweight-container thing is quite interesting, they have their own ips ^^
[17:02] marcoceppi: amulet is awesome, it actually allows one to look into the service unit and see whether things are ok or not
[17:02] latest juju state: http://pastebin.ubuntu.com/6949780
[17:03] cargill: glad you think so, there are still a few bugs being worked out with how subordinates function, but it's coming along quite nicely
[17:03] where anything else would be a lot of boilerplate around ssh duplicated between charms
[17:03] tomixxx3: is the dashboard working now?
[17:04] marcoceppi: no, not yet. do i have to expose some charms?
[17:04] according to the guide: 5. Expose the services you want (optional)
[17:04] but i have maas, right?
[17:05] guide: https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Install_Juju
=== zz_frobware is now known as frobware
[17:06] tomixxx3: maas has no firewaller, so it doesn't matter
[17:06] ok, btw, http://10.0.0.109 works
[17:07] and it says it has no content yet
[17:07] tomixxx3: what version of openstack did you deploy? folsom? grizzly?
[17:08] dunno
[17:09] tomixxx3: what does juju get openstack-dashboard show for openstack-origin?
[17:09] default: true
[17:10] tomixxx3: what does value show?
[17:10] distro
[17:11] okay, so you have folsom, which means you ran into the django bug
[17:16] ok, is this a bad bug?
[17:17] tomixxx3: well it prevents the dashboard from working
[17:17] which is kind of annoying
[17:19] k, is there a way to fix this or can i deploy another openstack version?
[17:19] i want a dashboard, i have already seen the dashboard on the usb-all-in-one-node-cloud-demo and it looked nice :-)
[17:20] gives me the feeling everything works as it should
[17:23] Hi! is anyone here available to help me with a problem?
[17:23] I'm using juju 1.16.6
[17:24] and I'm getting the old "index file contains no data for cloud" error.
[17:24] I have generated imagemetadata.json and index.json
[17:24] and uploaded them, using swift, to my cloud public bucket
[17:25] which is named juju-/streams/v1/
[17:25] then the two json files are there
[17:25] yet I still get an error when running juju bootstrap
[17:25] any ideas?
[17:26] horizon is folsom, right?
[17:27] is this a possible fix to the dashboard error: https://lists.launchpad.net/openstack/msg17255.html
[17:29] marcoceppi: ok, i have to go now! however, today we made good progress :-) ty for all your help so far!
[17:29] tomixxx3: np, I'll look for a patch for your django issue
[17:29] marcoceppi: kk, ty!
[17:29] xp1990: can you run juju bootstrap --show-log --debug and pastebin the output?
[17:47] marcoceppi, the dashboard is hosed with juju deployments prior to havana
[17:47] marcoceppi, cloud-tools contains a new version of django
[17:47] jamespage: yeah, I remember, this is just because the cloud archive has a more recent version of django, right?
[17:48] it should be fixed soon - I think it's committed in juju-core
[17:48] marcoceppi, yeah - you got it
[17:48] there should be a way to lower the priority, remove, and reinstall django though, right?
[17:51] marcoceppi: I just juju ssh'd into the node and removed django 1.5. 1.3 mostly works, though it also bombs on a few pages :/
[17:52] roadmr: bummer, I guess it's best to just use havana if possible
[17:53] marcoceppi: that'd be ideal! I'm lazy and I just juju deployed openstack-dashboard. Is there a way to point juju to charms that use havana?
[17:53] roadmr: yeah, so you'll have to change the openstack-origin to havana for each charm, but that should trigger an upgrade
[17:54] marcoceppi: oh cool! so it will just upgrade my existing charms/services? (if it destroys stuff that's OK, I don't have anything important there yet)
[17:55] roadmr: well, something like openstack-origin: cloud:precise-havana/updates
[17:55] roadmr: but yeah, it'll just upgrade the services and it shouldn't break anything or lose anything in the process
[17:56] marcoceppi: awesome! I'll give it a try, thanks!
[18:04] roadmr, it will upgrade yes - but openstack upstream only officially supports serial release upgrades
[18:04] so you need to step
[18:04] cloud:precise-grizzly
[18:04] cloud:precise-havana
[18:04] some things might double jump
[18:05] it's an area the server team is doing some work on for icehouse
[18:05] jamespage: oh ok, I'll keep that in mind
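(A sketch of the stepped upgrade jamespage describes, assuming juju 1.x `juju set` syntax; the service list is a placeholder, and in practice each step should be allowed to settle before the next.)

    #!/usr/bin/env python
    # Hypothetical helper: bump openstack-origin one release at a time
    # (folsom -> grizzly -> havana), since openstack upstream only
    # officially supports serial release upgrades.
    import subprocess

    SERVICES = ['keystone', 'glance', 'nova-cloud-controller',
                'nova-compute', 'openstack-dashboard']  # placeholder list
    STEPS = ['cloud:precise-grizzly', 'cloud:precise-havana']

    for origin in STEPS:
        for service in SERVICES:
            # juju 1.x syntax: juju set <service> <key>=<value>
            subprocess.check_call(
                ['juju', 'set', service, 'openstack-origin=%s' % origin])
        # Wait for this step's upgrade hooks to finish (e.g. watch
        # `juju status`) before moving on to the next release.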
=== BradCrittenden is now known as bac
=== CyberJacob|Away is now known as CyberJacob
[22:09] marcoceppi, et al: Is there a way with the local provider to pass in an lxc bind mount (or a way to edit the bind mounts and restart the container?)
[22:09] * med_ needs to attach a larger drive in an lxc/local-provider
[22:09] and/or is sherpa (ssh provider) now available?
[22:15] med_: not with lxc/local
[22:15] med_: but manual provider (previously ssh/sherpa/null) is now available
[22:15] recommended you use 1.17.2 release for manual provider as it's still relatively new
[22:15] marcoceppi, thanks
[22:15] nodz.
[22:16] marcoceppi, https://juju.ubuntu.com/docs/config-manual.html the right place to start with manual/sherpa?
[22:16] med_: yeah, except it's not called null anymore
[22:16] looks good to me.
[22:17] thanks marcoceppi, giving it a whirl.
[22:17] * marcoceppi files a bug to fix docs
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== zz_mwhudson is now known as mwhudson
=== freeflying_away is now known as freeflying
[23:11] * JoshStrobl asks marcoceppi for a link to the bug so he can track it.
[23:17] JoshStrobl: which one?
[23:18] marcoceppi: all of them! :P Well, any that are specific to fixes / improvements to Juju documentation, particularly anything regarding improving documentation for local environments, promoting the use of Vagrant, etc.
[23:18] If there aren't any bugs regarding promoting the use of the Vagrant container, I'd be more than willing to file the bug if you just point me in the right place.
[23:21] JoshStrobl: there's none about that in particular, you can file bugs here: https://bugs.launchpad.net/juju-core/+filebug make sure to target the "docs" branch of juju-core
[23:22] JoshStrobl: we're also in the process of migrating the docs to gh, so eventually I think we'll track issues there as well
=== mwhudson is now known as zz_mwhudson
[23:22] noted!
[23:27] Hey marcoceppi, by branch do you mean apply the "docs" tag in the tag section of the file-bug form in juju-core?
[23:28] JoshStrobl: no, there's a way to target a specific series
[23:28] the docs are a series of juju-core
[23:31] I see it listed on the right side of https://bugs.launchpad.net/juju-core/docs/+bugs as "Series-targeted bugs", but when you click "docs" and then go to file a bug, it still shows the same form with no input area for providing the series. Is there a way to do that post filing the bug?
[23:32] * JoshStrobl thinks marcoceppi is probably face-palming right now
[23:34] JoshStrobl: you have to first submit the bug before changing it
[23:34] it's just a limitation of lp bugs
[23:35] Well, hopefully that'll get resolved in the future. Or maybe I should file a bug (if there isn't one already) for that too :P
[23:36] launchpad is feeling a touch unloved, 92 critical bugs, 655 high-importance bugs, https://bugs.launchpad.net/launchpad/
[23:57] marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1281345
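(Circling back to the manual (sherpa) provider med_ was pointed at around 22:15: a minimal environments.yaml stanza, assuming the keys shown in config-manual.html; the host and user are placeholders.)

    environments:
      manual:
        type: manual
        bootstrap-host: somehost.example.com   # machine juju will ssh into
        bootstrap-user: ubuntu                 # optional ssh user on that host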