=== CyberJacob is now known as CyberJacob|Away
[00:34] waigani, around ?
[00:34] smoser: yep
[00:34] can you answer my query in canonical irc ?
[00:35] waigani, ^
[00:36] smoser: you mean the #canonical channel? I don't see your query sorry?
[00:36] canonical irc. (not freenode).
=== FourDollars_ is now known as FourDollars
=== alexlist` is now known as alexlist
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== rogpeppe2 is now known as rogpeppe
[09:06] Is there a way to tell which series to use when deploying to the local provider with "--to=kvm:0" ?
[09:06] I mean, my kvm instance is running trusty (like my host) but I'm deploying using precise charms
=== natefinch-afk is now known as natefinch
=== vladk|offline is now known as vladk
[09:42] caribou: "juju deploy precise/charm --to kvm:0"
[09:42] caribou: generally you define the series by the charm you are deploying
[09:43] jam1: yeah, but the kvm instance still gets built on trusty
[09:44] jam1: in the nova-compute charm, it does a lsb_release to get the distro & sees trusty whereas the charm is for precise, so it fails
[09:57] caribou: I've honestly only heard of this bug: https://bugs.launchpad.net/juju-core/+bug/1302820
[09:57] <_mup_> Bug #1302820: juju deploy --to lxc:0 cs:trusty/ubuntu creates precise container
[09:57] which should be the opposite of what you are seeing
[09:58] jam: indeed; well I hacked around it by providing trusty/nova-compute and it works
[09:58] same bug
[09:59] fix isn't in a release
[09:59] thumper: ah, that is that you're creating LXC with the same lsb-release as the host
[09:59] he is on trusty and wants precise
[09:59] original bug was on precise and wanted trusty
[09:59] gotcha
[10:00] thumper: should we be backporting the fix to 1.18.1 ?
[10:00] jam: probably
[10:00] thumper: I can live with my workaround, I was just curious since that --to kvm:0 is rather new stuff (to me at least)
[10:00] * thumper nods
[10:04] right now, I'm stuck on something else: I keep getting "Host '192.168.122.154' is not allowed to connect to this MySQL server" when I try to add a relation & both services are on the same machine
[10:04] (mysql & keystone for now, but I get this with other services)
[10:08] if mysql & keystone are on separate machines it works
[10:17] caribou: you are using the local provider?
[10:17] thumper: yes, but my colleague sees the same thing on a maas deployment
[10:17] thumper: he's just opened https://bugs.launchpad.net/charms/+source/mysql/+bug/1305582
[10:17] caribou: do you have the network-bridge in the local config set to 'virbr0' ?
[10:17] <_mup_> Bug #1305582: relation with mysql fail when mysql and glance are deployed on the same node
[10:17] thumper: yes
[10:18] thumper: but both my machines are LXC containers, I only use kvm for the nova-compute charm
[10:18] ok
[12:11] Hi guys. I'm trying to use juju with my own openstack cloud. Bootstrap finished correctly, but when I'm trying to deploy the mysql charm I'm getting this: "ERROR error uploading charm: cannot upload charm to provider storage: cannot make Swift control container: failed to create container: juju-123456789" But swift is accessible and bootstrap was able to create this container already and populated it with some files. What am I doing wrong? Thanks
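
A sketch of the series-pinning advice from the morning discussion above: name the series in the charm URL so the container is built for that release rather than the host's. The charm URL and the config key shown are illustrative for this juju 1.x era.

    # deploy a precise charm into a KVM container on machine 0
    juju deploy cs:precise/nova-compute --to kvm:0

    # or pin a series for the whole environment in environments.yaml (assumed key)
    default-series: precise
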
[13:19] lazyPower: ping
[13:19] jose: pong
[13:19] lazyPower: looks like I fixed it!
[13:19] well, kinda
[13:19] wooo!
[13:19] * lazyPower dances
[13:20] the only problem we have is it doesn't migrate apps
[13:20] like, people would have to re-enable them manually
[13:21] but let's say, data is preserved, I think, the calendar app did preserve data
[13:29] lazyPower: ^
[13:29] jose: ok. Did you update your MP yet?
[13:30] s/yet//
[13:30] I can pull it and take a look after hours today and see what the status is and look at how difficult the plugin migration would be
[13:32] ok, I'll update the MP asap
[13:32] the branch is updated
=== Ursinha is now known as Ursinha-afk
=== hatch__ is now known as hatch
[14:07] jose: no rush
[14:07] i wont be able to get to it until after my workday
=== Ursinha-afk is now known as Ursinha
[14:53] I'm getting an error that I don't understand at all when trying to run my amulet test: http://pastebin.ubuntu.com/7231112/ and my test, in case it's relevant: http://pastebin.ubuntu.com/7231117/
[14:54] juju status just after shows only the machine and no indication of any units being created. Unfortunately, since it's LXC, I can't get access to juju log at all
[14:56] cory_fu, can you check ~/.juju/local/log/ for the logs
[14:56] Well, apparently I can, and that's good to know
[14:56] But it doesn't seem to have anything useful in it
[14:57] cory_fu Can you pastebin all-machines-log.0?
[14:58] mbruzek: http://pastebin.ubuntu.com/7231138/
[15:01] OK cory_fu I don't see anything in that log. It looks like a deployer or amulet bug to me. Does anyone else have insight on the error http://pastebin.ubuntu.com/7231112/
[15:03] cory_fu, You can enable higher trace on your juju environment by running a command after bootstrap. The command only turns up debugging for one bootstrap/destroy-environment session, but it might help print out more details.
[15:03] juju set-env 'logging-config==DEBUG;juju.provider=DEBUG'
[15:04] cory_fu: mbruzek https://bugs.launchpad.net/amulet/+bug/1293878
[15:04] <_mup_> Bug #1293878: Amulet should work with local charms that are not in version control
[15:05] oh oops that is a bug I reported
[15:05] Yep, that looks like it
[15:05] Thanks
[15:05] thanks marcoceppi
[15:05] cory_fu, if I remember correctly the solution was to add the charm to bzr under my own namespace
[15:06] just being in bazaar will fix it, doesn't need to be pushed or anything
[15:06] well a workaround.
[15:06] I guess I have to move my bzr learning phase up to before making sure the test passes. :-p
[15:06] deployer expects bazaar, so in the future amulet will transparently move non versioned and other versioned charms to bzr
[15:07] cory_fu if you have bzr questions send them to marc... me
[15:07] :-)
[15:07] cory_fu, It was really easy to add it to a personal branch
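
A minimal sketch of the workaround described above (deployer expects the charm to be under bzr; it does not need to be pushed anywhere). The charm path is illustrative.

    cd ~/charms/precise/my-charm      # hypothetical local charm directory
    bzr init
    bzr add .
    bzr commit -m "version the charm so amulet/deployer can use it"
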
[15:10] Hrm. Do we need to make and upload new public keys for launchpad because of heartbleed?
[15:10] Oh, no, that shouldn't affect ssh
[15:11] Hrm. Then why did bzr start giving me a publickey error
[15:13] cory_fu: just now?
[15:13] cory_fu: positive that the current identity is uploaded to launchpad?
[15:14] Well, I swear it was working yesterday, but just now I got a publickey error
[15:15] cory_fu: and its your standard ~/.ssh/id_rsa right?
[15:16] Well, it's a different one but I have a Host launchpad.net section in my .ssh/config
[15:16] Ok, and your config is owned by your user, with proper permissions?
[15:17] Yep
[15:17] Interesting. that should be fine
[15:17] Hrm. Maybe I should generate a new pair. :-(
[15:18] You can also try removing the pubkey and resetting it
[15:18] i doubt thats it but worth a shot
[15:23] Ah. I needed the full bazaar.launchpad.net on the Host line
[15:23] Could swear it worked yesterday. Maybe because I hadn't done bzr whoami yet
[15:24] Or some other config to indicate which account to use
[15:25] * lazyPower thumbs up
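
A sketch of the ~/.ssh/config stanza behind the fix above, for the case where the Launchpad key is not the default ~/.ssh/id_rsa; the username and key filename are illustrative.

    Host bazaar.launchpad.net
        User my-launchpad-id
        IdentityFile ~/.ssh/id_rsa_launchpad
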
[15:26] from fastimport.helpers import (
[15:26] ImportError: cannot import name single_plural
[15:27] o_O
=== whit is now known as whit|run
[16:47] Is there any workaround for getting "ImportError: cannot import name single_plural" from bzr when trying to use fastimport to import from git?
[16:48] mbruzek: ?
=== 21WAABD33 is now known as sarnold
[17:08] sarnold: ping
[17:09] hey lazyPower :)
[17:10] So, you're a random number in the ether until you 'officially' arrive? as predicated by your status update at 13:04:25 EST
[17:11] lazyPower: maybe several confounded random functions -- if my pandaboard hangs, then I arrive -- if my pandaboard doesn't hang, then I'm always here :)
[17:11] i need to 1:1 with you at some point about building an in house low power maas cluster since you've got more micro board experience than I do
[17:12] more along the lines of what to look at and what to stay away from
[17:12] before i let that statement run away with context
[17:13] lazyPower: heh, i've just got the one pandaboard, and I think my experience with it is clear to find something else entirely :)
[17:13] haha
[17:13] Fair enough
[17:13] lazyPower: back when our buildds were using pandaboards, someone from IS had to poke them just about daily to keep them building :(
[17:15] hey do we have a charm that just deploys a base line server? I swear we did
[17:16] ppetraki: was that 'ubuntu'?
[17:16] ppetraki: its the ubuntu charm
[17:16] deploys a no frills ubuntu server installation
[17:18] sarnold, thanks
[17:22] sarnold: also, last night i learned that you cannot colocate a juju-local installation on your maas region controller if you're using bridged ethernet devices without doing some serious voodoo in the juju config
[17:22] it tanked networking on my server until i removed the juju-local package :P
[17:22] lazyPower: hah, yikes
[17:22] yeah im like, poking around at really strange configurations
[17:22] lazyPower: somehow I'm not too surprised, the assumptions of all the different tools involved are pretty strong
[17:23] i went from having my maas-master as a virtual machine to using bare metal as the region controller, controlling kvm instances.
[17:23] this is all for a blog post about what configurations i found that work, for someone that wants to build an "all encompassing juju lab"
[17:24] oh nice
[17:24] some more 'real world' stories of maas and juju use would be pretty cool
[17:24] well, i've got 3 special interest groups from CMU that contacted me about running juju workshops
[17:25] it's either "hey look at this wordpress install" or "yes we have customers with a few hundred or thousands of machines doing this on private clouds and they seem to like it"...
[17:25] sweet!
[17:25] I'll try to get them to publish their experiences to the list so we can reblog and promote their use. The biggest use case i see is a LUG on campus offering free VM's with openstack
[17:25] they want to do the full maas + juju + openstack path
[17:29] oooo
[17:29] man, when I was at school, we had one "linux lab" of machines that were castoffs from the windows labs but worked great for us..
[17:39] can we support bundles from github?
[17:39] err do we?
[17:42] sarnold: make ingest faster
[17:43] lazyPower?
[17:43] i'm waiting for a bundle i just published to ingest so i can deploy it using deployer... make ingest run faster!
[17:43] i know you're secretly the wizard behind all of this
[17:44] * sarnold waves his hands meaningfully
[17:44] wooo juju genie powers
[17:47] Having issues with Ceilometer HTTP connection exception: [Errno 111] ECONNREFUSED with a fresh charm install, is there anything special that needs to be done to get keystone to authorize it?
[17:48] Kupo24z: have you exposed the service?
[17:48] jose: No, not yet
[17:48] Kupo24z: can you try exposing it?
[17:49] Same issue after exposing
[17:50] Kupo24z: what environment are you running on?
[17:50] and can you reach the instance with juju ssh ?
[17:51] Yes
[17:52] juju ssh ceilometer/0
[17:52] So the jenkins charm on trusty listens on ipv6 only. We can add "-Djava.net.preferIPv4Stack=true" to JAVA_ARGS and restart the server to get it to listen on ipv4 but is this the preferred way of doing this?
[17:52] restart the service*
[17:52] this is on ubuntu 12.04 LTS with openstack-origin: cloud:precise-havana
[17:53] timrc: ideally it should be a boolean option in the config and the charm should handle adding that option
[17:53] lazyPower, I can add that. Sounds like a good idea
[17:54] Kupo24z: ok, so it really is with just ceilometer, i'm not 100% familiar with the charm, but I would check the service to ensure its listening on the public address
[17:54] i'm willing to bet its only exposed to the private address
[17:55] actually
[17:55] since you're ssh'd in, try to curl get the address on the private ip vs public ip
[17:55] and see if it responds on one or the other
[17:55] this is the only thing (non-sshd) that it's listening on
[17:55] tcp 0 0 0.0.0.0:8777 0.0.0.0:* LISTEN 8118/python
[17:56] PID 8118 is /usr/bin/python /usr/bin/ceilometer-api --log-dir=/var/log/ceilometer
[17:57] Kupo24z: i've asked in #ubuntu-server, waiting on a response. I've not managed openstack outside of using the horizon dashboard
=== vladk is now known as vladk|offline
=== mattgriffin is now known as mattgriffin-afk
=== mattgriffin-afk is now known as mattgriffin
[20:03] Can someone assist with the ceph charm? are OSD devices physical disks or partitions to use for the ceph storage cluster?
[20:05] Kupo24z: block storage devices.
[20:05] eg: /dev/sda2
[20:05] Do they need to be unpartitioned disks, or will anything that is a traditional block device work?
[20:06] Kupo24z: i'm fairly certain it runs a format for you if it does not report as having a filesystem
[20:07] and im assuming all ceph nodes need to have the same block devices available if spawning multiple nodes since they all rely on the same config file
[20:21] correct, unless you want to deploy named ceph nodes
[20:23] Kupo24z, If you use a more recent source with the ceph charm you are able to use a directory as the storage device.
[20:25] Kupo24z, I ran into a problem where the block devices were difficult to create, so I set the "source" configuration option to "cloud:precise-updates/havana" which gives you a more recent ceph
[20:27] Kupo24z, With that version of ceph I was able to specify another config option 'osd-devices' to a non-existing directory "/srv/osd/" and ceph created a block device there.
[20:31] mbruzek: wouldnt that create additional overhead if you are going through the filesystem for ceph storage?
[20:32] eg partition -> ext4 -> ceph vs partition -> ceph
[20:32] yes but I thought you were asking how to create devices. If you already have devices then you can safely ignore my comments.
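
Roughly the configuration being described above, as a deploy-time config file; only the two options discussed are shown, and the values are illustrative.

    # ceph.yaml
    ceph:
      source: cloud:precise-updates/havana
      osd-devices: /srv/osd    # a directory with the newer ceph, or a raw block device such as /dev/sdb

    # then:
    juju deploy --config ceph.yaml ceph
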
[20:38] mbruzek: thats good to know!
[20:38] you told me that last week, however i forgot. Writing that down.
[21:26] Hello, I'm trying to deploy juju-gui on the local provider (all in an lxc), but when I expose the service, agent-stat-info returns: '(error: error executing "lxc-clone": lxc_container: failed mounting /var/lib/lxc/juju-precise-template/rootfs onto /var/lib/lxc/juju-precise-template/rootfs; lxc_container: Error copying storage; clone failed)'
[21:27] I'm running juju 1.18-0-trusty-amd64
[21:28] seepa: you shouldn't need to expose things in the lxc environments. They don't support it
[21:29] rick_h_: oh, I see.
[21:34] rick_h_: The error occurs after juju deploy juju-gui.
[21:36] seepa: oh hmm, yea looks like your error is with creating the lxc machine. I'm not sure on that end
[21:40] how can juju even mount /var/lib/lxc/juju-precise-template/rootfs? /var/lib/lxc is owned by root rwx------ ...
[21:41] well it can't, but tries to mount it
[21:57] Seems I cannot destroy a service if the relation is still open. Ive got Ceph as 'life: dying' however it just hangs there, probably because of an existing relation
[21:57] however when I try to remove the relation nothing happens, is there a force?
[22:24] Kupo24z: what's the state of the service?
[22:28] jose: http://pastebin.ubuntu.com/7232718/
[23:05] Kupo24z: show me the full output of your status
[23:05] if you haven't resolved it
[23:06] lazyPower: I just destroyed the environment and started over
[23:07] Ok. If a dependent service that it's related to is in an error state
[23:07] Nothing was in error state
[23:07] that error will need to be resolved before that service will continue being destroyed
[23:07] It wasnt removing a relation for some reason, no errors at all on juju status
[23:08] Ok, if you run into it again ship me the full output listing from juju status and we can investigate from there
[23:08] like, ping me. I'm half in half out tonight on IRC.
[23:08] jose: did you get that branch pushed?
[23:38] lazyPower: yep, and MP updated
[23:38] jose: awesome. I'll take a look after i eat dinner.
[23:38] enjoy!
[23:38] thanks for the quick turn around and effort on that
[23:38] I hope it's something that actually works and follows the charm store policy
[23:39] If it needs doctoring I'll be happy to doctor it up and submit a MP to your branch
[23:39] then we'll poke matt or marco to take a look as confirmation
[23:39] awesome then :)
=== CyberJacob is now known as CyberJacob|Away
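
For the hung teardown described above, a rough sketch of the usual sequence using juju 1.18-era command names; the relation endpoint and unit name are illustrative.

    juju destroy-relation ceph nova-compute   # drop the blocking relation first (endpoints illustrative)
    juju resolved ceph/0                      # clear a hook error, if one is holding up the teardown
    juju destroy-service ceph
    juju status ceph                          # confirm 'life: dying' eventually completes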