=== thumper-dogwalk is now known as thumper
[02:08] anyone know how to get juju to support arm64? I've got maas setup with the arm64 images, but telling it to deploy to an arm64 node gives me "no matching tools found", even though I can see arm64 tools on the state server
[09:52] hello. I have a charm which I was told would end up in the charmstore but it still has not happened: https://bugs.launchpad.net/charms/+bug/1538573 and http://review.juju.solutions/review/2426
[09:52] Bug #1538573: New collectd subordinate charm
[09:53] can one of the charmers have a look and tell me if anything is blocking it?
[09:59] tinwood, hey - I've pushed up your serverstack-dns changes into the PPA
[09:59] https://launchpad.net/~openstack-charmers/+archive/ubuntu/test-tools
[11:39] What's the usual deployment time of the juju-gui charm? I'm facing 15+ minutes of deployment time. Is that normal?
[11:41] not really
[11:41] few minutes usually
[11:43] magicaltrout: Maybe I'm doing something wrong with my environment. I'm using a simple VM with 2 cores and 4096 ram and the manual provider pointing to itself. Is that ok?
[11:43] It's been a while since I played with the manual provider but I'd have a look in /var/log/juju/.... and find the log for juju-gui
[11:43] deanman: you can juju ssh juju-gui/0 and tail the /var/log/juju/unit-* to see what's going on
[11:44] yeah.. what he said
[11:46] rick_h__: Ok i will try that and see what happens. Is there a way to time the actual deployment instead of using watch juju status to see changes in status?
[12:18] rick_h__: You cannot ssh to juju-gui before the container gets assigned an IP. I notice it takes a lot of time to install the agent on the container. And debug-log doesn't produce any logs to see whether something goes wrong. Any other hints?
[13:39] thedac, gnuoy, tinwood, beisner: ok so we should be on automatic bug management via Closes-Bug: XXXX in commit messages now
[13:41] jamespage, tip top
[13:42] hey
[13:42] so, is there a guide for writing a charm?
[13:43] nvalerkos: yup
[13:43] nvalerkos: https://jujucharms.com/docs/devel/developer-getting-started
[13:44] thanks marcoceppi:
[14:05] okay, rolling back up to last night
[14:05] what status do I need to watch to find out when the next leader election routine is run?
[14:08] * magicaltrout reckons amulet should have a bunch of constants which map to various Juju statuses, which would be easy to find and map into test classes
=== _Sponge is now known as MagicSponge
[14:23] cool jamespage, sounds good. :)
[14:23] Scenario - I have juju running on a dedicated local machine, running great. (Thanks to #juju-gui channel help)... I want to have OpenStack devstack running on the same box, and have the local juju orchestrate instances on that devstack. My question is a "who's on first"... should I have juju deploy openstack/devstack on the local box, or should I install devstack separately on the box, and somehow configure juju to see it?
[14:23] thanks for the merge gnuoy on charm-helpers.
[14:30] np
[14:32] techgesture, I'd say deploy devstack and then tell juju about your devstack installation. Having juju deploy openstack on your laptop in containers is coming very soon but not quite there yet
[14:34] gnuoy - I was thinking that running OpenStack inside containers would be goofy
[14:35] would a quick google search for "connect juju to OpenStack" show me how to do that? or do you happen to know a good link?
[14:37] techgesture, https://jujucharms.com/docs/1.25/config-openstack
[14:37] techgesture: https://jujucharms.com/docs/stable/config-openstack
[14:37] awesome - thank you kindly
[14:37] np
[14:40] kwmonroe: with the xenial vagrant image, did it mount /vagrant for you? Mine did not. :/
[14:41] just an observation - I work for a major cloud provider - we are bringing in Canonical OpenStack and Juju - and I'm the guy helping with a lot of PaaS stuff to say the least - I don't think I realized until just yesterday that Juju is in itself a PaaS... that natively it is controlling containers, not too much different than CloudFoundr
[14:41] CloudFoundry
[14:42] I've gone through a number of Juju seminars, and I'm telling you I don't think the message is clear that it can be used as a PaaS... usually the focus is on orchestration of other clouds and launching services
[14:43] I bet there is room to do some documentation on that - I'll see if there is something I can do to contribute back to the community
[14:43] techgesture: well, it's really like Juju can deploy services you market as paas, but it's really modelling those applications - so if you turn those around and offer it as a PAAS, then it's now managing your paas ;)
[14:44] but it's true, it's not a story we highlight very well
[14:49] http://askubuntu.com/questions/735182/does-services-like-trove-ceilometer-et-cetera-come-with-ubuntu-openstack-autopi
[14:49] beisner: this one's all you ^ :D
[14:54] kwmonroe: cory_fu: FYI, not sure if this is a bug or not: http://askubuntu.com/questions/740094/adduser-fails-when-relating-hadoop-plugin-to-hbase-master
[14:56] bug? surely not crom corey
[14:56] -c +f
[14:57] bah
[14:57] back to tests
[14:58] negative aisrael, current vagrant xenial fails for me with "ifup eth1" error, "RTNETLINK answers: File exists". i've tried adding eth1.cfg's strategically in my system, some with just "auto eth1", some with full config to my 172 static address. when i configure with an ip address, i see eth1 is up and pingable, but no shared vagrant folders :/
[14:58] kwmonroe: I'm switching to trusty/juju, and dist-upgrading from there.
[15:04] yeah jcastro, that's a bug. the problem is that our promulgated apache-hadoop-plugin has an old charmhelpers bundled in it :(
[15:05] and that old CH doesn't have the new adduser signature committed here: http://bazaar.launchpad.net/~bigdata-dev/charm-helpers/framework/revision/445
[15:05] the good news jcastro, is that it looks like he sorted himself out by changing the sig.
[15:06] the better news jcastro, is that this goes away when we push our layered charms to our promulgated space sometime this week.
[15:08] kwmonroe: yeah I'm glad he fixed it, I just wanted to point it out
[15:09] ack
[15:11] http://askubuntu.com/questions/742214/openstack-maas-second-hard-disk-recommendation
[15:11] beisner: easy bounty ^^^
[15:21] I am trying to build a charm layer between tomcat7 and mysql, and I cannot find any example tomcat7 interface. I've built my charm, but I am getting some subordinate errors.
[15:23] anyone wanna help out? or point a way towards the light?
[15:29] jcastro: The root cause was mixing hbase from bigdata-dev with plugin, et al. from charmers. There was a known incompatibility and the layered versions should be future-proofed against that sort of thing happening again.
[15:29] I'll comment on the question
[15:30] nvalerkos: not sure I understand your question
[15:30] I found the spagobi https://jujucharms.com/u/spagobi-charmers/spagobi/trusty/4 I will reference that and see from there..
[15:31] nvalerkos: spago is a good example, but it's not a layer, so be aware that it's not a 1x1 matchup
[15:42] marcoceppi: I want a java webapp example to look at that uses mysql
[15:43] nvalerkos: I can possibly sort you out
[15:43] marcoceppi: I made my charm, most of it with python, I am not
[15:43] planning to deploy it on community
[15:44] http://bazaar.launchpad.net/~f-tom-n/charms/trusty/saikuanalytics-enterprise/trunk/files
[15:44] uses tomcat as a subordinate charm and allows connections to mysql
[15:44] it's not layered though, it's old style
[15:46] magicaltrout: thanks
[15:46] "classic" charming ;)
[15:46] "hacked up and somewhere mid review" is how i like to position it ;)
[15:47] actually it got sent back to me to fix, which I did, but then didn't know how to get it back in the review queue
[15:47] so it stopped :)
[15:47] but once i get the tests finished for PDI, I need to roll Saiku 3.8, at which point I will get that finished and recommended because then we'll start pushing Juju as the deployment method
[16:18] Does this "error fetching public address: "public no addres" ring any bells?
[16:20] deanman: is that still juju manual?
[16:21] magicaltrout: Yeah, unfortunately i've tried all sorts of configurations but still this long delay.
[16:23] weird
[16:23] dunno, sounds like a funky networking issue though
[16:24] magicaltrout: single VM (vagrant) with private address 192.168.11.11.
[16:25] sorry deanman never used vagrant, although I know a bunch of canonical employees do
[16:25] it might be worth dumping a query on the mailing list with the logs and see if anyone knows
[16:25] magicaltrout: nothing fancy about it, it just spins up a VM with virtualbox.
[16:25] magicaltrout: ok i will do that, thanks!
[16:44] marcoceppi: tvansteenburgh ping
[16:44] magicaltrout: yo
[16:44] hello there
[16:44] I need some clarification because I believe you're lying :)
[16:44] magicaltrout: it's happened before
[16:44] https://github.com/OSBI/layer-pdi/blob/master/tests/tests.yaml#L10
[16:44] reset: true
[16:45] now it was mentioned the other day, this should give me a clean machine per test
[16:45] tvansteenburgh: ^
[16:45] https://github.com/OSBI/layer-pdi/blob/master/tests/01-deploy.py I was also told a test == 1 def test_xyz function
[16:45] not per test, per test file
[16:45] oh okay you two didn't lie
[16:45] someone else lied about the scope of the test :P
[16:45] i was careful to say per file :)
[16:46] * magicaltrout splits stuff up
[16:46] thanks chaps
[16:47] other quick q
[16:47] does bundletester run in 2.0 yet?
[16:48] nope
[16:48] shiny
[17:19] lazyPower, do you know where run_action for amulet is defined?
[17:20] cholcombe: it's defined on the unit
[17:20] run_action is called in the tests folder for ceph-mon but i can't find the function definition anywhere
[17:20] cholcombe: ah, that's in sentry.py
[17:20] oh?
[17:21] marcoceppi, i don't see it in there https://github.com/juju/amulet/blob/master/amulet/sentry.py right?
[17:21] i'm just trying to figure out what params it takes so that i can call an action that takes parameters
[17:21] cholcombe: bleh
[17:21] marcoceppi, exactly haha
[17:21] I remember this now, someone contributed this and it landed but I wasn't happy with the implementation
[17:21] https://github.com/juju/amulet/blob/86b893ad5f96dd71637da8db7631bb3d50a118c9/amulet/deployer.py#L362
[17:22] marcoceppi, yeah that looks similar
[17:22] cholcombe: so it's d.action_do(unit, action, args) instead of d.service['s'][0].do(action, args)
[17:23] * marcoceppi submits a patch to link it to UnitSentry
[17:23] yeah everywhere in ceph-mon i see it called as action_id = u.run_action(sentry_unit, 'action-name')
[17:23] i'll try passing it a dict and see if it explodes haha
[17:25] no idea where run_action is
[17:25] cholcombe: you sure that's not from charmhelpers.contrib.amulet or something else?
[17:25] marcoceppi, maybe. i'll check.
[17:26] marcoceppi, damn you hit it. it was buried in there
[17:26] * marcoceppi shakes angry fist at why there is even a charmhelpers.contrib.amulet when everyone can just contribute to the upstream project
[17:27] cool, so competing implementations of the same feature, and I don't like either one of them
=== zul_ is now known as zul
[18:32] juju + lxd seems to fail a lot for me; I deploy a few services, some with multiple units, and a couple of the machines go into error state without ever coming up
[19:58] icey: do they end up with a 127.x address?
[19:59] not sure, going to run again and I can dig in
[19:59] there was a patch that went into the lxd branch yesterday to fix ip addressing
[19:59] other than that I've not seen any issues myself in testing, apart from one where in Xenial we have to umount and remount a kernel mount point to enable SSHD on trusty images
[20:00] magicaltrout: I've had to deal with the debugfs already, I'm building my juju from master, should I be running it from a branch?
[20:00] i don't know what the merge state is
[20:00] there is an lxd branch in there that I have built off
[20:01] but it could have been merged back in, I dunno
[20:07] hi i saw that there was a release for juju-core 1.25.4 but i can't seem to find it in a repo. Is there a specific repo i need to use for that version?
[20:08] magicaltrout: it looks like there's some problem and it doesn't actually provision
[20:09] http://pastebin.ubuntu.com/15330108/ magicaltrout
[20:11] magicaltrout: http://pastebin.ubuntu.com/15330117/
[20:12] hmm not seen that icey
[20:12] it looks like it did get created, but juju did not like it or something and baile
[20:12] bailed*
[20:12] magicaltrout: that last container just started
[20:17] current state: http://pastebin.ubuntu.com/15330147/ ; the container is up, Juju isn't picking it up
[20:46] hi team, has anyone tested juju bootstrap with xenial 16.04? in my case cloud-init is failing.
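For reference, picking up the run_action thread above: a minimal sketch of calling a parameterised action through amulet's deployer, using the d.action_do(unit, action, args) form marcoceppi mentions. The charm, action name, and parameters are made up for illustration, and whether parameters are accepted as a dict here is exactly what cholcombe was about to test, so treat it as an unverified assumption to check against the linked deployer.py.

    import amulet

    # Hypothetical deployment purely for illustration.
    d = amulet.Deployment(series='trusty')
    d.add('ceph-mon', units=3)
    d.setup(timeout=900)

    # Per the discussion above, actions go through the deployer rather than the
    # unit sentry. Passing the parameters as a dict is an assumption.
    action_id = d.action_do('ceph-mon/0', 'create-pool', {'name': 'test', 'replicas': 3})

    # action_fetch is assumed to return the action's results once it completes.
    results = d.action_fetch(action_id)
    print(results)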
[20:58] icey: i would recommend you post a bug/mailing list post or prod jam and see if he's responsive
[21:02] Any objections to https://github.com/juju-solutions/layer-basic/pull/41
[21:10] cory_fu: lgtm
[21:26] tried to shorten some of my training sessions at apachecon
[21:26] failed
[21:26] big juju demo will be appearing :P
[21:29] magicaltrout: lol
[21:30] actually it'll be pretty cool rick_h__ I selected Tutorial when submitting so I get forced to do 2 hours, so if people actually show up, we're running through OODT stuff, but as it's interactive training stuff, we can install juju and do a whole juju local type thing
[21:30] so people will leave with juju installed
[21:44] that's the plan at least
[22:06] kwmonroe: I think I know why shared folders aren't working in the xenial box. 1) /vagrant is explicitly disabled in the image. You can flip that (and /charms) on in your Vagrantfile. Second, no guest additions installed. `apt-get install virtualbox-guest-additions-iso` (to get the dependencies), and then download the guest additions iso from virtualbox.org, mount, and install.
[22:07] kwmonroe: I now have my shared folders back. Huzzah.
[22:20] hey! nice deal aisrael.
[22:20] roryschramm: a mailing list post just answered your question
[22:37] juju 2.0 beta1, `juju add-credential` isn't a valid command but is listed in the devel docs. Which one is more current?
[22:38] juju add-credential isn't listed in the commands for the trunk build aisrael
[22:38] but i can't figure out what's replaced it :)
[22:38] magicaltrout: good to know, thanks man!
[22:39] oh
[22:39] add-user
[22:39] seems to be it
[22:39] along with add-ssh-key and some other stuff
[22:42] Looks like those commands may be planned, but just not implemented yet. The release notes mention that the credentials.yaml needs to be managed by hand for the time being
[22:49] jamespage you around?
[23:08] firl, no he's long gone for the day
[23:09] firl, what did you need?
[23:11] i think i broke amulet
[23:22] is it possible to do rolling updates for juju? ie updating the ceph version one node at a time and not the entire ceph cluster? ie some of the openstack charms have the action-managed-upgrade config option to facilitate this
[23:57] cholcombe I was just going to ask him about shutting down a set of openstack nodes properly through maas
[23:57] because I have a power outage scheduled thursday
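As a closing footnote to the earlier tests.yaml exchange: since reset: true recycles the environment per test file rather than per test function, one way to get a clean machine per scenario is simply to split scenarios into separate executable files under tests/ (e.g. 01-deploy.py, 02-scale.py), as magicaltrout went off to do. A minimal sketch of one such file follows; the charm name and the assertion are hypothetical and only illustrate the shape bundletester expects.

    #!/usr/bin/env python3

    import unittest
    import amulet


    class TestDeploy(unittest.TestCase):
        """Runs against a fresh environment when tests.yaml sets reset: true."""

        @classmethod
        def setUpClass(cls):
            cls.d = amulet.Deployment(series='trusty')
            cls.d.add('pdi')  # hypothetical charm under test
            cls.d.setup(timeout=900)
            cls.d.sentry.wait()

        def test_unit_running(self):
            # A trivial smoke test: the unit exists and responds to a command.
            unit = self.d.sentry['pdi'][0]
            output, code = unit.run('hostname')
            self.assertEqual(code, 0)


    if __name__ == '__main__':
        unittest.main()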