[14:08] how do i scp files to a juju box?
[14:10] juju scp file 1:
[14:10] or something like this?
[14:12] g3naro: juju scp local_file unit_name/num:/remote/path
[14:12] g3naro: ex: juju scp myfile.tgz myservice/0:/tmp
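(ed. note: a minimal sketch of the juju scp usage above, assuming a juju 1.x client and a deployed unit named myservice/0; the unit log path shown is the juju 1.x default and may differ on other setups)

    # push a local archive to /tmp on the unit's machine
    juju scp myfile.tgz myservice/0:/tmp
    # copies work in either direction; pull a unit log back to the current directory
    juju scp myservice/0:/var/log/juju/unit-myservice-0.log .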
[14:14] jcastro: ping
[14:14] yo
[14:18] jcastro: you pinged me a couple days ago - haven't been on IRC for around a week
[14:18] you needed something?
=== redelmann is now known as popi_
=== popi_ is now known as redelmann
=== redelmann is now known as s0plete
=== s0plete is now known as redelmann
[14:38] gnuoy, thedac - so afaict, that cluster fix resolves the cluster races i was seeing (with LE) re: bug 1486177
[14:38] Bug #1486177: 3-node native rabbitmq cluster race
[14:40] beisner: great. I will be working on a fix for pre-leadership-election versions today
[14:40] thedac, beisner, tip top, thanks
[15:07] coreycb, can you review/land this? https://code.launchpad.net/~1chb1n/charms/trusty/swift-storage/amulet-update-1508/+merge/268788
[15:08] coreycb, heads up too - swift-proxy, openstack-dashboard shortly behind that.
[15:18] beisner, sure. I need to get liberty stuff done but then I'll look.
[15:19] anyone know about juju-gui?
[15:19] im trying to debug an issue
[15:19] on ec2 and maas juju-gui is logging: {"RequestId":5,"Error":"unit not found","ErrorCode":"not found","Response":{}}
[15:20] juju debug-log: error stopping *state.Multiwatcher resource: unit not found
=== scuttle|afk is now known as scuttlemonkey
=== sarnold_ is now known as sarnold
=== wendar_ is now known as wendar
=== urulama is now known as urulama__
[17:06] help, trying to re-connect to my old canonistack environment after a long time of ignoring it, now juju gives me: WARNING unknown config field "tools-url"
[17:07] and doesn't do anything
[17:10] maybe try removing that option from your config?
[17:40] beisner: if you have time can you independently test juju < 1.24 against lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes and also make sure it did not regress for >= 1.24. I'll be running similar tests as well.
[17:59] Which branch of charmhelpers should I propose my changes in if I want them in each of the openstack charms?
=== scuttlemonkey is now known as scuttle|afk
[18:31] thedac, thank you. yes, i'll cycle both.
[19:18] hi, i'd like to use the openstack provider.. is there an option to specify the object store endpoint?
[19:18] i can't seem to find it
[19:32] Juju office hours in 30 minutes!
[19:33] jcastro: is there a topic or general Q/A?
[19:34] general office hours
[19:34] so like if someone shows up with an agenda that becomes the agenda
[19:38] rick_h_: we haven't had a UI guy in a while if you want to fill us all in
[19:38] jcastro: ok, debating showing up but I don't have an agenda. Just to cheer or such :)
[19:38] well, jrwren shows up but he never knows what he's working on
[19:38] :P
[19:38] jcastro: k, linky me happy to jump in
[19:39] rick_h_: I'll fire up the hangout in about 15
[19:39] alexisb: what were we talking about the other day about getting notice about?
[19:39] also if anyone from juju-core wants to hop in that'd be awesome
[19:39] wwitzel3: ^^^
[19:39] beisner: if you've got time for some openstack charm updates since you guys just had a release ...
[19:41] jcastro: sure
[19:45] https://plus.google.com/hangouts/_/hoaevent/AP36tYd2-532QvR_YgYczuO1Np1AHT7LT9PBI5Hw-YeiJNflAe0_bQ
[19:45] rick_h_: wwitzel3: cory_fu ^^^^
[19:46] jcastro: i can't talk about what I'm working on :p
[19:46] kwmonroe: ^^
[20:02] linky: https://jujucharms.com/docs/devel/charms-bundles
[20:11] jcastro: linky: https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md
[20:14] jcastro: https://jujucharms.com/docs/devel/wip-systems
[20:14] jcastro: https://jujucharms.com/docs/devel/wip-users
[20:18] hey rick_h_, is "bundle" the right source branch name for bundles? or "trunk", or will either work?
[20:19] kwmonroe: it's bundle I think.
[20:19] kwmonroe: trunk is for charms
[20:19] cool
[20:19] ack
[20:19] kwmonroe: I think the diff was done as part of 'telling what's what' but it's history and not sure tbh
[20:24] workload devel branch: https://github.com/juju/juju/tree/feature-proc-mgmt
[20:24] jcastro: ^
[20:25] realtime syslog analytics bundle: https://jujucharms.com/u/bigdata-dev/realtime-syslog-analytics
[20:41] http://interfaces.juju.solutions/
[20:42] jcastro: https://jujucharms.com/q/db-admin
[20:43] jcastro: https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#search
[20:48] jcastro: https://insights.ubuntu.com/event/juju-charmer-summit-2015/
[20:49] cool walkthru thx for stream :)
[20:59] jcastro: what does "agent-state: down" mean? Does it mean the instance is down, or just something with juju?
[20:59] it means the juju agent itself is down
[21:00] the controlling node?
[21:00] is this on a new deployment?
[21:00] no, the agent on that node
[21:00] no, old canonistack one that I haven't touched in months
[21:00] mhall119: it means that juju can't speak to the agent that machine is running on
[21:00] either the agent crashed or the machine is no longer reachable on the network (taken offline, networking changed, etc)
[21:01] ok, can I juju destroy-environment when it's like this? or might that leave orphaned instances
[21:01] * mhall119 things canonistack might have moved recently
=== natefinch is now known as natefinch-afk
[21:02] s/things/thinks/
[21:07] mhall119: if you remove-machine --force it should do the trick, if it's the bootstrap node, yeah destroy-environment --force should do the trick (sans orphans)
[21:26] marcoceppi: if you're around on wednesday, i'm doing an ansible talk at the modev meetup .. it's right off the silver line at the mclean stop
[21:26] hazmat: sounds sweet!
[21:28] hazmat: I just RSVP'd thanks for the heads up
[21:33] https://insights.ubuntu.com/2015/08/24/a-midsummer-nights-juju-office-hours/
[21:40] rick_h_: just read through the new bundle thingy in jorge's link above, doesn't support containers as machines per the description
[22:21] hazmat: nested lxc's were fixed in a pr I believe. It's not landed yet. Waiting on review?
[22:21] hazmat: I know we had to fix something with that for the OS bundle case and we're running a deployer fork atm for that to work.
[22:22] hazmat: if I'm misunderstanding let me know/have an example and we'll get it fixed up.
[22:30] marcoceppi, jcastro: thanks for hosting the most recent office hours and sending out highlights with minute markers
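(ed. note: re the "tools-url" warning at [17:06] - a hedged sketch of the fix suggested at [17:10], assuming a juju 1.x client whose environment config lives at the default ~/.juju/environments.yaml)

    # keep a backup, then drop the deprecated key from the environment config
    cp ~/.juju/environments.yaml ~/.juju/environments.yaml.bak
    sed -i '/tools-url/d' ~/.juju/environments.yaml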
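(ed. note: a sketch of the recovery path discussed at [20:59]-[21:07], assuming a juju 1.x client; the machine number and environment name below are placeholders)

    # confirm which agents report agent-state: down
    juju status
    # force-remove a machine whose agent is unreachable
    juju remove-machine 3 --force
    # if the bootstrap node itself is unreachable, tear down the whole environment
    juju destroy-environment canonistack --force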