[00:37] topic needs changing ;) i am on xenial ask me for testing :) === natefinch-afk is now known as natefinch === thumper is now known as thumper-bjj === Guest68352 is now known as spammy === frankban|afk is now known as frankban === thumper-bjj is now known as thumper-eod [09:57] gnuoy, two ticks for a sync +1 ? [09:57] https://review.openstack.org/#/c/322097/ [10:00] when will juju-deployer or charm-tools/-helpers/amulet support juju 2.0 properly [10:00] :/ [10:02] jamespage, +1 [10:03] icey, cholcombe: need one of you to pick up https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1577519 again [10:03] Bug #1577519: ceph-radosgw "Initialization timeout, failed to initialize" [10:03] apparently our first fix did not resolve it quite sufficiently [10:59] Hey guys .. I have a juju env which had some connectivity issues for a short time .. hence i got the "agent is lost, sorry!" error [10:59] now the nodes are back up .. [11:00] i can ping and ssh into those nodes as well .. using ssh ubuntu@ip [11:02] how can i restart the jujud service on a node? [11:06] hi bbaqar, if you write "ls -la /etc/init | grep juju" you can see all the juju services and restart them [11:09] D4RKS1D3: cool, thanks a lot [11:11] Hi, I am trying to add a machine in MAAS, but i received this message "ERROR juju.provisioner provisioner_task.go:655 cannot start instance for machine "40": cannot run instances gomaasapi: got error back from server: 409 CONFLICT (No available node matches constraints: name=imaginative-attention.datacentre tags=datacentre)" [11:17] gnuoy, https://review.openstack.org/#/c/323264/ [11:18] pls :-) [11:18] sure [12:05] can't you upgrade charms on a development branch? [12:19] magicaltrout: yes, you should be able to. [12:19] magicaltrout: what did you originally deploy? 
[12:20] hey rick_h_ [12:21] magicaltrout: howdy [12:21] just testing some homebrew stuff, but when I do "juju upgrade-charm drillbit2" [12:21] I get [12:21] ERROR cannot resolve URL "cs:~spicule/drillbit": charm or bundle not found [12:21] but then if I try and do: juju upgrade-charm drillbit2 cs:~spicule/drillbit [12:21] for example [12:21] I get invalid args [12:23] magicaltrout: hmm, let's file a bug on jujucharms.com please. Looking at the urls for this: https://jujucharms.com/u/spicule/drillbit/11 is peachy [12:23] but remove the 11 and it 404's [12:23] magicaltrout: which is a bug in it resolving there [12:23] yeah i noticed it was being a bit funny there as well [12:23] will do [12:23] magicaltrout: now for your original question, you're supposed to be able to deploy/upgrade following the development channel if it's public and if you deployed with the --channel flag w/ juju 2.0 [12:24] magicaltrout: this bug might be getting in the way, but not sure atm. The question is how you originally deployed it though and older Juju doesn't understand the channels [12:24] they're always looking at the published stable channel [12:24] ah it seems to know juju upgrade-charm drillbit2 --channel development [12:25] magicaltrout: yea, that's for if you want to cross channels [12:25] e.g. go from stable to development [12:25] hmm well this would be the same channel just a new rev... [12:26] I'll file the bug and move on, its no big deal its all LXD anyway so its nearly as quick to tear it down and start again [12:26] magicaltrout: ok, but let me know if things aren't working as expected. Or at least the steps to reproduce and I'll chase it down so it's set [12:37] magicaltrout: Hey Tom how is the Zookeeper issue going? 
[12:40] hey kjackal well it works, not sure why I'm only seeing one unit [12:40] and my understanding of debugging reactive hooks is lousy [12:41] magicaltrout: I am fairly sure you should be getting all three units [12:41] I have a vague memory of another service where I thought that was the case and it transpired it wasn't [12:41] but I'll trust you on that one [12:41] but I have logs in place and its only iterating once [12:42] or I've cocked up my function [12:42] which is perfectly likely [12:42] 2 mins I'll dump it in GH [12:42] magicaltrout: I could take a look if you point me to the charm [12:43] thanks magicaltrout [12:43] its nothing special, I just really want Apache Drill for Saiku so I figured I'd crack on with it :) [12:44] https://github.com/buggtb/layer-drillbit [12:44] https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py#L37 [12:45] watch out if you run it there is a blank line on line 45 where I was about to add some bootstrap stuff [12:45] magicaltrout: taking a look, thanks [12:46] the rest of it is pretty cool, drill installs and apart from the missing exec code, starts up with its configured ZK client in place [12:59] look at all this collaboration first thing in the morning ;D [12:59] \o/ [12:59] its very much mid afternoon :P [13:00] until next week at which point i'll still be in bed [13:00] well, in EDT its very much morning, and super cool to see this thread history on what by any other name, is my monday. [13:00] so given that context... awww yeeee [13:01] made me sad how many emails i got from the USA yesterday :P [13:01] I was only responsible for one, and Ben set me straight [13:01] anyway, yeah, kjackal is kindly unbreaking my understanding :) [13:01] i was hoping my 2 hour stint of looking at code on a holiday would go un-noticed [13:01] * lazyPower shakes a tiny fist @ magicaltrout [13:01] I'm on a big data <-> sql sprint so I can hook up Saiku 3.9 to loads of charms when its released in a week or 3 [13:02] nice! 
[13:02] I'm working on a significant extraction and test coverage bump on etcd at the moment. x58 found some fun race conditions in there [13:03] we've finally built a half decent schema designer, so you should be able to deploy a big data bundle + Drill/Phoenix etc and then attach that to Saiku and inside saiku do a few-click schema generation and be good to go [13:03] so 10-click analysis I'm hoping at most [13:03] Wait isn't drill a MapR only thing? [13:03] its sponsored by MapR, but its not MapR only [13:03] its an ASF project [13:03] ah ok. i thought it had the maprfs requirement [13:03] so its as generic as it comes [13:03] #TIL [13:04] you can run over any hbase, parquet, csv, mongodb, generic jdbc etc [13:04] it works nicely as a data federator as well. If you have data in Hbase or somewhere and want to run a join on something in a mysql db [13:04] you can do exactly that [13:04] since i'm moving in under a month, i may vanish on you magicaltrout. I have junk-takers showing up to grab the excess possessions in the next 20 minutes. [13:05] hehe [13:05] moving somewhere nice? [13:05] Planning on going full nomad for the remainder of the year [13:05] bouncing from AirBNB to AirBNB [13:05] nice [13:05] We'll see. this was a bit more expensive than originally estimated [13:05] i may bounce somewhere and find a lease just to save on living expenses [13:06] but i'm stuffing all the good stuff in a POD. its like a shipping container (see what i did there? container all the things?!) and have it shipped wherever i land [13:06] cool stuff [13:06] its been a learning experience :) [13:06] staying in the US, or moving countries? 
[13:07] closer to end of year I want to do a grand European tour [13:07] start in oxford and migrate west over the course of a few months and see what i can see over there [13:08] well there's certainly some nice remote spots in the UK for nomadic working ;) [13:11] magicaltrout: http://pastebin.com/UaYB0LxB I can see both units reporting their IP. What Zookeeper charm did you deploy? [13:11] lazyPower: So.... when are you visiting Greece? [13:12] https://jujucharms.com/apache-zookeeper/trusty/3 [13:13] magicaltrout: how is it possible? I mean... I got my zookeeper instances reporting .... its in the pastebin.... [13:13] kjackal good question, lets figure out what my conf season looks like towards fall and see where i can land up :) [13:13] kjackal: dunno, just redeploying anyway, i'll grep the logs in a mo [13:14] kjackal: you just deployed a single service then add units? [13:15] magicaltrout: yes [13:19] kjackal: god knows [13:19] I only get 1 entry [13:19] "juju-log" ["Template:\"10.106.143.126:2181\""] [13:25] magicaltrout: let me see [13:25] you deployed apache-zookeeper [13:25] then you deployed drillbit [13:26] then you added the relation between ZK and drillbit [13:26] then you added a second ZK unit [13:27] not quite [13:27] I deployed ZK [13:27] then I added 2 more nodes [13:27] could you paste me the history of your commands [13:27] then i deployed drillbit [13:29] i'm tearing it down, i'll try again and log the input [13:30] magicaltrout: thank you [13:33] jamespage, https://github.com/openstack-charmers/release-tools/pull/1 [13:35] kjackal: http://pastebin.com/QE00v5wZ its still in flight [13:35] but thats the command list [13:38] magicaltrout: btw the signature of configure should look something like http://pastebin.com/63ebzdwi [13:40] yeah in my published version its corrected [13:40] no its not [13:40] #fail [13:43] magicaltrout: I think I suspect what is going wrong [13:43] magicaltrout: look.... 
the first ZK unit that comes online becomes a quorum by itself [13:45] at this point the configure method of drillbit is called and it sets the 'drillbit.configured' [13:45] then the rest of the ZK units join the quorum but the 'drillbit.configured' is already set so it does not get updated [13:46] so, how to fix this [13:46] ah yeah sounds sane [13:47] what if you do not set the 'drillbit.configured' at all? [13:47] or at least you do not have this when_not('drillbit.configured') at configure [13:47] yeah but then you're restarting the drillbit on every execution [13:48] indeed, so you want to restart when you detect a change in ZK quorum [13:48] yeah [13:48] so I guess we could remove that and save the quorum dict to check against [13:48] have a look at this: https://github.com/juju-solutions/layer-apache-spark/blob/master/reactive/spark.py#L159 [13:49] yeah [13:49] like that [13:49] okay cool [13:49] I can do that [13:49] thanks kjackal [13:49] cool... [13:49] lets see what we say in the list... === erlon_ is now known as erlon [14:04] https://ibin.co/2j3WCAhHi9Yn.png https://ibin.co/2j3WHn8H9RA1.png [14:04] there you go kjackal [14:04] need to tidy it up a bit but its up and running on the ZK cluster [14:06] but its certainly cool, i now have SQL over Mongo, CSV, Parquet, HDFS, S3, Hbase and Hive for free [14:10] although for HBase I certainly want to bring Phoenix into the CS as well [14:20] I'll also add some hooks hopefully to auto register various data sources [14:29] fscking balls [14:29] why can't LXD local set hostnames correctly yet [14:30] rick_h_: where do juju core tickets go, github or launchpad? [14:30] magicaltrout: launchpad please === rodlogic is now known as Guest89868 [14:40] grr [14:40] filed [14:48] woop multiple drillbits... 
aka a cluster \o/ [14:48] nice \o/ [14:48] aye [14:48] magicaltrout: <3 [14:49] now i need some hooks so I don't have to google adding data sources every time ;) [14:49] juju actions ftw [14:50] this is the stuff though. Juju makes it easy to set up a cool hadoop cluster, but until now there has been no easy way for analysts to get the data back out unless they are Zeppelin/Spark/IPython "scientists" [14:50] providing SQL interfaces that are stable is key to getting people using the big data stuff Juju offers [14:50] because they can plug in their analytics tool of choice. [14:51] Of course I'm biased as I write one, but you can instantly hook the charms up to Tableau or whatever the big spenders want to connect to it === Guest61314 is now known as CyberJacob === CyberJacob is now known as Guest68658 === Guest68658 is now known as CyberJacob [14:53] so here's a question rick_h_, do you use actions, or do you use relations or do you use a bit of both? :) [14:55] magicaltrout: I guess it depends [14:55] magicaltrout: on what the need is there, when you mention adding data sources, I got thinking of an action to add a new one [14:56] yeah, I think you end up with a combo, some charms like Mongo you can just drag to add a relation, there's not much drill needs to know. Some other stuff is far more involved. Also you could connect to services on different networks which wouldn't work with relations [14:57] so I reckon you end up with a mix of both. If relations work, it should be supported to make it really simple. If they don't you could defer to an action to pass it some additional information it might need [15:00] anyway mongo is the simplest, so we'll start there and see how we get on [15:00] i have 60 minutes to get it working before dinner..... [15:00] Go! 
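The restart-on-quorum-change pattern kjackal pointed at in layer-apache-spark (save the quorum and compare, instead of gating on a one-shot 'drillbit.configured' flag) can be sketched outside the framework. This is an illustrative stand-in, not the real charms.reactive API: `data_changed` here re-implements the idea of charms.reactive.helpers.data_changed using an in-memory dict in place of the charm's unitdata store, and `configure_drillbit` is a hypothetical handler body.

```python
import hashlib
import json

_kv = {}  # stand-in for the charm's persistent unitdata key/value store


def data_changed(key, data):
    """Return True (and record the new value) when `data` differs from
    what was last recorded under `key` -- the same idea as
    charms.reactive.helpers.data_changed."""
    digest = hashlib.md5(
        json.dumps(data, sort_keys=True).encode()).hexdigest()
    changed = _kv.get(key) != digest
    _kv[key] = digest
    return changed


def configure_drillbit(zk_units):
    """Hypothetical handler body: re-render config and restart drillbit
    only when the set of Zookeeper units actually changed."""
    if not data_changed('drillbit.zk-quorum', sorted(zk_units)):
        return False  # quorum unchanged; leave the running service alone
    # ... write the ZK connect string into the drill config and
    # restart the drillbit service here ...
    return True
```

Calling `configure_drillbit` twice with the same quorum is then a no-op the second time, while a new ZK unit joining triggers a reconfigure, which is exactly the behaviour the one-shot flag was preventing.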
[15:08] cory_fu: so, when we will be checking out the bigtop branch, we will have to then copy/mv/link the layer directories under the LAYER_PATH to get to a working workspace [15:09] I guess we could automate this process with a script or something. [15:10] kjackal: Charm layers don't technically need to be under LAYER_PATH, only base layers. So you should set your LAYER_PATH to the directory where your base layers reside (which, due to restrictions in the build tooling, cannot currently be a part of an upstream repo) but build your charm layers from the charm layer directory inside your bigtop repo checkout [15:10] IOW, don't change your workspace at all, just clone bigtop somewhere else and build from there [15:11] cory_fu: yeap that makes sense [15:31] cory_fu, kjackal, kwmonroe: In the zookeeper charm, we fire off a "rest_config" handler when the state "config.changed.rest" is set. I don't see that state getting set anywhere, though (I'm grepping in a dir that contains both the bigtop base charm, and the zookeeper charm). [15:31] I see the same thing for the client_config handler, though in this case the mystery state is "zkclient.joined". [15:32] Am I missing a charm that does relevant things, or are both of those handlers broken? [15:32] petevg: They'll be set by base layers. [15:33] Right ... I didn't see it set in the bigtop base layer, though. [15:33] Specifically, the config.changed.X states get set here: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L126 [15:33] petevg: And zkclient.joined is set by the interface layer: https://github.com/juju-solutions/interface-zookeeper/blob/master/requires.py#L24 [15:34] Sorry, zkclient.joined would actually be set by this: https://github.com/juju-solutions/interface-zookeeper/blob/master/provides.py#L25 [15:34] Since the Zookeeper charm is the one providing that relation, not requiring it [15:35] Got it. That makes sense. [15:35] Thx. [15:50] mbruzek: [15:50] arosales: pong? 
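The config.changed.X states cory_fu links to in layer-basic are derived by diffing the current charm config against the previously-seen values, one state per changed key plus a catch-all. A minimal illustration of that mechanism (a simplified model, not the actual layer-basic code):

```python
def changed_config_states(previous, current):
    """Compute which config.changed[.<key>] states would be set, given
    the previous and current charm config dicts -- a simplified sketch
    of what layer-basic does when config changes."""
    states = set()
    for key, value in current.items():
        if previous.get(key) != value:
            # one fine-grained state per key whose value changed
            states.add('config.changed.' + key)
    if states:
        # plus the catch-all state when anything changed at all
        states.add('config.changed')
    return states
```

So a handler decorated with `@when('config.changed.rest')` fires whenever the `rest` config option's value differs from the last hook invocation, without the charm author setting that state anywhere explicitly.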
[15:51] Have any documentation outstanding for terms? [15:51] https://github.com/juju/docs/pull/1122 [15:51] arosales: aisrael did that part, let me find the link [15:51] only to be or resources but has a navigation bar entry [15:52] did Adam do terms and you do resources [15:52] ok [15:52] https://github.com/juju/docs/pull/1048 [15:52] I had those confused [15:53] mbruzek: [15:53] Thanks I'll see if any of the juju core folks can take a look [15:53] arosales: Yes I believe those are the necessary next steps [15:54] arosales: I saw you ping them already last week for this. [15:55] arosales: I pinged them on the issue, so they should have got a notification, I also asked katco to review via IRC last week. [15:56] mbruzek: i also pinged again in juju-dev [15:56] mbruzek: thanks for the info and work on it [15:56] arosales: ack. [15:58] evilnickveitch: let us know if you see any major issues with https://github.com/juju/docs/pull/1122 [16:01] arosales, nothing major, was waiting for a +1 on technical content [16:03] evilnickveitch: ack, and thanks for taking a look [16:03] cory_fu: kwmonroe kjackal petevg what state should i be using from the bigtop plugin to ensure its ready before installing pig? [16:03] bigtop.available [16:04] Beat me to it ... it's in the new README :-) [16:05] kjackal: petevg so if bigtop.available, hadoop is installed? [16:06] no wait, you need to run puppet apply after the bigtop.available [16:07] however for hadoop I guess you will need to wait for the hadoop.ready state of the plugin [16:07] kjackal: ok, thanks [16:13] kjackal: where does the hadoop.ready state come from? [16:21] admcleod-: The hadoop.ready state comes from the plugin interface layer: https://github.com/juju-solutions/interface-hadoop-plugin#provides [16:21] cory_fu: thanks === frankban is now known as frankban|afk === redir_afk is now known as redir [18:40] anyone know if the mongodb interface actually works? [18:40] it appears to be missing a provider [18:40] cmars: ? 
[18:41] magicaltrout, it works for the requires side [18:41] okay thats all I need, but I don't seem to be getting hostnames and stuff [18:41] * magicaltrout checks some more [18:43] magicaltrout, use the .connection_string() method on the arg passed to your reactive handler [18:43] magicaltrout, the state you'll want to handle is @when('.database.available') [18:45] magicaltrout, for example: https://paste.ubuntu.com/16872701/ [18:45] ah right, hadn't quite guessed the .connection_string() bit [18:45] magicaltrout, i usually just update unitdata and re-run a setup() function that gets called on relation hooks, config-changed, etc [18:45] thanks for that [18:45] magicaltrout, sure thing [18:48] bdx: ping [18:54] odd behaviour with the MySQL charm on the Trusty vagrant juju image... says the InnoDB plugin cannot allocate memory for its buffer pools... the machine doesn't seem to be out of resources... anyone else getting this? [18:55] maybe need to bump the box size. [18:57] j_king: dunno if its the same or if it even applies any more but: https://jujucharms.com/mysql/ [18:57] caveats section [18:58] magicaltrout: ah, ty. I'll check that out too... seems relevant given what the logs are spitting out. [19:05] cmars: i'm hitting the same error I had before with your tweaks: https://gist.github.com/buggtb/599e536da5e58d5180fe72e183f12c66 [19:05] TypeError: Can't convert 'NoneType' object to str implicitly [19:05] seemingly its returning the None object [19:06] any idea what the requirements are for it to not return None ? :) [19:07] magicaltrout, both hostname and port would need to be provided in the relation [19:08] magicaltrout, you could get None if you've just joined the relation but the handshaking hasn't "settled" yet [19:08] magicaltrout, wait, that's not right.. 
[19:09] people often say that to me [19:09] magicaltrout, if .available is set, connection_string() should not be None in the first place [19:11] magicaltrout, i wonder if this charm is in a weird state from a series of upgrades in development? [19:11] could be [19:11] i shall tear it down and start again [19:11] thanks cmars [19:11] magicaltrout, but i'd program defensively in this case, and test the connection_string() at the top of the function [19:12] k [19:13] mbruzek: sup [19:13] mbruzek: thanks for the review ... I fixed her up [19:14] bdx: I already wrote tests for you [19:14] bdx: https://code.launchpad.net/~mbruzek/charm-helpers/add_uid_gid/+merge/296132 [19:14] mbruzek: thats awesome, just seeing that now. rawk! [19:14] I was meaning to put this merge in all morning but was swamped with meetings [19:15] bdx: Thanks for the charm-helpers submission. [19:15] Feel free to extend/enhance the tests [19:16] bdx: Also I don't see the "<<<<< TREE" stuff in my host.py file, I think that is an artifact of the bzr diff. [19:16] the tests look great, yea, that was the one question I had ... ok [19:17] mbruzek: what does the patch decorator do? [19:17] bdx: it makes it so that if the code in that method calls the class, Python steps in and lets you define the return value or throw exceptions [19:18] nice, perfect [19:18] bdx: We use that to patch out the system calls that would return different users on different systems. [19:19] We can then check the patched objects were called with what we expected. [19:19] And Python does not make the call to subprocess and create a bunch of users on the test systems. [19:20] ahh, so patch basically mocks os level ops? [19:20] bdx: https://docs.python.org/3/library/unittest.mock.html [19:20] bdx exactly! [19:20] The patch() decorator / context manager makes it easy to mock classes or objects in a module under test. [19:21] mbruzek: makes total sense, and is super cool. thanks for the heads up on that [19:22] bdx: No problem. 
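mbruzek's description of patch() can be shown in a self-contained test. `add_user` below is a toy stand-in, not the actual charm-helpers host.py function or its signature; patching subprocess.check_call means no user is ever created on the test machine, and the injected Mock records what it was called with:

```python
import subprocess
from unittest.mock import patch


def add_user(username, uid=None):
    """Toy stand-in for a charm-helpers-style host function that shells
    out to useradd (hypothetical, simplified signature)."""
    cmd = ['useradd']
    if uid is not None:
        cmd.extend(['--uid', str(uid)])
    cmd.append(username)
    subprocess.check_call(cmd)  # replaced by a Mock under @patch below
    return cmd


@patch('subprocess.check_call')
def test_add_user_with_uid(mock_call):
    # The decorator swaps subprocess.check_call for a Mock for the
    # duration of this function, then restores it afterwards, so no
    # system call is ever made.
    cmd = add_user('drill', uid=1717)
    mock_call.assert_called_once_with(['useradd', '--uid', '1717', 'drill'])
    return cmd
```

This is the same pattern the charm-helpers tests use: mock the OS-level operation, then assert on what the code *would* have run.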
Thanks for putting in some work on charm-helpers, *we* really appreciate that [19:23] entirely! [19:24] the uid and gid specification will make it easier to nfs mount and match uid/gid for the provisioned/permissioned user across multiple hosts [19:25] bdx: Right. I saw the merge proposals for the big data stuff using this stuff [19:27] totally ... I want to create a user/groups provisioning layer .. I see myself lifting a bit from DistConfig in jujubigdata [19:32] i saw that chat. User provisioning layer would be super useful [19:34] but also making users a "thing" that juju can manage globally(outside of charms) would be very helpful as well [19:35] magicaltrout: yea, I feel like juju-core is so close to providing that functionality ..... [19:36] the disconnect is model-users vs os-users on the machines [19:38] model users all assume the ubuntu user with the addition of the model users key in authorized_keys .... which is a no no for PCI compliance :-( [19:39] adding users as a first class citizen is a huge step for juju in general ... I foresee the provisioning of model users down to the machine level to be a logical next step eh? [19:42] yeah absolutely. Charm builders need users for installing services in specific user spaces, thats granted, but hackish if you do it in python or whatever [19:42] charm consumers need users so they can grant access to boxes for specific users by ssh keys/passwords whatever [19:42] but the grail is also connectivity OOTB to LDAP or something more "enterprisey" [19:43] magicaltrout: totally [19:43] when i do stuff with NASA for example, all their boxes are secured with Kerberos, unsurprisingly [19:44] and it would be great to roll out a charm and have it hook up to a kerberos realm without me pissing about with a bunch of hacks [19:44] magicaltrout: yeah, a problem I face in my organization as well [19:46] bdx - wanna tune into the fridge again later today? 
~ 5:30 normal EOD time for me i'll kick up a stream [19:46] being able to say "here's my service/unit, here's the users/group in my kerberos setup I want to give access to, now make it so" would be fscking amazing [19:47] magicaltrout - all that is theoretically doable. Kerb is controlled by env vars right? [19:47] lazyPower: cheaaa! [19:47] magicaltrout: yea, so a kerberos subordinate! [19:47] bdx aight i may be a bit late. hacking on a layer for a blog post atm. I'll hit you with a link when its got frequency [19:48] nicccce! [19:49] oooh it works cmars, must have been a failed upgrade [19:49] magicaltrout, nice! [19:53] okay next random question for anyone who wants it [19:54] I'm creating a relation and I need to create a dynamic name for it, preferably based on the relation id [19:54] when i'm mid-hook in a reactive charm, can I get the ID? [19:54] does it even have an ID? [19:54] :) [19:55] my thinking being, say a model has 2 different mongo db services [19:55] a user creates a relation between both and drill [19:56] I need to create an entry for them within drill, but i also need to know to which relation it belongs so I can remove it [19:58] come on cory_fu you know crap like that! 
:P [20:01] magicaltrout - you use the conversation scope and relation-name to give that clearly defined paths of behavior and communication [20:01] magicaltrout: Each relation does have an ID, and you can get them, however in the reactive "world" there may be multiple different relations in a given state at a time, so you're not encouraged to deal with the relation IDs directly [20:01] magicaltrout: I'm also not sure what you mean by "remove it" [20:01] yeah, what cory_fu said [20:02] okay, so I create a drill service, and a mongo db service [20:02] I connect the two [20:02] on the drill side, i run a small rest call to the server to register the datasource [20:02] then when I remove the relation, i need to prod the same rest service with the name of my blob to delete it [20:03] this in a 1:1 model where you could only have 1 mongo and 1 drill for example is easy [20:03] but if I have 2 mongos and 1 drill, i need some dynamic naming convention that I can track to a relation [20:05] of course mongo could be anything, and the id can be anything, i just want a way to relate my datasources to a specific relation so i can de-register them [20:05] Ok, so if you have 2 mongos connected, I'm going to assume that it will be two different mongo services connected and not the same one connected twice. If that's true, you could use the name of the service to identify them (which is what lazyPower referenced when he said "conversation scope") [20:05] alrighty, sounds far more sensible [20:05] magicaltrout: You could also generate a UUID and store that either on the relation data or in the conversation's local data (https://pythonhosted.org/charms.reactive/charms.reactive.relations.html#charms.reactive.relations.Conversation.set_local) [20:06] service name sounds far more sensible and human understandable [20:10] I'm all for having a simple implementation! 
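cory_fu's suggestion (key each Drill storage-plugin entry by the remote service name, so that removing a relation maps cleanly to deregistration) can be sketched as below. The class and the payload shape are hypothetical illustrations, not Drill's or Juju's actual API; the None check is the defensive test of the connection string that cmars recommended earlier:

```python
class DatasourceRegistry:
    """Track one Drill storage-plugin entry per related service, keyed
    by the remote service name, so two different mongos relating to one
    drill get two distinct, removable entries (hypothetical sketch, not
    a real Drill REST client)."""

    def __init__(self):
        self.sources = {}

    def register(self, service_name, connection_string):
        # Defensive: the interface can hand back None before the
        # relation handshake has settled.
        if not connection_string:
            return None
        entry = {'type': 'mongo',
                 'connection': 'mongodb://%s/' % connection_string}
        self.sources[service_name] = entry
        # ... POST entry to Drill's storage-plugin REST endpoint here ...
        return entry

    def deregister(self, service_name):
        # ... DELETE the named plugin via the REST endpoint here ...
        return self.sources.pop(service_name, None)
```

With the service name as the key, the relation-departed side only needs to know which service went away in order to prod the REST service and delete the right blob.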
[20:10] indeed [20:10] I'm happy that little blurt of jargon helped actually [20:10] cory_fu *ta* for cleaning up another one of my messes :) [20:11] from an outsiders/newbie perspective its about trying to figure out the correct patterns [20:11] because I/we just make it up as we go along [20:12] mostly from prodding around in python docs and random stuff [20:13] so Service Name, if i want that lazyPower do I use hookenv.service_name ? [20:13] that works [20:13] make sure you're using distinct names in metadata.yaml for those relations too [20:13] hmm but the docs say the service this unit belongs to [20:13] oh you're looking for remote_unit [20:13] * lazyPower looks [20:13] 1 sec [20:13] sounds better [20:14] http://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.remote_service_name [20:14] oh yeah [20:14] what d'ya know [20:14] who gave it such a sane name [20:14] Oh hey [20:14] Be careful using that in reactive [20:15] +1 to that sentiment as well [20:15] That will only work in an active relation hook context, which may not be the case in reactive [20:15] what does that mean in english? [20:16] You should instead use Conversation.units or Conversation.relation_ids [20:16] for conv in self.conversations(): [20:16] and you can scope it with each conversation id and pull data in/out of the bag [20:16] It means that reactive handlers might be triggered in, say, update-status, and remote_service_name will return None [20:16] yeah, remote_service_name from hookenv relies heavily on only being used in the hook context, and reactive is a large abstraction away from that. [20:17] You can use conv.scope, as lazyPower suggests, but conv.units (a list) is probably better [20:17] Ok, in a meeting [20:17] i know enough about interfaces to be really dangerous [20:17] cory hasn't hit me over the head about any of them yet... but i don't think he's seen them all [20:28] okay so help me out here lazyPower if you're not moving too much. 
I've seen self.conversation() in interfaces [20:28] so I can call that in a standard reactive charm as well? [20:29] so long as you have an active instance of the interface, and that interface is scope: global [20:29] i think i may be wrong on the last part [20:29] but humor me until we find out otherwise [20:31] someone needs to write a book on this stuff [20:31] blimey its confusing [20:38] Reactive Juju Charms cookbook [20:38] that is whats needed in this world === natefinch is now known as natefinch-afk [20:43] jose was going to then we changed literally -everything- on him [20:56] gnuoy: ping [20:58] lazyPower: okay so i have a unit list like cory_fu suggests: {'', 'mongodb/0'} [20:58] so I should get a non null one and extract the service side of it, for a sane naming convention? [21:01] are charmstore revision numbers ever going to be useful anymore? [21:01] zing! [21:01] icey: not really, the goal is to move to channels being meaningful [21:02] ta [21:02] icey: revisions are just to access stuff not stuck into a channel [21:02] magicaltrout - sounds about right. [21:02] rick_h_: people sometimes refer to us with the rev number so that they can tell us what they had deployed :-/ [21:02] oh well [21:02] icey - i dont think that changes. revisions are still a linear thing right? [21:02] icey: that's meaningful if it came from the store, you can stick that in a bundle and replicate their exact setup [21:02] right [21:02] revisions still go up up up [21:02] yeah, the channel just references a revision number [21:03] uhm apparently nope: https://jujucharms.com/ceph-osd/ [21:03] rev 2 [21:03] Hi folks, is there any IRC channel for discussing charm solutions, e.g. OpenStack, Big Data ....? [21:03] after 2+ years of deploying... [21:03] godleon: you're in it [21:03] godleon: here is as good as any :) [21:03] icey it reset the revision number... did the charm recently change owners? 
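Extracting "the service side" of the unit list magicaltrout got back (e.g. {'', 'mongodb/0'}) is straightforward; a hedged sketch, assuming unit names always have the 'service/number' shape and filtering out the empty entries he saw:

```python
def remote_service_names(units):
    """Map unit names like 'mongodb/0' to their service name 'mongodb',
    skipping any empty entries that can appear in a conversation's
    unit list before the relation has settled."""
    return {unit.split('/')[0] for unit in units if unit}
```

Using the service name derived this way gives the human-readable, per-relation key discussed above without touching relation IDs directly.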
[21:03] that was a known bug at one point [21:03] all day the charm store has said no owners to a bunch of stuff [21:04] magicaltrout, rick_h haha, really? I thought it should be discussed on a mailing list [21:05] has someone ever tried to deploy a multi-hypervisor OpenStack before? [21:05] godleon: there's a juju mailing list [21:05] godleon: so the newest juju2 and the openstack team are working on multiple hypervisors with a lxd and kvm solution [21:06] godleon: so the openstack folks have definitely been trying it and working on it [21:08] rick_h oh? Canonical openstack team is working on multiple hypervisors (lxd + kvm) now? [21:08] godleon: yes [21:09] rick_h, I tried to deploy that yesterday, but I found it's a little bit buggy. [21:10] godleon: https://jujucharms.com/lxd/xenial/1 ? [21:10] rick_h, yes, that's one of the charms. [21:11] rick_h, I found the nova-scheduler can not work properly, maybe I just used the wrong charm relationship, but I am not sure about that. [21:11] rick_h, I used the LXD openstack bundle as a base to modify it into a multi-hypervisor bundle. [21:12] godleon, we have a proof-of-concept (dev/test) multi-hypervisor test scenario that's in pretty good shape. although, it is currently only validated with juju 1.25.x. there are some specific configs and image attributes necessary in order to get the scheduler to place instances appropriately. [21:12] rick_h, I can see the different hypervisors in Horizon, but it can not work correctly as I expected. [21:13] godleon, ie. we can fire up kvm and lxd instances and they do get scheduled to the right type of hypervisor based on image attributes. [21:13] beisner: ok, what image attributes should I input? and where should I input them? [21:14] beisner: hey! thats awesome! [21:14] beisner: where is that being developed? [21:15] beisner, yes, I saw that in the youtube video, but because I didn't see the demo, I don't know how to set the image attributes. 
[21:15] as soon as launchpad comes back to life i can link you guys to the dev/test (ymmv/bleeding) bundle [21:15] which includes a lil readme to describe the deploy/post-deploy config/basic test process [21:15] hey bdx ! [21:16] beisner: do you mean readme for multiple hypervisors? [21:17] godleon, it's a poc test scenario. not official documentation, nor is it intended for production as it uses the tip/master charm set. but this has been working since ~January so the stable charms should have the same basic functionality. [21:17] one could substitute our git: blah charm branches for pinned charmstore revisions (stable charms) [21:18] i think at some point we may publish an official bundle, but i suspect seeing our test approach will give all the info one would need to take it for a spin. [21:26] bdx, godleon, there is a network outage at the moment, not sure how long that will last. i'll paste a link back here for reference later. o/ [21:28] beisner: ok, thank you very much! :) [21:30] woop thanks lazyPower. That seems way harder than it should be (for a newb) but it appears I have a working mongodb relation [21:30] \o/ [21:30] magicaltrout the first one is always the trickiest :) [21:30] * magicaltrout dumps some data into mongo to test SQL access [21:31] and pat yourself on the back knowing you've accomplished one of the hardest things to do in juju according to > 3/4 of the people i've onboarded [21:32] like the actual code, I don't have a problem with its easy enough to grep your way around. But I come from a java world where when stuffs broken we just attach a remote debugger and step through it in the IDE. 
In python & debug-hooks whilst its a great way to stab stuff, its still pretty complex to be able to debug you code, especially now in reactive world [21:33] again thats probably more lack of understanding as opposed to it actually being stupidly complex, but it does make life hard [22:21] https://ibin.co/2j5wKx7V1x6C.png there you go lazyPower [22:21] sql over mongodb powered by some juju hacking [22:21] nice! [22:21] i actually have a proposal in with a client to create an elastic search adaptor for drill [22:22] i don't think they'll go for it, but if they did, it could be SQL over beats ;) [23:36] lazyPower: as I don't see mbruzek around atm I wanted to ask if there was a pull request for updating k8 upstream [23:36] * arosales didn't see one @ https://github.com/kubernetes/kubernetes/pulls but I may have missed it [23:37] lazyPower: also https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/layers/kubernetes is missing a LICENSE file which I think needs to be Apache [23:37] and copyright [23:40] and I guess the bundles @ https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/bundles should have something similar to https://api.jujucharms.com/charmstore/v5/bundle/hadoop-processing-3/archive/copyright