[08:03] <kjackal> Good morning Juju world
[09:55] <BlackDex> zeestrat: Thx for the info!
[09:56] <zeestrat> BlackDex: No worries. Feel free to add some logs to the fire (bug) :)
[09:56] <BlackDex> will do when i can! :)
[15:01] <CarlFK> marcoceppi: do you plan on merging the branch you made for me, or should I keep a local copy? https://github.com/marcoceppi/charm-ubuntu/tree/carlfk "example of hostname"
[15:03] <marcoceppi> CarlFK: I wasn't sure if you were interested in keeping it around, was looking for a +1 on the PR :)
[15:08] <CarlFK> marcoceppi: I think I just submitted a PR to myself.. https://github.com/CarlFK/charm-ubuntu/pull/1/commits/0022fb1f424168f60ac2a34c37e99701b1bf7137
[15:10] <marcoceppi> CarlFK: I merged that into my pull request
[15:10] <marcoceppi> https://github.com/marcoceppi/charm-ubuntu/pull/1
[15:10] <CarlFK> also, can you give me the 'right way' to do this: https://github.com/xfxf/video-scripts/blob/master/carl/ansible-misc/mk-hosts.py#L70-L73  # ssh ubuntu@streambackend3.video.fosdem.org "sudo cp .ssh/authorized_keys /root/.ssh"
[15:11] <CarlFK> brb, need to grab some breakfast
[15:12] <marcoceppi> CarlFK: `juju scp` is one way
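A sketch of marcoceppi's suggestion, doing the same thing as the raw `ssh` line in mk-hosts.py but through Juju's own tooling (the unit name `ubuntu/0` is an assumption; substitute your own unit):

```shell
# Run the copy on the unit via juju instead of raw ssh
# (unit name ubuntu/0 is assumed):
juju run --unit ubuntu/0 \
    'sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/authorized_keys'

# juju scp is the analogue for moving files between your machine and a unit:
juju scp ./extra_key.pub ubuntu/0:/tmp/extra_key.pub
```

Both commands go through the controller, so they work even when you have no direct ssh route to the machine.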
[15:41] <themagicaltrout> marcoceppi: i know i've asked before, how do i register an interface on juju.solutions?
[15:43] <bugg> oops
[15:48] <tvansteenburgh> magicaltrout: click the + next to the "interface:" header (assuming you're logged in)
[15:56] <magicaltrout> never noticed that login button in all my life
[15:56] <magicaltrout> thanks tvansteenburgh
[16:06] <magicaltrout> nice
[16:06] <magicaltrout> works and everything
[17:17] <CarlFK> marcoceppi: I am still getting "unauthorized: access denied for user "carlfk"  from:  juju deploy ubuntu --channel edge
[17:17] <CarlFK> I think you said you did something, but I never tested
[17:18] <CarlFK> I did get a browser login ... "login successful as user carlfk"
[17:41] <marcoceppi> CarlFK: about to push it to the stable channel
[18:36] <arosales> jcastro: is the k8 sig starting @ zoom?
[18:40] <jcastro> it was supposed to start 10 minutes ago
[18:40] <jcastro> but there's no one here, so I posted on the mailing list
[18:40] <jcastro> I'm not crazy right, it's 10:40 PST right now right?
[18:40] <jcastro> and it's tuesday
[18:40] <arosales> jcastro: I am there, but it says "waiting for host to start"
[18:41] <arosales> It is 10:40 pst
[18:41] <arosales> and it is Tuesday, Jan 3 :-)
[18:43] <jcastro> same with me
[18:44] <jcastro> sigh, they did this last time too
[18:44] <jcastro> they apparently haven't had a meeting since 12/13
[18:44] <arosales> jcastro: ok, thanks for confirming. I'll close down zoom. Also thanks for posting to the list and following up.
[18:45] <jcastro> I don't really have a choice
[18:45] <jcastro> it's like, I need to escalate to them
[18:45] <jcastro> because my thing has been sitting in github for over a month
[18:45] <lazyPower> blerg :|
[18:45] <lazyPower> being blocked on others is the pits
[18:53] <jcastro> ugh
[18:53] <jcastro> it's a biweekly meeting
[18:53] <lazyPower> oh :) thats fun then. so we're probably just off skew by a week right?
[18:53] <jcastro> yeah
[18:54] <jcastro> "This branch has no conflicts with the base branch"
[18:54] <jcastro> I AM SO READY CHUCK
[18:54] <lazyPower> i wish i had the button clicking privs man
[18:54] <lazyPower> i'd love to merge that monster doc pr
[18:54] <lazyPower> we're about to do the same thing to core, we have all this backlog of CDK work that needs to land upstream
[18:54] <jcastro> I just want one thing
[18:55] <jcastro> let us rev as fast as we want in /ubuntu
[18:55] <jcastro> like, doing multiple reviews, etc. is fine
[18:55] <lazyPower> yeah the fact we're blocked and have bad info in those docs is disconcerting
[18:55] <jcastro> but like, matt and you should be able to ping pong PRs off each other for example
[18:55] <jcastro> without waiting for some dude who has no time to comment on your thing
[18:55] <lazyPower> yar
[18:55] <lazyPower> i agree with you
[18:55] <lazyPower> the effort to organize reviewers is still WIP though
[18:56] <lazyPower> which is understandable given the size of the project
[18:56] <jcastro> on the plus side
[18:56] <jcastro> there's only 67 PRs now
[18:56] <jcastro> it was like 110
[18:57] <lazyPower> noice! I didn't notice that
[18:57] <lazyPower> i'm still clearing the 400+ notifications in github
[18:58] <jcastro> we're half way on page 2 now, so I think we're moving up lol?
[19:52] <jcastro> rick_h: wanna sync up tomorrow and bust out this wikipedia page?
[20:00] <CarlFK> marcoceppi: "Message": "not found: URL has invalid charm or bundle name: \"~marcoceppi/xenial,trusty,precise\"",  https://api.jujucharms.com/charmstore/v5/~marcoceppi/xenial,trusty,precise/ubuntu/archive/layer.yaml
[20:02] <rick_h> jcastro: can see if we can find space
[20:13] <marcoceppi> CarlFK: what version o fjuju?
[20:14] <CarlFK> 2.0.2-xenial-amd64
[20:14] <marcoceppi> CarlFK: so you're typing `juju deploy ubuntu`?
[20:14] <CarlFK> marcoceppi: er.. fjuju what?
[20:15] <marcoceppi> s/o fjuju/of juju/g ;)
[20:15] <CarlFK> no - saw that on the jujucharms page
[20:15] <marcoceppi> CarlFK: you should just be able to `juju deploy xenial/ubuntu <name of app>`
[20:16] <CarlFK> that works.  I was clicking around https://demo.jujucharms.com/?store=cs%3Aubuntu-8   and saw that message
[20:16] <marcoceppi> CarlFK: weird
[20:20] <rick_h> mbruzek: I can't seem to find the original conversation: https://bugs.launchpad.net/juju/+bug/1623217
[20:20] <mup> Bug #1623217: juju bundles should be able to reference local resources <juju:Triaged> <https://launchpad.net/bugs/1623217>
[20:20] <rick_h> mbruzek: I know you hit me up in some channel heh
[20:22] <CarlFK> marcoceppi: is there a similar thing that uses debian?  (I am fumbling with debops, wondering "maybe it would work with debian?" )
[20:22] <marcoceppi> rick_h: do we have debian in juju agent support?
[20:23] <marcoceppi> CarlFK: anything I could help out with? I mean, Debian and Ubuntu are so similar, but I've only really used Ubuntu so I might be able to help clarify
[20:23] <rick_h> marcoceppi: no, we don't at the moment.
[20:24] <rick_h> marcoceppi: I think it's something where we'd be very interested in community involvement for enabling debian agents
[20:25] <CarlFK> marcoceppi: welp.. sure.. this stuff is kinda nifty...   https://docs.debops.org/en/latest/debops/docs/index.html
[20:25] <marcoceppi> Oh, debops, I thought that was a devops typo
[20:25] <marcoceppi> I've never heard of debops /me reads
[20:26] <CarlFK> marcoceppi: I am on day 2 with this..  the lead guy has been helping me in  #debops
[20:26] <marcoceppi> CarlFK: sweet, I'll join there as well
[20:55] <magicaltrout> marcoceppi or someone who might know, is it possible to get relation info inside an action?
[20:55] <marcoceppi> magicaltrout: technically, yes
[20:55] <marcoceppi> magicaltrout: you can get relation info whenever you want, if you have the right incantation
[20:57] <magicaltrout> i like the "technically" bit.. makes me suspicious
[20:57] <magicaltrout> if an action needed an ip of a service marcoceppi would you set a kv variable in the main charm reactive code, or pick it up inside the action?
[20:58] <marcoceppi> I would query directly
[20:59] <marcoceppi> so, there are two hook tools that make this possible: the first is relation-ids, the second is relation-list. `relation-get` has two extra parameters that are taken from environment variables when in a relation context, but can be set in a hook context (or action) on the CLI in order to scope the call correctly: the `-r` flag and the JUJU_REMOTE_UNIT positional argument
[21:00] <marcoceppi> relation-ids gives you the `-r` flag values, i.e. all the JUJU_RELATION_IDs that exist for a given relation. For example, if you have a relation called "database" you could run `relation-ids database`, which returns a list of >=0 unique ids for that relation. So if you have the database relation connected to two applications, you'd get back two ids. If it's only one, then one; none - none, etc
[21:01] <magicaltrout> cool yeah found the stuff lurking in the docs
[21:01] <magicaltrout> thanks
[21:01] <marcoceppi> magicaltrout: you can then use the relation-list command to list all the units in a given relation id, `relation-list -r $JUJU_RELATION_ID` is a list of the units there
[21:01] <marcoceppi> magicaltrout: finally, there's charm-helpers that make all that easy to manage
[21:01] <marcoceppi> here's an example of that in action
[21:07] <marcoceppi> magicaltrout: https://gist.github.com/marcoceppi/193a8e4c37463cae95807499160ea7df
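A minimal sketch of the pattern marcoceppi describes, wrapping the three hook tools in Python. The hook tools are only on PATH inside a charm execution context; the relation name "database" and the helper names here are assumptions for illustration, not charm-helpers API:

```python
# Sketch: querying relation data from an action, outside a relation hook.
# relation-ids / relation-list / relation-get are Juju hook tools available
# on PATH when the action runs; only the command construction is shown here.
import subprocess

def relation_get_cmd(rel_id, unit, key="private-address"):
    """Build a relation-get call scoped with -r and the remote unit name."""
    return ["relation-get", "-r", rel_id, key, unit]

def remote_addresses(relation_name="database"):
    """Yield the private-address of every unit on the named relation."""
    # relation-ids gives the -r values for this relation name
    rel_ids = subprocess.check_output(
        ["relation-ids", relation_name]).decode().split()
    for rel_id in rel_ids:
        # relation-list -r <id> gives the units on that relation id
        units = subprocess.check_output(
            ["relation-list", "-r", rel_id]).decode().split()
        for unit in units:
            yield subprocess.check_output(
                relation_get_cmd(rel_id, unit)).decode().strip()
```

The key point is the scoping: outside a relation hook there is no JUJU_RELATION_ID in the environment, so `-r` and the unit argument must be passed explicitly.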
[21:11] <magicaltrout> thanks marcoceppi
[21:11] <magicaltrout> also regarding actions as i have you on the horn. I saw the openstack folk write bash scripts and then call a python script from within them
[21:12] <magicaltrout> is that the sensible/correct way, or should I just write a python script with a main function?
[21:12] <magicaltrout> i've only written bash actions before but I may as well python-ise them
[21:15] <marcoceppi> magicaltrout: you can do either
[21:15] <marcoceppi> magicaltrout: I prefer all python
[21:15] <magicaltrout> cool
[21:16] <magicaltrout> seemed a bit ott to have a bash script just run a python script but figured i should check
[22:14] <magicaltrout> okay i'm trawling the random requirements tonight
[22:14] <magicaltrout> marcoceppi: is there a `juju scp` alternative for charmhelpers?
[22:19] <magicaltrout> or another way to pass files between charms
[22:22] <magicaltrout> lazyPower you must know!
[22:22] <lazyPower> oh oh
[22:22]  * lazyPower reads backscroll
[22:23] <lazyPower> ah
[22:23] <lazyPower> err
[22:23] <lazyPower> no, we tend to either proxy data over the relation wire (text based). If you're wanting to push files we dont have a really good pattern for that
[22:23] <magicaltrout> hmm nice
[22:24] <lazyPower> unless i'm mistaken, has the big software team done any pioneering work around that question kwmonroe? (re: juju scp for charms to copy files among themselves)
[22:24] <lazyPower> i know we haven't over here in k8s land, we're using resources to ensure everything is present before it kicks off (save for tls certs and the like)
[22:25] <magicaltrout> yeah what i'm wanting though is to start Solr then when the relation is joined the other end can copy in its own config and stuff into solr
[22:25] <magicaltrout> technically it's all text based but it's a directory with a bunch of arbitrary text files in and it's just easier to send over a zip or something
[22:25] <lazyPower> oh sure, sure, i understand the desire
[22:26] <lazyPower> i just dont think we've established a good, functional, repeatable solution for this.
[22:26] <petevg> lazyPower: afaik, we're using resources or relation data everywhere, too.
[22:26] <magicaltrout> aww you all make me so sad
[22:26] <lazyPower> yeah, i thought that was the case
[22:26]  * petevg sheds tears
[22:26] <lazyPower> magicaltrout openstack is still a hope, they have done some cool stuff in charmhelpers that might be lingering to help
[22:26] <vmorris> juju 2.0, manual provider and a few added machines -- if I wanted to setup LXD bridges to the physical network for each of these machines, what would be the appropriate method? I am already altering the default lxd-profile for juju on each of the machines, but the LXD containers are just picking up a 10.0.0.0/24 address
[22:27] <kwmonroe> magicaltrout: lazyPower:  super simple solution for sharing charm data.  simply deploy hadoop for all your workload needs.  everybody can see hdfs://tmp/myresource.
[22:27] <magicaltrout> you make me even sadder kwmonroe
[22:28] <kwmonroe> when all you have is a hammer, everything looks like hadoop.
[22:28] <vmorris> ^^
[22:31] <CarlFK> not sure this is a juju problem - I need help connecting a .. container to a bridge network so that I can pxe boot a vm from the dhcp server running in a container
[22:33] <vmorris> to answer my own question, I think I need to be changing /etc/default/lxd-bridge to suit on each of the manually added machines... perhaps there's a better way
[22:33] <CarlFK> the container started with juju deploy ubuntu t3, installed dnsmasq dhcp server into it.  now I want to test it with a vm
[22:33] <kwmonroe> vmorris: i *think* it's all in how you setup the lxd bridge (sudo dpkg-reconfigure lxd).  that's where you can answer questions like "what subnet to use?" and "do i need to NAT my ip4 addresses" etc...
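Tying kwmonroe's and vmorris's suggestions together, a sketch for bridging LXD containers onto the physical network on LXD 2.0/xenial (the bridge name `br0` and the `-p medium` priority are assumptions; adjust for your host):

```shell
# Re-run the lxd bridge questions (subnet, NAT, use existing bridge, etc.):
sudo dpkg-reconfigure -p medium lxd

# Or edit /etc/default/lxd-bridge directly to point at a pre-existing
# host bridge on the physical network, e.g. (values are assumptions):
#   USE_LXD_BRIDGE="false"
#   LXD_BRIDGE="br0"
# then restart the services so new containers pick it up:
sudo service lxd-bridge restart
sudo service lxd restart
```

Containers started after the restart attach to `br0` and take addresses from the physical network's DHCP instead of the default 10.x NAT subnet.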
[22:34] <vmorris> kwmonroe: ah yea.. i was kinda hoping that there was something in juju that would let me do this when adding the machines, but i suppose this is appropriate & the same solution i'm looking at now
[22:41] <kwmonroe> CarlFK: your vm may need to be on the same subnet as your container.  iirc, pxe blasts out over udp (at least for the tftp part) and doesn't cross subnets so well.
[22:42] <CarlFK> yup.  so they would be all on the same.. something
[22:47] <magicaltrout> lazyPower: i was thinking one option currently might be to write a charm that is subordinate of solr which basically just has the resource for my other charm
[22:47] <magicaltrout> it would be a bit $hit but works I guess
[22:48] <lazyPower> i dont like that solution
[22:48] <lazyPower> it seems clunky
[22:49] <magicaltrout> well the other solution currently is a bunch of juju scp && juju run blocks
[22:50] <lazyPower> yeah neither are appealing
[22:50] <kwmonroe> cough... hdfs... cough.
[22:51] <lazyPower> magicaltrout i'll have a deeper think on this, but I dont know that i'll have a good suggestion. We've experimented with NFS and SSH in the past
[22:51] <kwmonroe> magicaltrout: i'm only 80% kidding about hdfs.  is a shared filesystem of any kind an option?  nfs?
[22:51] <lazyPower> and it was inelegant
[22:51] <kwmonroe> s3 over sshfs
[22:52] <magicaltrout> kwmonroe: yeah i know but its a one shot event, adding in a subsystem like that seems like overkill to ship a tarball from x to y
[22:53] <kwmonroe> magicaltrout: netcat | gzip /etc/foo is always fun.. assuming you care nothing of the integrity of your payload.
[22:55] <kwmonroe> fwiw, i also don't like the subordinate approach.  that feels like you're making 2 charms just to ship a tarball from x to y
[22:56] <magicaltrout> well.... i would be :P
[22:56] <kwmonroe> lazyPower: what's the max size of relation data?  65k?
[22:56] <magicaltrout> but I also want to make this stupidly simple for some DARPA love and buy in so I'm trying to avoid anything more than juju deploy my-bundle
[22:56] <jhobbs> is there a way to set default constraints for all new models after bootstrap?
[22:57] <kwmonroe> jhobbs: juju set-model-constraints i think
[22:58] <kwmonroe> oh wait.. nm.. that's not gonna handle new models.
[23:00] <kwmonroe> magicaltrout: is the thing connecting to solr always going to have the same resource?
[23:00] <rick_h> jhobbs: model-defaults
[23:00] <kwmonroe> magicaltrout: iow, can you just include that in the solr charm, and then on relation, move it or enable it in some way?
[23:00] <rick_h> jhobbs: juju help model-defaults for help/etc
[23:01] <magicaltrout> in this context kwmonroe then yeah i could put it in the solr charm, but then they use solr for loads so you'd end up having a bunch of bespoke resources for different things in a generic solr charm
[23:01] <magicaltrout> guess it would work for now though
[23:01] <magicaltrout> enough to blag them through the demo phase
[23:01] <kwmonroe> and that's what we're shooting for in 2017.  just enough to blag.
[23:01] <magicaltrout> hehe
[23:02] <petevg> We of the soaring ambition.
[23:02] <magicaltrout> was 2016 the year of the out of arms reach demo?
[23:02] <magicaltrout> 2017 just enough to blag through it when people start to use things
[23:02] <magicaltrout> 2018 maybe approaching usable... ;)
[23:02] <petevg> Something like that :-)
[23:03] <magicaltrout> i like it
[23:03] <jhobbs> rick_h: that doesn't work for me, i get this warning and then the constraints don't apply
[23:03] <jhobbs> rick_h: http://paste.ubuntu.com/23736102/
[23:03] <petevg> cory_fu, kwmonroe: speaking of breaking things in the name of preparing us for our glorious future, I've got that that log dumping, crash reporting branch of matrix working: https://github.com/juju-solutions/matrix/pull/63
[23:04] <petevg> At least, it works great on my computer. Would appreciate some verification :-)
[23:04] <jhobbs> hmm maybe that was because i didn't make a new model, hold
[23:04] <rick_h> jhobbs: or sorry, I thought you meant config. I missed "constraints"
[23:05] <rick_h> jhobbs: so...no, I don't think so. I think it defaults to a set of constraints and then is overridden at the set-model-constraints level
[23:05] <jhobbs> ok
[23:05] <jhobbs> thanks rick_h
[23:07] <jhobbs> rick_h: bug filed https://bugs.launchpad.net/juju/+bug/1653813
[23:09] <kwmonroe> jhobbs: rick_h:  isn't there a bootstrap-constraints that can be different than future model constraints?
[23:10] <kwmonroe> jhobbs: i think "juju bootstrap --bootstrap-constraints mem=2G --constraints mem=4G" would mean your bootstrap node gets a 2gb instance, and all future machines get 4gb.
[23:11] <rick_h> kwmonroe: right but he wants to change them after bootstrap?
[23:11] <kwmonroe> i don't think he knows what he wants
[23:11] <kwmonroe> crazy texans
[23:11] <jhobbs> kwmonroe: my understanding is that bootstrap constraints applies to bootstrap in addition to constraints
[23:12] <jhobbs> kwmonroe: so that the constraints i specify with 'constraints' also apply to the bootstrap node, in addition to the bootstrap constraints
[23:12] <jhobbs> i do not want the 'constraints' to apply to the bootstrap node - if that was the case, that would solve my problem too
[23:13] <kwmonroe> ahhh, i'm really not sure jhobbs.  gimme 2 minutes.  i'll bootstrap with -bs-c and -c and see what happens.
[23:14] <kwmonroe> maybe more than 2 minutes:  ERROR detecting credentials for "azure" cloud provider: credentials not found
[23:15] <jhobbs> i will test kwmonroe
[23:15] <jhobbs> thanks
[23:15] <kwmonroe> oh, nm, i typed the region name wrong.. i'm on it.  start the clock!
[23:15] <jhobbs> ah ok :)
[23:19] <kwmonroe> aight jhobbs, bootstrap-constraints were honored.. i did this:   juju bootstrap azure/centralus --bootstrap-constraints mem=2G --constraints mem=8G  and got a bootstrap node with 3.5G (smallest size that fulfilled mem=2G).  i'm deploying ubuntu now to see if my default model constraints are set to 8.
[23:21] <jhobbs> kwmonroe: i tested and it doesn't seem to work the way i want
[23:21] <jhobbs> kwmonroe: juju bootstrap --config agent-stream=devel integrationmaas --to hayward-00 --constraints="tags=hw-jhobbs" --bootstrap-constraints=""
[23:21] <jhobbs> kwmonroe: i don't want the tags requirement to apply to bootstrap but it does
[23:21] <jhobbs> "No available machine matches constraints: mem=3584.0 name=hayward-00 tags=hw-jhobbs"
[23:21] <kwmonroe> jhobbs: what about --bootstrap-constraints="tags=''"
[23:21] <jhobbs> ahh good call
[23:21] <jhobbs> i'll try that
[23:22] <kwmonroe> not saying that's right, but i wonder if it'll override the value if given a key.
[23:23] <jhobbs> success! it didn't like the single quotes, but this worked: juju bootstrap --config agent-stream=devel integrationmaas --to hayward-00 --constraints="tags=hw-jhobbs" --bootstrap-constraints="tags="
[23:23] <jhobbs> thanks kwmonroe
[23:24] <kwmonroe> np jhobbs.. now to determine if that's by design or not.  it seems like you're right -- constraints are passed as bootstrap-constraints if not explicitly overridden.
[23:24] <kwmonroe> i dunno if that's how rick_h wanted it or not.
[23:25] <jhobbs> seems like you should be able to change that setting after bootstrap still
[23:25] <jhobbs> i updated the bug with that work around though
[23:25] <jhobbs> thanks!
[23:25] <kwmonroe> np