[00:03] <anastasiamac> wallyworld: looking :D
[00:05] <anastasiamac> wallyworld: there is nothing to do with regards to pricelist and stuff that axw did before holidays?..
[00:05] <wallyworld> anastasiamac: otp, sec
[00:05] <wallyworld> anastasiamac: that pricelist stuff is orthogonal
[00:06] <wallyworld> you can bootstrap just fine
[00:06] <anastasiamac> wallyworld: lgtm'ed
[00:06] <wallyworld> ty
[00:07] <mup> Bug #1634289 opened: new AWS region: us-east-2 <juju:In Progress by wallyworld> <juju-core:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1634289>
[00:17] <anastasiamac> wallyworld: i seem to recall u had a cunning plan to specify what machine gets picked from maas based on some user desire... where is it at?
[00:23] <thumper> time to take the dog for a wander
[00:26] <anastasiamac> alexisb: if u have a sec, I'd love ur advice on something
[00:34] <wallyworld> anastasiamac: sorry, was otp. we have had the ability to use maas agent as a placement directive. that's been there for a while. not sure if that's what you mean
[00:38] <anastasiamac> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1345440
[00:38] <mup> Bug #1345440: add-machine does not check for duplicates <add-machine> <maas-provider> <juju-core:Won't Fix> <https://launchpad.net/bugs/1345440>
[00:39] <wallyworld> anastasiamac: right, that's what I think the agent name placement directive is used for
[00:39] <wallyworld> but i'm not 100%
[00:39] <wallyworld> would need to look at the code
[00:40] <anastasiamac> wallyworld: i know it's an old bug, but if u could comment on it with what you know currently, i'd really really appreciate it
[00:40] <anastasiamac> (like coffee-worthy appreciate \o/)
[00:40] <wallyworld> ok, i'll need to do it later today after i look at the maas code
[00:40] <anastasiamac> \o/
[00:41] <anastasiamac> i guess, i'll pay coffee in credit then ;D
[01:12] <lazyPower> anastasiamac juju wait was owned by stub iirc
[01:21] <alexisb> anastasiamac, fyi, I gave wallyworld a task that may prevent him from having time to update lp 1345440 today
[03:03] <anastasiamac> lazyPower: tyvm \o/
[03:04] <anastasiamac> alexisb: ack
[03:08] <thumper> damn
[03:08] <thumper> can't deploy anything from the charmstore with beta 7 as I get unknown channels
[03:08] <thumper> and juju panics
[03:08]  * thumper sighs
[03:09] <veebers> thumper: I think beta13 is the oldest that can be used to deploy from charmstore (I hit this a little while back :-|)
[03:10]  * thumper grabs marcoceppi's ubuntu charm directly from github
[03:11]  * thumper hopes it deploys locally into beta 7
[03:12] <thumper> well... something is happening
[07:19] <balloons> anastasiamac, did you see my comments about juju-1-switch
[07:19] <anastasiamac> yes, u r awesome: marked as duplicate
[07:19] <anastasiamac> balloons: i was also told that u have a plan \o/
[07:19] <mup> Bug # opened: 1577556, 1592887, 1597318, 1614633, 1615986, 1616149, 1616832
[07:20] <balloons> anastasiamac, it seems like juju-1-switch should have worked or>>
[07:20] <balloons> anastasiamac, the update-alternatives issue can be solved, but I'm afraid the juju-1-switch command still won't work
[08:43] <dimitern> frobware: ping
[08:43] <frobware> dimitern: pong
[08:43] <frobware> dimitern: your PR is on my list
[08:43] <dimitern> frobware: I've forked your kvm-maas repo and I'm close to sending a PR your way to handle multiple networks
[08:44] <frobware> dimitern: I have a PR pending for that too. :)
[08:44] <dimitern> frobware: yeah, a final look on my PR will be good at some point ;)
[08:45] <dimitern> frobware: nice ;) we could integrate the approach later I guess?
[09:25] <dimitern> frobware: there it is - even works :) https://github.com/frobware/kvm-maas/pull/1
[09:28] <frobware> dimitern: \o/ ok, entering review mode now. first up is your net port prober.
[09:28] <dimitern> frobware: cheers
[09:31] <frobware> dimitern: ah, actually need to raise a bug first. :/
[09:33] <dimitern> np :)
[10:27] <frobware> dimitern: reviewed. just a few comments.
[10:28] <dimitern> frobware: tyvm!
[10:34] <dimitern> frobware: updated to use conn.Close() instead of defer conn.Close()
[10:35] <frobware> dimitern: close early, close often. :)
[10:36] <dimitern> frobware: yeah - originally I had it like this because the results chan was unbuffered, now it doesn't matter
[10:39] <dimitern> ow ffs! maas 1.9 DNS can resolve only its nodes' hostnames, not its own hostname :/
[10:45] <dimitern> frobware: oops sorry - I didn't see your "outdated" comments
[10:46] <dimitern> frobware: compromise? ReachableHostPort() ?
[10:48] <dimitern> or even SelectReachableHostPort
[10:48] <frobware> dimitern: yep, prefer Reachable.
[10:48] <frobware> dimitern: first of your alternatives.
[10:48] <dimitern> frobware: ok, pushing in a moment.
[10:49] <frobware> dimitern: less words. select.. what? ;)
[10:51] <perrito666> morning
[10:54] <dimitern> frobware: :)
[10:57] <dooferlad> https://github.com/juju/juju/pull/6465 <-- dimitern, frobware, voidspace instead of messing with hostnames, just reverting an earlier change. Seems like something else fixed the container DNS issue.
[10:59] <dimitern> dooferlad: LGTM
[11:01] <voidspace> dimitern: I didn't get to your review yesterday, did you get one?
[11:01] <dimitern> voidspace: I got one from frobware, but feel free to have a look :)
[11:03] <voidspace> dimitern: you have the link handy?
[11:03] <frobware> voidspace: https://github.com/juju/juju/pull/6454
[11:04] <frobware> dimitern: I was trying your kvm-maas PR - "error: could not determine IP address for PXE network br-enp1s0f1 br-enp1s0f0[0]'"
[11:04] <frobware> dimitern: need to investigate a bit
[11:08] <dimitern> frobware: where is that? kvm-maas-add-node ?
[11:08] <frobware> dimitern: not sure; flipping between too many things, but yes, my first add-node failed.
[11:08] <dimitern> voidspace: sorry, just pushed the last change related to frobware's review
[11:08] <voidspace> dimitern: whilst you're feeling talkative...
[11:09] <voidspace> dimitern: I'm diagnosing issues with vsphere and xenial. Any unit gets an fe80:: address (or similar) - which is machine local ipv6
[11:09] <dimitern> frobware: why "br-enp1s0f1" ? I'd expect to see e.g. virbr42 instead (well in my case bridge-name==virt-net-name)
[11:09] <voidspace> dimitern: and from my logging, as far as I can tell machine addresses are *never* set
[11:09] <voidspace> dimitern: so juju/vsphere/xenial is unusable
[11:09] <voidspace> dimitern: do you have suggestions as to my next step in debugging
[11:10] <voidspace> dimitern: the machines get ipv4 addresses from vsphere
[11:10] <dimitern> voidspace: fe80:: addresses are link-local ones
[11:10] <voidspace> dimitern: right, link local
[11:10] <voidspace> dimitern: I can't ssh to the machine via juju
[11:10] <dimitern> voidspace: they always exist when ipv6 is enabled - so somewhere we're not filtering them properly
[11:10] <frobware> voidspace: is vsphere/trusty usable?
[11:10] <voidspace> frobware: yes
[11:11] <frobware> voidspace: last time I looked at this I could only do *anything* on vsphere as long as it was trusty
[11:11] <voidspace> dimitern: it's not that we're not filtering - we don't have *any other addresses* for the machine in state
[11:11] <voidspace> frobware: I believe that's still the case
[11:11] <dimitern> voidspace: ah.. well if that's the case something is quite broken
[11:11] <voidspace> dimitern: at least that's my current conclusion - adding more logging to confirm
[11:11] <voidspace> dimitern: yep :-)
[11:11] <dimitern> voidspace: on the provider side I'd guess
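The filtering dimitern is talking about boils down to dropping link-local addresses before recording machine addresses; a minimal sketch (a hypothetical helper, not juju's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// filterLinkLocal drops IPv4/IPv6 link-local addresses (169.254.0.0/16,
// fe80::/10) such as the fe80:: ones voidspace is seeing: they exist on
// every interface whenever IPv6 is enabled and are useless as machine
// addresses, so a provider should never report them.
func filterLinkLocal(addrs []string) []string {
	var out []string
	for _, a := range addrs {
		ip := net.ParseIP(a)
		if ip == nil || ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast() {
			continue
		}
		out = append(out, a)
	}
	return out
}

func main() {
	fmt.Println(filterLinkLocal([]string{"fe80::1", "10.0.0.5", "169.254.1.1"})) // → [10.0.0.5]
}
```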
[11:11] <dimitern> rick_h_: ping?
[11:12] <rick_h_> dimitern: pong
[11:12] <dimitern> rick_h_: should we do our 1:1?
[11:12] <rick_h_> dimitern: I'm there
[11:12] <dimitern> rick_h_: ok, omw
[11:12] <voidspace> dimitern: yep, agreed - but it maybe a problem with provisioning xenial images on vsphere
[11:12] <voidspace> dimitern: I'd like to get into the machine to see
[11:13] <voidspace> dimitern: will try the controller machine as I can contact the state server
[11:13] <frobware> dimitern: http://paste.ubuntu.com/23343319/
[11:13] <frobware> dimitern: haven't looked into why
[11:13] <dimitern> voidspace: let me get back to you a bit later
[11:13] <voidspace> dimitern: sure
[11:26] <dimitern> frobware: found it! pushing update to the PR
[11:30] <dimitern> voidspace: if you can ssh into the controller, what address(es) did you use?
[11:30] <voidspace> dimitern: just hacking up some more logging, trying again shortly - will let you know
[11:30] <dimitern> voidspace: ok
[11:32] <mup> Bug #1324841 changed: Improve isolation in utils/file_test.go <juju-core:Won't Fix> <https://launchpad.net/bugs/1324841>
[11:32] <mup> Bug #1325837 changed: juju run is updating ~root/.ssh/known_hosts <run> <ssh> <juju-core:Won't Fix> <https://launchpad.net/bugs/1325837>
[11:39] <frobware> dimitern: closer... http://paste.ubuntu.com/23343396/
[11:41] <frobware> dimitern: ahhh
[11:41] <frobware> dimitern: that's because my virt-network is now a bridge. Ho-hum.
[11:44] <dimitern> frobware: yeah - I use the same names for the bridge and the network; btw pushed another
[11:45] <frobware> dimitern: different, my virt network definition is actually a bridge. http://paste.ubuntu.com/23343406/
[11:46] <frobware> dimitern: for that kind of definition this virt_network_address() will always fail
[11:47] <dimitern> frobware: that looks almost the same as what I have locally, e.g. http://paste.ubuntu.com/23343418/ (for maas-int19)
[11:48] <frobware> dimitern: but you have 'ip address=1.2.3.4'
[11:48] <dimitern> frobware: ah, because it's a bridge without an address
[11:48] <dimitern> yep
[11:48] <frobware> dimitern: it's actually the host's bridge (which does have an address)
[11:49] <balloons> .query stgraber
[11:49] <frobware> dimitern: I think that's a separate commit/fix/enhancement. when I first started doing this I only needed libvirt-derived bridges. Now I want them on my various VLANs
[11:49] <dimitern> frobware: re https://github.com/juju/juju/pull/6454 - good to land?
[11:50] <frobware> dimitern: yep
[11:50] <frobware> dimitern: thanks all around
[11:52] <dimitern> frobware: cheers!
[11:53] <dimitern> jam: are you around?
[11:53] <frobware> dimitern: I wonder whether we should just pass the QEMU connection string as an argument.
[11:54] <dimitern> frobware: there are 2 of those actually - from the host POV and maas's POV
[11:54] <frobware> dimitern: are they not the same?
[11:54] <dimitern> frobware: nope
[11:55] <frobware> dimitern: well, I guess that really depends on your initial MAAS setup. My MAAS setup always connects as 'me' to the host.
[11:55] <dimitern> frobware: the former can be qemu:///system as a sane default, while the other is likely different, with qemu+ssh://$USER:$PXE_IP/system being a reasonable default, except if it's not :)
[11:56] <frobware> dimitern: they are essentially always the latter, no?
[11:56] <frobware> dimitern: or it can just degenerate to the latter
[11:56] <dimitern> frobware: I used qemu+ssh://maas:$IP/system before, as I had a maas user only the vmaas-es can use to ssh into my laptop
[11:56] <frobware> dimitern: if MAAS can do power-on/off, then you can use the same string to add/remove-node
[11:57] <dimitern> frobware: if you're calling virsh locally, qemu:///system is assumed to be what you'd want
[11:57] <dimitern> frobware: however, overriding it is useful if you're e.g. setting up a remote kvm host
[11:58] <dimitern> in which case it's likely to be the same inside the vmaas host as well (assuming it's configured ok)
[11:58] <frobware> dimitern: I never tried that, but I think it is a reasonable thing to assume would just work.
[11:58] <dimitern> I was thinking of adding a simple check that qemu+ssh://$USER:$PXE_IP/system is reachable from the local host
[11:59] <frobware> dimitern:  I think I'm going to make it a required arg. A bit sucky, but less magic.
[11:59] <dimitern> but really we need such a check more importantly for the vmaas host
[11:59] <frobware> dimitern: that would happen in kvm-maas-host - a new repo to setup MAAS controllers.
[11:59] <frobware> dimitern: or, I combine them.
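The simple reachability check dimitern suggests could just shell out to virsh before anything else runs; a sketch under the assumption that `virsh` is on PATH (the helper name and exact URI are illustrative, not kvm-maas code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// checkLibvirtURI verifies that a libvirt connection string (e.g.
// qemu:///system for the local host, or a qemu+ssh:// URI for a remote
// one) is actually usable, by running `virsh -c <uri> version` against it.
func checkLibvirtURI(uri string) error {
	out, err := exec.Command("virsh", "-c", uri, "version").CombinedOutput()
	if err != nil {
		return fmt.Errorf("libvirt URI %q not reachable: %v (%s)", uri, err, out)
	}
	return nil
}

func main() {
	if err := checkLibvirtURI("qemu:///system"); err != nil {
		fmt.Println(err)
	}
}
```

Running the same check with the vmaas-side URI would catch the broken-setup case dimitern says matters more, before any add-node attempt.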
[11:59] <dimitern> frobware: as long as it can be exported once and then used in the same shell - sure
[12:00] <dimitern> frobware: yeah, I forked that one as well :) nice work so far - haven't tried it though
[12:00] <frobware> dimitern: wouldn't bother. fundamentally broken until cloud-init is fixed.
[12:02] <dimitern> frobware: oh, too bad :/
[12:02] <frobware> dimitern: for reference, it is bug #1576692
[12:02] <mup> Bug #1576692: fully support package installation in systemd <sts> <verification-done> <cloud-init:Fix Released> <cloud-init (Ubuntu):Fix Released> <init-system-helpers
[12:02] <mup> (Ubuntu):Fix Released by pitti> <cloud-init (Ubuntu Xenial):Fix Released> <init-system-helpers (Ubuntu Xenial):Fix Released> <https://launchpad.net/bugs/1576692>
[12:02] <dimitern> frobware: I see though you've thought about allowing lxd-based maas-es to be deployable - nice
[12:03] <dimitern> frobware: well, I can see cloud-init 0.7.8-1... in xenial-updates now
[12:04]  * frobware notes that this is now fix-released. 
[12:04] <dimitern> frobware: so I'll give it a try later today
[12:04] <frobware> dimitern: the focus is still wrong. We need to drive this from a network spec.
[12:06]  * frobware lunches
[12:20] <dimitern> mgz, balloons: how often/when does github-merge-develop-to-staging run?
[12:29] <mup> Bug # changed: 1325946, 1329256, 1329480, 1329578, 1331691, 1332048, 1365665
[12:43] <hackedbellini> Hey guys! A local charm is failing on its install hook. I fixed it, updated the charm and marked it as resolved. But in debug-log I see that it is still using the old install hook. How can I force it to use the new code?
[12:45] <balloons> dimitern, it runs whenever there is a bless on develop
[12:45] <balloons> dimitern, I believe there was a bless this morningish?
[12:46] <dimitern> balloons: ah, I see - ok
[12:46] <dimitern> balloons: I was wondering how a multi-PR fix will work if some PRs land in staging from develop before others
[12:50] <balloons> dimitern, the ci run should fail if it gets picked up
[12:50] <balloons> and thus, it shouldn't hit staging until it's all ok
[12:52] <dimitern> balloons: right.. or if it doesn't fail, the later PR in the pipeline could be based on staging + cherry-picked PRs yet-to-land on staging
[12:52] <dimitern> s/the later PR/the later PRs/
[12:53] <balloons> dimitern, the need for rebasing might happen; see the discussion on the mailing list about this
[12:53] <balloons> but from a job perspective, you understand what will happen now :-)
[12:54] <dimitern> balloons: yeah, cheers :)
[12:59] <mup> Bug # changed: 1365675, 1368254, 1369638, 1369900, 1369909
[13:11] <mup> Bug #1373592 changed: When bootstrapping or deploying dont spec zone <bootstrap> <ec2-provider> <papercut> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1373592>
[13:11] <mup> Bug #1373768 changed: Juju doesn't inform users when MAAS is out of nodes <maas> <orange-box> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1373768>
[13:11] <mup> Bug #1375110 changed: "maintenance in progress" detection in the API server needs improving <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1375110>
[13:27] <voidspace> help for debug-log shows --tail and --no-tail as arguments, yet they don't seem to be defined
[13:28] <voidspace> hmm, they are on my local box - yet not on the remote one using the same (allegedly) version of juju
[13:28] <voidspace> gah, my fault
[13:29] <voidspace> "juju help debug-log --no-tail" doesn't work, perhaps unsurprisingly
[13:38] <voidspace> dimitern: I see this in the logs: provider addresses: []state.address{state.address{Value:"fe80::1", AddressType:"ipv6", Scope:"public", Origin:"provider", SpaceName:""},
[13:39] <voidspace> dimitern: so at some point we're getting an address with value fe80::1 come in with a Scope of "public"
[13:39] <voidspace> dimitern: which is why we're setting it as a public address
[13:39] <dimitern> voidspace: right!
[13:40] <dimitern> voidspace: I remember seeing something nasty like using network.NewScopedAddress(..., network.ScopePublic) in the vsphere provider
[13:40] <voidspace> dimitern: some smoking guns from the logs: http://pastebin.ubuntu.com/23343774/
[13:40] <dimitern> to fake some address as a public one
[13:40] <voidspace> dimitern: I will hunt that out
[13:40] <voidspace> dimitern: ouch
[13:41] <voidspace> dimitern: yep, environInstance.Addresses makes addresses public
[13:41] <mup> Bug # changed: 1375268, 1376576, 1380659, 1380989, 1382063, 1382276, 1383260, 1384013, 1384336, 1384348, 1384369
[13:41] <voidspace> dimitern: provider/vsphere/instance.go:58
[13:41] <voidspace> dimitern: shall I just change that to always use a derived scope instead of the two explicit scopes?
[13:42] <voidspace> dimitern: in fact dammit, I'll just try it
[13:44] <voidspace> dimitern: if Type was a method and we *always* derived it then we wouldn't have this issue
[13:45] <dimitern> voidspace: let me have a look
[13:46] <voidspace> dimitern: I've made the change and I'm just scp'ing the binaries up to try it
[13:46] <voidspace> dimitern: only takes ten minutes
[13:46] <voidspace> dimitern: although, please look to see if there's any reason why we shouldn't rely on a derived scope there
[13:46] <rock_> hi. we developed a "cinder-storage driver" charm. We want to install "git", but not as part of the install hook. Someone suggested the "apt layer". In my charm folder I created a layer.yaml file as pasted: http://paste.openstack.org/show/586196/
[13:47] <rock_> when will layer.yaml execute?
[13:47] <rick_h_> natefinch: ping
[13:47] <mup> Bug #1384549 changed: Running Juju ensure-availability twice in a row adds extra machines <canonical-bootstack> <canonical-is> <ha> <improvement> <maas-provider> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1384549>
[13:47] <mup> Bug #1385277 changed: malformed urls as environment variable values need to be handled better <tech-debt> <juju-core:Won't Fix> <https://launchpad.net/bugs/1385277>
[13:47] <mup> Bug #1386222 changed: Usability: machine provisioning timeouts <deploy> <scalability> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1386222>
[13:48] <rock_> Before the install script, or after it?
[13:48] <dimitern> voidspace: yeah, I think we should be using NewAddress() instead of NewScopedAddress() there
[13:48] <voidspace> dimitern: bootstrapping now
[13:48] <voidspace> dimitern: thanks
[13:49] <rock_> could anyone help me in this.
[13:49] <dimitern> voidspace: it's commendable that whoever implemented the provider tried to convey the public vs. private distinction to juju with the scope, but ...
[13:49] <natefinch> rick_h_: howdy
[13:49] <voidspace> dimitern: yep, "but"
[13:49] <voidspace> dimitern: hah, and with that change no tests fail...
[13:50] <voidspace> or at least, no vsphere provider tests fail
[13:50] <rick_h_> natefinch: can you do me a fav please? Can you generate a new fallback-clouds.yaml file with the change ian landed overnight and get it to abentley to test out please?
[13:50] <dimitern> voidspace: sweet! :)
[13:50] <natefinch> rick_h_: sure
[13:50] <rick_h_> natefinch: ty
[13:50] <voidspace> dimitern: well, either sweet or it was just untested....
[13:50] <voidspace> dimitern: hopefully they pass because using a derived scope is the right thing anyway...
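The derived-scope behaviour voidspace switches to — classify an address by its value instead of trusting a hard-coded scope — looks roughly like this (a simplified sketch; juju's real network.NewAddress logic differs in detail):

```go
package main

import (
	"fmt"
	"net"
)

// deriveScope classifies an address by inspecting its value, rather than
// trusting whatever scope the provider stamped on it (the vsphere bug here
// was labelling everything, including fe80::1, as "public").
func deriveScope(addr string) string {
	ip := net.ParseIP(addr)
	switch {
	case ip == nil:
		return "unknown"
	case ip.IsLoopback():
		return "machine-local"
	case ip.IsLinkLocalUnicast():
		return "link-local"
	case ip.IsPrivate(): // RFC 1918 / RFC 4193 ranges
		return "local-cloud"
	default:
		return "public"
	}
}

func main() {
	for _, a := range []string{"fe80::1", "10.0.0.4", "8.8.8.8"} {
		fmt.Println(a, "->", deriveScope(a))
	}
}
```

With this approach fe80::1 can never be reported as a public address, no matter what the provider intended.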
[13:52] <rock_> dimitern: Hi. do you have any idea on my question? please tell me if you have.
[13:56] <katco> rock_: try over in #juju. they usually discuss the charming side of things much more. marcoceppi or lazyPower may be able to help
[13:56] <katco> rock_: or cory_fu
[13:56] <mup> Bug #1384549 opened: Running Juju ensure-availability twice in a row adds extra machines <canonical-bootstack> <canonical-is> <ha> <improvement> <maas-provider> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1384549>
[13:56] <mup> Bug #1385277 opened: malformed urls as environment variable values need to be handled better <tech-debt> <juju-core:Won't Fix> <https://launchpad.net/bugs/1385277>
[13:56] <mup> Bug #1386222 opened: Usability: machine provisioning timeouts <deploy> <scalability> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1386222>
[13:56] <rock_> katco: Thanks.
[13:56] <voidspace> dimitern: all machines started, all have ipv4 addresses, can ssh to them
[13:57] <katco> rock_: this channel is more for developers of juju itself
[13:57] <voidspace> rick_h_: found and fixed the vsphere ipv6 bug (need a PR and tests of course - but verified the fix) - with a bit of help from dimitern as usual
[13:57] <cory_fu> rock_: Hey.  I'm glad to help.
[13:58] <rick_h_> voidspace: <3
[13:59] <rick_h_> voidspace: dimitern frobware macgreagoir natefinch mgz ping for standup
[13:59] <rock_> cory_fu: Hi. We developed a "cinder-storage driver" charm. Our charm depends on GitHub: during execution it will go and get the latest files from Git and keep those files on the cinder node. We deploy our charm after deployment of the OpenStack setup. So during execution our charm was giving a "git ERROR" [like git is not there].
[13:59] <cory_fu> rock_: The apt layer will install that package more or less the first chance it gets.  Generally, this will mean during the install hook, though it can sometimes actually happen even earlier (due to leadership, storage, etc).  Essentially, as soon as the reactive framework is bootstrapped and the apt layer sees that the package has not yet been installed.
[14:00] <cory_fu> rock_: That also applies to any of your own reactive handlers that don't have any other unsatisfied pre-conditions (e.g., @when decorators)
[14:01] <cory_fu> rock_: So, what you'll want to do, is ensure that the code that depends on the "git" package being installed has a @when('apt.installed.git') decorator on it.  There's an example of this usage in the apt layer README: https://git.launchpad.net/layer-apt/tree/README.md#n69
[14:02] <cory_fu> rock_: (That example also uses reactive code to perform the initial package install, but you can just as easily use the layer.yaml option definition if there are no conditions or other prerequisites that must be satisfied *before* installing the git package)
[14:02] <rock_> cory_fu: Thanks. Yes, I saw this. But I need to install that git package before the install script runs.
[14:03] <cory_fu> rock_: Right.  So just ensure that your initial "entry point" handler (i.e., the one with minimal or no pre-conditions) does at least have the precondition of @when('apt.installed.git')
[14:04] <cory_fu> rock_: Let me see if I can find you a more concrete example
[14:04] <rock_> cory_fu: I am new to this and didn't quite get that
[14:05] <rock_> cory_fu: I'd better clarify my requirement for you
[14:05] <mup> Bug #1384549 changed: Running Juju ensure-availability twice in a row adds extra machines <canonical-bootstack> <canonical-is> <ha> <improvement> <maas-provider> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1384549>
[14:05] <mup> Bug #1385277 changed: malformed urls as environment variable values need to be handled better <tech-debt> <juju-core:Won't Fix> <https://launchpad.net/bugs/1385277>
[14:05] <mup> Bug #1386222 changed: Usability: machine provisioning timeouts <deploy> <scalability> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1386222>
[14:09] <rock_> cory_fu: We have a juju openstack setup. When we deploy our charm and add a relation to cinder, our charm will go and get our cinder-storage driver files from Git and keep them on the cinder node. But before cloning from git it was giving a "git ERROR". So we need to install the git package, but not as part of the install hook.
[14:09] <cory_fu> rock_: Hrm.  All of the examples I can find use the apt.queue_install() method in the code, rather than the layer.yaml option, but it's functionally equivalent.  I think the README is probably the best example, in that case, just know that you can substitute the layer.yaml option for lines 77-79 and it will behave the same
[14:11] <cory_fu> rock_: When using reactive, the idea is to think about the life-cycle less in the terms of hooks, and more in terms of what are the pre-conditions of the block of code (single function / handler) that you're concerned about.  In your case, you have a handler which uses git, and so that block of code needs to be decorated with @when('apt.installed.git') and then it will always be delayed until that dependency is met.
[14:12] <cory_fu> That will likely still happen during the install hook, though, because it can really happen as soon as the apt package is done being installed.
[14:12] <cory_fu> Unless it has other pre-requisites, such as depending on the relation, as you mentioned, in which case it needs to have more conditions specified in its @when decorators
[14:12] <rock_> cory_fu: I developed my charm using shell script.
[14:14] <cory_fu> rock_: That's fine.  You can use the reactive pattern with bash.  Here is an example, although it doesn't use the apt layer: https://github.com/juju-solutions/layer-openjdk/blob/master/reactive/openjdk.sh
[14:16] <cory_fu> rock_: The main things to note with that are the "source charms.reactive.sh" at the top, and "reactive_handler_main" at the bottom.
[14:17] <cory_fu> Otherwise, it's similar to any other reactive example in that you define a set of functions and decorate them with the pre-conditions that are required for each one to be able to run
[14:17] <mup> Bug # changed: 1260247, 1262750, 1263196, 1267298, 1268917, 1270041, 1270858, 1270896, 1271502, 1271504, 1330473, 1386494, 1386926, 1389303, 1389324, 1389418, 1390284, 1391353
[14:17] <cory_fu> Those pre-conditions depend on states (flags) that are set either by other handlers in your layer, or by other layers that you depend on.
[14:18] <rock_> cory_fu: what will layer-git-deploy do?
[14:20] <rock_> cory_fu: I am a little bit confused. Where to add, and what to add?
[14:21] <cory_fu> rock_: I had not seen that layer yet.  To be honest, I'm not sure that it is complete.  Perhaps bdx (James Beedy) will chime in?
[14:23] <rock_> cory_fu: Ok, thanks. Simply put, what do I need to add to my charm to install the git package? sorry, really these layer terms are very new for me.
[14:24] <cory_fu> rock_: If you haven't read through it yet, https://jujucharms.com/docs/stable/developer-layers covers the basics of layers and states to give you a better foundation.
[14:25] <cory_fu> rock_: As for your specific question, I think that using the "apt: packages: [git]" in your layer.yaml as you mentioned initially, and adding a @when('apt.installed.git') decorator around the code that uses git should be all you need
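Put together, the two pieces cory_fu describes are a layer.yaml option plus a guarded handler. A sketch of the layer.yaml side (option names taken from the layer-apt README linked above; the includes list is a typical assumption, not rock_'s actual file):

```yaml
# layer.yaml for the charm
includes:
  - layer:basic
  - layer:apt
options:
  apt:
    packages:
      - git   # the apt layer installs this, then sets the apt.installed.git state
```

Code that needs git is then decorated with `@when('apt.installed.git')` so it only runs once the package is in place.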
[14:26] <mup> Bug #1271923 changed: using lxc containers with maas provider always default to series of host service unit <lxc> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1271923>
[14:26] <mup> Bug #1273216 changed: unknown --series to add-machine breaks provisioner <bitesize> <juju-core:Fix Released> <https://launchpad.net/bugs/1273216>
[14:26] <mup> Bug #1274450 changed: VM locale handling <debug-hooks> <ssh> <juju-core:Fix Released> <https://launchpad.net/bugs/1274450>
[14:26] <cory_fu> rock_: This all presupposes, of course, that you're writing your charm using layers and reactive, which we recommend, but is a very different style than writing traditional charms, in that you don't write the individual hooks directly, just reactive handlers that have preconditions.
[14:29] <rock_> cory_fu: Oh. Actually I wrote my charm in the traditional way. So I asked in chat just about installing the "git" package, and one guy suggested the "apt layer". That is why I am asking about this.
[14:29] <cory_fu> rock_: If you're writing your charm using the classic approach and creating your hooks yourself, then you can't use the apt layer, have to call apt-get install yourself, and must manage ensuring the ordered execution of your code yourself with the understanding that hooks are inherently life-cycle events and not procedural code paths.  Thus, I can promise that that approach can get difficult quite quickly
[14:29] <cory_fu> rock_: We recommend writing all new charms using layers and reactive because it makes dealing with these exact coordination issues much, much easier
[14:30] <cory_fu> rock_: I have to step away for a few minutes for a meeting, I'm afraid.  I will try to continue to respond, but may be slower for a little while
[14:31] <rock_> cory_fu: Oh. OK. thanks for your help. One final question please.
[14:33] <rock_> cory_fu: We already used the classic approach, right, and I will follow it, because we have to deliver this charm quickly to the client. So in the present situation I will use "apt-get install git" in the install hook script. This will be fine, right?
[14:35] <mup> Bug #1271923 opened: using lxc containers with maas provider always default to series of host service unit <lxc> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1271923>
[14:35] <mup> Bug #1273216 opened: unknown --series to add-machine breaks provisioner <bitesize> <juju-core:Fix Released> <https://launchpad.net/bugs/1273216>
[14:35] <mup> Bug #1274450 opened: VM locale handling <debug-hooks> <ssh> <juju-core:Fix Released> <https://launchpad.net/bugs/1274450>
[14:46] <natefinch> gah..... why oh why does pastebin.ubuntu.com want me to log in with SSO to download the plaintext of a pastebin that I can see without logging in?  Geez.
[14:47] <mup> Bug #1271923 changed: using lxc containers with maas provider always default to series of host service unit <lxc> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1271923>
[14:47] <mup> Bug #1273216 changed: unknown --series to add-machine breaks provisioner <bitesize> <juju-core:Fix Released> <https://launchpad.net/bugs/1273216>
[14:47] <mup> Bug #1274450 changed: VM locale handling <debug-hooks> <ssh> <juju-core:Fix Released> <https://launchpad.net/bugs/1274450>
[14:48] <natefinch> abentley: new fallback-clouds.yaml http://pastebin.com/raw/rnxGWLjB
[14:48] <abentley> natefinch: Thanks.
[14:51] <cory_fu> rock_: Back, sorry.  So for the most part, yes.  Using `apt-get install git` in the install hook should be fine, but as I mentioned before, you do need to be aware that there are some conditions under which other hooks might run *before* the install hook, mainly storage-attached and leader-elected, I think.  I am not 100% certain, though, that relation hooks will *never* run before install.  So, you may find that you need to manually implement
[14:51] <cory_fu> some sort of flag system to manage that.
[14:53] <cory_fu> rock_: You could do that either with hidden dot files on the unit (I prefer to keep them out of the charm code directory, perhaps one level up, to keep that more clean for upgrades and debugging), or you can install the charmhelpers python library which includes a command-line interface to "unitdata" which makes it easy to manage persistent charm data like that, e.g.: chlp unitdata set foo true
[14:54] <cory_fu> rock_: At any rate, just keep in mind that, while there are some assertions about when certain hooks will run, there is also a lot of uncertainty, which is inherent in the nature of the cloud.
[15:04] <dimitern> frobware, voidspace, dooferlad: a couple of small PRs (the latter includes the former): https://github.com/juju/juju/pull/6467 https://github.com/juju/juju/pull/6468 needed to (finally!) fix bug 1616098, please take a look if you can
[15:04] <mup> Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 <4010> <cpec> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1616098>
[15:06] <frobware> dimitern: will take a look in a bit. raising the LXD bug from yesterday...
[15:06] <dimitern> frobware: sure
[15:09] <rock_> cory_fu: Thank you for your valuable information. I will need to get good knowledge of layered charms.
[15:51] <frobware> dimitern: reviewed, though it would be easier with distinct PRs. the last one I looked at had code that I had already reviewed.
[15:51] <dimitern> frobware: I'm open to suggestions how I should propose 2 related PR, so they can still depend on each other, but do not duplicate the diff
[15:51] <dimitern> :)
[15:52] <dimitern> frobware: thanks for the reviews though
[15:52] <frobware> dimitern: I think propose them separately and land a new PR that has both (already) reviewed.
[15:52] <frobware> dimitern: or merge one of them once independently reviewed and approved.
[15:52] <dimitern> frobware: but wouldn't that be awkward to implement?
[15:53] <dimitern> frobware: I mean, if PR1 changes package1, and PR2 changes package2, which uses package1 ..
[15:53] <frobware> dimitern: so what I meant was have two PRs. get both reviewed. Once OK on both, land PR2 with PR1 merged.
[15:53] <dimitern> frobware: there needs to be some artificial shim in package2 that uses a faked up version of package1
[15:54] <frobware> dimitern: then I would say they are not independent and would benefit from being reviewed as a single PR.  $shrug.
[15:55] <frobware> dimitern: look at it another way, could either have been independently reverted?
[15:56] <dimitern> frobware: I guess so
[15:56] <dimitern> frobware: well, the second one - not as proposed
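One way to get what frobware and dimitern are after (two separately reviewable PRs where the second depends on the first, without duplicating the diff) is stacked branches. This is a generic git sketch with illustrative repo and branch names, not a description of the juju merge bot's workflow:

```shell
set -e
# Stacked-branch sketch: pr2 is branched off pr1, so the diff between
# them contains only pr2's own changes. All names are illustrative.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m base
git checkout -q -b pr1-package1
echo change > package1.txt
git add package1.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "package1 change"
git checkout -q -b pr2-package2 pr1-package1
echo change > package2.txt
git add package2.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "package2 change"
# pr2's incremental diff does not duplicate pr1's:
git diff --name-only pr1-package1..pr2-package2
```

Once the first PR merges, rebasing the second branch onto the target branch shrinks its visible diff down to just its own change.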
[15:58] <dimitern> balloons: ping
[15:59] <dimitern> mgz, balloons: the windows instance used by github-merge-juju is out of disk space, and the lxd one is somehow borked (websocket.Dial wss://10.0.8.25:17070/model/bc2f61b3-fe8d-416a-8bba-19349301efd5/api: x509: certificate signed by unknown authority)
[16:01] <balloons> sinzui, ^^
[16:01] <balloons> Sorry away from computer ATM
[16:02] <dimitern> np, just for the record :)
[16:02] <sinzui> dimitern: balloons: a reboot is needed then
[16:03] <dimitern> sinzui: in such cases, and when the PR is approved, I guess it's fine to still hit $$merge$$, right?
[16:04] <sinzui> dimitern: yes, wait a moment for me to get the host back up. windows is slow
[16:05] <dimitern> sinzui: np, I'll check back in an hour or so
[16:06] <mgz> dimitern: it's fine to $$merge$$ again if the failure made it back to the pr
[16:07] <dimitern> mgz: righto ;) cheers
[16:08] <sinzui> balloons: rick_h_ : did someone set up a NEW windows host for the travis tests? We need a dedicated machine for each job. We seem to be seeing a lot of temporary disk-full errors on windows over the last week
[16:08]  * dimitern EODs
[16:14] <natefinch> FYI, if you're looking for travis-like CI for windows: https://npf.io/2014/07/ci-for-windows-go-packages-with-appveyor/
[16:17] <balloons> sinzui, send me your thoughts about scaling. I know the LXD tests will not scale well ( nor are they being cleaned up, need some magic for that better solved by running elsewhere perhaps)
[16:27] <alexisb> perrito666, how is vshpere treating you?
[16:28] <perrito666> alexisb: nicely, I am reviewing the provider and I believe I have a clue on what is happening on at least one of the bugs and I see voidspace took the other one that larry deemed very important, then the rest will come after
[16:30] <alexisb> perrito666, how do you feel about the user experience for the provider?
[16:33] <perrito666> I feel like you did not tell me something because the one I have been provided works like any other juju
[16:36] <alexisb> redir, ping
[16:36] <redir> pong
[16:36] <redir> alexisb:
[16:36] <alexisb> heya redir, just checking in, any thing you need from me?
[16:36] <redir> not at the moment
[16:36] <redir> thanks
[16:36] <alexisb> k
[16:36] <redir> I'll reach out to katco shortly
[16:37] <alexisb> ok, cool if she is pre-occupied let me know :)
[16:38] <alexisb> and just to verify redir you are picking up the multi series charm bug, correct?
[16:38] <redir> yes
[16:38] <alexisb> coolio, thanks
[16:51] <redir> np
[16:59] <natefinch> heh nice, without even trying my PR is +1,000 −0
[16:59] <rick_h_> natefinch: hah
[17:00] <natefinch> sinzui, mgz: is there a list somewhere of which repos the merge bot handles?
[17:00] <mgz> you can see either by looking at the github project perms or the jenkins job list
[17:00] <sinzui> natefinch: yes, the jobs are named after the repo http://juju-ci.vapour.ws/view/Juju%20Ecosystem/
[17:01] <perrito666> wtf/min increases
[17:01]  * perrito666 cries in spanish
[17:01] <natefinch> sinzui: perfect, thanks
[17:03] <natefinch> sinzui, mgz: can one of you add a merge job for github.com/juju/jsonschema ?
[17:03] <mgz> what deps does it have?
[17:03] <natefinch> mgz: like go version, or like packages?
[17:04] <mgz> packages
[17:04] <sinzui> natefinch: mgz: no deps at this minute
[17:04] <natefinch> I can add a dependencies.tsv if that makes your life easier
[17:04] <sinzui> mgz: export the path to golang1.6, we don't want any other go being used
[17:05] <mgz> natefinch: that's safest, but if it only has trivial ones the gating script can use one of the other modes instead
[17:05] <natefinch> mgz: it does import 3rd party repos
[17:12] <natefinch> mgz: https://github.com/natefinch/jsonschema/blob/0f97e8fd9f30f6c1e8c1756aacea26cd56792547/dependencies.tsv
[17:13] <natefinch> mgz: one wrinkle - the dependencies.tsv only exists in the PR, not in the root of the repo (this is the first PR to populate the repo)
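For reference, a godeps-style dependencies.tsv is tab-separated, one dependency per line: import path, VCS, revision id, revision number (the last column is typically empty for git). The entries below are placeholders for illustration, not the contents of natefinch's actual file:

```
github.com/juju/errors	git	0123456789abcdef0123456789abcdef01234567	
gopkg.in/yaml.v2	git	89abcdef0123456789abcdef0123456789abcdef	
```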
[17:13] <mgz> natefinch: that will work fine, thanks
[17:14] <mgz> sinzui: I'll go ahead and add this?
[17:14] <sinzui> mgz: please
[17:17] <mgz> okay, perms right on the branch, job created
[17:18] <mgz> changing the config on the juju slave now then we're good to go
[17:20] <mgz> natefinch: all done, you can go ahead and try $$merge$$ on your proposal now
[17:21] <natefinch> mgz: awesome, thanks
[17:24] <mgz> natefinch: one error in the dependencies.tsv?
[17:25] <natefinch> mgz: yeah, my fault, was on a local branch that hasn't landed yet
[17:26] <natefinch> git commit -a --amend --no-edit  is my BFF
[17:29] <natefinch> yay for fast tests
[17:30] <mgz> natefinch: seems to have worked
[17:30] <natefinch> mgz: yep
[17:30] <natefinch> mgz: thanks :)
[18:11] <hackedbellini> hey guys! I deployed a charm here, but had to do some manual changes on it because of an error that it is having. Now I'm trying to upgrade it to my local charm, but it is giving me this error:
[18:11] <hackedbellini> ERROR cannot upgrade application "jenkins-slave-xenial" to charm "local:precise/jenkins-slave-16": cannot change an application's series
[18:11] <hackedbellini> I'm trying to use the --force-series option but with no luck
[18:13] <natefinch> hackedbellini: there should be a series: value in the metadata.yaml with a list of supported series, make sure xenial is in that list, or add it if it's not there
[18:14] <hackedbellini> natefinch: hrm, this is a very old charm. It doesn't have that value. What should I put exactly?
[18:15] <natefinch> series: ["xenial"]
[18:15] <natefinch> that should work
[18:16] <hackedbellini> natefinch: on the root, right? I'll try it right now, thanks
[18:16] <natefinch> yep
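The fix natefinch describes is a top-level `series` list in the charm's metadata.yaml. A minimal sketch, where everything except the series values is a placeholder:

```shell
# Write a minimal metadata.yaml with a top-level `series` list, as
# discussed above. Name, summary, and description are placeholders.
cat > metadata.yaml <<'EOF'
name: jenkins-slave
summary: placeholder summary
description: placeholder description
series:
  - xenial
  - trusty
EOF
grep -c '^  - ' metadata.yaml   # counts the two series entries
```

With the list in place, `juju upgrade-charm <application> --path <charm-dir>` (or `--switch`) can match the deployed unit's series against it.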
[18:20] <hackedbellini> natefinch: I added both "xenial" and "trusty" to the list. After that I got this:
[18:20] <hackedbellini> ERROR series "precise" not supported by charm, supported series are: xenial,trusty
[18:20] <hackedbellini> then i added "precise" too, that made me return to the original issue
[18:21] <hackedbellini> note that the charm is not deployed in a precise machine
[18:21] <natefinch> weird
[18:21] <natefinch> it's deployed to xenial, I presume, based on the name?
[18:22] <hackedbellini> natefinch: yeah! The charm in question is jenkins-slave: https://jujucharms.com/jenkins-slave/
[18:22] <hackedbellini> I deployed 2, one on trusty and one on xenial
[18:22] <hackedbellini> but I had to modify the jenkins-slave.deb for xenial to update it to systemd (it was using upstart)
[18:23] <natefinch> hackedbellini: what's your local series?  i.e. the one you're running deploy from?
[18:23] <hackedbellini> xenial
[18:23] <hackedbellini> https://www.irccloud.com/pastebin/FGgsTLGl/
[18:24] <hackedbellini> natefinch: ^ that is my deploy
[18:24] <natefinch> so, the store thinks this is a precise charm
[18:24] <hackedbellini> yeah the store thinks that
[18:25] <hackedbellini> natefinch: this is how the metadata.yaml of my local charm is looking:
[18:25] <hackedbellini> https://www.irccloud.com/pastebin/YZjKVFvA/
[18:25] <natefinch> are you using upgrade-charm --switch?
[18:27] <hackedbellini> natefinch: I was trying with --path, but --switch gives something very alike. Let me give you some outputs
[18:27] <natefinch> I think switch is what is supposed to work for what you want to do
[18:27] <natefinch> "supposed to"
[18:28] <hackedbellini> natefinch: this is the output when I just have xenial and trusty on series:
[18:28] <hackedbellini> https://www.irccloud.com/pastebin/DkbF6lW0/
[18:28] <hackedbellini> and this is when I add precise too
[18:28] <hackedbellini> https://www.irccloud.com/pastebin/V7DjV3B4/
[18:31] <hackedbellini> natefinch: do you know at least where I can change a file on the existing charm? Because juju keeps running a hook on it that reinstalls the old deb and I lose my modifications
[18:33] <hackedbellini> nevermind, think I found it
[18:33] <hackedbellini> but still want to know how to change the charm to the local one =P
[18:34] <natefinch> --switch is supposed to work for that.  seems like you're hitting a bug
[18:36] <hackedbellini> natefinch: haha I'm hitting a lot of bugs in this deployment =P
[20:27] <natefinch> ahh man, our provider config code is so convoluted
[20:47] <wallyworld> menn0: thumper: look at the last few lines in func addApplicationOps() in state/application.go.... what do you notice?
[20:48]  * menn0 looks
[20:49] <menn0> wallyworld: apart from "svc"?
[20:49] <wallyworld> the Id value in the txn.Op does not match the docID value in the applicationDoc
[20:49] <wallyworld> it uses app name for the txn.Op
[20:49] <wallyworld> and uuid:name for the application doc id
[20:50] <wallyworld> surely that's an issue?
[20:50] <menn0> wallyworld: nope
[20:50] <wallyworld> what am I missing?
[20:50] <menn0> wallyworld: the multi-model txn layer will sort that out
[20:51] <wallyworld> oh, so it even handles Id in txn.Op
[20:51] <menn0> wallyworld: the Id field in the txn.Op gets the uuid: prefix added automatically before the txn is applied
[20:51] <wallyworld> ok, didn't realise that
[20:51] <perrito666> why would someone have type mismatches inside the same library?
[20:51] <wallyworld> that example is different to all the others
[20:51] <wallyworld> the others use doc id, which has the model uuid applied
[20:52] <wallyworld> menn0: thanks for clarifying
[20:52] <menn0> wallyworld: it's because you used to have to add the doc id on manually
[20:52] <menn0> wallyworld: but then we improved the multi-model layer
[20:52] <menn0> wallyworld: so there's a mix
[20:52] <wallyworld> ok, i'm happier now, i didn't know about that improvement
[20:53] <menn0> wallyworld: for new work, don't add the model uuid prefix yourself
[20:53] <wallyworld> excellent, ok
[20:53] <menn0> wallyworld: same goes for Find and FindId calls
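menn0's point can be sketched as pseudocode (based only on the discussion above, not verbatim juju code; names like `ensureModelUUID` are illustrative):

```go
// Pseudocode sketch -- not verbatim juju code.
op := txn.Op{
    C:      applicationsC,
    Id:     name,              // raw name: the multi-model txn layer
                               // rewrites it to "<model-uuid>:<name>"
    Assert: txn.DocMissing,
    Insert: applicationDoc{
        DocID: ensureModelUUID(modelUUID, name), // old style: manual prefix
        // ...
    },
}
// Both forms end up consistent once the layer runs. For new code, leave
// the prefix off and let the layer add it (same for Find and FindId).
```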
[20:53] <wallyworld> even betta
[20:55] <thumper> wallyworld: there are many places in the codebase that do things they don't need to do w.r.t. the model-uuid
[20:55] <thumper> and menn0 did a great job with the multi-model layering
[20:55] <wallyworld> thumper: agreed. i wasn't criticising, just very confused
[20:56] <menn0> wallyworld, thumper: I started an effort to standardise all this ages ago but got pulled onto other things
[20:56] <wallyworld> as always :-)
[20:56]  * thumper nods
[20:56] <thumper> I have been trying to clean things up too
[20:57] <thumper> whenever I drive by
[20:57] <wallyworld> i'm going to propose the cross model work be merged back into develop and wanted to be sure everything was good in how i did remote applications
[20:58] <wallyworld> that f*cking branch was sooooooo stale
[20:59] <alexisb> wallyworld, are you having a fun morning ? ;)
[20:59] <wallyworld> alexisb: living the dream, wheeeeeee
[20:59] <wallyworld> alexisb: only just woke up, it was more like a fun late night :-(
[21:15] <wallyworld> alexisb: can you join the sts call?
[21:16] <alexisb> possible
[21:51]  * alexisb runs to pick up her son
[21:54] <redir> what mongod do I need for 1.25.x?
[21:54] <redir> 0.6?
[21:54] <perrito666> 2.4
[21:54] <redir> :)
[22:02] <zeestrat> Anyone with a MAAS setup that could take a look at (and perhaps try to reproduce) https://bugs.launchpad.net/juju/+bug/1634390 ? Would be much appreciated.
[22:02] <mup> Bug #1634390: jujud services not starting after reboot when /var is on separate partition  <juju:New> <https://launchpad.net/bugs/1634390>
[22:23] <redir> :)
[22:23] <redir>  
[22:24] <redir> forgot all about 1.x blowing up the terminal with mongo logs