[00:04] <redir> katco: fwereade: http://reviews.vapour.ws/r/5088/ better?
[00:09] <mup> Bug #1593492 opened: Failure bootstrap a controller on openstack reports may report misleading error <openstack-provider> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1593492>
[00:22] <menn0> thumper: I prefer Acquirer interface
[00:22] <menn0> easier to deal with in tests
[00:33] <fwereade> thumper, menn0: while I'm awake: there seems to be more getRawCollection usage around than we should have; is this a sign we need a getUnfilteredCollection, that returns a mongo.Collection not a *mgo.Collection?
[00:33] <thumper> fwereade: while you're up
[00:33] <fwereade> thumper, menn0: because, *arrrgh* direct mongo access with all the lets-unwittingly-destroy-integrity methods
[00:34] <thumper> fwereade: I'd like you to take a quick look at the mutex code again
[00:35] <thumper> davechen1y: sprinkle the Release methods with sync.Mutex?
[00:36] <thumper> davechen1y: easy enough if valuable
[00:36]  * thumper fetches his sparkly sprinkle dust
[00:38]  * thumper pushes
[00:41] <thumper> fwereade: quick Q
[00:41] <thumper> the sole remaining argument with the mutex package is this one:
[00:42] <thumper> Acquirer interface or mutex.Acquire function
[00:43] <fwereade> Acquire(spec) +100 from me
[00:43] <fwereade> thumper, have only got more convinced of that through the day
[00:44] <thumper> fwereade: so what about axw and menn0's reasoning of mocking out in tests?
[00:45] <menn0> fwereade, thumper: sorry was having lunch
[00:46] <thumper> personally I'm partial to mutex.Acquire(spec)
[00:46] <thumper> and have unique specs for tests if needed
[00:46] <menn0> fwereade: getUnfilteredCollection sounds like a good idea to me (maybe call it getGlobalCollection to match the naming used in allcollections.go)
[00:46] <fwereade> thumper, where mutex is the package?
[00:46] <thumper> the system level mutex should be fast and not really a problem
[00:46] <thumper> fwereade: yeah
[00:46] <fwereade> menn0, COOL
[00:47] <fwereade> well, cool, possibly not *quite* that cool
[00:47] <menn0> haha
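The getUnfilteredCollection idea fwereade and menn0 sketch above can be illustrated roughly as follows. The types here are hypothetical stand-ins (the real ones are *mgo.Collection from gopkg.in/mgo.v2 and juju's mongo.Collection wrapper); only the point of the design is taken from the chat: skip model-UUID filtering, but still return the safe wrapper rather than the raw driver handle.

```go
package main

import "fmt"

// mgoCollection stands in for *mgo.Collection: the raw driver handle,
// whose write methods can bypass juju's transaction layer and
// "unwittingly destroy integrity".
type mgoCollection struct{ name string }

func (c *mgoCollection) Insert(doc interface{}) error { // unguarded write
	return fmt.Errorf("direct write to %s bypasses transactions", c.name)
}
func (c *mgoCollection) Count() (int, error) { return 0, nil }

// Collection mirrors juju's mongo.Collection wrapper: it exposes only
// the safe read surface and keeps the raw handle private.
type Collection struct{ raw *mgoCollection }

func (c Collection) Name() string        { return c.raw.name }
func (c Collection) Count() (int, error) { return c.raw.Count() }

// getUnfilteredCollection is the helper proposed above (menn0 suggests
// calling it getGlobalCollection to match allcollections.go): like
// getRawCollection it applies no model-UUID filter, but unlike it,
// callers never see the raw *mgo.Collection.
func getUnfilteredCollection(name string) Collection {
	return Collection{raw: &mgoCollection{name: name}}
}

func main() {
	c := getUnfilteredCollection("controllers")
	fmt.Println(c.Name()) // reads are fine; no Insert method is exposed
}
```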
[00:48] <fwereade> thumper, my instinct says `Acquire(Spec) (*Mutex, error)`, which gets wrapped up as a `func(mutex.Spec) (Releaser, error)` by clients
[00:48] <thumper> menn0: how strongly do you feel about the Acquirer interface?
[00:48] <fwereade> menn0, axw: horrible? ^^
[00:49] <menn0> thumper: not hugely... tests that want something that creates a mutex instance can always take a callable.
[00:49] <menn0> as above
[00:49]  * thumper will have to go back and bring spec back
[00:49]  * thumper sighs
[00:50] <thumper> who can I get to give a blessing to the work?
[00:50] <menn0> if you have fwereade, axw and me is that enough?
[00:50]  * menn0 has to pick up his son from preschool
[00:50] <thumper> menn0: but you all haven't given agreement
[00:50] <fwereade> I think we're enough
[00:50] <thumper> :)
[00:50] <menn0> you have my blessing with either approach
[00:51] <menn0> they're both workable and I really just want fslock to DIAF
[00:51] <thumper> I'll go back and change Acquire to a function
[00:51] <thumper> and rename Mutex back to Spec
[00:51] <thumper> menn0: me too
[00:51]  * thumper does one last (hopefully) rename dance
[00:55] <axw> thumper menn0 fwereade: I'm a bit beyond caring TBH, let's just do *something* and fix it if it's a problem. it's not going to be hard to change.
[00:55] <thumper> k
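The shape the thread converges on (a package-level Acquire function rather than an Acquirer interface, with tests taking a callable of type func(Spec) (Releaser, error)) looks roughly like this toy in-process sketch. The real juju/mutex package acquires an OS-level mutex shared across processes and its Spec carries more fields; this version is name-only and purely illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// Spec names the mutex to acquire. The real spec has more fields
// (timeout, clock); this sketch keeps only the name.
type Spec struct {
	Name string
}

// Releaser is what callers hold after a successful acquire.
type Releaser interface {
	Release()
}

// registry guards the set of in-process locks. The real package uses
// an OS-level mutex so locks are shared between processes.
var (
	mu       sync.Mutex
	registry = map[string]*sync.Mutex{}
)

type releaser struct{ m *sync.Mutex }

func (r releaser) Release() { r.m.Unlock() }

// Acquire is the function form the discussion settled on: no interface,
// just a func. Tests that need to stub it out depend on the type
// func(Spec) (Releaser, error) and pass their own callable.
func Acquire(spec Spec) (Releaser, error) {
	mu.Lock()
	m, ok := registry[spec.Name]
	if !ok {
		m = &sync.Mutex{}
		registry[spec.Name] = m
	}
	mu.Unlock()
	m.Lock() // blocks until the named mutex is free
	return releaser{m}, nil
}

func main() {
	r, err := Acquire(Spec{Name: "uniter-hook"})
	if err != nil {
		panic(err)
	}
	fmt.Println("acquired")
	r.Release()
	fmt.Println("released")
}
```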
[01:07]  * thumper merges this bad boy
[01:09] <mup> Bug #1593506 opened: controller won't die <juju-core:New> <https://launchpad.net/bugs/1593506>
[01:09] <mup> Bug #1593509 opened: Enhance error message when user not logged in <juju-core:New> <https://launchpad.net/bugs/1593509>
[01:40] <davechen1y> thumper: good stuff
[01:40] <davechen1y> what's next ?
[01:53] <thumper> davechen1y: uniter hook lock
[01:53]  * thumper is looking
[01:53] <thumper> wallyworld: I'm back now, did you want to chat?
[01:54] <wallyworld> thumper: can we do it in say 20-30 after another meeting?
[01:54] <thumper> wallyworld: sure
[01:54] <wallyworld> ok, will ping
[01:57]  * thumper grabs the uniter hook execution lock thread and starts pulling
[02:08] <davechen1y> thumper: can I start replacing juju/utils/filelock ?
[02:08]  * thumper looks at that
[02:08] <thumper> davechen1y: yes
[02:09] <thumper> davechen1y: where is that used?
[02:09] <thumper> davechen1y: it doesn't look used anywhere
[02:10] <davechen1y> thumper: famous last words
[02:38] <wallyworld> thumper: free if you are
[02:41] <thumper> wallyworld: coming
[03:48] <thumper> heh
[03:48]  * thumper just made an amusing typo 
[03:49] <thumper> stepFuck
[03:49] <thumper> shoulda been stepFunc
[03:49] <thumper> guess I type that more than I thought
[04:00] <thumper> arghh
[04:00] <thumper> :(
[04:00] <thumper> Just came across uniter_test:68 again
[04:00] <thumper> ffs
[04:00] <davechen1y> thumper: eh ?
[04:01] <thumper> where it builds jujud in setup suite
[04:01]  * davechen1y insert hulk rage gif here
[04:02] <thumper> hmm...
[04:02] <thumper> I think I can just delete it...
[04:02]  * thumper tries
[04:03] <thumper> I'm getting very aware that this thread is getting longer
[04:04] <thumper> but I'm not done pulling yet
[04:05] <thumper>  17 files changed, 134 insertions(+), 410 deletions(-)
[04:05] <thumper> deletions winning...
[04:19] <axw> wallyworld: set-numa-control-policy should be moved to controller config
[04:20] <wallyworld> that's not model specific?
[04:20] <menn0> thumper: bug fix for an issue I discovered during manual testing: http://reviews.vapour.ws/r/5093/
[04:21] <davecheney> thumper: turns out you were right
[04:21] <davecheney> nothing uses juju/utils/filelock
[04:21] <davecheney> PR incoming
[04:21] <axw> wallyworld: nope, only affects how we set up mongo
[04:21] <wallyworld> ah, right, will do
[04:21] <axw> wallyworld: also, does cloudimg-base-url still make sense? I can't remember what the story for lxd is going to be
[04:22] <wallyworld> axw: no, it doesn't without lxc i am pretty sure, i thought that would have been investigated and removed as part of the lxc cleanup
[04:22] <axw> wallyworld: I guess we should remove it and whatever needs to be done for lxd can replace it
[04:23] <wallyworld> yup
[04:23]  * thumper afk for a bit
[04:24] <axw> wallyworld: I'd like to remove storage-default-block-source from model config, and just have environ providers register their default. any objections?
[04:24] <wallyworld> it was in config so users could override though right?
[04:25] <axw> wallyworld: I think so, but I'm pretty sure nobody is using it, or even knows about it. OTOH, we have been bitten by providers not setting it several times
[04:26] <axw> wallyworld: we could keep it and default to whatever the provider registers?
[04:26] <wallyworld> sgtm
[04:26] <davecheney> thumper: http://reviews.vapour.ws/r/5094/
[04:26] <axw> wallyworld: as in the default won't be specified in model config
[04:29] <wallyworld> so can a user specify their own default block source?
[04:29] <wallyworld> if they don't like the provider default
[04:31] <axw> wallyworld: yes
[04:31] <wallyworld> in config as a global thing though?
[04:31] <wallyworld> i guess we don't need it
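The resolution axw and wallyworld reach above (providers register their default block source, model config no longer carries the default, but an explicit user setting still wins) can be sketched like this. The function names and registration mechanism here are illustrative only, not juju's actual API:

```go
package main

import "fmt"

// providerDefaults holds each provider's registered default block
// storage source, replacing a default baked into model config.
var providerDefaults = map[string]string{}

// RegisterDefaultBlockSource is what an environ provider would call at
// init time (hypothetical name).
func RegisterDefaultBlockSource(provider, source string) {
	providerDefaults[provider] = source
}

// BlockSource resolves the effective source: a user-configured value
// takes precedence, otherwise the provider's registered default is used.
func BlockSource(provider, userConfigured string) string {
	if userConfigured != "" {
		return userConfigured
	}
	return providerDefaults[provider]
}

func main() {
	RegisterDefaultBlockSource("ebs-provider", "ebs")
	fmt.Println(BlockSource("ebs-provider", ""))       // provider default: ebs
	fmt.Println(BlockSource("ebs-provider", "custom")) // user override: custom
}
```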
[04:31] <axw> wallyworld: I'm also thinking that while I'm doing this separation of config, I'll remove name, type, and uuid from model config. they're part of the model's identity, not the config
[04:32] <axw> they'll still be available via environ config of course
[04:32] <wallyworld> yep. so you also doing the numa thing etc too?
[04:32] <axw> wallyworld: not atm, just thinking about the myriad things that need to be done
[04:32] <wallyworld> ok
[05:04] <menn0> axw: would you mind taking a look at this one? it's tiny. http://reviews.vapour.ws/r/5093/
[05:04] <axw> menn0: sure, looking
[05:06] <axw> menn0: LGTM
[05:06] <menn0> axw: thanks
[06:05] <thumper> :-)
[06:05] <thumper> this change is falling out nicely
[06:05] <thumper> not quite there yet
[06:05] <thumper> but getting there
[06:07]  * thumper is done
[06:07] <thumper> very close to replacing the uniter hook lock with a mutex
[06:08] <thumper> this includes the uniter, meterstatus, juju-run, container init
[06:08] <thumper> and reboot
[06:08] <thumper> \o/
[06:08] <thumper> will be ready monday I reckon
[06:08] <thumper> then to backport to 1.25
[06:08] <thumper> phew
[06:08] <thumper> laters peeps
[06:13] <mup> Bug #1593566 opened: Bootstrap reports oath1 not supported with maas 2.0 <bootstrap> <cdo-qa> <cdo-qa-blocker> <maas-provider> <juju-core:New> <https://launchpad.net/bugs/1593566>
[07:15] <wallyworld> axw: i've pushed some changes to that branch; i reckon there were previously bugs in lxd and/or gce that we didn't know about
[07:23] <Yash> hello
[07:23] <Yash> how to solve
[07:23] <Yash> nova-compute/10         error           idle        2.0-beta7 2                      10.100.100.200 hook failed: "install"
[07:31] <axw> wallyworld: sorry need to knock off early today, will try to look later on
[07:37] <wallyworld> np
[08:44] <admcleod_> when is 2.0 stable expected?
[08:58] <Yash1> admcleod: Do let me know if you need any logs of my installation attempt. If yes how?
[08:59] <admcleod> Yash1: the best way would be pastebin.com or pastebin.ubuntu.com or another similar service
[09:00] <Yash1> ok
[09:13] <Yash1> https://10.100.100.17:17070/gui/b4579691-8e4e-4892-8640-c0c5a8a758a6/
[09:13] <Yash1> I can't see xenial option in series
[09:14] <Yash1> admcleod:  Can you please suggest ?
[09:16] <admcleod> Yash1: im sorry, i cant access that internal ip address and i need to go afk - this question is probably better for #juju though i think. i will try to help when i get back if you have not resolved it
[09:21] <Yash1> I manually added xenial in the url and am now able to see that, but now
[09:21] <Yash1> Could not deploy the requested service. Server responded with: no such request - method Client(1).ServiceDeploy is not implemented. Could not add the requested unit. Server responded with: no such request - method Client(1).AddServiceUnits is not implemented
[09:21] <Yash1> GUI message
[09:22] <Yash1> babbageclunk: Hey
[09:23] <Yash1> admcleod: ok c above also
[10:38] <babbageclunk> Yash1: Sounds like a version mismatch between juju client and the running juju controller - have you just upgraded?
[10:39] <babbageclunk> Yash1: service was renamed to application in the latest beta release.
[10:39] <babbageclunk> Yash1: If you've just upgraded juju on the machine you should probably rebootstrap (I think).
[11:26] <wallyworld> admcleod: we're hoping to get a release candidate out soon (maybe 3 weeks). there's no exact date for a 2.0 final. "when it's ready" really :-)
[11:27] <wallyworld> dimitern: do we still use ignore-machine-addresses? well it seems we do, but were we going to remove it for 2.0?
[11:28] <dimitern> wallyworld: it's there as an 'off switch' if it causes trouble
[11:28] <dimitern> wallyworld: but in 2.0 we're getting closer to being able to drop it (not quite there yet..)
[11:28] <wallyworld> ok, thanks. just doing some config yak shaving
[11:58] <frobware> dimitern: do you have time to sync?
[11:58] <dimitern> frobware: yeah, sure
[11:58] <frobware> dimitern: 1:1 HO
[11:58] <dimitern> frobware: omw
[13:02] <mup> Bug #1593708 opened: Why wait for Lp to make agents when testing made them <juju-core:Triaged> <https://launchpad.net/bugs/1593708>
[13:23] <frobware> dimitern: how are you creating the VLANs on the bond?
[13:24] <dimitern> frobware: with the maas ui, why?
[13:24] <frobware> dimitern: 1.9.3? I don't see any option. I can create the bond but can only create aliases on top of it.
[13:25] <dimitern> frobware: make sure you're on the right fabric first - then you should see the vlans in the dropdown
[13:29] <frobware> dimitern: in 1.9?
[13:30] <dimitern> frobware: yeah (to clarify I'm talking about the node details page's interfaces section)
[13:30] <frobware> dimitern: well... care to HO...
[13:32] <dimitern> frobware: ok
[13:33] <dimitern> frobware: let me dig out my other headset first..
[13:34] <frobware> dimitern: don't worry... I know exactly why / what I did...
[13:34] <dimitern> frobware: oh? did you manage to sort it out?
[13:35] <frobware> dimitern: yeah, I just realised I re-installed that MAAS recently and have no VLANs configured... oops
[13:35] <dimitern> frobware: ah :) right
[13:35] <frobware> dimitern: wrong vmaas install ;)
[13:36] <dimitern> frobware: so I sent another tarball to lcavassa to verify, this time with a change to the bridge script so it omits any source stanzas while rendering the modified version
[13:37] <frobware> dimitern: it's something I had contemplated. in fact, I think we spoke about this recently
[13:37] <dimitern> frobware: as that infamous eth0.cfg strikes again :/
[13:37] <frobware> dimitern: yep
[13:37] <frobware> dimitern: bbiab (lunch)
[13:38] <dimitern> frobware: enjoy :)
[13:44] <mup> Bug #1593730 opened: Network error after reboot agent <juju-core:New> <https://launchpad.net/bugs/1593730>
[13:57] <babbageclunk> dimitern, voidspace, frobware: feel like a little Friday-afternoon reviewing? You could look at the state/migration part of the workload version change! http://reviews.vapour.ws/r/5095/
[14:02] <mup> Bug #1593730 changed: Network error after reboot agent <juju-core:New> <https://launchpad.net/bugs/1593730>
[14:03] <dimitern> babbageclunk: I'll have a look
[14:03] <babbageclunk> dimitern: Thanks!
[14:42] <dimitern> frobware: replied to your comment btw
[14:42] <frobware> dimitern: looking
[14:47] <frobware> dimitern: ok, so the source stanza is a pain...
[14:47] <frobware> dimitern: I think this should be an option :)
[14:47] <dimitern> frobware: an argument you mean?
[14:47] <mup> Bug #1593761 opened: Cannot bootstrap in gce using jsonfile in credentials <add-credential> <bootstrap> <ci> <gce-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1593761>
[14:47] <frobware> dimitern: yep
[14:47] <babbageclunk> dimitern, voidspace: nice easy one! http://reviews.vapour.ws/r/5096/
[14:47] <frobware> dimitern: ./add-bridge --omit-source-stanza
[14:48] <dimitern> frobware: sgtm - with 'true' as the default
[14:48] <dimitern> babbageclunk: almost done with the first one
[14:51] <dimitern> frobware: how about --keep-source-stanza ? :)
[14:51] <frobware> dimitern: was experimenting with --ignore-source-stanzas
[14:52] <frobware> dimitern: default=true
[14:52] <dimitern> frobware: as long as it omits them by default, I don't mind the name
[14:59] <dimitern> babbageclunk: both PRs reviewed
[15:05] <katco> ericsnow: standup time
[15:09] <babbageclunk> dimitern: yaythanks
[15:24] <dimitern> frobware: I think I got it to work, pushing updated diff for http://reviews.vapour.ws/r/5087/ in a moment
[15:29] <dimitern> frobware: doing a final test now, just in case.. if you're happy with the approach, let's land it?
[15:30] <frobware> dimitern: I have concerns about all the fixes as one commit
[15:30] <dimitern> frobware: they're in different commits
[15:31] <frobware> dimitern: ok, correction as one big PR
[15:31] <dimitern> frobware: ok, I'll leave it up to you then I guess
[15:32] <dimitern> frobware: I have confirmation it works on bootstack
[15:32] <frobware> dimitern: I couldn't bootstrap with your branch
[15:32] <frobware> dimitern: for reasons I'm unsure of right now
[15:32] <dimitern> frobware: on a bond or ?
[15:32] <frobware> dimitern: vlan on a bond
[15:33] <frobware> dimitern: on real dual-nic h/w
[15:34] <dimitern> frobware: did you deploy ok on the same node without juju?
[15:34] <frobware> dimitern: yes with just a bond.
[15:34] <frobware> dimitern: but that's when I found out I didn't have any VLANs
[15:35] <frobware> dimitern: so I didn't try deploying from MAAS after that, straight to juju
[15:35] <dimitern> frobware: and bootstrap worked ok on a bond with no vlan?
[15:35] <frobware> dimitern: correct
[15:36] <dimitern> frobware: what did you do next then?
[15:36] <frobware> dimitern: created some VLANs - tried to bootstrap which is then what I started reviewing your change
[15:37] <dimitern> frobware: right
[15:38] <dimitern> frobware: I have a kvm node with 2 nics in a bond, and a single VLAN on it, currently being deployed by juju
[15:39] <frobware> dimitern: that is essentially my setup s/kvm/hardware/
[15:39] <frobware> dimitern: let me try again
[15:39] <dimitern> frobware: it looks like it works.. same as on bootstack (as reported by lorenzo - 3 out of 3, added comment to the bug)
[15:40] <dimitern> frobware: I do run sshuttle with all subnets on that maas though
[15:51]  * dimitern is outta here
[15:51] <dimitern> happy weekends everybody ;)
[16:27] <babbageclunk> frobware: I can't bootstrap to AWS - I get this error:
[16:27] <babbageclunk> frobware: ERROR failed to bootstrap model: cannot start bootstrap instance: missing controller UUID
[16:27] <frobware> babbageclunk: well, a new one for me...
[16:28] <frobware> babbageclunk: rewind a few commits... perhaps
[16:29] <babbageclunk> frobware: yeah - checking master.
[16:29] <frobware> babbageclunk: I can't bootstrap on MAAS so ...
[16:29] <babbageclunk> frobware: swap you
[16:30] <frobware> babbageclunk: ah, I wasn't patient enough. this dual-nic celeron ... is just that!
[16:32] <babbageclunk> frobware: nope, still same failure on master. I wonder if there's some new piece of setup I need?
[16:32] <frobware> babbageclunk: you're so far ahead... I'm currently testing 1.25.6
[16:38] <voidspace> frobware: babbageclunk: relatively easy one http://reviews.vapour.ws/r/5098/
[16:51] <frobware> voidspace: dne
[16:51] <frobware> done even
[17:24] <voidspace> frobware: thanks
[17:25] <frobware> voidspace: how do you do the "fix it, then ship it" lark?
[17:25] <voidspace> frobware: don't know - I've never been able to do that
[17:27] <balloons> the uuid change is commit f3cf6b
[17:27] <frobware> balloons: is this related to bootstrap failure?
[17:27] <balloons> frobware, yes, it's the cause
[17:28] <alexisb_> we think
[17:28] <frobware> balloons: well... so happy to be testing on 1.25 today. :)
[17:28] <voidspace> frobware: you caught my deliberate typo :-)
[17:29] <voidspace> frobware: in the gateway address...
[17:29] <frobware> voidspace: oh! I didn't catch on that it was deliberate.
[17:29] <voidspace> it's always good to have something to check reviewers are actually reading the code ;-)
[17:29] <voidspace> frobware: it wasn't
[17:29] <frobware> voidspace: hehe
[17:29] <frobware> voidspace: I chuckled because in the 'real world' the configuration of the interface would have failed.
[17:30]  * frobware thinks celerons are slower than he imagined...
[17:30] <voidspace> frobware: nice
[17:31] <voidspace> frobware: all those issues fixed - thanks for the review
[17:33] <natefinch> gsamfira: you around?
[17:34] <gsamfira> yup. I am now
[17:34] <gsamfira> natefinch: what up?
[17:35] <natefinch> gsamfira: wanted to talk about this PR: https://github.com/natefinch/npipe/pull/20
[17:36] <gsamfira> natefinch: it's a really rough PoC :). It tries to keep track of the connections that get made. It should not be merged as is
[17:36] <gsamfira> natefinch: and it will probably fail if there is another process using the same named pipe that decides to close it while we still try to listen on it. So there should probably be a way to test the pipe and see if it's still open
[17:38] <gsamfira> natefinch: while clients already listening will get the event and disconnect, there is a potential race condition if we assume we are the only process using that named pipe. So starting a wait forever on a named pipe that just got closed, will probably hang the thread.
[17:38] <natefinch> gsamfira: what's the actual problem that it's trying to fix?   I see there's a race condition on close/accept
[17:38] <natefinch> gsamfira: ahh
[17:39] <gsamfira> the best example is the broken test I told you about, that uses both rpc.Listen and implements its own listener. If the named pipe gets closed (by a second goroutine), and you try to wait on it from the first, it will hang forever
[17:40] <gsamfira> the npipe package only keeps track of the last connection it makes
[17:40] <gsamfira> it does not care about the rest
[17:41] <natefinch> I don't think keeping a global map of connections is the way to go... as you said, other processes can still cause that problem.
[17:41] <gsamfira> yap, you are correct
[17:42] <gsamfira> that code is something I slapped together to see if that was indeed the problem
[17:42] <gsamfira> but the solution should be something else
[17:42] <natefinch> it seems like the answer is to give the caller an option for the wait to time out
[17:43] <gsamfira> natefinch: having waitForCompletion wait forever might not be the best thing to do
[17:43] <gsamfira> yeah
[17:43] <gsamfira> or do some kind of polling
[17:44] <gsamfira> natefinch: maybe even check between polls if the named pipe is still there
[17:46] <voidspace> natefinch: maybe you can elucidate something for me
[17:46] <natefinch> yeah, I'm sort of surprised that closing the pipe doesn't cause WaitForSingleObject to fail
[17:46] <voidspace> natefinch: state/sshhostkeys.go - SetSSHHostKeys
[17:47] <gsamfira> natefinch: if you call WaitForSingleObject immediately after closing the pipe, the file descriptor for the named pipe might be allocated to some other process, like notepad....and its going to wait for that :)
[17:47] <gsamfira> for as long as its active
[17:47] <voidspace> natefinch: why have both the insert and update ops - is that because if the doc exists the insert will silently fail but the update will work?
[17:49] <natefinch> voidspace: yeah.  There's no upsert in mongo, so you have to try insert first, and if it exists, then do an update.  It's horrible.
[17:50] <voidspace> natefinch: weird, thanks
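The insert-then-update dance natefinch describes can be sketched as follows. This is a map-based stand-in, not the real state/sshhostkeys.go code: the real version builds []txn.Op values with Assert: txn.DocMissing for the insert attempt and Assert: txn.DocExists for the update retry; only the control flow is mirrored here.

```go
package main

import "fmt"

// store stands in for the mongo collection.
var store = map[string]string{}

var errAborted = fmt.Errorf("transaction aborted: assertion failed")

// insertOp mimics a txn.Op with Assert: txn.DocMissing -- it aborts if
// the doc already exists.
func insertOp(id, val string) error {
	if _, ok := store[id]; ok {
		return errAborted
	}
	store[id] = val
	return nil
}

// updateOp mimics a txn.Op with Assert: txn.DocExists -- it aborts if
// the doc is missing.
func updateOp(id, val string) error {
	if _, ok := store[id]; !ok {
		return errAborted
	}
	store[id] = val
	return nil
}

// upsert is the pattern from the chat: mongo's txn layer has no upsert
// op, so try the insert first and fall back to an update when the
// insert's assertion fails.
func upsert(id, val string) error {
	if err := insertOp(id, val); err != errAborted {
		return err // nil on success, or a real error
	}
	return updateOp(id, val)
}

func main() {
	upsert("machine-0", "key-a") // doc missing: insert path
	upsert("machine-0", "key-b") // doc exists: update path
	fmt.Println(store["machine-0"]) // → key-b
}
```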
[17:50] <gsamfira> natefinch: WaitForSingleObject takes a file descriptor... it does not care if that FD is a named pipe or an FD held open by MS Paint :). If you tell it to wait for file descriptor 500, and between the time you disconnect the named pipe and the time you call WaitForSingleObject that FD gets repurposed, you are pretty much up the creek :)
[17:51] <gsamfira> natefinch: so WaitForSingleObject is potentially dangerous
[17:54] <Yash> admcleod: Hey
[17:54] <Yash> juju deploy cs:bundle/openstack-base-43
[17:54] <Yash> ERROR cannot deploy bundle: cannot create machine for holding ceph-mon unit: invalid container type "lxc"
[17:54] <Yash> With fresh install
[17:54] <Yash> 2.0-beta9-xenial-amd64
[17:55] <Yash> What logs are required to log bug?
[17:55] <natefinch> Yash: I know what that is... I thought I'd checked in the fix Tuesday, but maybe not
[17:56] <natefinch> Yash: we dropped lxc support, but juju is supposed to seamlessly translate lxc specification in bundles to lxd
[17:56] <Yash> ok but I just did lxd init and bootstrap
[17:56] <Yash> then deploy
[17:57] <Yash> so I was not using any lxc as my own
[17:57] <natefinch> Yash: right, but the bundle you're deploying probably specifies putting something in an lxc container
[17:58] <Yash> natefinch: It's a little crazy for me. I've been trying all this for the past 1-2 weeks and nothing has worked
[17:58] <Yash> Should I use 1.x instead
[17:58] <Yash> I used beta 7 and now beta 9
[17:58] <natefinch> Yash: we're kind of in a mad dash to get 2.0 out the door.  We're trying to maintain a working product, but sometimes things slip through the cracks
[17:59] <Yash> And in beta 9 there are several problems in juju gui also
[18:00] <natefinch> Yash: beta is especially beta this time around.  We're trying to make it somewhat less beta ASAP.
[18:00] <Yash> I tried to deploy juju gui unit also and same problem
[18:00] <Yash> ok
[18:00] <Yash> Any release date
[18:01] <Yash> There is no roadmap on site
[18:01] <Yash> Please put a roadmap and milestone. That will help.
[18:01] <natefinch> I expect end of next week to see a lot of stability improvements... and especially these "basic things are broken" will be fixed in the next few days
[18:01] <katco> Yash: there is this, but it is out of date: https://github.com/juju/juju/wiki/Juju-Release-Schedule
[18:01] <Yash> ok
[18:02] <katco> cherylj: ^^^
[18:02] <Yash> Yes already checked. Very outdated :-(
[18:02] <cherylj> yes, it is
[18:03] <Yash> You want me to wait or should I try 1.x instead
[18:03] <Yash> I want to use latest but facing so many problems
[18:03] <katco> Yash: there is also this, but dates are not updated: https://launchpad.net/juju-core/+milestones
[18:04] <Yash> This also checked. Only showing date which are released. :)
[18:04] <Yash> I googled a lot
[18:05] <alexisb_> Yash, in general we will be releasing betas until we feel we are stable enough to go rc
[18:05] <alexisb_> yash beta7 is going to be your most "stable" version for now
[18:05] <Yash> ohh.. That's a scary line
[18:05] <Yash> beta 7 is not stable
[18:05] <alexisb_> as there are a lot of big changes in the upcoming betas
[18:05] <Yash> I worked with it and can confirm
[18:05] <alexisb_> well it is a 2.0 beta
[18:06] <alexisb_> 1.25 is our stable version
[18:06] <Yash> Yea.:(
[18:06] <Yash> I will try 1.x now
[18:06] <katco> Yash: we've been regularly adding new features in betas; i know that's not traditionally done in betas, but that's what we're calling it
[18:06] <katco> Yash: if you're looking for stability, 1.25 is definitely the way to go; but be aware 2.0 changes a lot of things
[18:07] <Yash> I'm happy to see all features and that's why interested and trying to use for past 1 week or more
[18:07] <katco> Yash: really apologize for the inconvenience. we're working diligently to approach a 2.0 release
[18:08] <Yash> Let me try 1.x for the moment.
[18:08] <Yash> katco: np and Thank you for great work.
[18:08] <Yash> only concern is it hasn't worked even with lots of effort :(
[18:09] <alexisb_> Yash, thank you for testing things out, we will get on the lxc fix asap
[18:09] <Yash> If you are looking for external tester I can participate.
[18:09] <Yash> I'm python developer by profession. :)
[18:09] <katco> Yash: we do it for people like you :) i wouldn't be alarmed that you haven't been able to get it to work; there are just a number of bugs that, while they seem large, don't have a lot of depth to them. they just require intimate knowledge of juju to understand what's happening.
[18:10] <katco> Yash: i think we're still trying to get a python libjuju going if you feel like contributing :)
[18:11] <Yash> Yeah, right. At the start nothing came to mind, but now I know a lot, at least the basic things
[18:11] <Yash> will it be a paid contribution or a community one :-D
[18:11] <katco> Yash: the biggest difference between 1.x and 2.x is there is only 1 model per controller in 1.x
[18:13] <Yash> katco: https://github.com/juju/juju-bundlelib ?
[18:13] <katco> Yash: no, that's not it. marcoceppi do you know where libjuju is?
[18:13] <marcoceppi> katco: the one we're building now?
[18:14] <marcoceppi> it's pretty heavy dev still
[18:14] <katco> marcoceppi: yes. Yash is a python dev and might want to contribute :)
[18:14] <Yash> I can try at least
[18:15] <mup> Bug #1593812 opened: Failed to bootstrap: missing controller UUID <bootstrap> <juju-gui> <juju-core:Triaged> <https://launchpad.net/bugs/1593812>
[18:17] <Yash> katco: How to start on it? Is it on github?
[18:17] <katco> Yash: i think marcoceppi is still looking; i don't know where it is unfortunately
[18:21] <Yash> katco: Is it important or just an idea?
[18:21] <katco> Yash: it's definitely more than just an idea. it exists; people are working on it
[18:26] <Yash> ok
[18:45] <mup> Bug #1593506 changed: juju can't kill a controller that's already dead <juju-core:New> <https://launchpad.net/bugs/1593506>
[18:45] <mup> Bug #1593509 changed: Enhance error message when user not logged in <juju-core:New> <https://launchpad.net/bugs/1593509>
[18:45] <mup> Bug #1593828 opened: cannot assign unit E11000 duplicate key error collection: juju.txns.stash <conjure> <juju-core:New> <https://launchpad.net/bugs/1593828>
[18:47] <Yash> katco: I'm waiting now for that thing... :)
[18:48] <katco> marcoceppi: don't leave us hanging :) ^^^^
[19:12] <marcoceppi> katco Yash ask tvansteenburgh1 :)
[19:14] <marcoceppi> katco Yash https://github.com/juju-solutions/python-libjuju though tvansteenburgh1 has a branch he's working
[19:15] <tvansteenburgh1> katco, Yash: it's not really ready for contribs yet. basic architecture still be nailed down
[19:16] <tvansteenburgh1> s/be/being/
[19:17] <katco> marcoceppi: tvansteenburgh1: ta for the status
[19:24] <mup> Bug #1593838 opened: juju beta9 does not support "lxc" notation in bundles <blocker> <bundles> <cdo-qa> <cdo-qa-blocker> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1593838>
[19:33] <natefinch> alexisb_, katco, ericsnow: quick review for a fix for lxc in bundles: http://reviews.vapour.ws/r/5099/
[19:35] <ericsnow> natefinch: LGTM
[19:36] <natefinch> ericsnow: thanks!
[20:00] <mup> Bug #1593850 opened: Deployment stuck in "Pending" for all containers <cdo-qa> <cdo-qa-blocker> <juju-core:New> <https://launchpad.net/bugs/1593850>
[20:00] <mup> Bug #1593855 opened: agent.config file must always have mongodb version <juju-core:Triaged> <https://launchpad.net/bugs/1593855>
[20:21] <Yash> tvansteenburgh1: ok
[20:33] <mup> Bug #1593859 opened: agent config format and cloud-init data test <juju-core:New> <https://launchpad.net/bugs/1593859>
[20:45] <mup> Bug #1593859 changed: agent config format and cloud-init data test <juju-core:Invalid> <https://launchpad.net/bugs/1593859>
[20:57] <mup> Bug #1593859 opened: agent config format and cloud-init data test <juju-core:Invalid> <https://launchpad.net/bugs/1593859>
[21:00] <mup> Bug #1593859 changed: agent config format and cloud-init data test <juju-core:New> <https://launchpad.net/bugs/1593859>
[21:28] <cherylj> is there anyone around who knows the azure provider?
[21:48] <cherylj> wallyworld: are you around?
[21:51] <alexisb> cherylj, andrew is your man
[21:51] <alexisb> whats up?
[21:51] <cherylj> alexisb: I fixed the first problem with bootstrapping on azure, but now it's running into a different problem because of some changes wallyworld made
[21:52] <alexisb> cherylj, ok
[21:52] <alexisb> do we know what is hosing the builds due to wallyworld's commit?
[22:33] <wallyworld> cherylj: i can assign bug 1593812 to myself and fix the fallout in one go if you want
[22:33] <mup> Bug #1593812: Failed to bootstrap: missing controller UUID <blocker> <bootstrap> <juju-gui> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1593812>
[22:33] <cherylj> wallyworld: sure
[22:33] <mup> Bug # changed: 811226, 814974, 906008, 1195187
[22:34] <alexisb> wallyworld, thank you
[22:34] <wallyworld> alexisb: no need to thank me, just fixing something i broke :-(
[22:35] <alexisb> heh well, I was being nice ;)
[22:35] <alexisb> given it is your saturday and all
[23:01] <mup> Bug # changed: 1187803, 1188167, 1193430, 1194880
[23:13] <mup> Bug # changed: 1178770, 1182508, 1183571, 1186264
[23:43] <mup> Bug # changed: 1168154, 1169588, 1176961, 1178306, 1178314
[23:47] <wallyworld> cherylj: http://reviews.vapour.ws/r/5100/