[00:02] tvansteenburgh: ping, you on? [00:02] katco: yup [00:02] tvansteenburgh: heya [00:03] tvansteenburgh: re. bug 1611514, what does the API do now when you try and add a charm and then deploy it via "local:"? [00:03] Bug #1611514: "local" charm schema should allow deploying previously deployed local charms [00:03] katco: it returns a message that it can't find the charm [00:03] tvansteenburgh: what API calls are you using? [00:05] katco: i really should've kept that test script :/ [00:05] tvansteenburgh: haha [00:06] katco: i can add steps to repro but not til tomorrow, i'm well past eod [00:06] tvansteenburgh: same here, but that would help. i'm coming at this from the juju command side of things. i'll probably cover your case as well, but want to make sure [00:06] tvansteenburgh: (i.e. tomorrow is just fine) [00:07] katco: cool [00:08] tvansteenburgh: thanks :) looking forward to getting this taken care of for you all [00:18] redir, are you actually working this bug: https://bugs.launchpad.net/juju/+bug/1614571 [00:18] Bug #1614571: Data race: TestLogOutput [00:18] alexisb_: while waiting for test runs, yes. [00:19] ok, I am putting it in the inprogress lane than, htanks [00:19] but if there's someone else to work on it then take it away! [00:19] nope [00:19] OK [00:34] bbiab couple hours or so. [00:39] wallyworld: ping, got a sec to chat? [00:42] sure [00:42] katco: 1:1? [00:42] wallyworld: yep [00:51] wallyworld: FYI going out soon, maybe be out for an hour or two [00:51] axw: no worries, let's chat briefly when you get back [02:19] wallyworld: http://reviews.vapour.ws/r/5518/ [02:19] +3 −1,688 [02:19] one of the biggest removals I've done in a while [02:20] wallyworld, axw, anastasiamac, perrito666: anyone know of a charm that says it has payloads? [02:25] thumper: no, maybe best to ask charmers? stuart might have an idea :) [02:44] veebers: the change to block access to importing models has landed btw [02:45] menn0: cool, so we shouldn't see any "mongo not in the proper state" or whatever error? (I also added a check on test end to ensure the charm/deploy had fully finished [02:46] veebers: yep, that error shouldn't happen now. the only concern is, what will the test do with the error that gets returned while the model is importing? [02:48] menn0: hmm, I'm pretty sure the minor change I added should over that. we'll see soon enough if there is any further additions needed [02:49] cool === natefinch-afk is now known as natefincgh === natefincgh is now known as natefinch [03:24] thumper: lol, payloads. Sigh. [03:25] they exist, so they need to be migrated :-| [03:27] at least it's a pretty trivial amount of information [03:27] thumper: review done [03:27] ta [03:27] natefinch: yeah [04:01] wallyworld: yt? [04:02] yt? [04:02] young transvestite? [04:02] natefinch: do you recall the hook tool to register a payload? [04:03] thumper: buh [04:03] nm [04:03] I was busy looking in worker/uniter/runner/jujuc [04:03] where all the other hook tools were [04:03] why would you do a silly thing like that? [04:04] wallyworld: you there... Do we still want nil here https://goo.gl/y8F48q [04:04] let me look [04:06] Bug #1616298 opened: DebugMetricsCommandSuite.TearDownTest fails due to "no reachable servers." [04:06] we pass in nil for controller config for that test func, so no harm in also not setting inherited region config either [04:06] good lord that needs an args struct [04:07] heh.... 
st, err := state.Initialize(state.InitializeParams{ [04:07] redir: bit you'll need to add a test in UpdateModelConfig to ensure that removing a config key correctly picks up a region default [04:07] well at least the production code uses an args struct [04:08] because that's the place that uses the controllerInheritedConfig func that was incorrect and we didn;t have test coverage [04:08] menn0: seems that attempting to call status while model is in migration causes a failure: http://juju-ci.vapour.ws:8080/job/functional-model-migration/118/console [04:08] the failure is because the call to 'juju status' errors [04:09] Bug #1616298 changed: DebugMetricsCommandSuite.TearDownTest fails due to "no reachable servers." [04:11] menn0: should status for the model error or just provide status of 'migrating' or similar? [04:12] Bug #1616298 opened: DebugMetricsCommandSuite.TearDownTest fails due to "no reachable servers." [04:14] hmmm.... does the manual provider do something special with mongo? seems like I may have broken manual when I updated TLS ciphers in 1.25 [04:15] that or I'm just hitting a weird similar-looking bug. I bootstrap manual on 1.25 and it can't connect to mongo [04:16] or rather, it can't connect to itself.... different symptom than what I was thinking of, I think.... still [04:16] 2016-08-23 20:29:01 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG) [04:18] Anyone have any ideas? [04:23] axw: wallyworld: PTAL http://reviews.vapour.ws/r/5502/ :D [04:24] soon, need to get some stuff finished [04:28] wallyworld: nps. i prefer to wait for axw anyway [04:29] menn0, wallyworld - thoughts on my mongo problem above? [04:29] nfi [04:29] could be anything, literally [04:29] network, config [04:30] you need to drill down to find the root cause [04:30] natefinch: that happens sometimes. it's been a problem for a long time. there's a ticket for it. [04:30] natefinch: I've tried to figure it out before but was never able to replicate it. lots of ppl have seen it though. [04:31] natefinch: is this during a test or a bootstrap? [04:31] menn0: hmm.... what I've seen before is that the machine thinks it has some hostname, but then the networking is screwed up so it can't actually connect to that hostname for itself. But not sure if that's it here. I'll try again. [04:31] menn0: bootstrap [04:31] natefinch: are there any logs you can grab? [04:32] menn0: probably. I'll try logging into the mongo console and see what the replicaset collection says [04:32] natefinch: that would be good. given that it's hard to replicate it would be good if you could grab everything you can. 
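For context on the replica-set debugging above: a rough diagnostic sketch, not Juju code, that reads the replica set configuration out of mongo's local database and checks whether each member address in it is actually dialable from the machine. It assumes an unauthenticated, non-TLS connection on localhost:37017, which a real Juju controller will not accept (those need the credentials and CA cert from the agent config), so treat it purely as an outline of the check menn0 and natefinch discuss.

    package main

    import (
        "fmt"
        "net"
        "time"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    func main() {
        session, err := mgo.DialWithTimeout("localhost:37017", 5*time.Second)
        if err != nil {
            fmt.Println("dial mongo:", err)
            return
        }
        defer session.Close()

        // local.system.replset holds the replica set configuration document;
        // an EMPTYCONFIG state shows up here as a "not found" error.
        var config bson.M
        if err := session.DB("local").C("system.replset").Find(nil).One(&config); err != nil {
            fmt.Println("read replset config:", err)
            return
        }

        members, _ := config["members"].([]interface{})
        for _, m := range members {
            member, _ := m.(bson.M)
            host, _ := member["host"].(string)
            // If this address is the machine's public IP and the provider
            // firewall blocks it, the dial fails even though mongod is up.
            conn, err := net.DialTimeout("tcp", host, 5*time.Second)
            if err != nil {
                fmt.Printf("replset member %s is NOT reachable: %v\n", host, err)
                continue
            }
            conn.Close()
            fmt.Printf("replset member %s is reachable\n", host)
        }
    }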
[04:33] natefinch: in the past it's been people outside the team who have seen it and they either didn't collect logs or the logs weren't at DEBUG [04:33] natefinch: so this is a good opportunity to learn more [04:33] menn0: this is my second failure in a row like this [04:33] natefinch: even better :) [04:34] natefinch: this is the closest open ticket: https://bugs.launchpad.net/juju/+bug/1346597 [04:34] Bug #1346597: cannot get replset config: not authorized for query on local.system.replset [04:34] natefinch: the title has a slightly different error but there's discussion about the same problem you're seeing [04:35] menn0: I'll take a look [04:35] natefinch: pls attach any bootstrap and agent logs, the mongodb logs from /var/log/syslog and what you find out from looking at the replicaset status in the shell [04:42] anastasiamac: sorry took longer than expected, just having lunch then will review [04:43] axw: awesome \o/ most importantly - di u get a car? [04:43] did* [04:45] anastasiamac: yes, bought a qashqai [04:45] \o/ [04:46] axw: found an issue anyway while waiting - I can deploy fine to "controller" model (image is found!) but on hosted model like "default", m still seeing "image not found" [04:47] axw: so, i'd like to land what i have and address multi-model as a separate PR... (in addition to separate PR for the bug that I have identified with simplestreams)... [04:50] axw: and Nissans are awesome :D well done! [04:57] menn0: looks like a networking problem. I can telnet to localhost:37017, but can't telnet to :37017, and that's what's in the replicaset config [04:57] (from that machine, obviously) [04:57] natefinch: ok, that's super useful information [04:58] natefinch: is it because the wrong IP is being used in the replicaset config? (i.e. the machine has multiple ips and Juju is choosing the wrong one?) [04:58] natefinch: (pls update the ticket with these findings) [04:58] menn0: it's a GCE machine, it's putting its public IP, not the internal IP [04:59] O [04:59] but for manual, that should be correct [04:59] natefinch: firewall rules? [04:59] except that maybe we're not fixing the firewall [05:01] menn0: $ sudo iptables -S [05:01] -P INPUT ACCEPT [05:01] -P FORWARD ACCEPT [05:01] -P OUTPUT ACCEPT [05:01] that's it [05:02] natefinch: GCE will have it's own firewall external to the hosts [05:02] right [05:02] controlled via the API [05:04] anastasiamac: reviewed [05:04] anastasiamac: I agree that the other issue should be solved separately [05:05] axw: \o/ brilliant! made my day :D [05:05] wallyworld: you wanted to chat? [05:05] thumper: review done [05:17] welp, I have to bed [05:17] menn0: added some comments on the bug - https://bugs.launchpad.net/juju/+bug/1346597 [05:17] Bug #1346597: cannot get replset config: not authorized for query on local.system.replset [05:17] menn0: might be worth making a new bug for the different error message, just to make it easier to find [05:17] natefinch: good idea (might be worth checking if there's already one that's been closed before) [05:20] menn0: https://bugs.launchpad.net/juju-core/+bug/1412621 [05:20] Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap [05:21] last comment is about someone using manual on EC2 and needing to open the ec2 firewall :P [05:21] natefinch: so it was only fixed for EC2? [05:21] menn0: I don't think it's exactly fixable when using the manual provider [05:22] try [05:22] true [05:22] I guess with manual people are on their own [05:22] but... 
juju should test for this case and give a better error message [05:22] exactly what I was just about to say :) [05:22] :) [05:22] I'll make a bug for that [05:24] ot valid" [05:27] ignore that paste error [05:27] * redir goes eod [05:29] redir: sorry, had to go out at short notice [05:29] menn0: https://bugs.launchpad.net/juju/+bug/1616310 [05:29] Bug #1616310: Test replicaset address before replSetInitiate [05:29] and now I am EOD as well [05:30] natefinch: nice bug report [05:30] natefinch: thanks [05:30] menn0: welcome :) [05:30] axw: yeah, if you have a moment, we can talk [05:31] wallyworld: sure, 1:1? [05:31] ok [05:54] Bug #1591488 opened: Unable to bootstrap with --metadata-source [05:57] Bug #1591488 changed: Unable to bootstrap with --metadata-source [06:02] wallyworld: do u recall if resources are model-specific or controller-? [06:03] they are specific to a charm [06:03] stored in the blob store [06:03] wallyworld: sure, but once we put reference to them in mongo, are they model-bound or available to all models for this controller? [06:03] i can't recall how the path is set up [06:03] wallyworld: we definitely have "resources" collection in db [06:03] i'd have to double check [06:04] is that in state or blobstore [06:04] if in state, it would justr be the references [06:04] wallyworld: m looking in state [06:04] if it's marked a global collection, it is for all models [06:05] well, it's not.. and m wondering if it should be.. [06:05] don't think so, i think the design was to tie resources to models [06:06] k [06:06] the data in blob store is deduped [06:06] so even if the same reosurce is assigned to two models, it will only be stored once [06:06] Bug #1591488 opened: Unable to bootstrap with --metadata-source [06:18] Bug #1591488 changed: Unable to bootstrap with --metadata-source [06:20] axw: wallyworld: PTAL https://github.com/juju/juju/pull/6077 [06:20] dunno why RB is not picking it up [06:20] it's tiny but oh, so wonderful :D [06:22] anastasiamac: I don't think this is the correct solution. different models will want different image metadata [06:22] axw: right now, how will they get it? [06:23] anastasiamac: what is "they"? [06:23] axw: at this stage, not being able to deploy to any other model but "controller" is going to b poor experience for cloud [06:23] they = other models [06:23] anastasiamac: I don't disagree, but that doesn't mean we should break the data model [06:23] axw: when i say "cloud", i mean private cloud or anyone using metadata-source, for example [06:24] anastasiamac: perhaps what we should do is combine sources, like I did a while ago for tools [06:24] anastasiamac: look first in the model, then in the controller's model [06:24] axw: I think that we can do this for now, and have a wishlist item for bigger and shinier solution when we get a chance to tackle it, say for 2.1 [06:24] and in the case of image metadata, only look in the controller's if they're the same cloud [06:25] axw: sure, but right now, there is no way to add to model image collection [06:25] anastasiamac: if we do this quick hack, at least the image metadata worker needs to be made singular [06:25] hm, but probably not [06:25] because multiple regions [06:27] i think what i propose is good for a while until we iterate... it may even be tackled along with metadta in config.. :D [06:32] anastasiamac: I can live with this, but please remove the model-uuid field from cloud image metadata [06:33] anastasiamac: and please live test in a cloud with multiple regions [06:43] axw: i've pmed u.. 
m live testing against aws. or do u want me to test deploying/adding model to a different region than bootstrap? [07:02] wallyworld: LXD improvements: https://github.com/juju/juju/pull/6078 -- RB has crapped itself [07:02] axw: awesome, otp will look soon [07:24] axw: PR is ready to go: removed model-uui and tested live with multi-regions \o/ [07:26] anastasiamac: reviewed [07:28] wallyworld: PTAL https://github.com/juju/juju/pull/6077 [07:28] still otp :-( [07:29] :( === frankban|afk is now known as frankban === akhavr1 is now known as akhavr [08:28] the .fail is looking good. http://juju.fail/ [08:29] axw: if you have a moment before eod, would like to land this for beta16 https://github.com/juju/juju/pull/6079 [08:29] wallyworld: looking [08:29] ta [08:43] wallyworld: reviewed [08:43] ta [08:46] axw: with the embedding, i was going to wait and see how nate's stuff played out [08:46] i expect it will change somewhat [08:47] i can add a todo i guess [08:48] wallyworld: pretty sure each provider is going to have to have a schema either way, it's just a different type of schema [08:48] yeah, fair enough [08:48] everything else fixed [09:49] frobware: No dimitern? [09:49] babbageclunk: might have been up late fixing an issue [09:50] frobware: Ahh. [09:54] frobware: (Not to make you feel like second choice but) could you take a look at this? http://reviews.vapour.ws/r/5521/ [09:54] babbageclunk: in a second... :) [09:54] babbageclunk: otp [09:56] babbageclunk: going to have to come back to this today - need to urgently fix/verify bug #1616098 [09:56] Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 [09:57] frobware: No hurry! [10:10] Bug #1615552 opened: Juju 2: Cannot add new key to default model config. === rogpeppe1 is now known as rogpeppe [11:20] wallyworld: still around? [11:20] no [11:20] maybe [11:20] depends who wants to know [11:21] wallyworld: irs [11:21] nah, I just pushed the changed we discussed last night [11:23] ok [11:23] * wallyworld looks soon [11:25] * perrito666 makes a joke about looking old instead of soon [11:34] ,m' [11:47] fwereade: Are you working today? [11:48] babbageclunk, I am, but I have to go out on an extended errand very shortly -- how can I help? [11:49] fwereade: I'm working on the unit tests for the machine undertaker, but I've put a WIP version up for review just in case there are any things tha t jump out at you. [11:49] fwereade: Want to get any nasty surprises early! ;) [11:50] fwereade: Annoyingly I seem to have formatted my PR text in a way that makes the RB bot think it's already got a review. [11:51] haha [11:51] fwereade: recreating now [11:52] babbageclunk, ah, ok, I was just peering at /pulls and wondering what I was missing ;p [11:52] * babbageclunk shrugs [11:53] fwereade: Oh well, here it is on github: https://github.com/juju/juju/pull/6082 [11:58] babbageclunk, looks eminently sane, thanks, only the most trivial notes at first glance [11:58] fwereade: Ooh, I hadn't noticed you'd reviewed my other one, thanks! [11:58] fwereade: Great! [11:59] :D [11:59] later all [11:59] I might well not make standup but I'll be around later [12:01] o/ [12:01] * babbageclunk is off to lunch. [12:33] perrito666: why don't you think a test is needed? [12:33] wallyworld: I think the test is there [12:33] sure? why did CI fail then? 
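Picking up the image-metadata discussion above: a minimal sketch of the "combine sources" idea axw describes (search the hosted model's own metadata first, then fall back to the controller model's, as was done for tools). The ImageMetadata, MetadataSource, and memSource names here are hypothetical stand-ins, not Juju's real types; axw's caveat that the controller fallback only makes sense when both models are on the same cloud would be an extra check before consulting the second source.

    package main

    import "fmt"

    // ImageMetadata and MetadataSource are illustrative only, not Juju's API.
    type ImageMetadata struct {
        ImageID, Region, Series, Arch string
    }

    type MetadataSource interface {
        Find(region, series, arch string) ([]ImageMetadata, error)
    }

    // memSource is a trivial in-memory source used only for the example.
    type memSource []ImageMetadata

    func (s memSource) Find(region, series, arch string) ([]ImageMetadata, error) {
        var out []ImageMetadata
        for _, md := range s {
            if md.Region == region && md.Series == series && md.Arch == arch {
                out = append(out, md)
            }
        }
        return out, nil
    }

    // findWithFallback queries each source in priority order (the hosted
    // model's own metadata first, then the controller model's) and returns
    // the first non-empty result, so a hosted model can still resolve images
    // that were only registered at bootstrap time via --metadata-source.
    func findWithFallback(sources []MetadataSource, region, series, arch string) ([]ImageMetadata, error) {
        for _, src := range sources {
            md, err := src.Find(region, series, arch)
            if err != nil {
                return nil, err
            }
            if len(md) > 0 {
                return md, nil
            }
        }
        return nil, fmt.Errorf("no image metadata for %s/%s/%s in any source", region, series, arch)
    }

    func main() {
        hostedModel := memSource{} // nothing registered on the hosted model
        controllerModel := memSource{{"ubuntu-xenial-img", "us-east-1", "xenial", "amd64"}}

        md, err := findWithFallback([]MetadataSource{hostedModel, controllerModel}, "us-east-1", "xenial", "amd64")
        fmt.Println(md, err)
    }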
[12:34] wallyworld: I added the test as part of that PR [12:34] adding code without changing a test doesn't make sense [12:34] ok, let me check the test [12:35] http://reviews.vapour.ws/r/5517/diff/2/#1 [12:35] perrito666: you are right, i thought remove user deleted rather than marking as inactive [12:35] ignore me [12:36] I usually do :p [12:42] Hi all. For writing a JUJU which language is preferable with community? Actually I want to develop in Shell script. [12:43] for writing a juju charm [12:44] Please anyone suggest me [12:45] ram___: I believe that the ideal platform is python, if you ask in #juju you might get better answers [12:47] perrito666 : OK. thank you. [13:11] hey guys, im getting a panic on aws when deploying the openstack bundle and i think it might be due to lxc being removed from the supported container types [13:12] 2016-08-24 11:32:11 INFO juju.provisioner container_initialisation.go:98 initial container setup with ids: [6/lxc/0 6/lxc/1 6/lxc/2] [13:12] 2016-08-24 11:32:11 INFO juju.worker runner.go:262 stopped "6-container-watcher", err: worker "6-container-watcher" exited: panic resulted in: runtime error: invalid memory address or nil pointer dereference [13:13] it could be caused by https://github.com/juju/juju/blob/master/worker/provisioner/container_initialisation.go#L102 === redelmann is now known as redelmann_wfh [13:43] SimonKLB: hi, sorry you're experiencing some issues. can you please file a bug @ https://bugs.launchpad.net/juju/+filebug ? [13:43] SimonKLB: and please be sure to attach logs, include information on what version you're using [13:49] yea sure, but might be good to verify asap that it isn't as critical as it looks, probably more people than me that are using the openstack bundle on aws [13:49] SimonKLB: sure. what version of Juju are you running? [13:49] 2.0 beta15 [13:50] SimonKLB: let me just check and see if this has already been reported/fixed [13:54] SimonKLB: i don't see anything being tracked [13:56] alright, ill file a bug then [13:56] might just be something on my end [13:56] SimonKLB: ta. i'm also checking with our internal openstack team to see if they've seen anything similar. [13:56] katco: great thanks [13:56] SimonKLB: also, ;) [13:57] katco: haha yea no worries! [14:00] SimonKLB: are you just deploying openstack-base? [14:04] katco: i was deploying the bundle that included telemetry [14:04] frobware: fwereade: mgz: standup time [14:04] i can try the base only [14:04] SimonKLB: no worries, just include that info in the bug so we can repro [14:04] katco: omw [14:06] frobware, fwereade: standup? [14:07] babbageclunk: ? if you like ^ [14:08] dimitern: Sure! [14:10] jam: ping [14:26] katco, ping? [14:26] mattyw: sec, in standup [14:26] katco, no problem, it can wait [14:26] mattyw: you can ask async if you like :) [14:27] katco, I'm making some changes to the deploy command, my understanding is your subjecting it to a much needed refactoring, just wanted to make sure we don't step on each other, or avoid it as much as possible :) [14:27] macgreagoir: btw, the phone home stuff is in github.com/juju/juju/charmstore/latest.go [14:28] macgreagoir: jujuMetadataHTTPHeader [14:28] Cheers [14:29] mattyw: the change i'm making to it now won't refactor anything, so we should be ok. out of curiosity, what are you working on? [14:30] * dimitern figured out what's wrong with his lxd on maas setup ! [14:30] natefinch: Oh... 
I have that open too :-) [14:30] lxd on maas is 2.0.3 [14:31] mgz: so this is a possible issue we were talking about wrt packaging I guess [14:31] SimonKLB, hey - so the telemetry and base bundles are designed to deploy on hardware - am I correct in reading you're using the bundle on AWS? [14:32] dimitern: well, we have potential confusions [14:32] in theory, if everything using lxd on one machine is using the api [14:32] it shouldn't matter if they were compiled with different minor versions of the go library [14:33] mgz: yeah, but *if* it happens to be <2.0.4, juju will be unable to deploy lxd with it [14:34] oh, with trunk? why is that? [14:34] mgz: and that's really lxd's fault - they changed the 1.0 api across not even a minor version release [14:35] jamespage: yes, is that not possible you say? [14:35] er.... [14:35] SimonKLB, its tricky [14:35] that is surely a regression they just need to fix? [14:36] mgz: as it happens 2.0.3 lxd is the most recent version in the archive, so it might become an issue for people not using ppa:ubuntu-lxd/lxd-stable [14:36] SimonKLB, the main issue right now is that LXD containers are not fully networked on AWS; that's relatively easy to work-around by updating the bundle to be 'sparse' i.e. an instance per service, rather than using containers at all [14:36] I understand that this willchange once container networking is fully implemented across providers. [14:37] morning [14:37] redir: morning [14:37] jamespage: i see, would local lxd containers even be an option or is maas the only provider that works well at the moment? [14:38] SimonKLB, the other problem areas are a) AWS does not support nested KVM (so you have to use the 'lxd' or 'qemu' virt-type option on the nova-compute charm) and b) I think they block alot of overlay network types and c) you can get a raw l2 port for north-south traffic routing [14:38] SimonKLB, https://github.com/openstack-charmers/openstack-on-lxd would let you do a whole stack in a single machine [14:38] jamespage: yea i mean this is only while developing, running a cloud inside aws wouldnt make much sense hah :) [14:38] such as your laptop [14:39] jamespage: that would be neat! [14:39] SimonKLB, co-incidentally I was just migrating the docs on those bundles into our OpenStack charm dev guide [14:39] unfortunately the charm im working on is running docker containers which is not supported by LXD on juj atm [14:39] juju* [14:39] but the content is much the same [14:39] SimonKLB - thats mostly true [14:39] SimonKLB - you can manually apply the docker profile to your lxd containers, and if you're using the docker.io package from apt, it will work inside lxd [14:39] as soon as profiles are implemented it should be good to go i guess? [14:40] lazyPower: aah, nice [14:40] limited, but it will work. I think you'll be missing some of the cgroups management bits of the docker daemon due to how that profile functions. [14:40] we're still actively working through some of those limitations ourselves wrt k8s. and yes, the missing profile apply from juju is a bummer at the moment but we remain hopeful ti will land in the next cycle after 2.0 gets pressed as release [14:41] cool! [14:42] katco: marcoceppi i tried deploying using the commandline client this time and it used lxd containers instead, could there be something that converts the yaml configuration from lxc to lxd in the commandline client and not when deploying through juju-gui? 
[14:45] SimonKLB: it looks like for bundles we do convert LXC placements to LXD [14:46] katco: yes but only in the commandline client and not in juju-gui? [14:47] SimonKLB: ahh that i don't know. urulama ? [14:47] SimonKLB: which version of GUI are you using? [14:48] urulama: how do i see that, i started it after i bootstrapped aws [14:48] SimonKLB: juju upgrade-gui --list [14:48] $ juju upgrade-gui --list [14:48] 2.1.8 [14:49] 2.1.2 [14:49] ok, so it's 2.1.8 [14:49] that should be covered [14:49] SimonKLB: and, because of all the beta changes, which juju? [14:49] beta15 [14:50] SimonKLB: ok, please download tarball here https://github.com/juju/juju-gui/releases/tag/2.1.10 and run "juju upgrade-gui /path/to/tarball" [14:51] it'll get into official streams soon [14:54] urulama, hatch, SimonKLB: AFAICT in the GUI we do not make the conversion LXC -> LCD when getting bundle changes for Juju [14:55] hatch: could you please create a card for that? the logic is quite easy: if containerType == 'lxc' and juju version >= 2 then containerType = 'lxd' [14:56] sorry I missed the first part of this conversation, is the bundle that's being imported have machine placement directives with lxc with Juju 2? [14:57] hatch: I guess so [14:58] tvansteenburgh: hey when you get around to creating that script, can you just attach it to the bug? [14:58] tvansteenburgh: (and ping me) [14:58] katco: i did [14:58] tvansteenburgh: ah ok, ty! [14:58] urulama: i dont seem to be able to search the search the store with 2.1.10 [14:59] search the store* [14:59] hm, that's new [14:59] frankban: ^ [14:59] frankban: I have confirmed you're correct, we do not munge the machine placement based on the Juju version [14:59] I'm not entirely sure we should [14:59] combo?app/components/search-results/search-results-select-filter-min.js&app/components/search-resul…:5 Search request failed. [14:59] katco: np, lemme know if you have any questions [14:59] SimonKLB: can you open up the network tab and inspect the request [15:00] did it 404? [15:00] hatch: we should, but let's talk on daily [15:00] uiteam: call [15:02] urulama: haha turns out it was the "privacy badger" addon :D [15:02] thought you guys were tracking me [15:02] LOL [15:02] not (yet) :D [15:03] ;D [15:03] :) [15:03] alright lets see if it deploys this time [15:05] urulama: nope, lxc again [15:05] SimonKLB: can you re-post the error? Sorry I logged in after [15:06] 2016-08-24 11:32:11 INFO juju.provisioner container_initialisation.go:98 initial container setup with ids: [6/lxc/0 6/lxc/1 6/lxc/2] [15:06] 2016-08-24 11:32:11 INFO juju.worker runner.go:262 stopped "6-container-watcher", err: worker "6-container-watcher" exited: panic resulted in: runtime error: invalid memory address or nil pointer dereference [15:06] when deploying the openstack bundle through the commandline client i seems to convert lxc to lxd in the yaml config [15:06] but not when deploying through the gui [15:06] ahh [15:07] ok yeah the gui definitely doesn't do that conversion [15:07] at least not yet :) [15:07] :) [15:08] SimonKLB: a simple workaround should be to just s/lxc/lxd [15:08] at least to get you going right now [15:09] hatch: np, i can use the cli client [15:09] it wasnt super obvious to begin with though, i just happened to use the gui because i was lazy :) [15:09] tvansteenburgh: great comment, ta [15:10] katco: you're welcome! [15:12] SimonKLB: we missed this in GUI when migrating from LXC to LXD ... it'll be a quick fix. thanks for letting us know [15:13] urulama: no worries! 
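For reference, a simplified sketch of the lxc-to-lxd placement rewrite discussed here: the rule frankban states (if containerType is "lxc" and the Juju version is 2 or later, use "lxd") applied to bundle placement directives like "lxc:6". This is illustrative only, not the actual code in Juju's deploy path or in the GUI, which would apply the same rule on its own side.

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizePlacement rewrites an "lxc" container placement to "lxd"
    // for Juju 2.x, leaving anything else untouched.
    func normalizePlacement(placement string, jujuMajor int) string {
        if jujuMajor < 2 {
            return placement
        }
        if placement == "lxc" {
            return "lxd"
        }
        if strings.HasPrefix(placement, "lxc:") {
            return "lxd:" + strings.TrimPrefix(placement, "lxc:")
        }
        return placement
    }

    func main() {
        for _, p := range []string{"lxc:6", "lxc", "lxd:1", "2"} {
            fmt.Printf("%s -> %s\n", p, normalizePlacement(p, 2))
        }
    }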
[15:19] SimonKLB: we'll be fixing it shortly [15:44] fwereade: you around? [15:56] [21:24] Hi. I followed https://jujucharms.com/docs/stable/getting-started. I deployed wiki charm. It was giving error. pasted error log : http://paste.openstack.org/show/563091/. please provide me the solution. [16:01] ram____: the lack of addresses shouldn't be an error in the log, the cause for containers getting stuck in pending is different I think [16:01] ram____: can you successfully run `lxc launch ubuntu:16.04 xenial-test` and then `lxc exec xenial-test -- 'ping google.com'` ? [16:03] dimitern: Yes. [16:03] ram____: ok, so lxd is working without juju - always good to check that first :) [16:03] ram____: are you using any proxy in your network? [16:03] so do all new bugs go into lp.net/juju now? [16:03] jcastro: for 2.0 yes [16:05] it's going to take some adjusting to :) [16:10] natefinch, hey, that errand was much more extended than expected, but I'm here [16:19] dimitern: No proxy. === frankban is now known as frankban|afk [16:20] fwereade: see https://bugs.launchpad.net/juju/+bug/1616523 [16:20] Bug #1616523: one badly formed credential makes bootstrap fail [16:21] fwereade: wondering if we should try to soldier on if part of the yaml is not what we expect [16:21] ram____: can you see any other errors? do any of the containers come up at all? [16:21] natefinch, yeah, definitely [16:21] natefinch, for historical interest, that's one thing we did do right in environments.yaml [16:22] natefinch, not try to interpret a given environment until someone needed it === mup_ is now known as mup === mup_ is now known as mup [16:29] dimitern: No containers come up. paste output of # juju status --format yaml http://paste.openstack.org/show/563096/ [16:30] dimitern: "Failed to get device attributes: no such file or directory" [16:32] ram____: ok, can you try juju kill-controller lxd-test -y and then juju bootstrap like before but add --debug and --config logging-config='=TRACE' to it, then paste the output [16:43] dimitern: $sudo juju bootstrap lxd-test localhost --debug --config logging-config='=TRACE' output http://paste.openstack.org/show/563099/. [16:44] ram____: no need for sudo btw [16:45] ram____: thanks, looking at the log [16:46] tvansteenburgh: ping [16:50] ram____: ok, so far so good - try `juju switch lxd-test:controller` then `juju add-machine -n 2` and then monitor it: `watch 'juju status --format=yaml'` until you get the error, then `juju ssh 0 -- sudo cat /var/log/juju/machine-0.log'` and paste the output? [17:05] dimitern: juju machines created and running without any error. http://paste.openstack.org/show/563102/ [17:18] ram____: nice! [17:18] ram____: so your issue is gone then? [17:23] katco: hey [17:23] tvansteenburgh: hey... i'm able to deploy a previous deploy charm as a new application without any changes to jujud [17:23] tvansteenburgh: so something is either wrong with your script or python-jujuclient [17:24] tvansteenburgh: i'm trying to figure out what [17:24] katco: eh, deploying a previously deployed charm as a new app wasn't the problem [17:26] katco: basically i need to know how to deploy a local charm with the api under juju2. the steps that worked for juju1 (the script) no longer work [17:26] tvansteenburgh: oops, that's what this bug is regarding. is it just deploying local charms? [17:26] tvansteenburgh: ah i see. 
check this out: https://github.com/juju/juju/blob/master/cmd/juju/application/deploy.go#L681-L725 [17:28] tvansteenburgh: you *might* be missing a call to GetCharmInfo in between add_local_charm and deploy [17:30] tvansteenburgh: i.e. follow this logic: https://github.com/juju/juju/blob/master/cmd/juju/application/deploy.go#L401 [17:48] * katco goes to grab some lunch [18:06] dimitern : Thanks. [18:06] ram____: you're very welcome :) I'm glad it worked! === mup_ is now known as mup === mup_ is now known as mup [20:14] are there any PR's folks are waiting on? I'm trying to test the new machine for the lander, but nothing has come up. Are folks waiting on PR's? === redelmann_wfh is now known as rudi_brb === mup_ is now known as mup [20:33] thumper: morning [20:33] morning [20:34] thumper: looks like someone is having another sort of these problems: https://bugs.launchpad.net/juju-core/+bug/1485784 [20:34] Bug #1485784: Error creating container juju-trusty-lxc-template; Failed to parse config [20:34] except it's here now: https://bugs.launchpad.net/juju/+bug/1610880 [20:34] Bug #1610880: Downloading container templates fails in manual environment [20:34] ugh [20:35] I repro'd with 1.25 manual deployed to a GCE instance [20:36] thumper: anything I should look for in particular? [20:37] oh... is this the missing lxc-templates package? [20:38] I don't know? [20:39] no [20:39] thumper: yeah, just checked, it's installed [20:39] -- https://10.2.0.186:17070/environment/80234a11-2d53-436e-855c-da998c76d6ca/images/lxc/trusty/amd64/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz [20:39] Connecting to 10.2.0.186:17070... connected. [20:39] ERROR: certificate common name '*' doesn't match requested host name '10.2.0.186'. [20:39] To connect to 10.2.0.186 insecurely, use `--no-check-certificate'. [20:39] from the in after [20:39] line [20:40] I wonder who is creating the cert, and why it doesn't like it [20:42] I dont have that error in my logs... but otherwise the same problem [20:42] might be a red herring [20:44] ls [20:45] natefinch: that is from trace output of the lxc package [20:45] module golxc.run [20:46] in the description of the bug [20:46] oh yeah.. I missed that it wasn't a juju log [20:46] where does lxc log? I don't see anything in /var/log/lxc [20:48] the golxc package uses loggo [20:48] so it is in the juju log [20:48] you just need to set the logging config appropriately [20:48] oh ok [20:53] thumper: 2016-08-24 20:50:21 TRACE golxc.run.lxc-create golxc.go:448 run: lxc-create [-n juju-trusty-lxc-template -t ubuntu-cloud -f /var/lib/juju/containers/juju-trusty-lxc-template/lxc.conf -- --debug --userdata /var/lib/juju/containers/juju-trusty-lxc-template/cloud-init --hostid juju-trusty-lxc-template -r trusty -T https://10.142.0.2:17070/environment/3d787fce-8f2a-49ea-8239-bc5ecda353c1/images/lxc/trusty/amd64/ubuntu [20:53] -14.04-server-cloudimg-amd64-root.tar.gz] [20:53] 2016-08-24 20:50:21 TRACE golxc.run.lxc-create golxc.go:458 run failed output: + '[' amd64 = i686 ']' [20:53] ha [20:53] interesting [20:54] I missed that bit [20:54] that was in my log [20:54] after I turned on trace and ran another deploy [20:58] it is in the bug description too [20:58] where does that script come from? [20:59] no idea [20:59] if you want I can give you access to my repro machine. I have to run for dinner time. Will be back in about 4 hours. [21:01] thumper: ssh ubuntu@104.196.3.75 should work [21:08] menn0, did you want to meet === natefinch is now known as natefinch-afk [21:14] alexisb_: gah... 
sorry forgot [21:14] alexisb_: I don't have anything specific to discuss [21:14] menn0, ok, have you 30 mins back then :) [21:14] alexisb_: ok :) [21:28] wallyworld: if you have time after the release call, I'd love to chat about the tools look up stuff [21:28] looking at manual s390x failures [21:29] perrito666: I just tried a fresh juju and the updated grant/revoke test and get a failure, seems that the removed users are still in list-shares output. Digging further now [21:29] ok [21:29] menn0: trivial 4 or 5 line review if you have time http://reviews.vapour.ws/r/5524/ [21:32] wallyworld: looking [21:33] wallyworld: ship it [21:33] menn0: tyvm [21:36] menn0: fixing because code landed before I did before http://reviews.vapour.ws/r/5525/ [21:39] thumper: looking [21:39] thumper: ship it [21:45] perrito666: yeah I see users that were removed appearing in list-shares [21:59] thumper: did you still need to talk? [21:59] wallyworld: let me finish this review for katco, then can we chat? [21:59] ok [22:00] alexisb_: i need to talk to tum, i'll ping when i'm free [22:00] thumper: not sure if you saw my message earlier; i pushed up new version. i had some dead-code-comments in there [22:00] kk, I'll hit refresh [22:00] k [22:00] who is tum? [22:02] alexisb_: when wallyworld is hungry, that's who he talks to [22:02] lol [22:02] * wallyworld was ripping of the NZ accent [22:02] *off [22:05] katco: shipit [22:06] wallyworld: chat? [22:06] sure [22:06] 1:1 i guess [22:07] thumper: ta for the quick turn-around [22:09] wallyworld, o I see how it is [22:28] perrito666, since it is obvious that wallyworld is going to stand me up would you like to meet early? [22:28] alexisb_: i'll be done in 3 minutes if horatio is not available [22:31] alexisb_: ready now [22:38] Bug #1580501 opened: cloudimg-base-url parameters not in Juju2 anymore <4010> [22:44] Bug #1580501 changed: cloudimg-base-url parameters not in Juju2 anymore <4010> [22:45] alexisb_: sorry was away, am there now [22:45] but now you change me for wallyworld [22:45] veebers: its because the outstanding pr hasnt landed [22:47] Bug #1580501 opened: cloudimg-base-url parameters not in Juju2 anymore <4010> [22:47] perrito666: oh I could have sworn I saw that it had :-\ [22:48] perrito666: sorry false alarm though, I screwed up in expecting the fix to be in [22:56] veebers: this is the GH PR https://github.com/juju/juju/pull/6074 [22:56] should merge RSN unless something blocks it [22:56] perrito666: sweet, I see it's queued [23:06] hey, I wont make it to the standup I have been fighting with 1616167 the whole day and am just now fixing the tests to PR a fix. [23:06] alexisb_: wallyworld ^^ [23:06] perrito666, that is fine [23:09] thumper: RB seems to have missed this one. https://github.com/juju/juju/pull/6088 [23:09] * thumper looks [23:16] thumper: standup? [23:36] axw: tghere's also removing the restricted config stuff etc too right? [23:36] wallyworld: that's what I said I had a PR up for :) [23:36] wallyworld: https://github.com/juju/juju/pull/6083 [23:36] wallyworld: RB didn't pick it up [23:36] ah sorry, was dstracted :-) [23:38] Bug #1615552 changed: Juju 2: Cannot add new key to default model config. [23:44] Bug #1615552 opened: Juju 2: Cannot add new key to default model config. [23:50] Bug #1615552 changed: Juju 2: Cannot add new key to default model config. 
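Following up on thumper's note at [20:48] that golxc logs through loggo and only needs the logging config set appropriately: a small sketch of turning on TRACE for the golxc module with loggo's standard specification string. The same "module=LEVEL" string is what goes into Juju's logging-config setting (on 1.25 typically via something like juju set-env logging-config="golxc=TRACE").

    package main

    import "github.com/juju/loggo"

    func main() {
        // Enable TRACE for the golxc module (and children such as
        // golxc.run.lxc-create) while leaving everything else at WARNING.
        if err := loggo.ConfigureLoggers("<root>=WARNING;golxc=TRACE"); err != nil {
            panic(err)
        }

        logger := loggo.GetLogger("golxc.run.lxc-create")
        logger.Tracef("this now appears in the log output")
    }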
[23:55] wallyworld: alexisb_: re: machine count, this is the bug I was referring to in standup https://bugs.launchpad.net/juju/+bug/1602032 [23:55] Bug #1602032: Add machine and core count to 'models' output <2.0>
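To illustrate what bug #1602032 asks for (surfacing machine and core counts in the `juju models` output), a hypothetical sketch of the aggregation; the Machine and ModelSummary types here are invented for the example and are not Juju's actual API.

    package main

    import "fmt"

    type Machine struct {
        ID    string
        Cores uint64 // from the machine's hardware characteristics; 0 if unknown
    }

    type ModelSummary struct {
        Name         string
        MachineCount int
        CoreCount    uint64
    }

    // summarize counts the machines in a model and totals their cores.
    func summarize(name string, machines []Machine) ModelSummary {
        s := ModelSummary{Name: name, MachineCount: len(machines)}
        for _, m := range machines {
            s.CoreCount += m.Cores
        }
        return s
    }

    func main() {
        s := summarize("default", []Machine{{ID: "0", Cores: 2}, {ID: "1", Cores: 4}})
        fmt.Printf("%s: %d machines, %d cores\n", s.Name, s.MachineCount, s.CoreCount)
    }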