[00:08] magicaltrout: yeah, apt get install charm-tools; then you can do `charm create` (sans the -) ;)
=== freeflying__ is now known as freeflying
=== JoseeAntonioR is now known as jose
=== tris- is now known as tris
=== hazmat_ is now known as hazmat
[08:29] dosaboy, can you cast your eye over https://code.launchpad.net/~james-page/charm-helpers/fixup-service-running/+merge/294942
[08:40] jamespage: sure
[08:40] dosaboy, ta - I was looking for the testing that beisner did yesterday - but he must of done it offline I think
[08:42] jamespage: i just realised that since your fix for 1580320 is ceph-osd only and some amulet tests use ceph for osds it won't apply so tests will still fail
[08:42] so i have a tmp workaround
[08:43] dosaboy, I can apply to ceph as well
[08:43] jamespage: that would be the easy fix ;)
[08:43] or we could get teh amulet tests using ceph-mon+ceph-osd
[08:43] but that means more resources for each test
[08:43] test run that is
[08:44] jamespage: i'm working around it with https://review.openstack.org/#/c/314063/10..11/tests/basic_deployment.py for now
=== Zetas_ is now known as Zetas
=== rohit___ is now known as rohit__
=== bodie__ is now known as bodie_
=== thomi_ is now known as thomi
=== cargonza_ is now known as cargonza
=== plars_ is now known as plars
=== xnox_ is now known as xnox
[08:48] dosaboy, https://review.openstack.org/#/c/317910/
=== Ursinha_ is now known as Ursinha
=== braderhart_ is now known as braderhart
[10:44] dosaboy, beisner: raised https://bugs.launchpad.net/juju-core/+bug/1583109
[10:44] Bug #1583109: error: private-address not set
=== matthelmke_ is now known as matthelmke
[11:51] hi everyone, is it possible to deploy custom openstack image from juju charm?
[11:51] we have a few snapshots and we want to use them in our charms
[11:52] seems it's not 'true' juju way
=== \b is now known as benonsoftware
[11:55] beisner, shall we land the ch fix for service_running and then generate the master branch re-syncs?
[11:55] that would exercise this bug well
[11:55] jamespage, sounds good. just confirming that the trace log gets enabled as planned on 1 short run now.
[11:56] beisner, okies
[11:56] going to eat lunch
[12:22] anyone familiar with the sync-watch plugin? for some reason it's causing two updates each time I re-build my charm - any idea why?
=== simonklb_ is now known as simonklb
[12:43] beisner, ok going to raise reviews for resyncs right now
[12:43] jamespage, merging your c-h thing now
[12:43] beisner, already done
[12:43] jamespage, oh you beat me to it
[12:43] just pushed it
[12:43] ha, good deal
[12:55] dosaboy, what's your bugref for the IPv6 thing?
[12:57] jamespage, skip ceilometer-agent. i've got a review that i'll just resync and append.
[12:59] jamespage: there's this one for the charm bug - https://bugs.launchpad.net/charm-helpers/+bug/1581598
[12:59] Bug #1581598: ipv6 enabled charms don't understand mngtmpaddr flag (Juju Charms Collection): In Progress by hopem
[12:59] dosaboy, beisner: OK re-running now with amended commit message - I'd not raised any reviews just yet...
[13:00] jamespage: once it all lands i can kick off an ipv6 run
=== caribou_ is now known as caribou
[13:15] beisner, dosaboy: ok here they come
[13:16] https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1581171
[13:16] Bug #1581171: pause/resume failing (workload status races) (Juju Charms Collection): In Progress by thedac
[13:29] Hey lazyPower, do you have a minute? Its about an exception I am getting when I stop filebeats.
[13:29] sure, whats up?
[13:29] https://pastebin.canonical.com/156708/
[13:30] Let me also tell you what i am deploying
[13:30] I've seen this before kjackal
[13:30] https://pastebin.canonical.com/156709/
[13:30] Do we know what it is?
[13:30] unit-filebeat-0[15988]: 2016-05-18 13:06:11 INFO unit.filebeat/0.stop logger.go:40 subprocess.CalledProcessError: Command '['relation-list', '--format=json', '-r', 'elasticsearch:5']' returned non-zero exit status 2 <-- its trying to call relation-list on a relation thats been removed right?
[13:31] i only got this once, and was unable to reproduce it
[13:31] is the unit still alive?
[13:31] yes
[13:31] paydirt. is the ES unit still alive?
[13:32] yes, the entire deployment is online
[13:33] hmm.. when i encountered this, the beat subordinate did not have an active relation, and the scope.unit relation interfaces were still trying ot contact a disconnected elasticsearch unit
[13:33] perhaps thats not the problem here
[13:33] waitup
[13:34] which makes me think i should go peek at the elasticsearch interface to make sure i'm removing its .available state
[13:34] how do you check if you have active relations?
[13:35] either via juju status, the gui, or by running the relation-* hookenv cmds on the unit
[13:35] kjackal i think i found the root of the issue though
[13:35] https://github.com/juju-solutions/interface-elasticsearch/blob/master/requires.py#L32
[13:36] reactive is doing exactly what i told it to do. the .available state is still set, (notice i only remove .connected) - so its hitting its cached data
[13:36] and trying to reference that relationship because its still marked as active (i think this all true, cory may mythbust me)
[13:36] :)
[13:37] kjackal - has this been happening consistently?
[13:37] or is this the one-off that got hung up?
[13:37] s/the/a
[13:37] it happened in two seperate deployment
[13:38] and i can reproduce it by jujuresolved --retry
[13:38] ok let me grab a coffee and i'll push a fixed publish to my namespace
[13:38] if it works out, we'll bump the store revision
[13:39] sounds good
[13:42] so we need something line @when (elastic search available AND connected) call cache_elasticsearch_data(...)
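(Editor's aside: the root cause discussed above is that the relation's departed/broken handling cleared only the `.connected` reactive state and left `.available` set, so handlers gated on `.available` kept calling relation-list against a dead relation. A minimal stand-in model of that state bookkeeping — plain Python, not the actual charms.reactive API; class and method names here are illustrative:)

```python
# Toy model of reactive relation states. In the buggy version, the
# departed/broken path removed only '.connected', leaving '.available'
# behind, so handlers gated on '.available' still acted on cached
# relation data for a relation that no longer existed.
class RelationStates:
    def __init__(self, name):
        self.name = name
        self.states = set()

    def joined(self):
        self.states.add(self.name + '.connected')

    def changed(self, has_connection_string):
        # '.available' is only set once connection data has arrived,
        # which is why '.available' presumes '.connected'
        if has_connection_string:
            self.states.add(self.name + '.available')

    def broken_buggy(self):
        # old behaviour: '.available' is left behind
        self.states.discard(self.name + '.connected')

    def broken_fixed(self):
        # the fix: drop both states so nothing acts on stale data
        self.states.discard(self.name + '.connected')
        self.states.discard(self.name + '.available')


rel = RelationStates('elasticsearch')
rel.joined()
rel.changed(has_connection_string=True)
rel.broken_buggy()
# 'elasticsearch.available' is still in rel.states: stale handlers fire
```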
[13:42] let me try that
[13:42] kjackal - think is, .available presumes .connected
[13:43] as .available is only ever set when we've gotten the connection string data we expect
[13:43] i think the problem lies in not removing that .available state on that conversation object
[13:44] kjackal https://github.com/juju-solutions/interface-elasticsearch/pull/6
[13:45] let me try to test this... it will take some time...
[13:47] Ha! I believe the charm is killed now!
[13:47] yeah?
[13:47] Yes, it seems to work!
[13:48] Good job lazyPower!!! Not so lazy!!!
[13:48] :)
[13:48] :) Thanks
[13:49] So, do you have any eta on when this will be available?
[13:49] Do you want me to merge it?
[13:49] url: cs:~lazypower/filebeat-0
[13:50] give that a go in place of the store url if you dont mind. i'd like an a-z test before we merge this assuming its fixed
[13:50] let me do a full deployment with that charm
[13:51] These easy fixes are too good to be true! But I am optimistic!
[13:51] :)
[13:51] I am as well
[13:51] i'm curious why this wasn't rooted out i the bundle tests
[13:51] *in
[13:52] jamespage, dosaboy, thedac, tinwood - fyi - i'll be doing some readme-only change/reviews and landing them to iterate a bit on gerrit change-merged stream advertisements and triggers wrt getting charm upload jobs set up in the chain. just no-op noise, safe to ignore.
[13:54] lazyPower could I have also a url for topbeat? The fix is in the beats_base and affects all beats charms
[13:54] sure, 1 sec and i'll rebuild that as well
[13:55] thank you
[13:56] url: cs:~lazypower/topbeat-0
[14:00] I'd like to update /etc/neutron/dhcp_agent.ini using juju (as it is maintained by juju). can someone enlighten me how to achieve through cli or gui?
[14:01] I forget, does quickstart work with bundles that reference local: charms?
[14:01] beisner - i know we've exposed config points for cinder like this, where you can modify config with a sub. Is this also true with neutron that you're aware of? (re: gahan's question above)
[14:01] cory_fu negative
[14:01] Blast and damn
[14:02] thats why deployer was so long-lived
[14:02] that and its wrapped by our testing tooling
[14:02] gahan, curious, what specifically are you needing to mod?
[14:04] beisner: I heard if I modify dhcp_domain field in mentioned .ini file, it wil change the value of field "search" in /etc/resolv.conf for all new instances in openstack
[14:07] gahan, indeed, in liberty and prior you could do that there (ref: http://docs.openstack.org/liberty/config-reference/content/section_neutron-dhcp_agent.ini.html) ... but it was deprecated and removed @mitaka. i'm not sure where the equivalent is off hand.
[14:08] beisner, I'm still using liberty :) thanks
[14:08] gahan, typically, any adjustments to those confs must be done via charm config values. if you modify the file by hand, your changes will be overwritten when the conf is re-rendered from templates by the charm, which could happen at any time.
[14:09] it doesn't seem like this particular one is exposed or I'm looking in wrong place
[14:09] gahan, we'd need to add a charm config option, but also figure out what it means for mitaka and later. i'd imagine it can be set by some other approach there.
[14:10] gahan, right. we don't expose 1:1 all conf options as that would be massive. we typically encapsulate sane common defaults with knobs and levers to tune things to common needs.
[14:11] ah, dns_domain in neutron.conf for >= mitaka it appears
[14:12] so this *could* be added to the charm so long as the conf template rendering logic is set up to plop the config in the right file depending on release version (we do that for other things, so that framework is already there).
[14:13] it'd have to go into the next/dev (master) charm first, which would then be released at either 16.07 or 16.10.
[14:21] hmm. i haven't validated this assertion yet, but do we know when using 1.25 if i set environment constraints (eg; tags=lazypower) and then i have additional tags defined in the bundle if those are additive (eg: tags=lazypower,bundle-constraint) or if the bundle constraints override the environment constraints
[14:21] beisner i think you have experience in this ^
[14:22] lazyPower, hmm i've only ever used constraints at bootstrap and deploy time (ie. in the bundle)
[14:22] lazyPower, i would think the constraint in the bundle would win though
[14:22] yeah i was afraid of that :/
[14:23] i'm trying to come up with a good workflow to test these submissions, without stepping all over oil's toes and without modifying the bundle
[14:23] i guess i'm caught in a corner where i have to mod the bundle. *snaps*
[14:23] lazyPower, sec...
[14:25] beisner, I think we should put this resync through on a smoke only
[14:25] jamespage, ack
[14:26] jamespage, maybe kick a few fulls to try to trap this juju core bug?
[14:26] i kicked 1
[14:26] sure
[14:26] but I'd not block +2 landing on that basis
[14:26] infra queue must be long.... atm
[14:27] UOSCI is besting it hands down...
[14:27] i think they *are* merging the tag constraints, at least thats the behavior i'm seeing with this deploy. #TIL
[14:27] jamespage, she's a hopping atm for sure
[14:27] lazyPower, we use this to strip or insert constraints on the fly in osci: http://bazaar.launchpad.net/~uosci/ubuntu-openstack-ci/trunk/view/head:/tools/bundle_constrainer.py#L18
[14:28] lazyPower, i've got a WIP refactor of that into our git namespace with other features, but that one is functional.
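(Editor's aside: the bundle_constrainer tool linked above rewrites the constraints stanzas in a bundle before deploy, so a test rig can strip a submitter's hardware tags or inject its own without hand-editing the bundle. A rough, hypothetical sketch of the idea, operating on an already-parsed bundle dict; the function name is mine, not the tool's:)

```python
# Strip or override the 'constraints' stanza on every service in a
# parsed Juju bundle (the dict you would get from yaml.safe_load).
# Illustrative only -- see uosci's bundle_constrainer.py for the real tool.
def rewrite_constraints(bundle, new_constraints=None):
    for svc in bundle.get('services', {}).values():
        if new_constraints is None:
            svc.pop('constraints', None)   # strip entirely
        else:
            svc['constraints'] = new_constraints
    return bundle


bundle = {
    'services': {
        'neutron-api-plumgrid': {'charm': 'cs:...', 'constraints': 'tags=nedge'},
        'nova-compute': {'charm': 'cs:...'},
    }
}
rewrite_constraints(bundle)  # now no service carries constraints
```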
[14:28] that would be excellent if i were generating this bundle
[14:28] lazyPower, pull it down, pipe it through that, get 0 constraints ;-)
[14:28] i think thats a big part of the disconnect here, is i'm testing an artifact of their submission, not really using the tooling OS/OIL uses
[14:29] oh, so they have a bundle that they're proposing with abnormal constraints?
[14:29] yeah
[14:29] tags=openstack, or tags=nedge
[14:29] arosales, kwmonroe, kjackal: Quick review of README updates for Bigtop PR: https://github.com/juju-solutions/bigtop/pull/2
[14:29] we kind of need that for placement however
[14:29] these charms have very strict hardware requirements, if the charms detect they are on a sub-par node, it panics and sets the charm to blocked and refuses to continue
[14:30] lazyPower, good thing actually. much better than mysteriously failing
[14:30] yeah i'd rather it punch me in the face with a reason, than punch me in the face and tell me to go troll the logs :)
[14:33] jamespage, yah the zuul queue is large atm
[14:33] we have 42 of the 235 jobs queued up ;-)
[14:46] jamespage, bahh. ceph-mon tox ini has site_packages True, which is bad. it failed upstream CI, but passes ours b/c we of course have that installed.
[14:46] i pulled ceph-mon master, changed to sitepackages = False locally, and we fail identically to http://logs.openstack.org/57/318057/1/check/gate-charm-ceph-mon-python27/1ab631a/console.html
[14:47] we need to make a pass and flip those False on any charms that have that True
[14:48] beisner, is apt in pypi?
[14:48] jamespage, not sure if it is. but it seems to me that should be mocked out anyway
[14:48] jamespage sure is https://pypi.python.org/pypi/apt/
[14:50] jamespage, i'd suspect this passed before infra started re-paving hosts in the p->t upgrade efforts.
[14:50] or something along those lines
[14:50] beisner, new virtualenv disables this by default
[14:50] either way, site packages pollute the test and will give differing results on different hosts
[14:50] jamespage, yes, but if it's explicitly set True in tox.ini, it'll still use site packages
[14:51] yup
[14:51] we might be wise to inject False in case someone fires up with an older tox or virtualenv
[14:56] cory_fu: thanks, taking a look at the readme now . . .
[15:00] lazyPower: out of curiosity, have you tested the beats charms on xenial? wondering why they're trusty-only
[15:01] jcastro - at the time of writing that layer their archive did not have xenial packages available
[15:01] now that xenial is out, i think that may have changed. simple enough to add a series and kick off a bundle test. /me will do that later today
[15:01] oh ok, so blocking on upstream packages
[15:01] yeah however i'm not certain thats still the case
[15:34] where would I file a bug for the charm store?
[15:35] ah there's even a link at the bottom of the page... duh ;)
=== arturt______ is now known as arturt_
=== arturt_ is now known as arturt______
[16:45] lutostag - we hid it down there in hopes it would be helpful :D
[16:53] hi everybody, seems i have some issues charms publishing. as example i have pushed mesos-dns charm more than 1hr ago. but haven't got in store yet.
[16:53] source code - https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-dns/trunk
[16:54] is it possible to check what is wrong with it?
[16:54] gennadiy - launchpad ingestion was broken a couple weeks ago and has been unable to be restored. we're relying on the new charm publishing features of the store to continue progress
[16:55] gennadiy see: https://jujucharms.com/docs/devel/authors-charm-store
[16:56] gennadiy - bright side of the inconvenience is your publishing becomes instantaneous vs the 30/60 minute ingest waiting period.
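(Editor's aside: on the tox discussion above — explicitly pinning site-packages off in tox.ini guards against older tox/virtualenv defaults that would let system packages, such as python-apt, leak into the test environment. A minimal sketch of the relevant fragment, assuming a py27 test env like the charms use:)

```ini
# tox.ini -- force isolated virtualenvs regardless of tox/virtualenv age
[tox]
envlist = py27

[testenv]
# Never fall back to system site-packages; otherwise tests can pass on
# hosts that happen to have deps (e.g. python-apt) installed globally
# and fail identically to the upstream CI failure linked above.
sitepackages = False
```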
[16:58] thank you @lazyPower i will review new mechanism
[17:11] beisner, updated https://review.openstack.org/#/c/318057/ with mocking for apt
[17:11] waiting for check to confirm that's OK
[17:12] beisner, stable syncs
[17:12] https://review.openstack.org/#/q/status:open+branch:stable/16.04+topic:bug/1581171
[17:12] Bug #1581171: pause/resume failing (workload status races) (Juju Charms Collection): In Progress by thedac
[17:12] jamespage, ack thx
[17:20] @lazyPower - is it possible to remove charm?
[17:20] no removal, but you can remove the permissions from the charm so only you can see it
[17:21] charm grant --help
[17:21] ok
[17:21] another question: can we deploy specific openstack snapshot from charm?
[17:21] not sure what you mean
[17:29] @lazyPower - now we have snapshot/image of openstack machine and we would like to deploy it from juju.
[17:29] i know that it's not 'true' way. but maybe it's possible
[17:38] ah, i dont think we have any support for that, as we assume the charms to probe/setup as required. i'm not going to say its not possible, but i dont know thats a supported deployment method as of yet
[17:39] there has been talk of imaging support for providers that can support it, but i have no idea where that is on the roadmap if its a next cycle thing or cycle after next.
[17:46] @lazyPower - one more question: i have account https://launchpad.net/~dataart.telco but email login is tads2015dataart@gmail.com so when i make charm login it will not allow to push to cs:~dataart.telco/sipp
[17:46] as i understand it calculates name by primary email. am i right?
[17:46] launchpad name, or launchpad group actually
[17:47] its still backed by the mechanisms (groups, acls, etc) inherited by ubuntu SSO
[17:49] hm. "charm push . cs:~dataart.telco/sipp" returns error "unauthorized: access denied for user "tads2015dataart" but https://launchpad.net/~dataart.telco it's me
[17:50] a few mouth ago i renamed account from tads2015dataart to dataart.telco
[17:51] *month
[17:51] jrwren - any known issues with accounts that have changed their handle and the charm-store's backend bits?
[17:52] lazyPower: yes, if the account name changed since they first logged into www.jujucharms.com
[17:56] gennadiy - does this sound like the situation? ^ did you log into the charm-store (jujucharms.com website) prior to renaming your launchpad account?
[17:56] also i can't login to jujucharm too
[17:56] yes
[17:56] yes i logged before rename
[17:57] jrwren - anything we can do to help gennadiy? And for future reference, if i encounter a user with similar issues, where should i direct them to file bugs/etc. so we can triage accordingly?
[17:57] does it fix issue if i remove account and create new with correct name?
[17:58] ideally we should be able to fix this... i'm not very familiar with that side of the codebase however, so i'm bugging jay for details :)
[18:01] Hi Matt/Kevin, I was reviewing the merge proposal which you have given for ibm-java. It looks good to me and I have a small doubt in the default package name and SHA value for JRE.
[18:01] As per this merge proposal http://bazaar.launchpad.net/~kwmonroe/charms/trusty/ibm-java/may-2016/view/head:/reactive/ibm-java default package name and SHA values are used only for SDK in the reactive/ibm-java file (Between Line number 14 to 25).
[18:01] How this default package name and SHA values for JRE if it is not provided by the user as it handles only for SDK.
[18:03] lazyPower, gennadiy i'm browsing our docs to see if we have notes on this issue. iirc, we did the same thing for aisrael
[18:03] Kindly advise on this and also I was not able deploy this merge proposal because of the if loop in the line number 65. After accepting this merge proposal I will be correcting this line of code.
[18:03] Prabakaran: What is your concern with the ibm-java merge proposal?
[18:05] Prabakaran: If you can fix it I think that is great. Let me check the code. Why is there a loop
[18:07] we have a different package for JRE and SDK. My doubt is regarding the line of codes http://pastebin.ubuntu.com/16498220/ in the reactive file. How default package will work for ibm-jre
[18:08] thats no problem. i will fix it.
[18:09] gennadiy: can you file a bug with your account rename details at https://github.com/CanonicalLtd/jujucharms.com/issues ?
[18:19] As per my understanding this new proposal code will install SDK primarly and if the user wants install jre obivously he must have to feed package name and its sha thru juju set command. Is my understanding is correct?
[18:28] Prabakaran: that sounds good to me.
[18:30] hey guys .. we can push revisions in charms using bzr right? i can see the revisions on launchpad but not is charm show cs....
[18:30] yup Prabakaran - you got it. the user gets to choose what installer to use (sdk or jre) by setting the installer filename
[18:31] Prabakaran: so there is no need ot set the default for a jre (in fact, there's no need to set any default installer or sha since we say the installer filename and sha is required to be set in the config
[18:32] bbaqar: You can push to your own namespace, but the ones in ~charmers need to be reviewed
[18:33] mbruzek: well i have a private team space where i can push charms ..
[18:33] i still cant get new revisions in
[18:34] bbaqar - launchpad ingest has been broken for about 2 weeks, we sent a notice to the mailing list
[18:34] bbaqar see: https://jujucharms.com/docs/devel/authors-charm-store
[18:35] bbaqar: Yes the automatic injest is broken, but you can push using the document that lazyPower linked ^
[18:35] this will get you ramped up on the new charm push commands which will eliminate your need to wait 20/30 minutes for ingest. Make sure you fully read that document and grokk the new ACL structure for charms. by default, they are only visible to you as the uploader, and go into an unpublished channel
[18:36] lazyPower: i believe i have done all this .. i ll go over the document once again
[18:37] bbaqar so just to confirm, charm publish . ~plumgrid-ons/series/mycharm does not work for you?
[18:38] notice the dot for current directory.
[18:38] i paraphrased the namespace, feel free to sub with the proper group string
[18:39] lazypower: i ran the exact same thing: charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid .. but the problem is it did not have the last two revisions that i pushed in the last week
[18:39] bbaqar - did you publish those charms you pushed?
[18:39] bbaqar: Did you publish those charms?
[18:40] by default they land in an unpublished channel. You have to both push, and publish against the appropriate channel (Stable/devel accordingly)
[18:40] I am deploying and checking this ibm-java now.. i will accept this merge proposal. Thanks matt and kevin :)
[18:40] lazypower: yup .. charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-18 --channel stable problem is i want 20th rev
[18:41] bbaquar: charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-20
[18:42] mbruzek: no matching charm or bundle for cs:~plumgrid-team/trusty/neutron-api-plumgrid-20
[18:42] bbaqar - i'm showing that -18 is head of what you have published. verified with `charm list -u plumgrid-team` and additionally charm show cs:~plumgrid-team/trusty/neutron-api-plumgrid does not show a -20 revision as available.
[18:42] bbaqar so where is the "20'th revision" coming from?
[18:42] is that the BZR id?
[18:43] bbaqar: Yeah I only see 18 revisions in the listing. Perhaps the permissions are not allowing us to see 19 and 20?
[18:43] mbrukzek, lazypower: http://bazaar.launchpad.net/~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk/changes/20?start_revid=20
[18:43] mbruzek - i'm pretty sure evne if its in unpublished, it will show up in the revision-info key
[18:43] bbaqar ah thats where the disconnect is coming from. those revision id's do not match whats in bzr, at all
[18:43] ah
[18:43] its there on launchpad
[18:43] its completely disconnected from VCS
[18:45] So am i doing something wrong here?
[18:46] bbaqar: Yeah as we mentioned the automatic launchpad injest is broken, you have to manually push each revision, and then publish the ones you want to see. The upside is no waiting 30 minutes for the automatic process, and the downside is that you have more manual work.
[18:46] bbaqar if you did a charm push, it will tell you what revision the charm store incremented to.
[18:46] So checkout the latest from bzr to your computer, then push it to the charm store.
[18:46] ^
[18:48] * Where more manual work is 2 additional "charm" commands.
[18:48] I love the fact that we dont have to wait 30 mins and i am willing to run as many commands it takes .. just trying to figure out what they are exactly .. so this is what i am going to do now
[18:49] bbaqar: the document that lazyPower linked you to outlines the steps
[18:50] okay so this is what i am going to do 1) bzr branch lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk 2) bzr ci -m "commit message" --unchanged 3) bzr push lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk 4) charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid 5) charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid --channel stable
[18:50] mbruzek i did run that .. let me try this
[18:51] bbaqar nope
[18:51] bbaqar you *have* to specify the revision output in the charm push command to that charm publish command
[18:51] ohhhh h
[18:51] i get it ..
[18:51] which should increment to -19, as -18 is head
[18:51] wait let me try that
[18:51] @lazyPower - new publish charm tool - very cool. it shows errors in bundle! very useful
[18:52] gennadiy really happy its a positive impact on your experience :)
[18:52] jrwren you're getting praise and not targeted sir ^ <3
[18:52] one more improvement: add flag --grant-everyone to publish command
[18:52] gennadiy: and it lets *you* control what charms/bundles are in the store, no more automated injestion
[18:52] gennadiy file a bug over here: https://github.com/juju/charm
[18:52] gennadiy: you can do that with the charm set acl command
[18:53] lazyPower: so this right charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid-19
[18:53] lazypower but it says error: charm or bundle id "cs:~plumgrid-team/trusty/neutron-api-plumgrid-21" is not allowed a revision
[18:53] bbaqar nope push doesn't need the revno
[18:53] gennadiy: sorry the `charm grant cs:~kirk/foo everyone` command.
[18:53] only publish does, push i creating the entity, publish is registering it.
[18:54] bbaqar - so what you're looking to do is this:
[18:54] charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid
[18:54] which it should come back with some output like:
[18:54] url: cs:~plumgrid-team/trusty/enutron-api-plumgrid-19
[18:54] channel: unpublished
[18:54] Then you move into the publish phase if its ready for that:
[18:55] charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-19 --acl read everyone
[18:55] it will default to publishing into the stable channel. if this is a devel release, please target the channel appropriately
[18:55] It returns the same revno url: cs:~plumgrid-team/trusty/neutron-api-plumgrid-18
[18:56] so, this is a super cool feature of push
[18:56] you dont have a change from what you pushed into the -18 revision, teh charm command does store *small bits of metadata about the VCS tree*
[18:57] all that info is shown when you charm show cs:~plumgrid-team/trusty/neutron-api-plumgrid --- pipe that into less and evaluate the output to see the revision it read in from teh bzr metadata. Its not reading and is disconnected from, but if its present, we will use it to our advantage and keep you from revving a charm with no changes
[18:57] bbaqar - one way you can validate is to `charm pull cs:~plumgrid-team/trusty/neutron-api-plumgrid-18` to a temporary location and then dir-diff that against what you have in your bzr archive. You should see its 1:1 copy
[18:58] also i think its worth mentioning that some of these nuances will be much easier to see once we have the new review queue launched, which offers diffing between revisions and is layer aware. no ETA on when that will be available though, other than we are actively cycling towards resolution.
[18:59] i understand .. yup you are right . .i pulled the charm .. and did a diff between the two directories .. its the same .. but one more thing ... i should be able to deploy the charm using juju deploy cs:~plumgrid-team/trusty/neutron-api-plumgrid-18
[19:00] That is correct, so long as you granted read access to the published charm to the user you are logged in as thats trying ot deploy it, or gave read permissions to everyone
[19:00] bbaqar: It will depend on the permissions granted, but yes you should be able to
[19:01] i got it .. thanks guys ..
[19:01] thanks for patiently explaining it
[19:02] bbaqar: No problem please raise issues (link above) if you feel it can be improved. If the documentation was not clear you can open an issue for that too
[19:02] np! if there's any verbiage we could explain better in those docs, i'd love to get a bug report from anyone struggling with the new publish workflow
[19:02] mbruzek we have finished assimilation, we are the same person
[19:02] mbruzek who are you?
[19:03] mbruzek: now that it is working .. this is awesome
[19:03] * mbruzek whoami
[19:03] > Pizza
[19:03] https://github.com/juju/docs/issues/new
[19:03] :O it all makes sense now
[19:04] So my other self (lazypower) gave you a link to create issues with the charm command, and the documentation link is there if you feel it could have been more clear (which I have a feeling it could have been)
[19:05] we are the juju singularity
[19:07] lazyPower: I ll read it once again and raise an issue if i think we should update the doc
[19:15] Once again .. guys this is cool .. i feel like i have control now ..
[19:23] \o/ thats awesome
[19:23] bbaqar glad it had a positive impact on your experience :)
[19:37] Hi Matt/Kevin, I have tested ibm-java and i am able to deploy successfully. I have merged those changes into the stream. And also i have updated the charm store with the updated code https://jujucharms.com/u/ibmcharmers/ibm-java/trusty/7 . I hope it will be approved soon. Thanks for your help and support :)
[19:37] Prabakaran: OK we will give it another look
[19:38] Thanks matt
[19:38] Prabakaran: did you also build the charm and update the bzr merge proposal
[19:41] i did charm build with the latest source code and pushed using charm push command
[19:46] mbruzek, All ok? is there anything needs to be done from my end?...
[19:46] Prabakaran: can you give me link to the merge proposal ?
[19:47] from which branch to which branch?
=== redir is now known as redir_lunch
[19:49] Source Code Repo : https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-java/source This charm has been pushed into the charm store. Below is the link Charm store link : https://jujucharms.com/u/ibmcharmers/ibm-java/trusty/7
[19:51] Prabakaran: The Juju charm store and bzr are different things.
[19:52] To be in the review queue you need to create a bug with a merge proposal
[19:52] You had one, let me find it
[19:52] https://bugs.launchpad.net/charms/+bug/1477067
[19:52] Bug #1477067: New Charm: IBM Java SDK
[19:54] Prabakaran: Yep that is what I needed.
[19:54] Prabakaran: my mistake, this one was a bug since there is no charm to merge with, so I incorrectly used "merge proposal"
[19:54] thats no problem
[19:55] nothing else from my end right..
[19:55] no
[19:56] its time for me to sleep...if anything is required please email me.. thanks again for your great support :)
[19:56] cory_fu: Do you have a minute?
[19:57] Sure. We're in bigdata-daily
=== zz_CyberJacob is now known as CyberJacob
=== redir_lunch is now known as redir
[21:08] interfaces.juju.solutions doesn't seem to understand lp links
[21:08] the Apache WSGI base layer repo link is dead
[21:16] cholcombe - err thats just a databag
[21:16] do you mean the builder?
[21:16] cholcombe - here's the list of what we test with the builder: https://github.com/juju/charm-tools/blob/master/tests/test_fetchers.py#L49
[21:16] lazyPower, i'm not sure.
[21:16] sorry, i mean the fetcher
[21:17] lazyPower, i see
[21:17] well than maybe interfaces.juju.solutions is having an issue resolving lp links
[21:25] lazyPower, i was just trying to click the repo link to figure out what the apache wsgi layer requires
[21:25] cholcombe OH
[21:25] firefox just gives me an: I don't understand this link error
[21:25] :D you get it now haha
[21:25] you mean with the browser
[21:25] yeahhhhhh
[21:25] yep
[21:25] this is a fail on our planning part
[21:25] can you file a bug against the repo?
[21:25] sure if i could find it lol
[21:26] this may still live in bens namespace
[21:26] i can't find it under jacek's lp
[21:26] https://github.com/bcsaller/juju-interface <- is what i meant
[21:26] oh nvm i found it
[21:26] https://code.launchpad.net/~jacekn/charms/+source/apache-wsgi/+git/apache-wsgi is the link
[21:27] this is the codebase for the interfaces webservice
[21:27] we'll need to add support to resolve those links
[21:27] lazyPower, everyone else seems to just be putting in the link to the summary page
[21:28] * lazyPower shrugs
[21:28] i use github yo
[21:28] lazyPower, i updated it
[21:28] but that works equally as well
[21:29] it would take me 10x as long to write a bug report haha
[21:32] i still think it's too difficult to find out what reactive layers and interfaces require of the person using them. You have to dig
[21:47] random question: I’m using juju 2.0-beta6 and MAAS 2.0 beta3. I’m still having to use “export JUJU_DEV_FEATURE_FLAGS=maas2” , is that correct or am I doing something wrong ?
[21:55] mpjetta - i dont think that came out from behind the feature flag until beta7
[21:55] I'm pretty sure thats covered in the release notes in /topic however
[22:07] ok thanks
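(Editor's aside: the charm push/publish mechanics explained to bbaqar earlier in the log — store-side revisions disconnected from bzr revnos, no new revision for unchanged content, and pushes landing in an unpublished channel until explicitly published — can be modeled roughly like this. Plain Python, purely illustrative of the described behavior, not the charm store's actual code:)

```python
import hashlib

# Toy model of the store's push/publish split: push uploads content and
# returns a store-assigned revision (unrelated to any bzr revno);
# pushing identical content again returns the same revision instead of
# revving; publish then registers one revision in a channel
# ('stable' by default).
class CharmStore:
    def __init__(self):
        self.revisions = {}   # url -> list of content digests
        self.channels = {}    # (url, channel) -> revision number

    def push(self, url, content):
        digest = hashlib.sha256(content.encode()).hexdigest()
        revs = self.revisions.setdefault(url, [])
        if revs and revs[-1] == digest:
            return len(revs) - 1          # unchanged: same revno again
        revs.append(digest)
        return len(revs) - 1              # new revision, still unpublished

    def publish(self, url, revno, channel='stable'):
        if revno >= len(self.revisions.get(url, [])):
            raise ValueError('no matching charm or bundle')
        self.channels[(url, channel)] = revno


store = CharmStore()
url = 'cs:~plumgrid-team/trusty/neutron-api-plumgrid'
rev = store.push(url, 'charm payload')       # first push -> revision 0
rev_again = store.push(url, 'charm payload') # unchanged: same revision
store.publish(url, rev)                      # registers it in 'stable'
```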