#juju 2013-08-05
<gnuoy> I've been trying to use lp:juju-deployer/darwin + juju-core for the first time. juju-deployer is hanging and this seems to be due to it attempting to contact the zookeeper node on port 17070, which it'll be unable to do due to a firewall between the server doing the deployments and the juju env. What is 17070 used for?
<AskUbuntu> how do i apply the maas bug fix? | http://askubuntu.com/q/328858
<jamespage> jcastro, around? wanted to get your opinion on the name for the juju-core local provider dependency package thingy we discussed week before last
<gary_poster> gnuoy, hi.  17070 is the juju API port
<gary_poster> gnuoy, for juju core
<gary_poster> deployer needs to be able to talk to it
<gary_poster> or it will not work
<gnuoy> ok, so juju deployer talking to it is completely sane then
<gnuoy> thanks :)
<gary_poster> gnuoy, sure :-)
<benji> dimitern: you're so silly, I know you really mean AgenterEntertyWatchererer
<dimitern> benji: lol
<bryanmoyles> Hello, is there anything that needs to be done as a special case to get mongodb running on the juju control instance? I'm getting mongo timeouts when trying to deploy an instance
<jcastro_> bryanmoyles, what version of juju?
<bryanmoyles> I'm using the mac client from their website, I can't do -v to get the version
<bryanmoyles> from their github it looks like 1.11.2
<bryanmoyles> I see this "Binaries for mongodb are also required, but a newer version than is present in
<bryanmoyles> either 12.04 or 12.10 Ubuntu releases. Instead, you can get what you need from
<bryanmoyles> the public bucket which juju uses when deploying itself:"
<bryanmoyles> would I just upload that into the tools directory?
<bryanmoyles> Can anyone help me with juju's mongo precise tgz file? I have no idea what to do with it
<henninge> There is a packaged version from 10gen that I use (not tried with juju yet, though).
<henninge> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
<henninge> Maybe that is an option?
<bryanmoyles>     http://juju-dist.s3.amazonaws.com/tools/mongo-2.2.0-precise-amd64.tgz
<rick_h> bryanmoyles: juju already runs a mongodb instance on node 0
<bryanmoyles> juju's docs show that file, but I don't know what to do with it per se
<bryanmoyles> not according to my install, it can't connect to it after bootstrapping
<rick_h> bryanmoyles: can you ssh to that instance and look around? Some error during bootstrap perhaps?
<bryanmoyles> is there a way to associate a key pair on bootstrap? Otherwise I can't get access to the node
<marcoceppi> bryanmoyles: it should pick up the ssh key in you ~/.ssh - however you can add more using `authorized-keys` environments.yaml setting
<bryanmoyles> I've tried this for setting an instance type, how else would one go about it?
<bryanmoyles> juju bootstrap --constraints "instance-type=m1.small" -> led to ->unknown constraint "instance-type"
<marcoceppi> bryanmoyles: instance-type isn't available as a constraint yet in juju-core
<marcoceppi> You can instead use generic constraints mem, cpu-cores to match instance-types
<marcoceppi> AFAIK, m1.small is the default
<bryanmoyles> When I bootstrap by default, it's choosing nano, which runs out of resources, how can I prevent that? Ah, okay, so --constraints "mem=2048"
<marcoceppi> bryanmoyles: Right, you can use mem=1024 or 1G, it'll use the ec2 api to find an instance that at least meets that criteria
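A minimal sketch of the constraint syntax marcoceppi describes, for juju-core of this era; the environment name `openstack` is a placeholder, not from the log:

```shell
# Hedged sketch: request capacity with generic constraints rather than an
# instance type; juju asks the provider for a flavor that at least meets them.
# "-e openstack" names a hypothetical environment from environments.yaml.
juju bootstrap -e openstack --constraints "mem=2G cpu-cores=1"

# Constraints can also be set as the environment default after bootstrap:
juju set-constraints -e openstack mem=1024M
```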
* bbcmicrocomputer changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || Review Calendar: http://goo.gl/uK9HD || Review Queue: http://manage.jujucharms.com/review-queue || http://jujucharms.com || Reviewer: ~charmers
<AskUbuntu> How to deploy Django app from bazaar branch with juju? | http://askubuntu.com/q/328972
<bryanmoyles> Hey, I'm getting a no "precise" images error from juju, but I believe I have all of the moving parts in the right place, can anyone throw out a thought to help me bend my thinking?
<bryanmoyles> Can someone help me figure out why juju won't find my instance declared in the meta data?
<bryanmoyles> Is there an easy way to debug why juju can't find images?
<bryanmoyles> Is there a better place I should be asking this question?
<sarnold> bryanmoyles: this channel is probably best; askubuntu.com might be another option, there's a different audience there, someone who knows the answer may be there instead of here..
<bryanmoyles> I just wish this tool had better documentation, this is a terrible roadblock considering I passed it 2 hours ago
<bryanmoyles> askubuntu doesn't have anything of value, does anyone have any insight on how to debug why juju can't find images?
<sarnold> bryanmoyles: then it's time to post a new question :)
<bryanmoyles> The question already exists, numerous times, all without valid answers
<bryanmoyles> http://askubuntu.com/search?q=no+%22precise%22+images+in+RegionOne
<bryanmoyles> I just found out what the blasted thing is… I'm going to contribute to juju to not waste 3 hours of someone else's time over a simple public url misconfiguration
<marcoceppi> bryanmoyles: what was the issue?
<bryanmoyles> I had wiped stack from the env, and then I totally spaced to update the public url's user_id, I had absolutely everything in place aside from my local environments.yaml
<bryanmoyles> stack had regenerated the user's ID when I ran ./stack.sh again
<marcoceppi> bryanmoyles: ah. Are you using devstack?
<bryanmoyles> @marcoceppi yeah, I'm running into a handful of issues accordingly
<marcoceppi> bryanmoyles: I was actually planning on trying devstack and juju compat this week. I figured it would be pretty difficult compared to a "real" openstack setup, given it's meant more for openstack dev and not "production" per se
<marcoceppi> let me know if you need help with anything, it'd make for an interesting niche setup for those who want to use Juju privately and don't have access to the local provider
<bryanmoyles> So far it's been okay, the only real issues I've had so far are swift configuration and intranet communication (currently). If you'd be willing to help me debug this issue it would be incredible, I can show you all of the steps that I've documented to get a compute / controller setup in place
<bryanmoyles> sometimes the best part of the journey is self discovery though haha, in my current networking case, definitely not
<marcoceppi> bryanmoyles: I'd be happy to help if I can
<bryanmoyles> 2013-08-05 14:53:31 ERROR juju open.go:88 state: connection failed, will retry: dial tcp 10.252.2.150:37017: network is unreachable
<bryanmoyles> that's what I'm getting, I'm tcpdumping now to ensure it's not trying to access that IP from my local net
<bryanmoyles> -.- sure enough it is
<bryanmoyles> do I have to run juju deploy on the actual instance?
<marcoceppi> bryanmoyles: no, the juju commands only need to be run from the client. Can you confirm in the dashboard that you have an instance running with that IP address?
<bryanmoyles> yeah absolutely, I see it running
<marcoceppi> Can your current machine actually reach 10.252.2.150?
<marcoceppi> either via ping or ssh, etc
<bryanmoyles> but I'm also seeing the traffic being processed on my local net, absolutely not, that's a local IP to my private cloud
<bryanmoyles> 76 bytes from g0-6-4-4.nwrknj-lcr-21.verizon-gni.net (130.81.137.104): Destination Net Unreachable
<bryanmoyles> that's when I tried pinging that IP
<marcoceppi> bryanmoyles: Ah, so there's a routing issue in your network then
<bryanmoyles> well my issue is our offices, and our private cloud is in a data center in another city
<bryanmoyles> I'm not operating on the same network as our private cloud
<marcoceppi> Okay, depending on how you have your devstack setup, try switching the 'use-floating-ip: true' in your environments.yaml to see if that rectifies the issue. Either that, or if you can, set up a vpn between you and the data center to bridge the networks
<marcoceppi> err, set use-floating-ip to true in your environments.yaml
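A hypothetical environments.yaml stanza showing the setting marcoceppi mentions; written to `./environments.yaml` here as a sample (the real file lives at `~/.juju/environments.yaml`), and every value other than `use-floating-ip` is an illustrative placeholder:

```shell
# Sketch of a devstack-backed environment definition. All endpoints, bucket
# names, and regions below are placeholders, not taken from this conversation.
cat > environments.yaml <<'EOF'
environments:
  devstack:
    type: openstack
    auth-url: http://10.0.0.1:5000/v2.0/   # placeholder keystone endpoint
    region: RegionOne
    control-bucket: juju-devstack-bucket    # placeholder
    use-floating-ip: true                   # give each instance a floating IP
EOF
```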
<bryanmoyles> are floating IPs supposed to be publicly accessible?
<bryanmoyles> I allocated one on the instance, and it's a 192 IP
<marcoceppi> bryanmoyles: it depends entirely on the network setup for your openstack install
<marcoceppi> 192 might just be the "public" network in the datacenter. So that IP is available to other machines outside of openstack
<marcoceppi> I think at this point, some kind of VPN or SSH tunnel in to the datacenter is the best way to resolve this networking issue
<bryanmoyles> Yeah I'm trying a VPN right now, I'll be back as soon as I can get that set up
<marcoceppi> bryanmoyles: cheers, gl. I'm going to pop out for some lunch
<ahasenack> hm, I need to write a quick how-to how to use simplestreams, or rather, juju's image-metadata, in the case of private clouds
<AskUbuntu> Juju zookeeper stuck at 1st bootstrap in maas enviornment | http://askubuntu.com/q/329024
<weblife> Can I get some feedback on a non juju based issue?? I respect many of you and your experience in software engineering.  I rewrote my resume and have never held a professional position in the field outside of my own work(company) and could use some feedback/advice:  http://www.themindspot.com/trunk/brandonclark-resume.pdf
<weblife> Please...
<sarnold> weblife: a friend of mine has a rule, "anything on a resume is fair game to ask about" .. are you prepared to handle questions like "draw two intersecting circles and a line through the intersection" in SVG? (just to make up something off the top of my head, to see how well someone knows svg.. :)
<sarnold> weblife: 'development practices' feels out of place. acronym I don't know, followed by words I -do- know but not sure why you wouldn't have given them top billing..
<sarnold> "rapid development" also feels a bit out of place -- you might be very good at quick turnaround times :) but I'd rather have someone flexible to fitting in with a team
<sarnold> weblife: and in place of the word "repository" on your githubs, maybe a quick and short description of what kinds of code you've got in those repositories
<sarnold> ah, all kinds of things, all over the place :)
<marcoceppi> sarnold: I've found that just having a github profile with interesting things (doesn't have to even be functional, just interesting) is quite helpful and often times replaces the time-honored tradition of "quiz 'em on the spot"
<sarnold> marcoceppi: agreed there, a github account is a far better indicator of, well, just about everything you'd care about as a co-worker :)
<marcoceppi> Or, a launchpad account, as it may be :P
<sarnold> :)
<weblife> sarnold:  (yes, I have immersed myself in all practices (all over the place), lol). Thank you, very good advice. I do plan on doing my homework on concepts so I don't get caught off guard (i.e. it's been 2 years since I last did heavy SVG, but I used to make interactive Blackberry interface themes).
<weblife> I hadn't seen any examples of people actually putting their repos or work into a resume.  Thought it might at least make me stand out.
<weblife> A bit worried on how people may treat the fact I am going back for my MS
<sarnold> weblife: folks may worry about your ability to work full-time and do studies, but if you can convince them in an interview that it won't be a problem, that is liable to be good
<weblife> sarnold:  While finishing the last year of my bachelors degree as a full time student enrolled in 5 classes a semester and around 2 to 4 classes an intersession, I worked week days and weekends also.  This unfortunately took any free time away from me and I didn't see any time off till the holidays.  I also maintained a relationship with my soon to be wife.  (Sound good???)
<sarnold> weblife: yup :)
<weblife> I got this.
<weblife> lol
<marcoceppi> hazmat: can you `relation-set --format=json` ? If so, what does that look like?
<weblife> woohoo!! First tech interview  ever tomorrow at 10.  I haven't been nervous in ages, kinda exciting.
<sarnold> weblife: congratulations :)
#juju 2013-08-06
<AskUbuntu> Bootstrapping Juju 1.13 to private OpenStack | http://askubuntu.com/q/329198
<bbcmicrocomputer> fo0bar: ircd-charybdis promulgated..thanks for your work :)
<stub> Has anyone running trunk (not) seen missing log messages from juju-log? Per https://bugs.launchpad.net/charms/+source/postgresql/+bug/1205082/comments/4
<_mup_> Bug #1205082: Relation errors when deploying two units <postgresql (Juju Charms Collection):New> <https://launchpad.net/bugs/1205082>
<pavelpachkovskij> guys, can I deploy charm from my custom launchpad repo?
<ahasenack> pavelpachkovskij: you will have to branch it locally first, then use --repository and the local: prefix
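The workflow ahasenack describes, sketched with pavelpachkovskij's branch as the example; the local paths are illustrative:

```shell
# Sketch: branch the charm from Launchpad into a local repository laid out
# as <repo>/<series>/<charm>, then deploy with the local: prefix.
mkdir -p ~/charms/precise
bzr branch lp:~pavel-pachkovskij/charms/precise/rack/trunk ~/charms/precise/rack
juju deploy --repository=~/charms local:precise/rack discourse
```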
<pavelpachkovskij> that's quite clear
<pavelpachkovskij> I thought there could be a way to do this remotely
<ahasenack> no, the charm store syntax doesn't allow for custom charm stores
<rick_h> pavelpachkovskij: I think if the branch is the right format it'll be available from the store: see https://juju.ubuntu.com/docs/authors-charm-store.html
<rick_h> pavelpachkovskij: and you can see others like this in the new charms section of jujucharms.com
<rick_h> then you can deploy the long charm name. cs:~matsubara/precise/tarmac-jenkins-0
<rick_h> for instance
<rick_h> pavelpachkovskij: see https://juju.ubuntu.com/docs/charms-deploying.html for some notes on those urls and the local option
<pavelpachkovskij> rick_h, I'm trying to deploy this way, but get an error
<pavelpachkovskij> juju deploy --config ~/projects/charms/precise/rack.yml cs:~pavel-pachkovskij/precise/rack-4  discourse
<pavelpachkovskij> error: unknown option "repo"
<pavelpachkovskij> but you can see here http://bazaar.launchpad.net/~pavel-pachkovskij/charms/precise/rack/trunk/view/head:/config.yaml
<pavelpachkovskij> that there is this option
<pavelpachkovskij> and if I do bzr branch it works well
<ahasenack> the store has a table which tells it which bzr branch to use for a specific charm
<ahasenack> the cs: prefix doesn't work for random branches, the store has to know about it
<rick_h> pavelpachkovskij: k, I see your charm here https://jujucharms.com/sidebar/~pavel-pachkovskij/precise/rack-4/
<rick_h> ahasenack: so it should be deployable I thought. We don't ingest charms into the gui that aren't in the juju go store.
<pavelpachkovskij> rick_h, but it contains an old `Configuration`
<ahasenack> rick_h: it's a matter of being deployable with the cs:~ syntax or not
<pavelpachkovskij> rick_h, since then it had changed significantly
<pavelpachkovskij> rick_h, and for some reasons 'Readme' is blank
<rick_h> ahasenack: ok, haven't tried it, but had thought that if it's in the gui it can be deployed from the 'store'. Otherwise there's no point in showing it.
<ahasenack> rick_h: the guy might just make a local copy and then deploy, I don't know
<ahasenack> gui
<pavelpachkovskij> rick_h, exactly, but looks like it doesn't update the config from the branch
<rick_h> pavelpachkovskij: looking into it. I'll file a bug on the missing readme. There might be an issue with the charm
<pavelpachkovskij> rick_h, this is my charm and there is a readme
<rick_h> pavelpachkovskij: yes, I see the readme, but store says it's not there so I'll look into it/file a bug as I said ^^
<pavelpachkovskij> rick_h, so it doesn't try to parse the new config.yaml
<pavelpachkovskij> rick_h, nor the metadata.yaml
<rick_h> pavelpachkovskij: so basically whenever you submit a new branch of the charm our back end pulls the changes and updates things. If there's a problem it can't update and serves out the old data.
<rick_h> pavelpachkovskij: I've got to look into what the problem might have been.
<rick_h> pavelpachkovskij: so yes, I see you've updated the charm and such and that we don't have the latest data. I'll take a bit to figure out why that is.
<pavelpachkovskij> rick_h, I see... How can I find what's the issue with it?
<rick_h> pavelpachkovskij: did you run the charm proof tool against the charm?
<pavelpachkovskij> rick_h, on first revision
<rick_h> pavelpachkovskij: that usually catches most things from what I understand
<pavelpachkovskij> rick_h, I'll rerun now
<pavelpachkovskij> â  precise  charm proof rack
<pavelpachkovskij> W: Maintainer address should contain a real-name and email only. [Altoros]
<pavelpachkovskij> W: No icon.svg file.
<pavelpachkovskij> â  precise  charm proof rack
<pavelpachkovskij> W: Maintainer address should contain a real-name and email only. [Altoros]
<pavelpachkovskij> W: No icon.svg file.
<pavelpachkovskij> E: revision file in root of charm is required
<pavelpachkovskij> could the last error be the issue?
<pavelpachkovskij> In one of the charmers meetings Marco said that the revision file is no longer required
<rick_h> pavelpachkovskij: yea, that would do it
<rick_h> pavelpachkovskij: make sure the proof tool is up to date maybe? /me checks that
<pavelpachkovskij> charm-tools is already the newest version.
<pavelpachkovskij> rick_h, I can try to push revision file and see if this is the issue
<rick_h> pavelpachkovskij: yea, give that a shot please. I'll work on seeing if I can get access to the server and check the logs
<pavelpachkovskij> rick_h, how long does it take to fetch the data?
<rick_h> pavelpachkovskij: should be around 15-20min usually
<rick_h> pavelpachkovskij: can see here http://manage.jujucharms.com/~pavel-pachkovskij/precise/rack
<pavelpachkovskij> rick_h, ok, waiting
<pavelpachkovskij> rick_h, nope, stays the same http://manage.jujucharms.com/~pavel-pachkovskij/precise/rack
<rick_h> pavelpachkovskij: yea, looking
<rick_h> pavelpachkovskij: so I'm running a script to pull it in locally and trying to find my notes on getting into the server logs. I'll ping back with what I find but will be a few.
<pavelpachkovskij> rick_h, thanks
<jcastro_> marcoceppi, I could have sworn we updated charmtools to not check for a revision file? ^^
<rick_h> and he's gone...ugh
<marcoceppi> jcastro_: Yeah, it should be fixed
 * marcoceppi checks
<rick_h> jcastro_: marcoceppi yea let us know and if there's a new version please file a bug on charmworld to update what version we're using in our proofing as well
<rick_h> he said he had the latest but didn't verify
<rick_h> issue was way beyond that
<marcoceppi> rick_h: jcastro_: I see what I did. I fixed it in the wrong branch. Let me see if I can pop that change over to the current branch and kick off a package re-build
<rick_h> marcoceppi: ah cool
<aimatt> hello, I have juju bootstrapped, but a charm is stuck as pending
<aimatt> it's mysql
<juju> hi
<marcoceppi> aimatt: what version of juju and what cloud environment are you using?
<aimatt> marcoceppi: openstack
<aimatt> checking version
<juju> hi
<aimatt> marcoceppi: it's the mac client version
<aimatt> 1.11.2-unknown-amd64
<marcoceppi> aimatt: Can you confirm that there are two running instances in the openstack dashboard?
<marcoceppi> How long has it been pending?
<aimatt> 14 hours
<juju> what
<juju>  8-)
<aimatt> marcoceppi: the second instance never launched, it was always pending
<marcoceppi> aimatt: Okay, typically it takes at most 10 minutes for the MySQL charm to come up (on even the slowest of providers) so there's something not quite right. Can you see two machines running in the dashboard? What does the machine state for machine 1 say?
<marcoceppi> aimatt: okay, well that's the problem. What I'd do is `juju destroy-environment`, bootstrap again, then deploy. If you're waiting more than 10 mins and the machine hasn't launched yet, something went wrong.
<juju> you are typing to fast
<aimatt> ok
<bryanmoyles> I've bootstrapped a few times over and I still ran into the same issue
<marcoceppi> bryanmoyles: Are you also on the mac client?
<bryanmoyles> Yeah
<marcoceppi> bryanmoyles aimatt when you're running your commands, can you use the `-v` flag to get more verbose output?
<bryanmoyles> when running juju deploy mysql?
<marcoceppi> deploy, bootstrap, etc
<bryanmoyles> bootstrap works just fine, machine-0 gets launched and is active, it's the deploy that breaks, even though juju said command finished
<bryanmoyles> I'm in the process of wiping my env so I can't give you the exact output
<juju> can you people meet me at aimatt
<aimatt> juju spammer?
<marcoceppi> bryanmoyles: that's fine, if you do run it again with -v I'd be interested in the output. Barring that /var/log/juju/* on the bootstrap node would be good to look at for possible reasons why it didn't launch
<marcoceppi> aimatt: I'm not sure who juju is, but they appear to be just spamming the channel at this point.
<aimatt> server moorcock.freenode.net
<aimatt> :/
<aimatt> k, Bryan will try verbose flags
<juju> you are anoyen
<bryanmoyles> well here's the question marcoceppi , I'm running the client on my local machine connecting externally to the private cloud, would the log files be on the hosted server or my local machine?
<juju>  :@
<marcoceppi> bryanmoyles: when you run juju commands, they connect to the bootstrap to relay the information. That's why the commands finish so quickly. The bootstrap node does all the orchestration. So the logs as to why machines failed to launch would be on the bootstrap node
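A sketch of what inspecting the bootstrap node might look like, per marcoceppi's pointer to /var/log/juju; machine 0 is the bootstrap node as listed by `juju status`:

```shell
# Hedged sketch: ssh to the bootstrap node and look at the provisioning logs
# for clues about why the second machine never launched.
juju ssh 0
# then, on the node:
tail -n 100 /var/log/juju/machine-0.log
grep -i error /var/log/juju/*.log
```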
<bryanmoyles> and my ssh key should grant me access to that node? The problem I had with that earlier is that juju is creating an eth1 device on the physical server, that maps to the bootstrap node's local IP, hello recursion when i try sshing into the bootstrap node
<bryanmoyles> did I miss something, my client is baaaad
<marcoceppi> bryanmoyles: not yet
<marcoceppi> bryanmoyles: SSH Key should be on the node, the eth1 device loop back thing is really odd though. It might explain why you're not getting any nodes to launch though.
<bryanmoyles> so how do I prevent juju from creating it?
<bryanmoyles> should I just tear it down after juju bootstraps?
<bryanmoyles> Just out of curiosity, is it IRC proper to always include the name of the person we're talking to so you get pinged?
<marcoceppi> bryanmoyles: my understanding is juju shouldn't be creating that; it's not juju's job to mess with the host like that. That's typically based on the image and cloud provider
<bryanmoyles> well let me ensure that it is, one sec
<marcoceppi> bryanmoyles: uh, probably. I do it out of habit as I'm not always watching IRC
<bryanmoyles> marcoceppi: alright, so right now ifconfig doesn't show an eth1 device, I'm almost at the point where I can just bootstrap again
<mgz> bryanmoyles: it's probably useful to pastebin the config you're actually using
<bryanmoyles> environments.yaml?
<mgz> yup, unless it's trivial
<bryanmoyles> http://collabedit.com/r5924
<bryanmoyles> I'll start with the output first
<mgz> that url is a redirect loop
<bryanmoyles> Someone should also think to factor in a default constraint for juju bootstrap, it defaults to 64 MB of ram which runs out of memory -.-
<marcoceppi> mgz: loaded for me
<mgz> ah, borked with cookies disabled
<mgz> go poor webserver coding
<marcoceppi> bryanmoyles: it shouldn't, but you can use --constraints to set mem=1024, etc
<marcoceppi> I really need to look into the defaults because it seems recently a lot of people have been having this problem
<bryanmoyles> I know, I just forget to and then it leads to head scratching
<bryanmoyles> PS: I'm not all knowing, you just told me that yesterday lol
<marcoceppi> bryanmoyles: also, you said you're using the mac version, what does `juju version` say for your local client?
<marcoceppi> Because this is bootstrapping/deploying 1.10.0, latest stable mac is 1.12.0
<marcoceppi> which, iirc, had a bunch of openstack fixes in it
<bryanmoyles> juju version is: 1.11.2-unknown-amd64
<bryanmoyles> oh what the funk, all of my devices in ifconfig just went away?
<bryanmoyles> all but 3
<marcoceppi> bryanmoyles: try running `juju bootstrap --upload-tools` that should upload the most recent version of juju
<bryanmoyles> I get a go error when I do that
<bryanmoyles> one sec, sync-tools is running
<bryanmoyles> 2013-08-06 15:08:40 ERROR juju supercommand.go:234 command failed: build command "go" failed: exit status 1; can't load package: package launchpad.net/juju-core/cmd/jujud: cannot find package "launchpad.net/juju-core/cmd/jujud" in any of:
<bryanmoyles> 	/usr/local/go/src/pkg/launchpad.net/juju-core/cmd/jujud (from $GOROOT)
<bryanmoyles> 	($GOPATH not set)
<bryanmoyles> error: build command "go" failed: exit status 1; can't load package: package launchpad.net/juju-core/cmd/jujud: cannot find package "launchpad.net/juju-core/cmd/jujud" in any of:
<bryanmoyles> 	/usr/local/go/src/pkg/launchpad.net/juju-core/cmd/jujud (from $GOROOT)
<bryanmoyles> 	($GOPATH not set)
<marcoceppi> bryanmoyles: Do you have go-lang installed on your machine?
<bryanmoyles> thought I did, new-host-8:TrapCall_v2 bryanmoyles$ go
<bryanmoyles> go      godoc   gofmt   gotour
<bryanmoyles> new-host-8:TrapCall_v2 bryanmoyles$ which go
<bryanmoyles>  /usr/local/go/bin/go
<marcoceppi> hum, this is a bit out of my league. I don't have a mac machine at my disposal
<juju> hi my friands
<bryanmoyles> want me to hop on my ubuntu machine and work from there?
<mgz> marcoceppi: we shouldn't be suggesting --upload-tools
<marcoceppi> mgz: well, why is 1.12 client using 1.10 tools?
<mgz> that's a dev option for building juju locally and using a trunk version
<marcoceppi> How do you get the right tools on an openstack install?
<bryanmoyles> juju sync-tools
<mgz> right.
 * marcoceppi smacks head
<marcoceppi> I need to stop running trunk for a while
<juju> hi
<juju> ok
<mgz> that doesn't answer the question of why the 1.12 tools are not being selected
<bryanmoyles> is there a flag I can set for that?
<mgz> no, but the -v output of sync-tools may be informative
<bryanmoyles> I might add that sync-tools is the one actually uploading 1.10*
<mgz> bootstrap is claiming 1.10 is the newest version
<bryanmoyles> new-host-8:TrapCall_v2 bryanmoyles$ juju sync-tools
<bryanmoyles> listing the source bucket
<bryanmoyles> found 6 tools
<bryanmoyles> found 6 recent tools (version 1.10.0)
<mgz> which implies the 1.12 tools weren't uploaded
<bryanmoyles> listing target bucket
<bryanmoyles> found 0 tools in target; 6 tools to be copied
<bryanmoyles> copying tools/juju-1.10.0-precise-amd64.tgz, download 2205kB, uploading
<bryanmoyles> copying tools/juju-1.10.0-precise-i386.tgz, download 2306kB, uploading
<bryanmoyles> copying tools/juju-1.10.0-quantal-amd64.tgz, download 2209kB, uploading
<bryanmoyles> copying tools/juju-1.10.0-quantal-i386.tgz, download 2311kB, uploading
<bryanmoyles> copying tools/juju-1.10.0-raring-amd64.tgz, download 2208kB, uploading
<bryanmoyles> copying tools/juju-1.10.0-raring-i386.tgz, download 2312kB, uploading
<bryanmoyles> copied 6 tools
<bryanmoyles> right, but I'm not sure how I would
<mgz> bryanmoyles: try the --dev flag on sync-tools
<mgz> I bet that will get you 1.13
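mgz's suggestion, sketched; the `--dev` flag includes development (odd-numbered) releases such as 1.13 in the sync:

```shell
# Hedged sketch: copy the tools tarballs, including dev releases, into the
# environment's bucket, then bootstrap verbosely to see which version is picked.
juju sync-tools --dev -v
juju bootstrap -v --constraints "mem=1G"
```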
<rick_h> does anyone know the timeline for the store.juju.ubuntu.com picking up revisions? It's been a bit over an hour and it's not picked up r5 from earlier https://store.juju.ubuntu.com/charm-info?charms=cs:~pavel-pachkovskij/precise/rack
<bryanmoyles> yessir it did
<mgz> well, that will do.
<rick_h> and wondering if I should be patient or go bug filing
<marcoceppi> rick_h: I have no idea the timeline for syncing. I suspected it was like 15 mins but I appear to be wrong in that assumption
<rick_h> marcoceppi: so charmworld is 15min, but charmworld (manage.jujucharms.com) checks in with store.juju.ubuntu.com for info and it's not catching up
<marcoceppi> rick_h: ah, that's where I got the 15 mins from
<rick_h> marcoceppi: yea, you're not completely crazy :)
<marcoceppi> rick_h: hazmat might know
<rick_h> yea, I figured I'd see if anyone knew around before I bugged hazmat as I imagine he's sprinting off in IoM
<bryanmoyles> cloud-init start-local running: Tue, 06 Aug 2013 15:25:02 +0000. up 6.65 seconds
<bryanmoyles> no instance data found in start-local
<bryanmoyles> cloud-init-nonet waiting 120 seconds for a network device.
<bryanmoyles> I wasn't getting networking timeouts before :(
<mgz> bryanmoyles: I suspect an issue with your openstack setup more than juju here....
<mgz> can you `nova boot` with some custom cloud init successfully?
<bryanmoyles> I'm wiping again and scripting my setup process, after 10 times I think it's about that time lol
<juju> is same body here
<sarnold> 12 seconds. that's gotta be some kind of record. :)
 * hazmat backlogs
<hazmat> rick_h, i thought it was an hourly pull or less re store, looks like the charm is there, it's out of dev hands atm
<hazmat> ie. log access only
<marcoceppi> Apparently my IRC bouncer is offline
#juju 2013-08-07
<jpds> Why does charm get ceph not work?
<pavel> hello
<pavel> can anybody help me understand why this page http://manage.jujucharms.com/~pavel-pachkovskij/precise/rack doesn't update the readme, metadata and config?
<raywang> hello, does anyone know how to upgrade the juju-core agent version from a deployed node?
<jamespage> raywang, from a deployed node?
<jamespage> raywang, you just do it from the client - 'juju upgrade-juju'
<raywang> jamespage, hi, yes, like bootstrapped node
<raywang> jamespage, i used juju-core 1.11.4 when I bootstrapped, but i just recently upgraded the client to 1.13, and now can't deploy any service node, so I'm expecting to upgrade the juju-core agent from the bootstrapped server
<jamespage> raywang, ah - OK - you have to jump through 1.12 first
<jamespage> juju upgrade-juju --version 1.12  I think
<jamespage> raywang, rogpeppe sent an email to the juju-dev ML about this
<raywang> jamespage, error: invalid version "1.12"
<jamespage> raywang, what cloud are you using?
<raywang> jamespage, maas
<jamespage> raywang, actually try '1.12.0'
<jamespage> 1.12 does not exist - that was my bad sorry
<raywang> jamespage, does it mean i have to upgrade to 1.12.0, and then 1.13.0?
<jamespage> raywang, https://lists.ubuntu.com/archives/juju-dev/2013-August/001333.html
<raywang> jamespage, tried 1.12.0, also not working: error: no matching tools available
<jamespage> raywang, how did you get your tools into the MAAS environment in the first place? I've not tried that yet
<raywang> jamespage, sorry, what do you mean by "get the tools into the maas environment?" :)
<jamespage> raywang, so when you bootstrapped your environment did you use --upload-tools or did you use juju sync-tools
<raywang> jamespage, ah, i see, i used --upload-tools option
<jamespage> raywang, OK - so see the mailing list post - you can use the same with the upgrade-juju command
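The stepped upgrade jamespage describes might look like the following; per his pointer to the mailing list, the `--upload-tools` flag is only appropriate because the environment was originally bootstrapped that way:

```shell
# Hedged sketch: upgrade via 1.12.0 before moving to 1.13, uploading tools
# the same way they were provided at bootstrap time.
juju upgrade-juju --version 1.12.0 --upload-tools
juju status   # wait until all agents report agent-version 1.12.0
juju upgrade-juju --version 1.13.0 --upload-tools
```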
<raywang> jamespage, ok, I will check it now, thanks for helping :)
<jamespage> raywang, hey - no problem
<jamespage> fwiw there is work going on in the tool distribution area this cycle to make stuff like this a bit easier!
<rogpeppe> jamespage: we've been discussing it, but haven't decided on anything concrete yet. one possibility that's been mooted is to allow upgrades only between adjacent even-numbered minor releases
<rogpeppe> jamespage: another possibility is to encode upgrade paths into the simplestreams data (or somewhere else external to the juju binary)
<rogpeppe> raywang: if you've upgraded to juju-core tip, i suspect your installation might be broken now :-\
<rogpeppe> jamespage: i'd also like to make the upgrade process a little more reliable by having the client switch versions only when it's sure that the new version can connect to the state ok
<rogpeppe> jamespage: i wrote some code ages ago to do that, but it was deemed premature
<kita> hi
<kita> hi
<geme> hi
<kita> hi
<geme> I'm attempting to use juju-core 1.13.0 on a private openstack instance and I'm trying to generate the image metadata using juju image-metadata BUT the command doesn't exist
<jcastro_> http://askubuntu.com/questions/327177/cannot-bootstrap-due-to-precise-images-in-regionone-with-arches-amd64-i386
<jcastro_> geme, ^^^^^
<geme> thanks jcastro but juju image-metadata gives command not found
<jcastro_> juju-metadata generate-image
<jcastro_> try that
<geme> command not found again
<geme> juju version 1.13.0
<geme> Should I install a different version ?
<jcastro_> hmmmm
<jcastro_> let me ask somebody here
<geme> thanks
<jcastro_> hey mgz, any idea on this? ^^^
<jcastro_> marcoceppi, ping
<marcoceppi> jcastro_: pong
<jcastro_> hey pavel has some questions about the charm testing framework
<jcastro_> aka, can he use it yet
<marcoceppi> it's not in a ppa anywhere atm, it's probably best to wait for release which is soon
<pavel> marcoceppi, and does it make sense to write integration tests with old documentation?
<pavel> marcoceppi, or it would be just waste of time?
<marcoceppi> pavel: those tests _will_ always work in our testing environments. However, it's tedious to write good tests using the old documentation
<pavel> marcoceppi, I have only three features undone with rack charm: backups, testing and apparmor profile and all of them are totally unclear for me
<marcoceppi> and the old methods/original methods of testing, which is what this framework I'm writing aims to fix
<pavel> marcoceppi, then I think it would be better to wait for your framework, to provide relevant tests
<marcoceppi> pavel: the apparmor profiles are pretty straight forward to write as I understand it, while I finish writing this testing framework that might be a good place to focus next
<pavel> marcoceppi, btw, I've deployed discourse with Rack charm and wrote an article, I will publish it after release of charm
<marcoceppi> pavel: awesome!
<pavel> marcoceppi, problem is ... that we have foreman there and I can't know what should be restricted in the profile
<marcoceppi> pavel: doesn't most everything run as the user under the rvm directories? (with the exception of a few extra things globally installed as a gem)?
<pavel> marcoceppi, as I understand it, apparmor is something like per-process permissions, but when we talk about the rack charm a user may want to do all kinds of stuff with the server
<pavel> with foreman you can run any process which may access different directories
<geme>  jcastro_, any news re: juju image-metadata ?
<jcastro_> https://lists.ubuntu.com/archives/juju/2013-August/002814.html
<jcastro_> ^^^
<jcastro_> I posted to the list because I don't seem to have it either and I'm on 1.13
<pavel> when does Mark Mims come back?
<geme> jcastro, thanks I'll keep an eye on the list
<jcastro_> pavel, like 2 more days
<pavel> jcastro_, so he'll be available from monday?
<jcastro_> yeah
<jcastro_> geme, did you have the problem prior to 1.13?
<jcastro_> I am going to guess "no"
<geme> jcastro, this is the 1st time that I've tried the juju-core version. Used python juju before
<geme> jcastro, the default-image-id was a lot easier
<marcoceppi> jcastro_: it's in 1.11.4 and by proxy in 1.12.0 - geme maybe try 1.12 (ppa:juju/stable) until this is resolved?
<jcastro_> wait
<jcastro_> there's a ppa:juju/stable?
<jcastro_> why do the docs say /devel?
<geme> ok, I'll give it a try
<marcoceppi> jcastro_: because up until a week ago stable was 1.10 and 1.10 was not nearly as awesome compared to 1.11
<jcastro_> marcoceppi, I don't see packages there
<jcastro_> https://launchpad.net/~juju/+archive/stable
<marcoceppi> jcastro_: yeah, nevermind. I have no idea where 1.12 was uploaded to.
<jcastro_> me either
<jcastro_> wtf
 * jcastro_ fires off another email to the list.
<marcoceppi> pew pew!
<geme> jcastro, juju-gui questions handled here or another topic ?
<jcastro_> geme, here too!
<geme> jcastro, that's handy. Are there plans for juju-gui to handle local repository charms ?
<sinzui> pavel, your README.md has a unicode quote on line 116 "app’s"
<sinzui> pavel, The immediate fix is to use an ascii quote
<sinzui> I am reporting a bug so that README.* can be utf-8 encoded.
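As a quick sketch of sinzui's suggested immediate fix, assuming the README is UTF-8 encoded and the only offending character is the unicode right single quote (the file content here is illustrative, not pavel's actual README):

```shell
# Reproduce an offending line (illustrative content only)
printf 'Restart your app’s server\n' > README.md
# Immediate fix: swap the unicode right single quote for an ascii apostrophe
sed -i "s/’/'/g" README.md
cat README.md
```

This assumes GNU sed; on other platforms `sed -i` needs a backup suffix argument.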
<jcastro_> geme, ok so the image generating thing appears to be a packaging bug
<jcastro_> but hopefully something we can fix tomorrowish when the right people wake up
<geme> jcastro, great - I'll reinstall in a couple of days
<jcastro_> geme, I've got guys working on it as we speak
<jcastro_> so it might be sooner
<jcastro_> sorry for the problem! we messed that up. :-/
<geme> jcastro, Are there plans for juju-gui to handle local repository charms ?
<pavel> sinzui, thank you so much, I would never have figured this out on my own
<sinzui> np, thank you for finding a bug.
<geme> gary_poster, Are there plans for juju-gui to handle local repository charms ?
<rick_h> geme: define 'handle'?
<geme> rick_h, will juju-gui be able to list charms in a local repository and then deploy ?
<gary_poster> geme, agree with rick_h's question :-)  we can show the ones that you deploy from the command line.  You want to deploy local from the GUI?  If so, is this a developer use case or a deployer/user case?
<gary_poster> ah
<gary_poster> geme, we have talked about it.  We have two stories.
<gary_poster> We can see how to do this one cleanly:
<gary_poster> 1) You zip the charm
<gary_poster> 2) You upload it
<gary_poster> 3) You can now deploy it
<gary_poster> We are less sure about this one:
<gary_poster> 1) You run something locally (some program we provide) and configure it to find your local repo and to talk to the GUI
<gary_poster> 2) You can now deploy arbitrary charms from repo
<gary_poster> That one has some issues that I forget at the moment
<gary_poster> the first one is good for people simply deploying local charms
<gary_poster> the second is good for people developing and deploying local charms
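From the user's side, story one could look roughly like this (the charm name and files are hypothetical, and the GUI upload/deploy steps themselves didn't exist yet at the time):

```shell
# Step 1 of story one: zip a local charm directory (minimal illustrative layout)
mkdir -p democharm/hooks
printf 'name: democharm\nsummary: demo charm\n' > democharm/metadata.yaml
python3 -m zipfile -c democharm.zip democharm
# Steps 2 and 3 (upload through the GUI, then deploy) are the planned, not-yet-built parts.
# Inspect what would be uploaded:
python3 -m zipfile -l democharm.zip
```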
<geme> So, the 1st is to upload a local charm into the public charm store ?
<gary_poster> geme, no into local juju env
<gary_poster> have to step away, back soon
<gary_poster> geme does story one work for you?
<geme> so local charms can be uploaded into the juju-gui ?
<gary_poster> geme that would be the plan, yes.  (to be clear, you can't do it now)
<geme> The use case I'm interested in is the ability to connect to any project charm store / repository - so that a project can view all available services that can be deployed
<gary_poster> geme, ah, yes.  we're interested in that too, for a bit later.  The upload story would be a sooner thing.
<geme> gary_poster, that sounds good. We see that projects may need to use a composite service catalog comprising generic and project-specific charms
<geme> being able to point the gui at multiple local charm stores would be very useful
<gary_poster> geme, cool.  I will prioritize the charm upload story for a bit sooner if I can, and communicate the multiple-charm-store interest as well.  thanks for feedback.
<geme> Great - I see the multi-charm-store case as giving the developer control of deployment instead of maybe devops.
<kita> hi
<marcoceppi> hi kita
<juju> hi
<sidnei> noodles775: https://code.launchpad.net/~sidnei/charms/precise/avahi/trunk want to give it a try?
<sinzui> pavel, I see you pushed a fix for the README.md, but you did not increment the revision file. That might be a problem. The juju store only sees the 4th revision so far. So while http://manage.jujucharms.com/~pavel-pachkovskij/precise/rack shows it found your branch with new data, the store doesn't see juju revision 5 yet http://manage.jujucharms.com/search?search_text=rack
<pavel> sinzui, but in revision there is '5'
<pavel> sinzui, not 4
<sinzui> ah
<pavel> sinzui, and the charm store doesn't see revision 5 either
<sinzui> pavel, thank you. I will update the bug report
<sinzui> right, the store only knows about 4 so manage.jujucharms only retrieved 4
<pavel> sinzui, marcoceppi said that the revision file is no longer required
<pavel> sinzui, so I only added it at revision five because of the error message
<pavel> sinzui, I thought that this was the issue
<pavel> sinzui, maybe I have to remove it?
<sinzui> pavel, I don't think it is either. Maybe the store is ingesting charms slower than manage.jujucharms.com
<pavel> I think I pushed fifth revision yesterday
<pavel> or even on monday
<pavel> sinzui, nope, yesterday and it didn't update yet
<m_3> sinzui, pavel: are you expecting changes in lp:charms/rack or lp:~pavel-pachkovskij/charms/precise/rack/trunk ?
<pavel> sinzui, second
<pavel> http://manage.jujucharms.com/~pavel-pachkovskij/precise/rack
<m_3> ah, ok
<m_3> separate issue then :)
<pavel> m_3, my issue is that I can't deploy with cs:~pavel-pachkovskij/rack/trunk because it doesn't update, though if I pull it locally it works. My guess is that the issue is with the readme file
<sinzui> pavel, the store doesn't know about your recent work: http://pastebin.ubuntu.com/5959371/
<pavel> sinzui, how could I fix this?
<m_3> very strange
<sinzui> I'm not sure yet. Maybe the store is ingesting slower than m.jc.com
<pavel> I'm afraid it would be something very simple
<m_3> sinzui: I notice a stacking problem with the official branch... not sure if that's a red herring or not... `bzr info lp:charms/rack`
<pavel> bzr: ERROR: Parent not accessible given base "bzr+ssh://bazaar.launchpad.net/+branch/charms/rack/" and relative path "../../../../../~pavel-pachkovskij/charms/precise/rack/trunk/"
<sinzui> There is a report of a missing revision in logs. It does mention your id, but it is not clear which charm branch is the issue
<pavel> look like I broke something :D
<m_3> pavel: no, that was probably broken during "promulgation"... so it wouldn't've been you
<pavel> m_3, what if I remove branch and push again?
<m_3> it's most likely not causing this problem
<m_3> sinzui would know _lots_ more about it
<pavel> sinzui, so what if I?
<m_3> sinzui: nice.. yeah, that could be any number of things
<pavel> m_3, I will try, it can't make things worse
<m_3> pavel: sure... with maybe a trivial new commit to make sure the store gets kicked
<m_3> sinzui: any way to infer the problem branch you mention based on its position in the log?  I'm happy to dig if you send me a pointer or snippet of the log
 * m_3 needs coffee
<pavel> ok, now I have brand new branch
<pavel> will the charmers meeting be held today?
<m_3> pavel: nope... people are out afaik
<m_3> postponed to next week
<pavel> m_3, ah... ok
<pavel> m_3, do you have few minutes to chat about backups?
<m_3> pavel: sure... I see an email about it in my inbox, but haven't caught up yet... first day back from vacation
 * m_3 reading now
<m_3> oh, it's short
<tasdomas> hi
<tasdomas> where can I find documentation on the upgrade-charm hook?
<marcoceppi> tasdomas: we don't have explicit docs about that hook yet, what questions do you have?
<tasdomas> marcoceppi, just wanted to add it to a charm and wasn't sure what context the hook is run under
<marcoceppi> it functions like any other hook, it's just triggered via the `juju upgrade-charm` command
<marcoceppi> tasdomas: most people simply symlink to the install hook as hooks are designed to be idempotent
<marcoceppi> but it depends on what the process is for upgrading the charm/service
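The symlink pattern marcoceppi mentions is just this (charm layout hypothetical); it works because the install hook is written to be idempotent, so re-running it on upgrade is safe:

```shell
mkdir -p mycharm/hooks
# An idempotent install hook: safe to run on first install and again on upgrade
cat > mycharm/hooks/install <<'EOF'
#!/bin/sh
echo "install/upgrade logic ran"
EOF
chmod +x mycharm/hooks/install
# upgrade-charm re-runs the same logic via a relative symlink
ln -sf install mycharm/hooks/upgrade-charm
mycharm/hooks/upgrade-charm
```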
<tasdomas> marcoceppi, thanks
<tasdomas> marcoceppi, what happens to relations when a charm is upgraded? They remain intact, right?
<marcoceppi> tasdomas: the relations remain, and the relation data does too, but just be careful you don't overwrite config files that have values derived from relations
<tasdomas> marcoceppi, thanks
<marcoceppi> m_3: have you ever used `relation-set --format=json` ?
<m_3> marcoceppi: nope
<ahasenack> jcastro: I just tried destroy-unit, and the underlying machine (openstack instance) is still alive
<ahasenack> same for destroy-service
<ahasenack> it's even still listed in juju status
<marcoceppi> ahasenack: is the unit in an error state?
<ahasenack> marcoceppi: no, it was a plain ubuntu charm
<marcoceppi> what version ubuntu?
<marcoceppi> s/ubuntu/juju-core/
<ahasenack> trunk, 1.13.1.1 currently
<ahasenack> tbh, I never saw juju destroy-unit or destroy-service actually taking down machines, I was surprised when jcastro said it would nowadays
<marcoceppi> ahasenack: it's worked for me in 1.11.4
<ahasenack> marcoceppi: just destroy-unit and the instance would die soon?
<marcoceppi> destroy-unit would remove the unit, then destroy-machine would remove the machine
<ahasenack> well
<ahasenack> destroy-machine or terminate-machine works
<ahasenack> I was talking about destroy-unit only
<marcoceppi> they're synonyms, *-machine
<ahasenack> jcastro said it would terminate the machine, unless I misunderstood
<marcoceppi> what is your expected outcome of destroy-unit?
<ahasenack> remove the unit from the machine, leave the machine up
<ahasenack> I filed a bug to add a --terminate flag to destroy-unit to optionally terminate the machine too
<marcoceppi> ahasenack: yeah, from my understanding that works. Unless I'm confusing it with destroy-service
<ahasenack> with co-location it's a bit trickier, it would have to know that no other units are there
 * marcoceppi pokes local deployment
<ahasenack> marcoceppi: I know it works. I was speaking to jcastro the other day about his blog post where he was talking about colocation and expanding and shrinking the cloud, and his example had only destroy-unit, no terminate-machine
<ahasenack> and I raised the point that the instance would still be up burning money
<marcoceppi> ah
<jreingol> can someone help me out... after juju install and using a maas env i keep getting the following error on bootstrap
<jreingol> error: cannot create initial state file: gomaasapi: got error back from server: 400 BAD REQUEST
<marcoceppi> jreingol: what does your environments.yaml file look like? Can you also run `juju bootstrap -v` and put the output in paste.ubuntu.com?
<jreingol> marcoceppi: found a few errors in the docs i was using to set it up
<jreingol> the maas url was wrong... it's now http://localhost:5242
<jreingol> seems to be connecting now but gives error: Requested array, got <nil>.
<jreingol> http://paste.ubuntu.com/5960394/
<marcoceppi> jreingol: I've not seen that error before.
<jreingol> hahaha.... why do I always find the new ones
<jreingol> thanks for the look though
<marcoceppi> jreingol: what version of juju are you using (juju version)
<jreingol> 1.13.0-quantal-amd64
<marcoceppi> jreingol: Okay this might be related to a few missing commands in 1.13
<jreingol> erg
<jreingol> what version would you recommend?
<marcoceppi> jreingol: I'd recommend 1.12 but there currently isn't a ppa for it. It should be sorted soon
 * marcoceppi checks
<jreingol> i used the devel ppa
<marcoceppi> jreingol: actually, it's been uploaded to ppa:juju/stable already, if you remove the /devel ppa and add that one you should be able to get juju-core 1.12
<jreingol> should i remove all of it and go with the stable branch
<jreingol> perfect
<marcoceppi> jreingol: at least until the next devel release, which should sort out the missing commands issue if this is indeed affected by that
<marcoceppi> if you get the same error in stable you'll want to open a bug for sure, if not, i'd still recommend opening a bug with the output you pasted and letting them know it works in 1.12
<jreingol> k... well thanks for the help
<jreingol> yep... still happening
 * jreingol wonders if provider-state is a maas problem
<marcoceppi> jreingol: thanks, there's definitely something else going on. A bug would be the next best step. The core devs would be able to point you in the right direction
#juju 2013-08-08
<lolity> Hey admins and other people. Your site JujuCharms.com isn't working!
<lolity> So fix that please!
<lolity> Ok now it works. Great
<pavelpachkovskij> why isn't the 'config-get' command documented in the official docs?
<pavelpachkovskij> I can't find it anywhere
<kita> hi
<juju> hi
<marcoceppi> pavelpachkovskij: it was in the old docs. What questions did you have in particular?
<juju> hi
<marcoceppi> kita: hi
<marcoceppi> err, juju
<juju> it is me, juju, but my name changed to kita
<pavelpachkovskij> marcoceppi, nothing special, I was wondering why it's not there
<pavelpachkovskij> marcoceppi, I thought it could have been replaced with something I don't know about
<marcoceppi> pavelpachkovskij: it's something that will be covered under the charm author docs, which are still under construction
<pavelpachkovskij> what's the syntax for relation-list ?
<pavelpachkovskij> relation-list -r relation_id ?
<ahasenack> pavelpachkovskij: new way to find out: http://pastebin.ubuntu.com/5962862/
<pavelpachkovskij> ahasenack, oh man, thanks a lot!
<pavelpachkovskij> do I get it right that two services can have 1+ established relations, and if I want to run relation-get on a particular relation I should run 'relation-get --format=json remote-service/0 -r relation-id:0' ?
<ahasenack> pavelpachkovskij: relations are between services, and there could be several types of relations
<pavelpachkovskij> ahasenack, yep
<ahasenack> pavelpachkovskij: take a look at this for some introductory concepts: http://blog.labix.org/2013/06/25/the-heart-of-juju
<pavelpachkovskij> ahasenack, thanks
<marcoceppi> ahasenack: miracle command!
<pavelpachkovskij> Here is an example of how to use chef-solo with a juju charm: https://github.com/Altoros/juju-charm-chef I would really appreciate it if somebody tested it
<marcoceppi> pavelpachkovskij: this looks awesome. I'll have a look at it tomorrow while traveling
<pavelpachkovskij> marcoceppi, thanks!
<sidiney> Hello..
<sidiney> hello
<sarnold> hello sidiney
<sarnold> note that IRC channels tend to be quiet until someone has a question :)
<sidiney> Need help to config my juju, I'm a newbie and I get errors in my YAML.
<sidiney> I generated ssl keys, imported them to my server, and set up configs, but I get errors,
<sidiney> last error default environment "amazon" does not exist
<sarnold> sidiney: can you pastebin your errors, and configuration (with secrets removed, of course)
<sidiney> right.
<sidiney> environments:
<sidiney>      type: ec2
<sidiney>     access-key:
<sidiney>     secret-key:
<sidiney>     control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
<sidiney>     admin-secret: 81a1e7429e6847c4941fda7591246594
<sidiney> #    default-series: precise
<sidiney> #    juju-origin: ppa
<sidiney> #    ssl-hostname-verification: true
<sarnold> sidiney: are you missing an "amazon:" before the "type: ec2" line?
<sidiney> yes,
<sidiney> error: cannot parse "/home/sidiney/.juju/environments.yaml": YAML error: line 104: did not find expected key
<sidiney> this error is in my maas service ?
<sarnold> did maas write your environments.yaml file for you? or did you write it yourself?
<sarnold> (I've not tried maas yet, sorry..)
<sidiney> I wrote it myself
<sarnold> sidiney: you need an "amazon:" before that block: https://juju.ubuntu.com/docs/config-aws.html
<sidiney> yes, I have an amazon account and set up the access key and my secret key
<sidiney> and uncommented them in the YAML
<marcoceppi> sidiney: your file should look like this: http://paste.ubuntu.com/5963596/
<sidiney> Ty for the help. Need to get out.. restart
<sidiney> ;-)
<sidiney> Ty
<sarnold> marcoceppi: ah, nice idea :) thanks
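For reference, a corrected stanza along the lines of the one marcoceppi pasted might look like this; the bucket and admin-secret values are the ones sidiney pasted above, the access/secret keys are placeholders. The missing "amazon:" key under "environments:" was exactly what made the original paste fail to parse:

```yaml
environments:
  amazon:
    type: ec2
    access-key: <your-aws-access-key>
    secret-key: <your-aws-secret-key>
    control-bucket: juju-faefb490d69a41f0a3616a4808e0766b
    admin-secret: 81a1e7429e6847c4941fda7591246594
    default-series: precise
```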
<AskUbuntu> juju zookeeper installation stuck | http://askubuntu.com/q/330295
<weblife> hopefully that helps them.
<andreas__> hey charmers, rabbitmq-server fix when used with ceph: https://code.launchpad.net/~ahasenack/charms/precise/rabbitmq-server/check-device-before-mkfs/+merge/179282
<andreas__> and the same for mysql: https://code.launchpad.net/~ahasenack/charms/precise/mysql/wait-for-rbd-device/+merge/179292
#juju 2013-08-09
<sidiney> Hello! Can anyone help solve a problem setting my YAML for amazon?
<sidiney> on juju bootstrap I get an error: no public ssh keys found
<arosales> sidiney, did you run ssh-keygen -t rsa -b 2048
<sidiney> yes
<arosales> and do you have something like the following in your ~/.juju/environments.yaml file
<arosales> authorized-keys-path: /home/<user>/.ssh/id_rsa.pub
<arosales> under aws
<arosales> given you didn't rename your keys and took the default
<weblife> sidiney what do you get when you do this 'juju bootstrap -v'
<sidiney> no arosales. where do I put this?
<arosales> sidiney, try appending that to the end of your aws stanza in your ~/.juju/environments.yaml file
<arosales> sidiney, also to confirm are your keys uploaded to aws?
<sidiney> no.
<arosales> sidiney, take a look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws
<arosales> sidiney, as a manual check you should be able to launch an instance in AWS via the console and then type in "ssh ubuntu@<aws-ip>" and you should get access to your manually provisioned instance.
<arosales> that is after your keys have  been uploaded to AWS
<sidiney> looks good..
<sidiney> ;-)
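Putting arosales' suggestion together, the aws stanza would gain one line (the path assumes the default key names from ssh-keygen):

```yaml
environments:
  amazon:
    type: ec2
    # ... access-key, secret-key, etc. as before ...
    authorized-keys-path: /home/<user>/.ssh/id_rsa.pub
```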
<utlemming> davecheney: around?
<davecheney> utlemming: on a conf call at the moment
<davecheney> will be quite some time
<davecheney> utlemming: i'm on a plane at 08:00 tomorrow
<utlemming> davecheney: ack. anyone else that you know of that has access to HPCLoud for a juju smoke test?
<davecheney> utlemming: gz, jam, wallyworld
<davecheney> they are all in #juju-dev now
<utlemming> davecheney: ack, thanks
<jcastro_> thumper, http://askubuntu.com/questions/132411/how-can-i-configure-juju-for-deployment-on-openstack
<jcastro_> thumper, http://askubuntu.com/revisions/bba01ffc-f0b3-4992-ba44-1b38d11d116e/view-source
<jcastro_> thumper, http://askubuntu.com/questions/225513/how-do-i-configure-juju-to-use-amazon-web-services-aws
<jcastro_> thumper, http://askubuntu.com/questions/116174/how-can-i-configure-juju-for-deployment-on-hp-cloud
<jcastro_> thumper, https://juju.ubuntu.com/docs/glossary.html
<jamespage> is there a way I can use daily images rather than released images with juju-core using simplestreams data?
<juju> hi
<sarnold> juju: hello; note that irc tends to be quiet until someone asks a question :)
<kurt__> Hi all - just wanting to play with the juju-gui and looking at the read me.  Anyone know of a more complete howto?  I've got questions about the "deploying behind a firewall" section.
<benji> kurt__: what's your question?  I (or someone else here) might be able to help.
<kurt__> is it best to force deploy to a particular machine?  I'm scratching my head a bit determining how to make the URL or FILE available for the gui source files
<kurt__> I'm using MAAS in combination with this, though I'm not sure that matters
<benji> kurt__: let me see if I can find some details
<kurt__> thanks @benji
<kurt__> is Matthew Williams on your team?
 * benji notices he suddenly has a snail stuck on him.
<benji> kurt__: nope, I'm on the GUI team
<kurt__> Oh excellent - straight to the source :D
<kurt__> my goal is to figure out the steps necessary to deploy behind a firewall then post my own howto eventually
<benji> kurt__: so, let me be sure I'm right on what you're trying to do; you want to deploy the charm into a juju environment and that environment can not access the internet; right?
<kurt__> not necessarily - I just want it exposed to an address I can reach inside my network.  Once I understand that configuration methodology, then I can deploy as I wish.
<kurt__> get it?
<kurt__> my environment can in fact access the internet, but it is behind a firewall
<benji> kurt__: in that case you can just "juju deploy juju-gui" to get the GUI running (it'll take a little while) and then "juju expose juju-gui" so that you can access it; then you can run "juju status" and figure out what machine ("unit") it is running on, at that point you can load http://WHATEVER-UNIT-IT-IS-ON in your web browser and you'll be asked to log in
<kurt__> I am testing with maas and juju entirely on virtual machines
<kurt__> I actually have done that.  I see the public-address as null
<kurt__> "        public-address: null"
<benji> hmm, if it says that after "juju expose juju-gui" has been run, then I don't know what is happening
<kurt__> http://pastebin.com/JJQ4emyD
<kurt__> does the environments.yaml need to be updated with any special parameters for my case?
<benji> kurt__: the "agent-state: pending" is suspicious; "pending" means that it's still being deployed
<bac> kurt__: the agent state is pending.  you won't get an address until later.
<kurt__> suggest destroy-environment and start again?
<benji> It can't hurt, but machine 1 is still pending too, meaning that the machine isn't even fully up.  I don't know enough about juju on virtual machines to know how long that should take or what to do if it seems to take forever.
<kurt__> everything is only running the "y4na.master" node, correct?
<kurt__> sorry "ty4na.master"
<benji> I'm afraid I don't know enough about MAAS and using it with VMs to know.
<kurt__> I was looking at the juju status to figure that out
<kurt__> I'll try to destroy-environment and see if it happens again :)
<benji> machine 0 is ty4na.master; I would expect machine 1 to be a newly provisioned machine
<kurt__> yes, but I never saw it get allocated in the status
<kurt__> which is weird
<kurt__> are there any particular parameters that need to be explicitly spelled out in my environments.yaml in my case?
<benji> I'm afraid I don't know enough about deploying to VMs to know.  And I have to head out now.  I hope you figure it out.
<kurt__> thanks.  this question would generically apply to juju and not VMs :)
<kurt__> is there any kind of bug to do with juju-gui stuck in an agent-state of "pending"?  I don't see a node getting allocated for the charm.
#juju 2013-08-10
<asomething> hi all, wondering if anyone around could help make some sense of some debug info from trying to deploy a Django project from bzr using the python-django charm
<asomething> http://paste.ubuntu.com/5968323/
<asomething> the relation-joined hook fails when running: 'juju add-relation python-django postgresql:db'
<sarnold> asomething: just guessing, that -r in the relation-list command is given as the reason for the failing..
<sarnold> asomething: .. can you find documentation on the relation-list command that describes what -r is and why it's there?
<asomething> sarnold, let's see....
<sarnold> hrm, finally found some docs, https://juju.ubuntu.com/docs/authors-charm-anatomy.html
<sarnold> ... and its shorter than it could be and doesn't describe command line arguments at all. :/
<asomething> hmm... 'juju debug-hooks'  doesn't exist for juju 1.10.0.1
<asomething> guess I'll look at the  postgresql charm itself
<asomething> so it's something about the id of a related unit...  line 200-206 here: https://github.com/charms/postgresql/blob/master/hooks/charmhelpers/core/hookenv.py
<sarnold> nice
<sarnold> but the downside is I'm not seeing an easy way to remove it and leave the rest of the code alone. ;/ if it is supported in a newer version of juju, that'd be the easier thing to do, I think.
<asomething> the same set of commands will get a hello world django project running, I guess I'll just work through the postgresql charm until I see exactly where my Django code is interacting
<asomething> i was just hoping that someone in here might have seen the same thing before
<asomething> thanks for the try sarnold
<sarnold> asomething: good luck :)
<sarnold> asomething: oh!
<sarnold> asomething: maybe you can roll back to a previous version of the postgresql charm?
<sarnold> maybe there's an earlier version that would work with your version of juju?
<asomething> worth trying
<sarnold> maybe not _fun_... but worth trying.
<MACscr> So do charms have to be deployed on their own instance or can you choose to install some of them on the same virtual machine or physical server?
<mattrae_> MACscr: depends on which juju provider you use. there are juju providers for MAAS for bare metal installs, Openstack and ec2
<mattrae_> MACscr: right now if you want to deploy multiple charms to a single machine or vm, you can use jitsu
<MACscr> mattrae_: interesting, i wasn't really 100% aware of juju providers
<mattrae_> MACscr: in newer juju releases, deploying multiple charms to a single node will put those charms in lxc containers, so the charms don't conflict
<MACscr> ive only watched a few videos
<MACscr> ah, thats cool
<MACscr> so is there a management interface for managing these instances after they are deployed?
<mattrae_> MACscr: there is juju-gui, which is a charm that you can deploy as well
<MACscr> also, is there a limit to what is compatible with what? Aka, i see that juju can deploy openstack and it can also deploy ceph. Can it deploy ceph as the storage backend for openstack?
<mattrae_> MACscr: the juju gui is similar to what you see when you go to jujucharms.com
<mattrae_> MACscr: it depends on the charms, but the openstack charms and ceph charms are reviewed or written by Canonical to make sure they work together
<mattrae_> MACscr: here's an example deployment that includes openstack and ceph https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<mattrae_> MACscr: that page is describing a HA deployment, which the charms support. but you don't have to deploy that way
<MACscr> LOL, 28 servers for HA? rofl
<mattrae_> MACscr: yeah, basically every component has at least 2 nodes
<MACscr> right, but not every component needs its own node
<mattrae_> MACscr: so right now, i'd suggest putting each charm on its own vm or node
<MACscr> a lot of openstack components can go on the controller
<MACscr> so should be able to do HA with actually just 6
<mattrae_> MACscr: particularly for the openstack charms, until there is container separation. some of the charms can edit the same config files and they can conflict if you use jitsu to deploy to the same node
<MACscr> hmm, so is the LXC container thing theoretical or is there a version that currently supports it?
<mattrae_> MACscr: yeah you could do it that way, i'd watch out though for the scenario i'm mentioning where different charms may edit the same config file
<mattrae_> MACscr: i think its real, let me see if i can find the branch to use
<MACscr> I have 10 physical servers right now, plus a management server (low power for just doing deployments/vpn,etc), and then a single storage node, though im adding a second one shortly
<mattrae_> MACscr: looks like in juju-core 1.13.0, there is a 'local' provider that does the container stuff
<mattrae_> if you add the ppa:juju/devel
<mattrae_> you should be able to install juju-core
<MACscr> http://content.screencast.com/users/MACscr/folders/Snagit/media/539491bf-c5ed-49e8-88a5-d812f954176e/2013-08-02_04-54-29.png
<MACscr> thats pretty much a quick mockup of the idea im having
<MACscr> cool, thanks for the idea
<mattrae_> MACscr: sure np :)
<mattrae_> MACscr: looks like enough machines. you could potentially use a machine as a hypervisor and enlist the vms in maas.. if you need more machines in maas
<mattrae_> MACscr: heres how maas can control vms http://askubuntu.com/questions/292061/how-to-configure-maas-to-be-able-to-boot-virtual-machines
<MACscr> mattrae_: wow, you are just full of great info. Thanks so much!
<mattrae_> MACscr: for sure, np :)
<MACscr> i know there must be some huge gotcha that's going to rain on my parade that i haven't seen yet =P
<juju> hi
<marcoceppi|away> jamespage: re simplestreams data, you can spin your own image metadata to "overrule" whatever the actual data is. While I'm not 100% certain it'll work in all cases, I'm fairly confident you can have your own image-metadata which points to a different image than what simplestreams has
#juju 2013-08-11
<tal> I need some help with Juju. Anyone got a minute?
<tal> How do you save an environment for later use?
<tal> I have an exercise to make a web app and I will be using the Amazon cloud.
<tal> I don't want to keep my machines up and running constantly.. I only need them to run when I'm working on my project. Is there any way to save an environment for later use?
<drecute> Hi
<drecute> I'd like to know, how do I deploy my own app to juju?
<drecute> do I need to write it as a charm?
<marcoceppi|away> drecute: yes
<marcoceppi|away> tal: You can use the juju-gui to export an environment for later use
<marcoceppi|away> tal: but that will only export the structure, not the actual data on each node
<drecute> cool
<drecute> marcoceppi|away: please is there a doc anywhere that describes all the possible keys of metadata.yaml?
<marcoceppi|away> drecute: yes
<marcoceppi|away> drecute: https://juju.ubuntu.com/docs/authors-charm-anatomy.html
<marcoceppi|away> drecute: https://juju.ubuntu.com/docs/authors-charm-writing.html
<drecute> great
 * drecute just wondering why that is hidden
<marcoceppi|away> drecute: it's on the docs page, if you click documentation at the top of the juju.ubuntu.com site
#juju 2014-08-04
<jam1> dimitern: geoff teale is inviting you (as I understand)
<jam1> mup: whois geoff
<mup> jam1: Unknown commands are unknown.
<jam1> mup: who geoff
<mup> jam1: Can't grasp that.
<jam1> mup help
<jam1> mup: help
<mup> jam1: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
<dimitern> :D
<jam1> mup poke geoff
<jam1> mup: poke geoff
<mup> jam1: Plugin "ldap" is not enabled here.
<dimitern> jam1, it's tealeg
<ackk> hi, does juju wait for machines to shut down properly before releasing/destroying them through the provider?
<mgz> ackk: in what context?
<ackk> mgz, specifically, with the maas provider. when calling destroy-environment I don't see DHCPRELEASEs in MAAS' dhcp. I see the shutdown message on the machines being destroyed but perhaps they are powered off by maas before the shutdown has completed?
<mgz> juju doesn't generally destroy machines unless you tell it to, and a cloud's terminate-machine *is* a "shut down properly"; vms get a shutdown signal as normal
<mgz> ackk: possibly. worth asking in #maas perhaps, I'm not sure what their intended semantics on destroy-environment are, but we just tell maas to release all the machines
<ackk> mgz, so what does juju do on destroy-environment? destroy all services, then all machines?
<ackk> mgz, sorry, I'm trying to put together all the pieces :)
<mgz> nope, just releases all the machines straight away, doesn't do any fiddling around with state first
<mgz> so, no relation hooks get run etc
<ackk> mgz, ok, so in the case of maas it's just a release call
<ackk> mgz, thanks
<mgz> yup
<tedg`> jose, lazyPower, thanks guys for landing the SSL support for OwnCloud!
<Guest94660> Question about getting the peer ip after stop/start of a machine on amazon ec2. unit-get seems to get the correct local ip but relation-get -r <id> private-address <unit-id> still gets the previous peer ip
<Guest94660> do i need to overwrite the ip with relation-set?
<jose> tedg: enjoy! and please file a bug if you find anything else missing, I'll be glad to take a look at it :)
<abrimer> does anyone have experience deploying openstack havana on a cluster of IBM HS21 servers?
<tedg> jose, Cool, do you have plans to update for OwnCloud 7?
<jose> abrimer: are you using maas?
<abrimer> yes. maas and juju
<tedg> jose, Not sure that I need it, but it's version is one greater! :-)
<jose> tedg: it's definitely on the queue for approval, it's a simple change so should be in the store soon!
<jose> abrimer: cool, someone should be along to follow-up. I'm about to leave for a meeting
<tedg> Great, thanks!
<abrimer> I am having problems with getting quantum to setup the network properly
<jamespage> abrimer, I have experience of both but not together - I might be able to help
<abrimer> outstanding.
<abrimer> I have been working with maas and juju for some time now
 * jamespage listens
<abrimer> I don't think that I have setup maas networking such that the openstack juju charms have what is needed.
<jamespage> abrimer, are you having problems with accessing instances once deployed?
<abrimer> yes.
<abrimer> and the networking is not setup for eth1 at all
<jamespage> abrimer, on the neutron-gateway?
 * jamespage assumes you are deploying with neutron
<abrimer> I truly believe that I don't have the deployment lifecycle setup properly
<abrimer> yes. neutron via the quantum charm
<jamespage> abrimer, good - so the neutron-gateway/quantum-gateway charm requires two network ports
<abrimer> right. and I only have eth0 configured via maas
<abrimer> when I create a network and attach the mac for each of the compute nodes, the nova-cloud-controller node and the quantum node my public ip changes in juju
<abrimer> should I walk back my deployment all the way to how my maas is configured with regards to the networking?
<jamespage> one for traffic to compute nodes, and one to provide access to a public network
<jamespage> abrimer, eth0 only is OK for now
<jamespage> eth1 needs to be connected to the 'public access' network
<jamespage> just for the neutron-gateway node
<jamespage> abrimer, HS21 is a blade center right?
<whit> morning charmy world
<abrimer> jamespage, yes the HS21 is a blade and I have 14 of them specifically for this project.
<abrimer> all managed via the bladecenter chassis with their own cisco
<abrimer> I cannot for the life of me get the neutron networking to work,
<abrimer> everything up to the openstack-dashboard installs and I can log into horizon
<abrimer> when I first get in there is no public network setup
<abrimer> I can use the cli to create the network, subnet, and router but the instances will not get an ip assigned during initial vm boot
<abrimer> I feel that my gre setup is terribly wrong but don't know where to start
<jamespage> abrimer, the gre tunnels should run OK over the configured eth0 interfaces
<jamespage> traffic breaks out over ext-port on the neutron-gateway charm
<abrimer> I thought so too. I have assumed that following the standard maas and juju deployment would get me a working config. I know that I am doing something wrong but cannot figure what it is.
<jamespage> abrimer, you may want to bump the mtu on your network interfaces to deal with the overhead of GRE
<jamespage> (and switch)
<abrimer> I have NOT done that.
<jamespage> abrimer, it should still work without doing that
<jamespage> but you might see some issues - ping at least should work
<abrimer> the cisco that is in the IBM chassis is essentially a 2950 and will not allow for an MTU less than 1500 is that a problem?
<jamespage> bigger is better
<jamespage> 1546
<abrimer> I allowed ssh (22) and icmp for the default sec groups but ping will not work for me
<abrimer> OH. Bigger not smaller
<jamespage> yup
<abrimer> I thought that 1400 was my target.
<abrimer> I know enough to be dangerous at this time. I have the cli commands down pat for nova, neutron, and ovs but I think that my overall understanding of the network layouts is deficient
<jamespage> abrimer, you can drop the instance MTU via the neutron-gateway charm but its not 100% reliable
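The arithmetic behind the MTU advice above can be sketched quickly (the overhead figures are standard GRE-over-IPv4 numbers, not taken from the conversation):

```shell
# GRE encapsulation adds an outer IPv4 header (20 bytes) plus a GRE header
# (4 bytes), so for instances to keep a standard 1500-byte MTU the underlay
# must carry at least 1524 bytes of IP payload. jamespage's 1546 also covers
# the 14-byte Ethernet header with a little slack.
INSTANCE_MTU=1500
GRE_OVERHEAD=$((20 + 4))
echo "minimum underlay MTU: $((INSTANCE_MTU + GRE_OVERHEAD))"
```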
<abrimer> is multi-host networking something that I would want? or is that only applied if using nova-networking?
<jamespage> only nova net
<abrimer> cool. thought so but wanted to be sure.
<abrimer> are there any other gotchas that I may want to eyeball beyond MTU?
<abrimer> It has been my understanding that using maas and juju should provide a working stack without my having to dig into the servers and manually config. Is that a true statement?
<jamespage> yes
<jamespage> you should be able to do everything via charm configuration
<jamespage> abrimer, but that does assume that the neutron-gateway charm is on a node networked correctly
<abrimer> thought so. again, I think that there is something (probably simple and obvious) that I am doing wrong here. Just cant put my finger on it.
<abrimer> right. both eth interfaces are available for that blade and if I load an OS alone it is pingable, ssh, scp the works.
<jamespage> abrimer, ok
<jamespage> abrimer, when you have deployed, can you get to all the servers?
<abrimer> I think that I will put the MTU to 1546 as you recommend and see where it leads me.
<abrimer> jamespage, yes all servers are available
<jamespage> abrimer, you only need to apply that to eth0
<abrimer> even when eth0 is my maas ip address?
<jamespage> abrimer, not sure I understand that
<jamespage> by default, all the machines should provision with a configured eth0
<abrimer> sorry. when the server is installed maas provides the ip 10.10.30.X
<jamespage> abrimer, awesome
<jamespage> abrimer, so on the compute and gateway nodes run "sudo ovs-vsctl show"
<xwang2713> hhh
<jamespage> and see if gre tunnels are present for all compute and gateway nodes
<abrimer> right, and I see the br-tun and the local_ip and remote_ip are on the 10.10.30
<abrimer> they are correct for each compute node in regards to the remote-ip address
<jamespage> abrimer, that's good
<jamespage> abrimer, on the neutron-gateway node check that br-ex has a network port
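A minimal sketch of those checks, run on the gateway node (bridge names follow the charm defaults; eth1 is an example NIC):

```shell
sudo ovs-vsctl show              # br-tun should list a gre port per peer node
sudo ovs-vsctl list-ports br-ex  # should include the external NIC, e.g. eth1
```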
<abrimer> I setup eth1 with a 10.10.20 so that there is external access and I manually change ovs to that interface
<jamespage> abrimer, you don't need to do it directly on the server
<abrimer> OH
<jamespage> juju set neutron-gateway ext-port=eth1
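Expanded a little, the charm-driven approach looks like this (service name as used above; eth1 is assumed to be the NIC cabled to the public network):

```shell
juju set neutron-gateway ext-port=eth1  # the charm attaches eth1 to br-ex itself
juju status neutron-gateway             # then wait for the unit to settle again
```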
<whit> cory_fu: having some connectivity issues, relocating, but then want to catch up about monit start/stop stuff
<cory_fu> kk
<abrimer> I know that this is simple stuff, probably very elementary to you. I appreciate your help so much
<jamespage> abrimer, np
<abrimer> I have examined the openstack docs with regards to the networking and have looked at the troubleshooting document, do you have any advice regarding how to troubleshoot gre tunnels without setting up instances?
<abrimer> you know, using the cli to examine the network
<lazyPower> tedg: all in a days work. *hattip*
 * whit relocates
<jamespage> abrimer, neutron agent-list
<jamespage> is useful - sorry have to drop for a bit
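A few CLI-only checks along those lines, needing no instances at all (assumes admin credentials are sourced, e.g. from an openrc file):

```shell
neutron agent-list   # every agent should report as alive
neutron net-list     # confirm the external and tenant networks exist
neutron router-list  # and the router joining them
```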
<jamespage> also doing meetings today
<abrimer> no problem jamespage, thanks for all of your help man.
<jamespage> yw
<lazyPower> Tribaal: Thanks for these CH merges. +1'd and merged upstream.
<Tribaal> lazyPower: welcome :)
<Tribaal> lazyPower: I have a few more things to hack on CH and charms using it... it seems there was a lot of organic growth lately, that leaves some room for clearing things up :)
<lazyPower> Yeah, may want to ping tvansteenburgh and marcoceppi - as they are spearheading some new efforts with CH to make it more beginner friendly and address some of our longer running issues
<Tribaal> lazyPower: ah? Is there anywhere where I could track discussions regarding CH? Despite being pretty involved in Ubuntu's Openstack plans I never see anything about it anywhere.
<lazyPower> We talk about them occasionally in here. Mostly though that's been discussed during our standups and in the bugs filed against CH
<Tribaal> ok
<Tribaal> that's not very convenient for "external" communication however :/
<Tribaal> one thing at the time, first, let's get that code in better shape :)
<tvansteenburgh> Tribaal: question about https://code.launchpad.net/~tribaal/charm-helpers/drop-juju-gui-dead-code/+merge/228986
<tvansteenburgh> did you actually check every charm in the store to make sure none are using that code?
<frobware> how can I prevent add-machine adding a proxy. I added one at some point in the past, but it was bogus. Now every time I `add-machine' I see this: http://pastebin.ubuntu.com/7953894/
<frobware> ah, I see: juju --debug unset-env apt-http-proxy
<frobware> I was using set-env apt-http-proxy=
<Tribaal> tvansteenburgh: no, I haven't checked *all* of the charms, indeed. I tried to find a list of all charms, but couldn't find one. I would be happy to have a script check every charm methodically if I could be shown such a list :)
<Tribaal> tvansteenburgh: I checked the most obvious ones (juju-gui being one of them), and decided to apply the "see who complains" approach, TBH
<tvansteenburgh> Tribaal: yeah, i think that approach is probably fine for this. i can't decide how much i care
<Tribaal> tvansteenburgh: either way, the juju-gui part needs a lot of work if it has to stay - it contains duplicates of a lot of functionality (for example, it reimplements the apt-get wrappers)
<Tribaal> so I chose the path of least resistance :)
<tvansteenburgh> Tribaal: it's impossible to verify that no charm uses that code anyway, b/c charms can exist outside the store
<tvansteenburgh> Tribaal: but it seems low risk to remove since charmhelpers are bundled with each charm. that won't always be the case though
<Tribaal> tvansteenburgh: exactly. If somebody complains, I wager a lot of the functionality they actually need is either 1) available somewhere else in CH or 2) trivial to reimport...
<Tribaal> tvansteenburgh: I'm happy to be pointed at and mocked if somebody comes back at us for removing that (and fix the mess, too).
<Tribaal> :)
<tvansteenburgh> lol
<Tribaal> well, ok, "happy" might not be the right term here
<tvansteenburgh> Tribaal: fair enough. i'm pinging in #juju-gui to see if anyone there complains, if not...
<tvansteenburgh> ok they don't care either, i'll approve it
 * Tribaal conjures the image of a guillotine in his mind. His French heritage approves.
<tvansteenburgh> kirkland: have you tried transcode on ec2?
<tvansteenburgh> kirkland: more to the point, do you know if it works there? everything deploys fine but i can't get the web ui to come up
<sebas5384> hey lazyPower!
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: lazypower & tvansteenburgh || News and stuff: http://reddit.com/r/juju
<lazyPower> Whats ups sebas5384
<sebas5384> sorry for the delay hehe i saw your messages about the dns server like today
<sebas5384> hehe
<lazyPower> O, yeah! its a great PoC charm, needs a bit more love
<lazyPower> it'll work great in dev though - where your DNS can afford to not be HA
<sebas5384> lazyPower: i understand your point, but for a local dev environment
<sebas5384> its more than enough, what do you think?
<lazyPower> certainly!
<sebas5384> great! so, thinking in a local dev environment, into a vagrant box, how do you think it should be used?
<lazyPower> are you distributing your services between many vagrant boxes?
<lazyPower> its not going to work in that sense if you are. It will work in a single LXC (or cloud) based environment, as all the services are in a single deployment map. You would then add the DNS service charm IP to your /etc/resolv.conf - all your domains are then available
<lazyPower> so that will work in terms of Vagrant machine juju host -> to service. i havent tried parent of vagrant -> vagrant -> service. It wasn't intended to be used within vagrant, more so for like EC2 environments, or LXC local based environments. I would think if you proxy your DNS into that vagrant machine, you can reach them.
<lazyPower> but that needs testing.
<abrimer> jamespage, are you available for a question?
<tvansteenburgh> jose you around?
<jose> tvansteenburgh: hey! I am
<tvansteenburgh> jose, you tested the transcode charm a bit right?
<tvansteenburgh> jose, did you run it on ec2?
<jose> tvansteenburgh: I did, yes! I could test it again if needed
<jose> yep, EC2
<tvansteenburgh> and it worked fine?
<tvansteenburgh> i can't get the web ui to come up
<jose> oh, is the port listed open in juju status?
<jose> I recall the port not being called as open with open-port
<tvansteenburgh> ah, ok. i'll check that, gotta redeploy my env
<tvansteenburgh> that was probably it though, thanks
<jose> np :)
<jose> I'll give it a shot again in a minute, just finishing up a text
<tvansteenburgh> jose, in that case, you wanna just leave your feedback here? https://bugs.launchpad.net/charms/+bug/1342843
<jose> sure
<tvansteenburgh> jose, no sense both of us testing it - i'll move on to something else
<tvansteenburgh> jose, thanks!
<jose> no prob!
<jose> tvansteenburgh: hey, would you mind confirming the behavior I'm seeing? I'm just getting exit code 5 whenever I try transcode
<tvansteenburgh> jose: where do you see that? in one of the unit logs?
<jose> tvansteenburgh: correct, in the first unit logs, it never gets to convert it
<jose> lemme pastebin
<jose> http://paste.ubuntu.com/7955549/
<tvansteenburgh> jose: ok, i'll give it a go
<jose> thanks :)
<tvansteenburgh> jose: my config-changed hook ran successfully
<jose> wait, this is after doing 'juju set transcode input_url=link to video'
<tvansteenburgh> yeah
<tvansteenburgh> i used http://download.blender.org/demo/old_demos/diditdoneit.mpg
<tvansteenburgh> jose: http://ec2-54-166-1-51.compute-1.amazonaws.com/transcode/job__diditdoneit.mpg_copy/
<jose> I used this one
<jose> https://ia600201.us.archive.org/14/items/ligouHDR-HC1_sample1/Sample.mpg
<tvansteenburgh> i'll try it
<tvansteenburgh> exit 5
<jose> hmm, probably because it's httpS?
<tvansteenburgh> jose: yeah it worked for me with http
<jose> ok, I'll take a look later
<jose> will try with some other links
<tvansteenburgh> i'm gonna add a couple comments to the bug before i EOD, thanks for testing this!
#juju 2014-08-05
<jcastro> https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
<jcastro> thumper, ^^^
<thumper> bac, fwereade: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
<bac> thumper: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
<bac> thumper: no, https://wiki.canonical.com/InformationInfrastructure/IS/Mojo
<stub> Tribaal: I think the non-corosync leader election stuff is still racy, in that you can have two or more units that think they are the leader running hooks at the same time.
<Tribaal> stub: interesting, but how can that work?
<Tribaal> stub: seems like "I am the unit with the smallest unit number" should be relatively easy to determine?
<Tribaal> stub: or do you mean it races with the peer list fetching?
<stub> A three unit cluster, units 2 and 3 have joined the peer relationship and happily running hooks. unit 1 is finally provisioned and joins the peer relation
<Tribaal> ah
<Tribaal> smartass units :)
<stub> Last I checked, it is impossible to elect a leader reliably if you create a service with more than 2 units
 * stub looks for the bug number
<Tribaal> yeah, seems very dodgy to do so. I guess the documentation should reflect that, but the comments are still valid
<Tribaal> stub: can we query the juju state server for the list of peers?
<Tribaal> :)
<stub> Tribaal: I haven't looked into unsupported mechanisms :)
<Tribaal> stub: hehe
<stub> Tribaal: I'm just sticking with the 'create 2 units, wait, then add more' as a documented limitation until juju gives us leader election
<stub> https://bugs.launchpad.net/juju-core/+bug/1258485
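The race stub describes can be illustrated with a toy model (purely illustrative; a real charm would read the peer list inside relation hooks):

```shell
# Each unit elects the lowest unit number it can currently see in the peer
# relation. Units 2 and 3 agree on unit 2 as leader; when the slowly
# provisioned unit 1 finally joins, leadership silently changes, so two
# units can briefly both believe they are the leader.
elect_leader() { printf '%s\n' "$@" | sort -n | head -n 1; }

before=$(elect_leader 2 3)    # peer view before unit 1 joins
after=$(elect_leader 1 2 3)   # peer view after unit 1 joins
echo "leader before unit 1 joins: $before, after: $after"
```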
 * Tribaal looks into how complex a corosync setup is
<stub> Let me know, that might solve my issues too...
<Tribaal> stub: seems like it would be generally useful, yes. seems like a job zookeeper would have handled well though
<Tribaal> sorry if I'm breaking a taboo :)
<stub> I think juju has the information we need, it just needs to be exposed to the charms ;)
<Tribaal> stub: yeah
<Tribaal> stub: ohh
<Tribaal> stub: I think I have an idea :)
<Tribaal> stub: I'll give it a spin when I'm on the beach this week and see if it can work
<stub> Tribaal: I've proven to myself that it is impossible, and nobody has yet corrected me, but you are more than welcome to prove me wrong :)
<stub> My test suite seems guaranteed to trigger the race conditions :)
<Tribaal> stub: sweet!
<Tribaal> stub: a reproducible race is half the battle already
<Tribaal> so, corosync uses multicast it seems
<Tribaal> that comes with its own set of problems
<tvansteenburgh> jacekn: hi, i'm working the charm review queue this week, do you have any updates for https://code.launchpad.net/~jacekn/charms/precise/rabbitmq-server/queue-monitoring/+merge/218580 ?
<jacekn> tvansteenburgh: sorry no another team took over this project
<jacekn> tvansteenburgh: I will let them know
<tvansteenburgh> jacekn: ok thanks
<bigtree> I am having an issue with the juju mongodb filling up my 8gb micro sd card -- is there a way I can periodically flush this db?
<jamespage> dimitern, http://paste.ubuntu.com/7961799/
<jamespage> dimitern, http://paste.ubuntu.com/7961802/
<khuss> i'm creating a new charm my-nova-compute and it has to be installed on top of nova-compute. This means my-nova-compute has to be installed after installing nova-compute on the same machine. What kind of relationship can I use to achieve this?
<rbasak> sinzui: did you sort that source tarball for me, please?
<rbasak> sinzui: I was having connectivity issues, so don't know if I missed a URL.
<sinzui> rbasak, I am so sorry. I forgot. http://juju-ci.vapour.ws:8080/job/build-revision/1666/
<rbasak> sinzui: no problem. Only getting to it now, as I wait on some very slow mysql tests :-/
<rbasak> sinzui: are you free in eight minutes? The TB meeting has had some questions about Juju upstream QA for the exception request.
<rbasak> sinzui: looks like it's dragged on for a while. If you could answer their questions, that might speed things up.
<rbasak> sinzui: #ubuntu-meeting-2
<sinzui> rbasak, I don't have time, sorry. I am sprinting and debating at this moment
<rbasak> sinzui: OK, I'll try and do what I can.
<hatch> anyone know why I would get this error when trying to bootstrap using local?
<hatch> WARNING ignoring environments.yaml: using bootstrap config in file "/home/vagrant/.juju/environments/local.jenv"
<hatch> 1.20.1-saucy-amd64
<jcw4> hatch: I believe that's just a warning letting you know it's using the local.jenv instead of the environments.yaml
<jcw4> hatch: if the local.jenv doesn't exist juju will create it the first time using environments.yaml as the template
<jcw4> hatch: but after the local.jenv has been created, any changes in that section of the environments.yaml won't get picked up
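Put another way, a sketch of forcing juju to re-read environments.yaml (only do this for an environment you have actually torn down; the path matches the warning above):

```shell
juju destroy-environment local      # tear the old environment down first
rm ~/.juju/environments/local.jenv  # drop the cached generated config
juju bootstrap                      # regenerates local.jenv from environments.yaml
```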
<hatch> ohh ok, it subsequently fails with:
<hatch> ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
<hatch> so I thought that might have been the problem
<jcw4> hatch: hmm, that seems like an unrelated error.  Not sure what that one is
<hatch> here is the full output https://gist.github.com/hatched/5849510b38afac01b6cf
<hatch> not sure if that helps at all heh
<jcw4> hatch: interesting.  The WARNING unknown config field "shared-storage-port" bit is interesting
<jcw4> hatch: but I'm not sure it's related either
<jcw4> hatch: I'm suspecting lxc issues maybe
<jcw4> hatch: can you 'juju destroy-environment local' and 'juju bootstrap' again?
<hatch> yeah i have to use --force though because it seems to have created a 'partial' env
<hatch> the same issue happens
<jcw4> hatch: hmm
<hatch> yeah I'm at a loss at how to debug this heh
<jcw4> hatch: I'm afraid I don't know much more than that.  What does 'sudo lxc-ls --fancy' show?
 * jcw4 grasping at straws
<hatch> a fancy empty table :)
<jcw4> hmm; that's interesting.  I would expect at least one row
<hatch> after destroying?
<abrimer> jamespage, are you available for a question?
<hatch> jcw4 well thanks for the help, I'll keep poking around
<jcw4> hatch: yeah, I think the 'juju-*-template'  would stay around
<jcw4> hatch: yw... good luck :)
<hatch> thanks - I'll need it haha
<jcw4> hatch: lazyPower or marcoceppi or someone else may know better, if they're available right now
 * lazyPower reads scrollback
<lazyPower> hatch: do you have the juju-plugins repository added?
<lazyPower> there's a plugin to help clean this up and get you to a known good state - fresh from the cloud. juju-clean
<hatch> lazyPower not sure....
<hatch> unrecognized command
<hatch> so probably not
<lazyPower> https://github.com/juju/plugins
<lazyPower> install instructions are in the README. just clone and add to $PATH
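The README steps amount to roughly the following (clone location is just an example):

```shell
git clone https://github.com/juju/plugins.git ~/juju-plugins
echo 'export PATH="$PATH:$HOME/juju-plugins"' >> ~/.bashrc
source ~/.bashrc
juju clean   # plugins are discovered as juju-<name> executables on $PATH
```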
<hatch> oh ok will try
<themonk> how to view unit log in amazon instance?
<lazyPower> themonk: either juju debug-log, or cat/tail/less it in /var/log/juju/unit-service-#.log
<themonk> ok thanks :)
<hatch> lazyPower I don't want to jinx it but it appears to be working now....
<lazyPower> woo
<hatch> so...was that caused by the upgrade path or something?
<hatch> any idea why it was broken?
<lazyPower> hard to say
<lazyPower> local provider can be picky
<hatch> is this plugins stuff in the docs? I couldn't find it, it definitely should be :)
<lazyPower> nope
<lazyPower> its very unofficial atm
<themonk> lazyPower, its not there; i have /var/log/juju-themonk-local but it only has the local unit logs. i want the amazon instance unit log
<lazyPower> themonk: you need to juju ssh to the unit, then look for it in /var/log/juju
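As a concrete sketch (the unit name is an example, not from the conversation):

```shell
juju ssh mysql/0                        # the log lives on the unit's machine
tail -f /var/log/juju/unit-mysql-0.log  # inspect it there
# or stream everything from the client instead:
juju debug-log
```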
<lazyPower> bbiaf, lunch
<themonk> ok got it
<natefinch> man, memtest is not fast
<hatch> natefinch you're sure having bad luck lately :)
<natefinch> probably same problem as before... I just thought it wasn't hardware, since the live disk worked, but maybe it's something specific to booting
<lazyPower> natefinch: no sir
<lazyPower> memtest is slooowwwww especially when you have quite a bit of it.
<sarnold> heh, reminds me of the first time using it on a machine with 16 gigs.. "oh haha look how long this is going to take! *wait five minutes* oh. this is annoying."
<lazyPower> haha, seems about right
<natefinch> took an hour... no errors in the first pass though
<abrimer> Can anyone help me with my quantum configuration for openstack using maas and juju?
<npasqua> Hello all. Does anybody have experience using the hacluster charm? We received agent-state-info: 'hook failed: "ha-relation-changed/joined"' on most subordinates.
<abrimer> jamespage, do you have a minute to help me with my quantum issue?
<themonk> I am not getting anything after hitting amazon public-address i cant ping too !!!!
<themonk> amazon dashboard shows me that the instances are running
<lazyPower> themonk: did you expose it?
<themonk> lazyPower, yes
<lazyPower> did you validate your security groups were modified to actually open the ports?
<lazyPower> and its not some hiccup on the AWS API side of things?
<themonk> 30 min ago it was ok
<lazyPower> did your units public address change on you?
<themonk> i just redeploy my charm
<themonk> i use --to 2 so public address should not change
<themonk> and it remain same
<themonk> i just exposed another service and i cant access it now too !!!
#juju 2014-08-06
* Topic unset by jcastro on #juju
<jcastro> heya lazyPower
<jcastro> https://bugs.launchpad.net/charms/+bug/1353535
<jcastro> can you look at this when you get a chance?
<lazyPower> jcastro: on it
<jcastro> lazyPower, <3
<jcastro> marcoceppi, lazyPower: also, can you guys rope in adeuring with that vagrant port discussion?
<jcastro> sinzui tells me he can help us fix
<lazyPower> adeuring: have a look at https://github.com/juju/juju/issues/470 when you've got time.
<adeuring> lazyPower: already working on it. At least a simple fix should be ready tomorrow
<lazyPower> nice
<lazyPower> adeuring: you're the man. hi5
<jcastro> adeuring, thanks man, that's awesome!
<adeuring> lazyPower: thanks :) But right now I've just changed the port used for the gui. but vagrant also supports a kind of "port collision detection and resolution". It would be nice to use that too -- but I have no clue how the guest machine could see what its port config is. And that would be needed if we want to enable the collision detection
<lazyPower> adeuring: I'm somewhat familiar with what you're talking about - i'm pretty sure its a mapping. You give it options to use on the host, the port on the guest is always static.
<lazyPower> so the host could be say, 8000, 8001, or 9090 - you pass it that array and it attempts. if fails it cycles to the next port. once its exhausted all options it bails out.
<lazyPower> the idea is you dont know what the host environment has occupied, but the guest is always expecting to use the same port.
<adeuring> lazyPower: the problem is: the server on port 6080 redirects to 8001 (or some other port in the future). And this redirecting server needs to know what the currently used port is
<lazyPower> this sounds like a job for configuration management, and a tuneable config option.
<adeuring> lazyPower: perhaps. My main problem: As I understand it, vagrant can select a new port on each run of "vagrant up". So the redirecting server needs to know what the currently selected port for the main gui server is
<lazyPower> this is all guest based configuration. The HOST is the only port that's likely to change in this scenario right?
<adeuring> in other words: some process on the guest needs to get information on how it was configured by the host
<lazyPower> hmm... maybe i'm not looking at this correctly
<lazyPower> adeuring: i'll wait for your patch submission - you said likely tomorrow?
<lazyPower> sub me to the MP please. I'd love to look this over so I know whats changing.
<adeuring> lazyPower: right. Two patches actually: One for lp:jujuredirector/quickstart, the other for the build scripts
<lazyPower> ok. Thanks for the heads up adeuring. I'll keep an eye out for the e-mails when it happens.
<rbasak> marcoceppi: could you review my answer in http://askubuntu.com/questions/506647/juju-and-openstack-manual-provisioning/507690#507690 please? I think it's accurate, but I want to make sure.
<frobware> I keep running into issues with keystone and agent-state-info: 'hook failed: "shared-db-relation-changed"'. Further debug-log stuff: http://pastebin.ubuntu.com/7971540/  Any clues as to why that now fails? I say now because for some hours this afternoon everything has been fine.
<mfa298> frobware: I've had similar issues when deploying a HA cluster but doing a juju resolved --retry keystone/<instance> on the failed one after the primary has finished all it changes seems to fix it.
<mfa298> My understanding is when setting up the relationships only one instance can do some of the work (setting up the DB). If the other instances try and get that config before those parts are setup you get an error
<frobware> mfa298: I can juju ssh keystone/0 and run the command in the output without error
<frobware> mfa298: so maybe some impatience in my script: I have all the deploy and add-relation's run without any intervening sleeps...
<mfa298> even with pauses you might get errors.
<frobware> mfa298: so repeatedly running 'resolved' should resolve this?
<mfa298> If you've deployed 2 instances of keystone, added HA cluster then add the relationship to mysql. You'll find that only one keystone instance creates the DB etc (I think that's the oldest instance) but both  will try and get the DB config. If the 2nd instance tries to get the config before the DB is created (very likely to happen) then it gets an error
<frobware> mfa298: no HA here. just single instances.
<mfa298> ah ok, the shared-db error seemed to imply you had some sort of HA. But I'm by no means an expert (I've been trying to get it working with HA).
<frobware> mfa298: I was curious as to why ssh'ing and running the commands always succeeds, yet any invocation of --retry yields the error in the pastebin
<mfa298> I'm not sure about that. I've only seen errors like that with the HA bits
<frobware> mfa298, this is with: 1.18.1-trusty-armhf.  I could try 1.20.
<frobware> mfa298: ah well, so it's resolved. I've been banging on the --resolved action for about 30 mins prior to talking here. Seems a long time but 'status' reports 'started'. Thanks anyway.
<frobware> mfa298: heh, so pretty much all services now report the same error. I'm confused...
<frobware> mfa298: Once you enter a 'agent-state: error' do you have to resolve manually or does juju retry automatically?
<mfa298> I dont think juju does anything else until something is prodded - but I don't know enough to be sure what actions can lead to things being fixed.
<mfa298> I've mostly just used resolved --retry and if that fails try and work out what config is missing, destroy the environment and start again.
<mgz> you can also destroy-machine --force as a next step after resolved which is less of a big hammer than taking down the whole env
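The escalation path discussed here can be summarised as a sketch (unit, machine, and environment names are illustrative):

```shell
juju resolved --retry keystone/0     # gentlest: re-run the failed hook
juju destroy-machine --force 3       # harsher: remove just the stuck machine
juju destroy-environment <env-name>  # big hammer: start over
```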
<mfa298> for me at this point it's a good test of the documentation and how repeatable things are.
<mfa298> as a side note what's the best way to report bits missing in the documentation (e.g. needing to set secret for openstack-dashboard when doing HA)
<frobware> mgz, mfa298: for the some of the other things that are failing I get "Access denied for user 'glance'@'192.168.2.251' (using password: YES)".
<frobware> And I wonder how relevant this still is: http://www.tikalk.com/alm/solution-mysql-error-1045-access-denied-userlocalhost-breaks-openstack
<marcoceppi> rbasak: great answer!
<rbasak> marcoceppi: thanks! Just wanted to check it was accurate. I'm not really a charmer - you know about the experimental stuff I've been thinking about though.
<rbasak> Not had time to look at that again recently :-/
<marcoceppi> rbasak: yeah, I gave it a look and had an email drafted a while ago. Got really busy and forgot to reply. In short: I love it
<rbasak> marcoceppi: I made some more progress since then. I really want to sort it out when I can.
<rbasak> I've convinced myself that it'll work, so I need to do a more refined PoC next.
<jose> did anything happen to the channel topic?
<lazyPower> jose: it was 'jorged'
<jose> well, expected
#juju 2014-08-07
<lazyPower> http://askubuntu.com/questions/507521/juju-charm-relation-hooks-are-not-running/508158#508158
<mbruzek> jose are you there?
<mbruzek> jose https://bugs.launchpad.net/charms/+source/owncloud/+bug/1352484
<khuss> i have a charm B that is subordinate of charm A.  Then I added a new unit of charm A. How do I add the subordinate B to this new unit
<marcoceppi> khuss: it will happen automatically
<khuss> marcoceppi: yes.... it did. Took a while but eventually it happened
<tvansteenburgh> lazyPower: i can't get on the standup, sry
<lazyPower> tvansteenburgh: ack. Any updates from abently on your tracking card?
<tvansteenburgh> lazyPower, yes actually, will update the group once my POS isp gets back to normal
<lazyPower> ack. ty
#juju 2014-08-08
<jcastro> bloodearnest, hey
<jcastro> bloodearnest, noodles775: evan is asking (We're at a sprint) on how we test the ansible bits in a charm?
<bloodearnest> jcastro: not easily, other than actually running it, which is a downside to using ansible currently
<bloodearnest> jcastro: the 2 things I do do are:
<bloodearnest> 1) use ansible-playbook --syntax-check to check for typos
<bloodearnest> 2) unit test any templates standalone
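The second point above, unit testing templates standalone, can be sketched like this. This uses the stdlib `string.Template` as a stand-in (ansible charms actually use Jinja2 templates, and the template string and variables here are hypothetical), but the idea is the same: render the template directly and assert on the output, without running ansible at all.

```python
from string import Template

# Hypothetical charm config template, rendered and checked without ansible.
# Real charms would load a Jinja2 template file instead of this stand-in.
template = Template("listen_port: $port\nworkers: $workers\n")
rendered = template.substitute(port=8080, workers=2)

# Assert on the rendered output, the same way a charm unit test would
assert "listen_port: 8080" in rendered
print(rendered)
```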
<jcastro> ack, thanks, exactly what we needed to know, thanks.
<bloodearnest> jcastro: we've talked about spending some time seeing if we can write something that mocks out ansible at some level (it's all python, after all)
<bloodearnest> jcastro: if it's any help, my lint make target is usually: https://pastebin.canonical.com/115051/
<bloodearnest> jcastro: in general, once I have a unit up, I find that edit the playbook on the unit with debug-hooks, then using this plugin to save those changes, works pretty well for fast feedback: https://pastebin.canonical.com/115053/
<lazyPower> gnuoy: Are you around/avail?
* jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: lazypower & tvansteenburgh || News and stuff: http://reddit.com/r/juju
<lazyPower> kentb: Ping
<kentb> lazyPower, hey
<lazyPower> Hey Kent! I'm reviewing the Dell Open Server Management charm in the queue - there's no way i can stub out the calls and validate without a dell power server is there?
<kentb> lazyPower, right. you'll probably only get so far with it.  probably as far as the package installation but the dataeng service most likely won't start without a server
<kentb> (a dell server that is)
<lazyPower> kentb: I can do a full charm review, but without being able to validate this charm i'm not comfortable promoting it to a recommended charm. It can live in a personal namespace - but i'd need access to the physical hardware to get full validation to recommend this particular charm.
<lazyPower> this is sad to me, because it looks like an excellent utility for people running dell power edge servers
<kentb> lazyPower, I'll make a note and see if maybe we can loan you one or get you access to one somehow
<lazyPower> kentb: that would be excellent. For now i'm going to wrap this review as a CR, outline any concerns about not being able to promote under current conditions - when you've settled a path forward ping me directly here and I'll be more than happy to context switch into testing the charm.
<kentb> lazyPower, ok. sounds good
<jose> mbruzek: ping
<lazyPower> jose: he just bailed for lunch
<jose> ah, gotcha
<lazyPower> something I can help with?
<jose> lazyPower: nope, just wanted to confirm the bug he filed for owncloud, and that an MP is in the queue now
<jose> thanks though :)
<lazyPower> link?
<jose> as it was using the repo by default and it grabs the latest version ever, there was a path change and a sed failed, but MP is there
<jose> https://code.launchpad.net/~jose/charms/precise/owncloud/update-to-7.0.1
<mbruzek> Hello jose
<jose> hey mbruzek!
<jose> I found out what caused the problem
<mbruzek> yeah?
<jose> I was focusing on 7.0.0, and 7.0.1 was released maybe a couple days or hours ago? so basically the Sabre path changed and caused the sed to fail
<jose> the config options for 'src' were still pointing to 7.0.0, but as the repo gives you the latest one, it was grabbing 7.0.1, causing that
<mbruzek> jose ahh
<mbruzek> jose what about comment #3, with the ini error?
<jose> I haven't been able to reproduce that one
<jose> my config-changed ran successfully
<mbruzek> Ok let me give it another go.
<mbruzek> Thanks for turning this change around for me.
<mbruzek> I appreciate it.
<jose> no problem, my pleasure
#juju 2014-08-09
<SP33D> Hello friends! First of all, respect: your demos look really great and awesome. I want to contribute better docker support, but first of all :D I need to get it running in 2 situations
<SP33D> i don't understand how to deploy the gui to the local machine that runs juju, or to the laptop
<SP33D> the laptop maybe via ssh?
<SP33D> :D
<SP33D> but i can query the api that works great :D
<SP33D> hello friends, it's time to repeat my question, to fix my understanding of what's going on and why I have no debug info :D
<SP33D> my question is: how do I deploy juju charms to my local pc? isn't there another option besides lxc, which doesn't work?
<SP33D> maybe the manual option?
#juju 2015-08-03
<blr> noted today that $LANG is unset in a hook execution context, this can cause problems for some python libraries that (arguably incorrectly) rely on a default system encoding e.g. calling codecs.open()
<blr> commented on https://github.com/juju/juju/issues/133 marcoceppi, any thoughts on resolving that one?
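A hedged illustration of the failure mode blr describes (file name and contents are hypothetical): code that leans on the system default encoding, e.g. via `locale.getpreferredencoding()`, typically gets ASCII when `$LANG` is unset, so it breaks on non-ASCII data inside a hook. Naming the encoding explicitly sidesteps the environment entirely.

```python
import codecs
import locale
import os
import tempfile

# With $LANG unset in a hook execution context, the locale-derived default
# encoding is typically ASCII ('ANSI_X3.4-1968'), so any I/O that relies on
# it can fail on non-ASCII data. The robust pattern: name the encoding.
print(locale.getpreferredencoding(False))  # whatever this environment gives

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with codecs.open(path, "w", encoding="utf-8") as f:
    f.write(u"caf\u00e9")

with codecs.open(path, encoding="utf-8") as f:  # explicit, $LANG-independent
    text = f.read()
```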
<jose> Odd_Bloke, rcj: ping
<jamespage> gnuoy, cinder is sufferring from inadequet patching in its unit tests
<jamespage> they work ok on a real machine - but on a virt machine (like the test environment)
<jamespage> vdb is a real device :-)
<jamespage> I've worked around this for now by prefixing device names with 'fake'
<jamespage> but it will need a wider review - I'll raise a bug task for it
<jamespage> for 15.10
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/cinder/unit-test-fixes/+merge/266692 review please :-)
<jamespage> resolves the cinder test failures for now
<gnuoy> jamespage, thanks, merged
<Odd_Bloke> jose: Pong.
<jamespage> gnuoy, anything else I can help with?
<gnuoy> jamespage, well if you wanted to create a skeleton release note I wouldn't hold it against you ...
<jamespage> gnuoy, ok
<jamespage> gnuoy, dosaboy, beisner: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507
<gnuoy> ta
<jamespage> gnuoy, deploy from source needs coreycb's input - is he back today?
 * jamespage goes to look
<gnuoy> jamespage, looks like we have charm upgrade breakage.
<gnuoy> rabbit is refusing connections from neutron-gateway after the upgrade (invalid creds). investigating now
<gnuoy> Actually, it's refusing connections from everywhere by the looks of it
<jamespage> gnuoy, that would indicate a crappy rabbitmq upgrade methinks
<jamespage> gnuoy, is that on 1.24?
<jamespage> gnuoy, I'll take a peek
<jamespage> gnuoy, I think I see a potential bug in the migration code in peerstorage
<gnuoy> jamespage, sorry, was stuffing food in my face
<gnuoy> jamespage, yes, 1.24
<gnuoy> jamespage, am all ears vis-a-vis the peerstorage migration bug. I still have my environment so can supply additional debug if helpful
<jamespage> gnuoy, L97
<jamespage> there is a relation_get without a rid
<gnuoy> ah
<jamespage> so I think that may cause a regeneration of all passwords if called outside of the cluster relation context
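The failure mode jamespage is describing can be sketched as follows. All names here are hypothetical, not the actual charm-helpers peerstorage code: a `relation_get` with no `rid` reads whichever relation triggered the current hook, so when the migration runs outside the cluster relation context the stored password is missed and a fresh one gets generated.

```python
# Buggy form (sketch):  value = relation_get('password', unit=local_unit)
# Fixed form (sketch):  pin the read to the cluster relation id explicitly.
def migrate_password(relation_get, local_unit, cluster_rid, generate_password):
    value = relation_get('password', unit=local_unit, rid=cluster_rid)
    # Only generate a new password when none was actually stored
    return value if value is not None else generate_password()

# Stubbed demonstration: reading via the right rid finds the existing secret
store = {('cluster:0', 'unit/0'): {'password': 'sekrit'}}
rget = lambda attr, unit, rid: store.get((rid, unit), {}).get(attr)
print(migrate_password(rget, 'unit/0', 'cluster:0', lambda: 'NEW'))  # sekrit
```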
<jamespage> gnuoy, grrr - redeploying as dns foobar
<gnuoy> jamespage, it definitely looks like the password has changed in rabbit, using "rabbitmqctl change_password" to set it back to what it was seems to fix things.
<jamespage> gnuoy, the password changed in rabbit, or the password changed on all the relations?
<jamespage> that's my hypothesis
<gnuoy> jamespage, the password changed in rabbit
<gnuoy> jamespage, well actually, what I'm saying is, the password in the client config and the password advertised down the relations are the same but they don't seem to equal the actual password rabbit has for the user
<jamespage> gnuoy, yah
<jamespage> that matches my theory - just trying to prove it
<gnuoy> kk
<jamespage> gnuoy, there is no code in the charm that changes passwords in rabbit, but it would ignore a change triggered by a broken migration - that would propagate out to related services, but not reflect the actual password
<beisner> o/ good morning
<beisner> gnuoy, afaict, reverse dns is/was a-ok.  host entries are coming and going with instances.
<gnuoy> beisner, it worked straight through on my bastion with the only error being trusty/git/icehouse. I've scheduled another run but it's been in the queue for ~4hours
<beisner> gnuoy, however, i have observed that due to rmq-funk in serverstack, some messages are really delayed.  that is observable in that serverstack-dns may not always have the message back and the reverse dns record added by the time the instance is already booted and on its way.  :-/
<beisner> gnuoy, saw this as well on ddellav's bastion as we were t-shooting a failed re-(re)deploy
<jamespage> my dns appears foobarred right now
<jamespage> I thought I just fixed it up
<beisner> gnuoy, throttle is way down.  if we turn it up to have more concurrency, serverstack gives us error instances.
<beisner> i just removed 6 error instances from last night (which induced some job fails)
<jamespage> beisner, indeed - I have partial entries for my dns
<gnuoy> jamespage, I don't follow the scenario you outlined. broken migration?
<gnuoy> I assume you don't mean db migration
<jamespage> gnuoy, yeah - the migration incorrectly missed the peer relation data, so generates a new password
<jamespage> gnuoy, peer -> leader migration
<gnuoy> oh, yes, of course that migration
<gnuoy> jamespage, so rabbit is pushing out a new password to the clients without actually changing the password for the user to the new value?
<jamespage> yeah
<gnuoy> oh /o\
<jamespage> I think that's the case, but can't get an env up right now
<jamespage> beisner, gnuoy: it would appear notifications are going astray somewhere on serverstack
<beisner> jamespage, oh yeah ... also observable in not always getting an instance;  juju sits at "allocating..."
<beisner> meanwhile nova knows nothing of the situation
<beisner> but, on the jobs ref'd in bugs, i've run, re-run, and re-confirmed that things went well for those runs, afaict.
<jamespage> beisner, hmm
<beisner> jamespage, gnuoy - mojo os-on-os deploy test combos all pass.  bear in mind, that just fires up an instance on the overcloud, checks it, and tears down.  http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner/
<beisner> so there's a \o/ !
<beisner> jamespage, gnuoy - the bare metal equivalent of that ^ is also almost all green.  re-running a T-K fail.  http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner_baremetal/
<jamespage> gnuoy, do we have a bug open for the rmq upgrade problem?
<jamespage> the password def gets missed during the migration
<gnuoy> jamespage, nope, I'll create one now
<beisner> jamespage, gnuoy:  fyi just deployed T-I/next.  vgs and lvs come back "no volume groups found."   added to bug 1480504
<mup> Bug #1480504: Volume group "cinder-volumes" not found <amulet> <openstack> <uosci> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1480504>
<gnuoy> jamespage, Bug #1480893
<mup> Bug #1480893: Upgrading from stable to devel charm breaks clients <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1480893>
<jose> Odd_Bloke: hey, I'm getting some errors with the ubuntu-repository-cache charm, the start hook is failing
<jose> let me run and do a pastebin of the output
<jamespage> gnuoy, dosaboy: added some detail to that bug - I need to take an hour out - maybe dosaboy could look at a fix in the meantime?
<jamespage> otherwise I'll pickup when I get back
<gnuoy> jamespage, dosaboy, I can take a look
<jamespage> gnuoy, ta - i think the migration code needs to switch to always resolving using the rid for the cluster relation - or get passed that from higher up the stack (it's not currently)
<gnuoy> kk
<jose> Odd_Bloke: lmk once you're back around please
<Odd_Bloke> jose: o/
<jose> Odd_Bloke: hey. I'm getting an error on the start hook of the ubuntu-repository-cache charm, says 'permission denied' for /srv/www/blahblah
<jose> I'm having some issues with GCE right now so haven't been able to launch the instance
<Odd_Bloke> Oh, hmph.
<Odd_Bloke> Let me see if I can reproduce.
<jose> cool
<jose> I'll try to run again
<Odd_Bloke> jose: Are you using any config, or just the defaults?
<jose> Odd_Bloke: defaults here
<Odd_Bloke> jose: Cool, waiting for my instances now. :)
<jose> I wish I could say the same...
<Odd_Bloke> :p
<Odd_Bloke> jose: I'm seeing a failure in the start hook; let me dig in to it.
<Odd_Bloke> Some of the charmhelpers bits changed how they do permissions, so it's probably an easy fix.
<jose> cool, I thought that but wasn't sure
<Odd_Bloke> jose: Do you have a recommendation for quickly testing new versions of charms?  Is there something I can do with containers, or something?
<jose> Odd_Bloke: oh, definitely! wall of text incoming
<jose> so, ssh into the failing instance. then do sudo su. cd /var/lib/juju/agents/unit-ubuntu-repository-cache-0/charm/hooks/
 * Odd_Bloke braces for impact.
<jose> edit start from there
<jose> then save your changes and do a juju resolved --retry ubuntu-repository-cache/0
<jose> and if it goes well it should go out of error state
<jose> just copy the exact same changes you did on the unit to your local charm and commit + push
<jose> DHX should be a good tool too, but I can't give much insight on how it works and its usage
<coreycb> jamespage, hey I'm back, need input for something?
<Odd_Bloke> Hmph, I'm sure we saw this problem before and I fixed it.
<Odd_Bloke> I guess I did trash my old charm-helpers merge branch, which might have been where I fixed it.
<jose> probably missed that one bit :)
<jamespage> coreycb, yeah - could you check the deploy from source release notes pls?
<jamespage> coreycb, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507
<jamespage> gnuoy, how far did you get?
<gnuoy> jamespage, so...
<gnuoy> I don't think we should specify rid any higher up
<jamespage> gnuoy, ?
<gnuoy> since leader_get is supposed to mimic leader-get
<jamespage> gnuoy, well in the scope of peerstorage, its whatever we make it :-)
<jamespage> as we have a wrapper function there
<gnuoy> jamespage, as for line 98, peer_setting = _relation_get(attribute=attribute, unit=local_unit(), rid=valid_rid)
<gnuoy> does fix it
<jamespage> yah
<gnuoy> jamespage, if you use relation_get you get an infinite loop which is fun
<jamespage> gnuoy, I was thinking - http://paste.ubuntu.com/11993006/
<jamespage> less the debug
<jamespage> gnuoy, this has potential to impact of pxc and stuff right?
<gnuoy> jamespage, yes the whole caboodle
<jamespage> grrr
<jamespage> gnuoy, in fact I'm surprised everything else is still working :-)
<gnuoy> jamespage, +1 to your fix given the point you make about the scope of leader_get in peer storage
<jamespage> gnuoy, ok working on that now
<coreycb> jamespage, notes look good, I made a few minor tweaks.
<jamespage> dosaboy, gnuoy: https://code.launchpad.net/~james-page/charm-helpers/lp-1480893/+merge/266712
<jamespage> that should sort-out the out-of-cluster context migration of peer data to leader storage
<gnuoy> jamespage, I'm surprised lint isn't sad about rid being defined twice
<gnuoy> jamespage, err ignore me
 * jamespage was already doing that :-0
<jamespage> gnuoy, lol
<dosaboy> jamespage: reviewed
<jamespage> dosaboy, gnuoy jumped you and landed that
<jamespage> dosaboy, I actually think leader_get should not be exposed outside of peerstorage
<jamespage> its an internal function imho
<jamespage> the api is peer_retrieve
<jamespage> which deals with the complexity
<dosaboy> jamespage: yup fair enough
<jamespage> gnuoy, want me to deal with syncing that to rmq?
<gnuoy> jamespage, well, we should sync it across the board
<jamespage> gnuoy, +1
<jamespage> gnuoy, got that automated yet?
<gnuoy> ish
<gnuoy> beisner, looks like it's time for another charmhelper sync across the charms. I'll do that now unless you have any objections?
<beisner> gnuoy, +1 also ty
<Odd_Bloke> jose: My units seem to get stuck in 'agent-state: installing'; any idea how I can work out what's happening?
<jose> Odd_Bloke: juju ssh ubuntu-repository-cache/0; sudo tail -f /var/log/juju/unit-ubuntu-repository-cache-0.log (-n 50)
<jose> that gives you the output of your scripts
<Odd_Bloke> jose: I haven't even got the agent installed yet, so my scripts haven't started.
<axino> Odd_Bloke: you'll have to go on the GCE console
<axino> Odd_Bloke: and look at "events" (or something) there
<jose> Odd_Bloke: oh, huh. if there's a machine error, then juju ssh 0; sudo tail /var/log/juju/all-machines.log
<jose> axino: it's probably best to take a look at all-machines.log, last time when I went to the gce console machines simply weren't there and I couldn't find a detailed answer on what was going on :)
<axino> jose: there was nothing in all-machines.log last time I had issues :( just events in GCE console (which are a bit hard to find, I must say)
<Odd_Bloke> Oh, perhaps I misunderstand the status output.
<jose> I'm still learning how to deal with GCE :)
<Odd_Bloke> jose: OK, looks like I've fixed it.
<Odd_Bloke> Let me push up a MP.
<jose> woohoo! \o/
<Odd_Bloke> jose: https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-perms/+merge/266724
<jose> taking a look
<Odd_Bloke> jose: So host.mkdir creates parents, so that line is unnecessary, and forces permissions to something that is broken.
<Odd_Bloke> jose: So we can just lose that line.
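A stdlib analogue of the fix Odd_Bloke describes (path is hypothetical): `os.makedirs`, like charmhelpers' `host.mkdir`, creates the whole parent chain in one call, so a second, redundant mkdir that forced different permissions on the tree can simply be dropped.

```python
import os
import tempfile

# os.makedirs creates every intermediate directory in one call, the same way
# charmhelpers' host.mkdir does; a separate, earlier mkdir that re-set
# permissions on the parents is the line the merge proposal removes.
base = tempfile.mkdtemp()
target = os.path.join(base, "srv", "www", "cache")  # hypothetical path
os.makedirs(target, mode=0o755)
print(os.path.isdir(target))  # True
```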
<jose> as long as it works we're good :P
<Odd_Bloke> jose: It does. :)
<jose> 'running start hook'
<Odd_Bloke> Oh, you're _testing_ it?
<Odd_Bloke> Pfft.
<jose> I am :)
<jose> need to
<Odd_Bloke> :)
<Odd_Bloke> As well you should.
<sto> Is anyone working on a charm to install designate? I want to try it on my openstack deployment and I'll be happy to test a charm instead of installing it by hand (I have no experience writing charms right now)
<jose> sto: I'm sorry, but I don't know what designate is. maybe you have a link to its website?
<sto> jose: it is an openstack service https://github.com/openstack/designate
<jose> oh
<sto> And it is already packaged
<jose> unfortunately, I don't see a designate charm on the store. sorry :(
<jose> but maybe an openstack charmer can work on it? :)
<jamespage> gnuoy, I'm going to switch to liberty milestone 2 updates - pull me back if you need hands
<sto> Yes, I know that there is no charm on the store, that's why I was asking... ;)
<jamespage> its not critical but would like to push it out soonish
<gnuoy> ok, np
<beisner> gnuoy, just lmk when the c-h sync is all pushed, and i'll run metal tests.  probably with some sort of heavy metal playing.
<gnuoy> beisner, crank up Slayer, c-h sync is all pushed
<beisner> gnuoy, jamespage - wrt that cinder bug, it's with the default lvm-backed storage where i'm seeing breakage.  works fine with ceph-backed storage.  bug updated with that lil tidbit.
<gnuoy> sto I heard people talking about creating a charm but I'm not sure it ever got past the hot air stage
<beisner> gnuoy, awesome thanks
<beisner> gnuoy, isn't that on our list-o-stuff to add more official support for in the os-charms?
<gnuoy> I think Barbican and Designate were high on the list
<beisner> yep that sounds right.
<jose> Odd_Bloke: woohoo! it looks like it deployed cool!
<jose> I'm gonna give it a quick test ride and merge
<sto> gnuoy: ok, thanks, I guess I'll install it by hand on a container to see how it works
<jose> Odd_Bloke: woot woot! works works works works!
<gnuoy> jamespage, beisner Trusty Icehouse, stable -> next upgrade test ran through cleanly. thanks for the patch Mr P.
<beisner> gnuoy, jamespage  \o/
<beisner> gnuoy, do you have a modified upgrade spec to deal with qg:ng?
<gnuoy> beisner, yes, I'm running from lp:~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha
<Odd_Bloke> jose: \o/
<jose> Odd_Bloke: should be merged. thanks a bunch for the quick fix, really appreciated!
<Odd_Bloke> jose: No worries, thanks for the quick merge. :)
<beisner> gnuoy, these guys didn't get a c-h sync, is that by design?:  n-g, pxc, n-ovs, hacluster, ceph-radosgw, ceph-osd
<gnuoy> beisner, I'll check, they may not be using the module that changed (but I'd have thought pxc was tbh)
<gnuoy> beisner, sorry about that, done now (no change for n-ovs)
<gnuoy> oh, cause it did work the first time
<jcastro> marcoceppi: oh hey I forgot to ask you if everything with rbasak/juju in distro is ok?
<jcastro> anyone need anything from me?
<marcoceppi> jcastro: I have to fix something in the packaing and upload it, I'm about to do a cut of charm-tools and such so I'll fix those then
<jcastro> ack
<jamespage> beisner, suspect that regex is causing the issue - reconfirmning now
<beisner> jamespage, ack ty
<beisner> beh.  look out, gnuoy, jamespage - i just got 11 ERROR instances on serverstack ("Connection to neutron failed: Maximum attempts reached")
<jamespage> beisner, sniffs like rmq
 * beisner must eat, biab...
<apuimedo> lazyPower:
<lazyPower> apuimedo: o/
<apuimedo> lazyPower: how are you doing?
<lazyPower> Pretty good :) Hows things on your side of the pond?
<apuimedo> warm
<apuimedo> :-)
<apuimedo> lazyPower: I have a charm that at deploy time needs to know the public ip it will have
<apuimedo> usually what I was doing was add a machine, and then knowing the ip change the deployment config file
<lazyPower> apuimedo: unit-get public-address should get you situated with that though
<apuimedo> but I was wondering if it were possible in the install script to learn the public ip
<apuimedo> ok
<apuimedo> that's what I thought
<apuimedo> and it's the same the other machines in the deployment will see it with, right?
<apuimedo> lazyPower: so unit_public_ip should do the trick
<apuimedo> hookenv.unit_public_ip
<lazyPower> yep
<lazyPower> and looking at the source, that wraps unit-get public-address :)
<apuimedo> ;-)
<apuimedo> thanks
<lazyPower> np apuimedo :)
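The exchange above boils down to something like this sketch (the stub IP and the `run` parameter are hypothetical; `unit-get` itself only exists inside a hook execution context, which is why the demonstration injects a fake runner):

```python
import subprocess

# Per lazyPower: hookenv.unit_public_ip is a thin wrapper around the
# `unit-get public-address` hook tool. A sketch of that wrapper:
def unit_public_ip(run=subprocess.check_output):
    return run(['unit-get', 'public-address']).decode().strip()

# Outside a hook context we can only exercise it with a stub runner
fake = lambda cmd: b'203.0.113.7\n'
print(unit_public_ip(run=fake))  # -> 203.0.113.7
```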
<beisner> gnuoy, jamespage - the heat charm does have a functional usability issue, though not a deployment blocker, nor a blocker for using heat with custom templates.  that is, the /etc/heat/templates/ dir is just awol.   bug 1431013   looks to have always been this way, so prob not crit for 1507/8 rls.
<mup> Bug #1431013: Resource type AWS::RDS::DBInstance errors <amulet> <canonical-bootstack> <openstack> <uosci> <heat (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1431013>
<lazyPower> ejat-: Hey, how did you get along this weekend? I wound up being AFK for a good majority.
<beisner> gnuoy, jamespage, coreycb ... aka ... ^ our "one" remaining tempest failure to eke out  ;-)    http://paste.ubuntu.com/11994632/
<coreycb> beisner, would that fixup the rest of the failing smoke tests?
<beisner> see paste ... we are down to that 1
<beisner> after some merges and template tweaks today
<beisner> coreycb, i'm installing heat from package in a fresh instance just to see if the templates dir is awol there (ie. without a charm involved).
<beisner> coreycb, yeah so these files and this dir don't make it into /etc/heat/templates when installing on trusty.  http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/heat/trusty/files/head:/etc/heat/templates/
<coreycb> beisner, that's awesome, down to 1
<coreycb> beisner, might be a packaging issue
<beisner> coreycb, yeah, woot!
<beisner> coreycb, and ok, bug updated, she's all yours ;-)
<coreycb> beisner, thanks yeah I'll dig deeper later, need to get moving on stable kilo
<beisner> coreycb, yep np.  thanks!
<beisner> gnuoy, 1.24.4 is in ppa:juju/proposed    re: email, when you next exercise the ha wip spec(s), can you do that on 1.24.4?
<ddellav> jamespage, your requested changes have been made and all tests updated: https://code.launchpad.net/~ddellav/charms/trusty/glance/upgrade-action/+merge/265592
<jamespage> beisner, reverting that regex change resolves the problem
<jamespage> with cinder
<beisner> jamespage, ah ok.  i can't find context on that original c-h commit @ http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/revision/409
<jamespage> beisner, [daniel-thewatkins] Detect full disk mounts correctly in is_device_mounted
<beisner> jamespage, yep, saw that, but was looking for a merge proposal or bug to tie it to.
<beisner> (being that that fix breaks this charm)
<beisner> jamespage, i suspect context is:  http://bazaar.launchpad.net/~charmers/charms/trusty/ubuntu-repository-cache/trunk/view/head:/lib/ubuntu_repository_cache/storage.py#L131
<beisner> s/is/was/
<jamespage> beisner, I'm actually wondering whether that charm-helpers change has uncovered a bug
<jamespage> beisner, huh - yeah it does
<beisner> ooo oo a cascading bug
<jamespage> beisner, /dev/vdb was getting missed on instances, so got added to the new devices list before
<jamespage> no longer true
 * jamespage scratches his head for a fix
<jamespage> beisner, the charm does not have configuration semantics that support re-using a disk that's already mounted
<jamespage> beisner, overwrite specific excludes disks already in use - its a sorta failsafe
<jamespage> beisner, I could do a ceph type thing for testing
<beisner> jamespage, ok so vdb is mounted @ /mnt, and with that c-h fix, the is it mounted helper actually works.  whereas all along we've just  been clobbering vdb?  is that about right?
<jamespage> beisner, yup
<beisner> jamespage, ok i see it clearly now.
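A rough sketch of the behaviour change being discussed (not the real charm-helpers `is_device_mounted` code; the mount-table lines are made up): after the fix, a device counts as mounted when the whole disk appears in the mount table, not only when one of its partitions does, which is why the charm could no longer silently clobber `/dev/vdb` mounted at `/mnt`.

```python
# Sketch: a device is "mounted" if the whole disk or any numbered partition
# of it shows up as the source field of a mount-table entry.
def is_device_mounted(device, mounts):
    sources = (line.split()[0] for line in mounts)
    return any(src == device or
               (src.startswith(device) and src[len(device):].isdigit())
               for src in sources)

mounts = ["/dev/vda1 / ext4 rw 0 0", "/dev/vdb /mnt ext4 rw 0 0"]
print(is_device_mounted("/dev/vdb", mounts))  # True: full-disk mount detected
print(is_device_mounted("/dev/vdc", mounts))  # False: not mounted at all
```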
<jamespage> beisner, ok - testing something now
<jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803
<jamespage> testing now
<beisner> sweet.  oh look you even updated the amulet test.  i was just thinking:  i'll need to update a config option there.
<beisner> jamespage, if this approach is what we stick with, i'll update o-c-t bundles
<jamespage> beisner, how else would I test my change? ;)
<beisner> jamespage, well that's the shortest path for sure!
<jamespage> beisner, longer term filesystem_mounted should go to charm-helpers
<jamespage> but for tomorrow here is fine imho
<beisner> jamespage, ack
<jamespage> beisner, passed its amulet test for me
<jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803
<jamespage> gnuoy, ^^ or any other charmer
<jamespage> beisner, I've not written a unit test which makes me feel guilty
<jamespage> but I need to sleep
<marcoceppi> jamespage: idk, lgtm
<beisner> jamespage, lol
<jamespage> marcoceppi, ta
<beisner> jamespage, yes, i believe this will do the trick.   thanks a ton.  i've updated and linked the bug.
<marcoceppi> jamespage: maybe just default ephemeral-mount to /mnt ?
<jamespage> marcoceppi, meh - I'd prefer to keep it aligned to ceph
<marcoceppi> jamespage: and I really don't care enough either way
<jamespage> marcoceppi, just in case someone did have /mnt mounted as something else :-)
<jamespage> marcoceppi, and really did not want it unmounted
 * marcoceppi nods
<jamespage> marcoceppi, this is really a testing hack
<marcoceppi> jamespage: yeah, I see that in the amulet test you updated
<jamespage> beisner, ok - going to land that now
<beisner> jamespage, yep +1
<jamespage> beisner, done - to bed with me!
<jamespage> nn
<beisner> jamespage, thanks again.  and, Odd_Bloke thanks for fixing that bug in is_device_mounted.
<Odd_Bloke> beisner: :)
<moqq> how do i deal with an environment that seems completely stalled? when i try 'juju status' it just hangs indefinitely
<marcoceppi> moqq: is the bootstrap node running?
<marcoceppi> what provider are you using?
<moqq> marcoceppi: yes, machine-0 service is up. manual provider
<marcoceppi> moqq: can you ssh into the machine?
<moqq> yep
<marcoceppi> moqq: `initctl list | grep juju`
<moqq> marcoceppi: http://pastebin.com/dUqwsTez
<marcoceppi> moqq: sweet! VoltDB
 * marcoceppi gets undistracted
<moqq> haha
<marcoceppi> moqq: try `sudo restart jujud-machine-0`
<marcoceppi> give it a few mins
<marcoceppi> then juju status
<marcoceppi> also, are you out of disk space? `df -h`?
<moqq> no plenty of space. and restarting the service to no avail, have cycled it a good handful of times
<marcoceppi> moqq: have you cycled the juju-db job as well?
<marcoceppi> that's the next one
<moqq> yeah
<marcoceppi> moqq: time to dive into the logs
<marcoceppi> what's the /var/log/juju/machine-0 saying?
<moqq> marcoceppi: http://pastebin.com/KWDXACvD
<marcoceppi> moqq: were you running juju upgrade-juju ?
<moqq> yeah at one point i tried to and it failed
<marcoceppi> moqq: from what version?
<marcoceppi> moqq: this may be a bug that was fixed recently, and if so there's a way to recover still
<moqq> 1.23.something -> 1.24.4
<moqq> iirc
<marcoceppi> moqq: what does `ls -lah /var/lib/juju/tools` look like?
<moqq> marcoceppi: http://paste.ubuntu.com/11996021/
<marcoceppi> moqq: this should help: https://github.com/juju/docs/issues/539
<marcoceppi> moqq: you'll need to do that for all of the symlinks
<marcoceppi> moqq: so, stop all the agents first
<marcoceppi> then that
<marcoceppi> then start them all up again, with juju-db and machine-0 being the first and second ones you bounce
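The recovery marcoceppi points at can be simulated in a temp dir (the real links live under `/var/lib/juju/tools`, the agents must be stopped first, and the exact steps are in the linked issue; the version strings below are illustrative only): each dangling agent symlink left by the failed upgrade gets repointed at the tools directory actually present on disk.

```shell
# Simulated recovery: repoint a dangling agent tools symlink.
tmp=$(mktemp -d)
cd "$tmp"
mkdir 1.24.4-trusty-amd64              # the tools version that exists on disk
ln -s 1.23.3-trusty-amd64 machine-0    # dangling link left by the old version
ln -sfn 1.24.4-trusty-amd64 machine-0  # repoint it; repeat per agent symlink
readlink machine-0
```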
<moqq> ok thanks. on it
<beisner> coreycb, around?  if so can you land this puppy?:  https://code.launchpad.net/~1chb1n/charms/trusty/hacluster/amulet-extend/+merge/266355
<coreycb> beisner, sure, but is that branch frozen for release?
<moqq> thanks marcoceppi that did the trick!
<marcoceppi> moqq: awesome, glad to hear that. It was only a bug that existed in 1.23, so going forward you shouldn't have an issue with upgrades *related to this*
<moqq> ok excellent
<moqq> now, its gotten me to 1.24.3
<moqq> but it seems to be refusing to go to 1.24.4
<moqq> ubuntu@staging-control:/var/lib/juju/tools$ juju upgrade-juju --version=1.24.4  >>> ERROR no matching tools available
<marcoceppi> moqq: 1.24 is a proposed release
<marcoceppi> moqq: you need to set your tools stream to proposed instead of released
<marcoceppi> moqq: I'd honestly just wait until it's released (in a few days)
<moqq> iâm pretty sure i already did. juju has been constently chewing up 100% of all of our cpus
<moqq> so i was hoping the .4 upgrade would fix that
<marcoceppi> ah
<moqq> cuz if its not solved soon i have to rip out juju and switch to puppet or chef
<marcoceppi> moqq: hum, juju using 100% shouldn't happen
<marcoceppi> is there a bug already for this?
<moqq> yeah https://bugs.launchpad.net/juju-core/+bug/1477281
<mup> Bug #1477281: machine#0 jujud using ~100% cpu, slow to update units state <canonical-bootstack> <canonical-is> <performance> <juju-core:Triaged> <https://launchpad.net/bugs/1477281>
<marcoceppi> moqq: looks like this was reported with 1.23, is it still chewing 100% cpu on 1.24.3?
<moqq> it looks fine for the moment. but when i did this upgrade on the other env earlier it was fine for 20m then went back to spiking
<moqq> going to watch it
<marcoceppi> moqq: if it does spike up and start chewing 100% again, def ping me in here and update that bug saying it's still a problem, it's not targeted at a release so it's really not on the radar atm
<marcoceppi> moqq: as to your other question about 1.24.4, what does `juju get-env agent-stream` say?
<moqq> marcoceppi: ok, will do
<moqq> apparently >>> ERROR key "agent-stream" not found in "staging" environment
<marcoceppi> moqq: haha, well that's not good
<marcoceppi> well, that's not bad either
<marcoceppi> just, interesting
<marcoceppi> moqq: you could try `juju set-env agent-stream=proposed`, then another upgrade-juju (per https://lists.ubuntu.com/archives/juju/2015-August/005540.html)
<marcoceppi> but if there is no value currently it may not like that
<moqq> just gave a warning, but it set ok
<marcoceppi> moqq: well, if you feel like being daring you can give it a go
<marcoceppi> in the changelog I don't see any reference to CPU consumption
<bdx> core, devs, charmers: Is there a method by which juju can be forced to not overwrite changes to config files on node reboot?
<beisner> coreycb, nope we can land passing tests any time.
#juju 2015-08-04
<beisner> jamespage, marcoceppi - ugg.  so that merge fixed our use case, but broke all other use cases.  please review @ https://code.launchpad.net/~1chb1n/charms/trusty/cinder/next.ephem-key-error/+merge/266826
<jamespage> beisner, landed
<jamespage> ddellav, reviewed - couple of niggles - also amulet tests are failing - not had time to dig yet.
<apuimedo> jamespage: Hi
<jamespage> apuimedo, hello
<apuimedo> I was looking at the next version of nova-compute
 * jamespage nods
<apuimedo> did I get this right, that it will run nova-api-metadata on each machine if neutron-openvswitch sends it the shared metadata secret
<apuimedo> ?
<jamespage> apuimedo, yes - that was added to support neutron dvr for ml2/ovs
<jamespage> infact I think that's in the stable charm as well
<jamespage> that was a 15.04 feature
<apuimedo> aha
<jamespage> apuimedo, neutron-openvswitch will also setup and configure l3-agent and metadata-agent in that particular configuration
<apuimedo> jamespage: and the neutron-metadata-agent still runs on neutron-gateway pointing to nova-cloud-controller, right?
<apuimedo> oh, neutron's metadata-agent will run on each compute host?
<jamespage> apuimedo, neutron-metadata-agent on the gateway points to itself - it runs a nova-api-metadata as well
<jamespage> apuimedo, that's correct
<apuimedo> interesting
<apuimedo> thanks
<apuimedo> :-)
<jamespage> apuimedo, it only services requests for instances located on the same hypervisor
<apuimedo> that should help with scalability ;-)
<apuimedo> and I guess it gets the data from the rabbitmq server
<jamespage> apuimedo, yah
<jamespage> apuimedo, nova-api-metadata <-> conductor
<jamespage> that's a potential bottleneck still but it is horizontally scalable
<jamespage> I'd been considering whether we should support 'roles' for nova-cc
<apuimedo> yeah
<apuimedo> roles?
<jamespage> so you could have a scale out conductor/scheduler pool
<jamespage> with a smaller public facing set of API services
<jamespage> apuimedo, we have something like that in cinder atm
<jamespage> the cinder charm can do all roles, or just some of them, allowing this type of split
<jamespage> cinder-api/cinder-scheduler/cinder-volume
<jamespage> esp. important when using iscsi volumes
<jamespage> you want a big volume backend, with a smaller set of schedulers and api servers
<apuimedo> I see
<apuimedo> jamespage: sounds like you'll end up with a similar split as Kolla has
<apuimedo> where almost everything is a separate unit
<jamespage> apuimedo, not that familiar with kolla - sounds like I need to read
<jamespage> oh - openstack/dockers
<jamespage> apuimedo, well I guess the bonus with juju charms is we could do a kolla like single process approach, or you can opt to do more than one thing in the same container
 * jamespage likes flexibility
<apuimedo> ;-)
<beisner> o/ jamespage, gnuoy
<beisner> fyi  in for just a few min before preschool registration, then back again.
<beisner> gnuoy, i see a pile of error instances on serverstack belonging to you and to me
<beisner> jamespage, thanks for the merge
<ddellav> jamespage, hmm, weird that amulet tests are failing, i didn't change that much.
<jamespage> ddellav, that might be serverstack tbh - we're having some troubles
<ddellav> ah yea, ok
<ddellav> yea, right at the bottom of the amulet output you can see it failed due to rabbitmq: http://paste.ubuntu.com/12000161/
<beisner> jamespage - raised this before we forget about it bug 1481362   < also fyi wolsen dosaboy gnuoy coreycb
<mup> Bug #1481362: pxc server 5.6 on vivid does not create /var/lib/mysql <amulet> <openstack> <uosci> <percona-xtradb-cluster-5.6 (Ubuntu):New> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481362>
<jamespage> urgh - I think I just put ovs in a spin on all the compute hosts - sorry folks
<jamespage> fixing now
<beisner> jamespage, yeah, connectivity lost to bastions.  it's ok.  you'll make it all shiny and new i know you will.
<beisner> also just raised this against pxc re: deprecation warn on > vivid:  bug 1481367
<mup> Bug #1481367:  'dataset-size' has been deprecated, please use innodb_buffer_pool_size option instead <amulet> <openstack> <uosci> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481367>
<apuimedo|lunch> jamespage: is there some minimal openstack bundle that groups some charms in the same machine like nova-cloud-controler and neutron-api?
<jamespage> apuimedo|lunch, yes
<apuimedo|lunch> I only saw https://jujucharms.com/openstack-base/34
<apuimedo|lunch> and I don't have 17 machines around :P
<jamespage> apuimedo|lunch, oh - I was about to point to that
<jamespage> apuimedo|lunch, that only needs 4 servers
<jamespage> (check the readme)
<jamespage> its 17 units, 4 physical machines
<apuimedo> ah, I must have misread the bundle file
<apuimedo> I don't see any lxc reference
<apuimedo> in the bundle.yaml
<apuimedo> oh!
<apuimedo> jamespage: why is there a bundle.yaml and a bundle.yaml.orig?
<apuimedo> only the latter has the lxc references
<jamespage> apuimedo, it's an artifact of the charm store ingestion
<apuimedo> so the one that should be used is the  .orig, right?
<jamespage> apuimedo, although I don't believe it should scrub machine placement like that
<jamespage> apuimedo, yes
<jamespage> for now
<jamespage> rick_h_, ^^ is that right?  https://jujucharms.com/openstack-base/  - the bundle.yaml has lost the machine placement data?
<rick_h_> jamespage: looking
<jamespage> rick_h_, thats def changed - it used to be fewer machines than units, but not any longer
<rick_h_> urulama: rogpeppe1 ^ looks like lxc got lost in the transition from v3 to v4?
 * rogpeppe1 reads back
<rick_h_> rogpeppe1: basically https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig vs https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundle.yaml it lost the placement
<rogpeppe1> rick_h_: yeah, i see that now. just investigating
<rick_h_> jamespage: with the deployer release tvansteenburgh is doing today we can do true machine placement in the new bundle. We'll look into the bug, but the best way forward, once that deployer is out there, is to check out the 'machines' part in https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md
<rogpeppe1> rick_h_, jamespage: i see what has happened
<rick_h_> jamespage: and get rid of bundles.yaml and go to bundle.yaml as the only file in the bundle.
<rogpeppe1> jamespage: i wasn't aware (and i don't think it was documented) that the "to:" field in a legacy bundle could be a list as well as a string
<jamespage> eeek!
<rick_h_> rogpeppe1: urulama that's a bit :( as openstack is preparing a new release and their bundles are kind of busted atm
<beisner> coreycb, thanks for the merge :)
<rogpeppe1> i'm quite surprised that the goyaml package allowed unmarshaling of a list into a string (it seems to have just ignored it)
<urulama> rick_h_: i'd say +1 on the quick fix of using bundle.yaml until we come up with a solution
<rick_h_> jamespage: can the to: be represented as a string and repushed as a stopgap fix until the charmstore can be updated?
<coreycb> beisner, you're welcome, thanks for the updates
<rick_h_> jamespage: hmm, looks like not with ceph/nova compute
<rick_h_> urulama: the problem is that it's not supported in the deployer yet until the release is out and folks get it. Kind of rock/hard place atm. /me wonders if you can do ceph, ceph, ceph for the nova-compute one...where's that doc
<rick_h_> rogpeppe1: http://pythonhosted.org/juju-deployer/config.html#placement is the docs for that and the wordpress example has the lists.
<rogpeppe1> rick_h_: it's possible i skipped over that syntax because there were no bundles around that actually used it (that I could find - there are none in the corpus used for testing migratebundle)
<apuimedo> is it possible in bundles to define a config that applies to all the charms that share the setting
<apuimedo> like openstack-origin ?
<lazyPower> apuimedo: certainly. there's an overrides: key that you can use
<apuimedo> lazyPower: at the same level as "services" ?
<lazyPower> apuimedo: its a parent level key, same topical level as "services"
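A minimal sketch of what lazyPower describes, in the v3 deployer bundle format (charm names and the origin value here are placeholders):

```yaml
openstack:
  series: trusty
  overrides:
    # applied to every service below that declares this config option
    openstack-origin: cloud:trusty-kilo
  services:
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
    glance:
      charm: cs:trusty/glance
      num_units: 1
```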
<apuimedo> good
<apuimedo> thanks
<rogpeppe1> jamespage: in the new format, the syntax would be:
<rogpeppe1> to: ["lxc:ceph/1"]
<apuimedo> I wonder why the bundle.yaml.orig doesn't use it
<lazyPower> apuimedo: something like this - http://paste.ubuntu.com/12000543/
<rogpeppe1> jamespage: (the unit syntax being the same as juju's usual unit syntax)
<apuimedo> I seem to remember that it was in some previous versions
<apuimedo> rogpeppe1: is this new syntax on stable?
<rogpeppe1> apuimedo: "on stable" ?
<rogpeppe1> apuimedo: the new bundle syntax is considerably stripped down from the old syntax (and somewhat more general in places too)
<apuimedo> I mean the stable bundle deployer
<apuimedo> when you install juju on trusty
<rick_h_> apuimedo: it's been in trunk for a long time. tvansteenburgh is working on a release. It's not in the current trusty :(
<jamespage> beisner, is there a bit of dellstack I could use to refresh the charm-store bundle?
<apuimedo> ok
<apuimedo> rick_h_: does that mean that bundles will have to be updated? Is there some degree of backwards compatibility?
<rick_h_> apuimedo: yes, it'll support both but we'll be working on deprecating v3 and disallowing new v3 ones to the charmstore
<rick_h_> apuimedo: the format is only slightly different. It's mostly the same, just removes the top key from the file
<rick_h_> most bundles will work with a delete of the first line and dedent
<apuimedo> ok
<apuimedo> rick_h_: does the "to" allow you to put two charms on the same container scope, or only when one of them is subordinate?
<rogpeppe1> apuimedo, rick_h_: but placement is the thing that has changed most
<rogpeppe1> apuimedo: the format does. i'm not sure about the deployer implementation.
<jamespage> rick_h_, urulama, rogpeppe1: working on moving to bundle.yaml now
<rick_h_> jamespage: Makyo can help with that
<apuimedo> lazyPower: the bundles that have "0" in a bundle are deployed?
<apuimedo> the one for openstack has a few charms like that
<lazyPower> apuimedo: i'm sorry i dont follow.
<jamespage> rick_h_, Makyo: first cut here - lp:~james-page/charms/bundles/openstack-base/bundle
<jamespage> rick_h_, is there a nice way I can programmatically query for the latest charm revisions?
<lazyPower> apuimedo: can i get a link to the bundle you are looking at?
<rick_h_> jamespage: sure thing https://api.jujucharms.com/v4/ceph/meta/any
<jamespage> rick_h_, ta muchly
<apuimedo> sure
<apuimedo> lazyPower: https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig
<urulama_> jamespage: if you need just revision, you can use this as well https://api.jujucharms.com/v4/ceph/meta/id-revision
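Scripting that lookup could look roughly like this (the live curl is commented out since it needs network access; the `{"Revision": N}` shape follows the id-revision endpoint urulama_ mentions, and the charm name is a placeholder):

```shell
# Extract the Revision field from a charm store v4 id-revision response.
get_revision() {
    python3 -c 'import sys, json; print(json.load(sys.stdin)["Revision"])'
}

# Live query (needs network):
#   curl -s https://api.jujucharms.com/v4/ceph/meta/id-revision | get_revision

# Canned response, for illustration:
echo '{"Id": "cs:trusty/ceph-42", "Revision": 42}' | get_revision   # prints 42
```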
<apuimedo> you'll see that there's quite a few with units: 0
<lazyPower> apuimedo: in the case of subordinates, thats a requirement
<rogpeppe1> jamespage: that first line looks potentially spurious
<rogpeppe1> jamespage: did you mean "series: trusty" there?
<apuimedo> lazyPower: ah, good :-)
<lazyPower> however i'm not sure why neutron-openvswitch has num_units: 0
<lazyPower> its not a subordinate that i can see
<lazyPower> i lied, line 2 - it sure is
<lazyPower> so, yeah - everything in here with num_units: 0 is due to the charm being a subordinate apuimedo
<apuimedo> it is
<apuimedo> ;-)
<apuimedo> thanks lazyPower
<lazyPower> np
<apuimedo> mbruzek: I was in your ancestral town on Sunday
<apuimedo> took a panorama picture from the bus
<mbruzek> apuimedo: wow!
<mbruzek> apuimedo: Cool!
<Makyo> jamespage, First glance: first line should be 'series: trusty' and last line should be removed, but it looks okay otherwise.
<apuimedo> I wish they would have stopped so I could buy pastries
<Makyo> Er, sorry.  First line should be removed entirely, last line can stay.
<Makyo> jamespage, ^
<jamespage> got it
<jamespage> Makyo, how do I actually test this?
<Makyo> jamespage, one sec, spinning up a GUI with latest code; dragging the yaml there will run it through the bundle validation.
<Makyo> jamespage, Actually, you should be able to use demo.jujucharms.com - dragging the bundle.yaml onto the canvas should validate the bundle.
<Makyo> jamespage, (recent updates to the GUI affect committing uncommitted bundles, which is unrelated)
<jamespage> Makyo, error - console?
<jamespage> I'm such a web browser numpty - apologies
<Makyo> jamespage, oops, hmm, that's partly our fault.  That message should be changed for that site.  Running it locally, pastebin in a sec.
<mbruzek> apuimedo: please send me the picture if you would.
<apuimedo> It's a bit blurry
<Makyo> jamespage, Here's a working v4 bundle http://paste.ubuntu.com/12000801/ (v4 bundles need a machine spec, even if it's empty, to allow placement directives like "lxc:ceph/0")
<Makyo> jamespage, I validated that with https://github.com/juju/juju-bundlelib (git clone; make; devenv/bin/getchangeset bundle.yaml)
<Makyo> jamespage, The only thing that might be a problem is that I had to include juju-gui in order for one placement to work, which may not fly when deploying via the gui.
<jamespage> Makyo, updated again
<jamespage> (I wrote a small parser to query the charmstore api and update revisions)
<Makyo> jamespage, Here's one that passes validation: http://paste.ubuntu.com/12000995/ We have to use the placement directives from the older style of bundles to reference the bootstrap node (cc rick_h_ urulama )
<Makyo> jamespage, we also no longer have a top-level YAML node ('openstack-base') in the charmstore
<rick_h_> Makyo: oh, yea bootstrap node is a no no
<Makyo> rick_h_, Meaning I should take it out?
<rick_h_> Makyo: no, meaning that I expect that to not be pretty
<Makyo> rick_h_, ah, alright.  Yeah, if we're referencing the bootstrap node, we have to use v3-style placement directives, otherwise it gets lost looking for a machine named "0"
<jamespage> Makyo, ok - that's validating now
<Makyo> jamespage, Awesome
<beisner> jamespage or gnuoy - almost forgot this puppy.  please see/review/land:   https://code.launchpad.net/~billy-olsen/charms/trusty/rabbitmq-server/ch-sync-cli-fix/+merge/266619
<beisner> wolsen, fyi ^
<wolsen> beisner, last vote is that amulet fails still for it - though it does fix the import issue
<beisner> wolsen, ack.  my vote is to merge as-is, on the basis of fixing an import issue.  and address pre-existing failure of that 1 test separately.
<wolsen> beisner, but I would be +1 on removing the import - yeah
<beisner> wolsen, that may or may not still happen, but as it's syncd into all of the other os-charms, that is the state of things.
<beisner> wolsen, ie.  we won't very well be able to t-shoot the functional test with the import error.
<wolsen> beisner, if jamespage, gnuoy, or dosaboy don't respond in short order I'll land it
<beisner> wolsen, ack, tyvm
<beisner> dosaboy, wolsen - thanks fellas
<beisner> coreycb, fyi, dashboard 051-basic-trusty-juno-git is failing atm http://paste.ubuntu.com/12002116/   so far, that's the only *git* test failing in today's pre-release checks.
<coreycb> beisner, ok I'll take a look
<beisner> coreycb, ta
<beisner> wolsen, ping
<wolsen> beisner, pong
<beisner> so rmq passes locally, because the deployed instances and the bastion/host running tests are in the same ip space.   it fails in uosci, because the bastion host is on a different /24 than the deployed units.
<beisner> and...  the undercloud (serverstack) sec group looks like this:   http://paste.ubuntu.com/12002355/
<beisner> ie. packets just don't arrive
<beisner> so is my theory
<beisner> this hasn't been an issue, because no other os-charm tests attempt communication from the machine that's executing tests.
<beisner> it's all done via charm hooks and juju run, etc., which puts traffic on the same side of the fence so to speak.
 * wolsen looks at the code again
<wolsen> beisner, seems like a perfectly plausible scenario
<wolsen> what you are describing
<beisner> wolsen, well, maybe not.  the first 2 make it and check ok.  the 3rd check, which is to send to one rmq unit, then check the other rmq unit for that message, is what fails.
<beisner> idea 1 sec..
<wolsen> beisner, yeah it shouldn't be the port issue since rabbitmq-server is exposed
<wolsen> beisner, agree with your idea- I think its likely that there's a timing thing going on
<beisner> wolsen, well no dice even with a 2 min wait.
<wolsen> beisner, hmm may want to get some queue information to see what the hapolicy is on the queue
<wolsen> make sure it is what we think it is
<beisner> wolsen, i think the test is fine. i think rmq or its cluster is not ok.
<beisner> i've got the enviro stood up, can send msg and chk msg from same unit ok.   but when i send one manually to one unit, it never arrives at the 2nd.
<wolsen> beisner, well actually if you look at the "cluster status", its only reporting itself in the cluster (each of them actually)
<beisner> indeed wolsen
<beisner> oh wait this thing is hard-coding a vip @ 192.168.77.11
<beisner> wolsen, ^
<jose> I need haaaaalp! anyone knows a list of *all* (or at least most) ports that Juju uses?
<wolsen> beisner, that sounds like a bug as well, but I'm not sure its _the_ problem
<wolsen> beisner, as the clustering doesn't rely on a vip iirc
<beisner> wolsen, right, this test isn't checking based on that, but when the tests are extended, they will need to consume the vip env var as pxc and hacluster do.
<beisner> wolsen, so the earlier tests should actually fail out on this.  ie.      # Verify that the rabbitmq cluster status is correct.
<wolsen> jose, 22, 17070, and 8040 for the storage port I believe (though check your ~/.juju/environments.yaml for the storage-port)
<wolsen> beisner, well that would be dashing if it did fail on that
<beisner> lol yep wolsen
<jose> thanks wolsen
<beisner> wolsen, so this is my first dive into the rmq test.
<beisner> so if i deploy 2 rmq units, do they just know to cluster together, like bunnies?
<wolsen> beisner, well if 2 bunnies got together you'd have a lot more than a cluster :P
<wolsen> beisner, but essentially, that's the theory
<wolsen> iirc, there will be an exchange of erlang cookies and the configuration files updated
<wolsen> beisner, do you have hook logs?
<beisner> yeah but destroy, redeploying ...
<beisner> wolsen, i mean yah, we have a dozen or more jenkins jobs, all with full logs and etc pulls.
<wolsen> beisner, duh, should've looked there thx
<beisner> wolsen, lol my proposal with the delay just freakin passed test 20.  http://10.245.162.77:8080/view/Dashboards/view/Amulet/job/charm_amulet_test/5639/console
<beisner> live view ^
<wolsen> beisner, with the 2 minute delay?
<beisner> wolsen, 30s
<beisner> wolsen, the 2 min was me doing it manually
<wolsen> beisner, well most importantly - it clustered
<beisner> wolsen, i really want to refactor these tests.   we're doing cluster tests inside one named relation check, etc.
<wolsen> beisner, +1
<beisner> wolsen, and check for presence of all units in the cluster check, instead of just checking that the command succeeds.
<wolsen> yep
<beisner> bump bummmm.  30 passes
<beisner> dang. #40 failed
<sebas5384> ping jose
<sebas5384> jcastro: ping
<beisner> wolsen, for test 40 (just 2 rmq units), the cluster-relation-joined hook is never triggered (which looks to be by design).  we get an install hook, and a config-changed hook.  so afaict, it should not be expected to form a cluster.
<moqq> 1.24 doesn't use different ports does it? i ran an upgrade from 1.23 to 1.24 and all the machines worked as expected except one, and it's stuck on 1.23 and when i start the agent it fails to connect to the master on 17070
<moqq> ah
<moqq> nevermind
 * moqq facepalm
<mbruzek> hello marcoceppi.  There have been several changes to charm-helpers that I am interested in, can you make a release to pypi?
<mbruzek> or authorize me to do such a thing
<marcoceppi> moqq are you still having high CPU issues on 1.24?
<elopio> Hello.
<elopio> I need to install a deb in my tarmac machine, but the tarmac charm doesn't allow for this.
<elopio> should I extend the charm to get deb urls to install, or just remember to install it manually in case I have to redeploy it?
<marcoceppi> elopio: extending the charm is one way, another way, if it's just a deb that needs to be added, is to create a subordinate charm that only has an install hook which installs that deb
<marcoceppi> that way you don't have to fork the main charm and you can pack any customizations into that subordinate charm
<elopio> marcoceppi: I'll try sending my change upstream. If they don't apply it soon, the subordinate seems good.
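marcoceppi's subordinate-with-only-an-install-hook idea, sketched out (the charm name and deb URL are placeholders; the container-scoped juju-info relation is what lets the charm attach to any principal):

```shell
# Build a minimal subordinate charm whose only job is installing one deb.
cd "$(mktemp -d)"
CHARM=extra-deb
mkdir -p "$CHARM/hooks"

cat > "$CHARM/metadata.yaml" <<'EOF'
name: extra-deb
summary: install one extra deb next to a principal charm
description: subordinate with a single install hook
subordinate: true
requires:
  host:
    interface: juju-info
    scope: container
EOF

cat > "$CHARM/hooks/install" <<'EOF'
#!/bin/sh
set -e
# placeholder URL -- point this at the deb you actually need
wget -O /tmp/extra.deb http://example.com/extra.deb
dpkg -i /tmp/extra.deb || apt-get install -fy
EOF
chmod +x "$CHARM/hooks/install"
```

With juju 1.x local repositories, something like `juju deploy --repository=. local:trusty/extra-deb` followed by `juju add-relation tarmac extra-deb` would then attach it to the principal.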
<jcastro> sebas5384: yo!
#juju 2015-08-05
<beisner> jamespage, i totally missed your msg re: dellstack / charmstore bundle.  it's all clear, done with metal stuff, and paused those jobs so they don't clobber ya.
<jamespage> beisner, ta
<urulama> jamespage: morning. we've started on fixing the code for bundle v3 to v4 migration. could you point me or rogpeppe1 to the final openstack bundle.yaml you and Makyo came up with yesterday, please. it'll serve as a basis. ty
<rogpeppe1> urulama, jamespage: i'd prefer to get the bundles.yaml (v3 format) so i can use it as part of the test corpus
<jamespage> rogpeppe1, urulama: has both - lp:~james-page/charms/bundles/openstack-base/bundle
<rogpeppe1> jamespage: thanks
<gnuoy> jamespage, beisner net split deployed for Trusty/Icehouse and guest successfully booted and accessed
<gnuoy> beisner, I think https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239 is ready to land now if you get any time for a review
<jamespage> rogpeppe1, urulama: I think my v4 bundle.yaml is good now - do I just need to drop the v3 version and push to the official charmers branch to magically make everything work again?
<rogpeppe1> jamespage: it looks to me as if the new v4 support does not support your placement
<rogpeppe1> jamespage: :-\
<jamespage> rogpeppe1, I just tested it with the latest juju-deployer and it all looks OK to me
<rogpeppe1> jamespage: oh, that's great then!
<jamespage> rogpeppe1, I am of course making the assumption that v4 support in deployer == v4 support elsewhere.
<urulama> jamespage: i think that if v4 bundle.yaml is present, the v3 bundle is not taken for migration, so it doesn't matter if it's there or not
<rogpeppe1> jamespage: i can't quite see *how* it works, because AFAICS there's explicit logic to rule out placements of the form "lxc:ceph/2"
<jamespage> rogpeppe1, http://paste.ubuntu.com/12005743/
<rogpeppe1> jamespage: you're using deployer revision 151, right?
<jamespage> rogpeppe1, I'm using 0.5.0, as pulled off pypi yesterday
<rogpeppe1> jamespage: yup, seems like the one
<rogpeppe1> jamespage: interesting
<jamespage> rogpeppe1, if you see that in my bundle, you don't have the latest copy btw
<jamespage> I had to switch / -> =
<rogpeppe1> jamespage: in the v4 bundle?
<jamespage> rogpeppe1, yes
<rogpeppe1> jamespage: hmm, that shouldn't work
<jamespage> rogpeppe1, that's what Makyo told me to do last night
 * rogpeppe1 has a look
<rogpeppe1> jamespage: ok, i see what's happening
<rogpeppe> jamespage: the deployer thinks it's a v3 bundle
<rogpeppe> jamespage: ... but that doesn't make sense either, because it hasn't got top level bundles; but maybe it has heuristics for that
<rogpeppe> jamespage: your bundle is missing a machines section (all machines mentioned in the placement must be declared)
<rogpeppe> jamespage: if you put that in, i think the deployer will recognise it as a v4 bundle
<rogpeppe> jamespage: ... and then the deployment will fail as i expected
<rogpeppe> jamespage: so if you try uploading the bundle to the charm store, it will fail because it's not in valid v4 syntax
<rogpeppe> jamespage: (i see "invalid placement syntax "lxc:ceph=1" (and 9 more errors)" when i try parsing your bundle)
<jamespage> rogpeppe, Makyo gave me this to validate things yesterday:
<jamespage> ./juju-bundlelib/devenv/bin/getchangeset  bundle.yaml
<jamespage> that generates a changeset afaict
 * rogpeppe fetches juju-bundlelib
<rogpeppe> jamespage: this is what i was using to validate: http://paste.ubuntu.com/12005806/
<rogpeppe> jamespage: that's using the same logic that the charm store will use to validate the bundle (except that the charm store also verifies that the charms exist in the store)
<rogpeppe> jamespage: so, line 297 of jujubundlelib/validation.py:
<rogpeppe>    is_legacy_bundle = machines is None
<mnk0> yooooo
<Odd_Bloke> jose: We just hit https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-full-disk-formatting/+merge/267011 with a partner, if you wouldn't mind merging. :)
<mnk0> too many options
<mnk0> juju / kubernetes/ aws elastic beanstalk
<mnk0> how to find the right choice :(
<gnuoy> beisner, I'd really like to add tempest to the mojo tests, did anyone write a charm off the back of the spec you were cooking up?
<beisner> gnuoy, not yet.   and yep i want to add that as well.  i have a local wip for that (basically to do what we do on the other uosci runs, until a tempest charm exists).
<beisner> gnuoy, we've just gathered use cases and wishlists from stakeholders, which i think gives a pretty good view into what we want the charm to do.
<jose> Odd_Bloke: just woke up. will take a look in a few mins and test!
<Odd_Bloke> jose: Thanks!
<Odd_Bloke> jose: I've patched the partner in situ, so it's not burning hot.
<jose> oh, great.
<Odd_Bloke> jose: So get breakfast and a coffee. ;)
<jose> does chocolate milk work? :P
<beisner> jamespage, gnuoy - ok, metal deploys are underway with juju/proposed 1.24.4
<jamespage> beisner, awesome-o
<beisner> jamespage, gnuoy - we can now flip that bit in uosci for mojo runs (juju ppa stable|devel|proposed)
<beisner> gnuoy, +1 on https://code.launchpad.net/~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha/+merge/266239
<gnuoy> \o/ thanks
<beisner> yw, thank you too!
<beisner> gnuoy, which will lead to a rebase and merge on uosci's temp fork @ https://code.launchpad.net/~1chb1n/ubuntu-openstack-ci/mojo-runner-enhance/+merge/265726
<beisner> gnuoy, but i just did a merge test of yours into mine, and there were no conflicts, so i should have that ready in short order.
<beisner> oops wrong link on mine up there
<beisner> https://code.launchpad.net/~1chb1n/openstack-mojo-specs/net-id-ext-port-fix
<beisner> ^ that'll be the one to land after yours, pending lint test...
<beisner> oh heck, it's already merged lol.
<beisner> ha!  where was that on 7/31?
 * beisner points uosci back at os mojo spec trunk
<beisner> gnuoy, er umm, thanks for the merge ;-)
<gnuoy> np :)
<beisner> jamespage, gnuoy - T-K/next + 1.24.4  bare metal 7-machine smoosh a-ok;  there are more U:OS version combos queued up behind that, but I'd say +1 from the container standpoint.   fyi @ http://paste.ubuntu.com/12007371
<gnuoy> tip top
<apuimedo> lazyPower: can a subordinate charm have another subordinate?
<lazyPower> apuimedo: negative
<apuimedo> mmm
<lazyPower> subordinates can be related over relation, but not stacked
<apuimedo> that's a bit of a problem
<apuimedo> ok, I'll think some way to work around it
<apuimedo> thanks
<marcoceppi> apuimedo: what are you trying to achieve?
<apuimedo> well, I was working on a subordinate charm for neutron-server that provides neutron-metadata-agent
<apuimedo> but that one needs the midonet-agent charm in the same scope as well
<apuimedo> marcoceppi: because it's the midonet-agent who proxies the call
<apuimedo> *calls
<apuimedo> since it is not possible
<apuimedo> I'll just modify neutron-server
<apuimedo> so that it relates to midonet-api (for the plugin config) as jamespage told me in the previous review
<jamespage> I hear my name
<apuimedo> and it will also have a midonet-host relation with container scope
<apuimedo> that will pull the midonet-agent charm
<apuimedo> and when neutron-plugin is midonet
<apuimedo> it will configure and run the neutron-metadata-agent
<jamespage> apuimedo, hmm - does that require any kernel level magic?
<apuimedo> jamespage: what does?
<apuimedo> midonet-agent?
<jamespage> the neutron-metadata-agent
<apuimedo> oh
<apuimedo> let me check
<jamespage> apuimedo, there are benefits to having the neutron-api charm (which hosts neutron-server) containerizable
<jamespage> which is why we run dhcp/l3/metadata agents on the neutron-gateway charm, which is definately not containerizable
<apuimedo> jamespage: our reference architecture has a network controller machine
<apuimedo> that runs just neutron-server, neutron-dhcp and neutron-metadata agent
<apuimedo> (well, and midonet-agent, of course)
<apuimedo> we do not need l3 agent
<jamespage> apuimedo, well we have use cases for the gateway charm that do much the same thing
<jamespage> apuimedo, nsx for example just uses it for dhcp (and maybe metadata - can't remember)
<apuimedo> I'm not sure I see the point on deploying an extra charm for just the metadata and the dhcp agents
<jamespage> apuimedo, by splitting out tenant instance facing services, you can scale differently
<apuimedo> which need practically the same configuration and relations as neutron-server
<jamespage> well you have the midonet-agent bit already - that can be reused with the neutron-gateway charm
<apuimedo> jamespage: our gateways scale differently
<apuimedo> I'd have to have the neutron-gateway charm just run neutron-metadata-agent
<jamespage> and dhcp?
<apuimedo> and point to nova-api for the metadata service
<apuimedo> sorry, yes, dhcp too :P
<jamespage> apuimedo, oh - wait - the neutron-gateway charm also runs the nova-api-metadata service
<jamespage> its pretty self contained
<apuimedo> yes
<apuimedo> exactly
<jamespage> the backend comms is over rpc to the nova-conductors
<apuimedo> I don't think we need that
<jamespage> apuimedo, so your intent is to run dhcp and metadata services under the neutron-api charm?
<apuimedo> the metadata proxying goes through midonet
<apuimedo> and the next release won't even have a metadata agent
<apuimedo> that is what matches best our reference architecture
<jamespage> apuimedo, so it will just communicate with the nova-api-metadata directly?
<apuimedo> we usually do it like that
<apuimedo> well, with nova-api
 * jamespage nods
<apuimedo> yes
<apuimedo> (that means adding a "shared-secret" config to nova-cloud-controller)
<jamespage> apuimedo, got something I can look at with regards your reference architecture?
<apuimedo> or not configuring either
<apuimedo> jamespage: well, we have the deployment docs
<apuimedo> jamespage: http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_architecture.html
<jamespage> sure
<apuimedo> http://docs.midokura.com/docs/latest/quick-start-guide/ubuntu-1404_kilo/content/_hosts_and_services.html
<apuimedo> I'm not going to bundle everything on the controller node
<apuimedo> obviously
<apuimedo> we also do HA and stuff, this is just a basic setup
<apuimedo> but we pretty much always keep the Neutron unit as listed
<apuimedo> so, for Juju, I'd do something like https://api.jujucharms.com/charmstore/v4/bundle/openstack-base-34/archive/bundles.yaml.orig
<apuimedo> but with neutron getting its own machine
<apuimedo> and getting midonet-api and midonet-agent pulled in the machine as subordinates with scope:container
<apuimedo> jamespage: ^^
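The layout apuimedo outlines can be sketched as a bundle fragment in the v3 bundles.yaml style linked above. This is a hypothetical illustration only: the charm store URLs, machine number, and relation names are assumptions, not taken from the log.

```yaml
# Hypothetical bundle fragment: neutron-api on its own machine, with the
# MidoNet charms attached as container-scoped subordinates on that machine.
services:
  neutron-api:
    charm: cs:trusty/neutron-api
    num_units: 1
    to: ["4"]            # dedicated network-controller machine (assumed id)
  midonet-api:
    charm: cs:~midokura/trusty/midonet-api    # namespace is an assumption
  midonet-agent:
    charm: cs:~midokura/trusty/midonet-agent
relations:
  - ["neutron-api", "midonet-api"]    # subordinate relations; scope: container
  - ["neutron-api", "midonet-agent"]  # is declared in the subordinate's metadata
```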
<jamespage> apuimedo, I see
<jamespage> apuimedo, you're intending on using containers?
<jamespage> apuimedo, midonet-api fronts to neutron-api right?
<jamespage> apuimedo, (lots going on - and rrd brain sometimes)
<apuimedo> jamespage: you mean lxc for some services?
<apuimedo> jamespage: midonet-api is what the neutron plugin talks to
<apuimedo> sort of a backend to neutron
<apuimedo> requests go
<jamespage> apuimedo, ack - so in the general approach we take for Ubuntu OpenStack, that would be deployed in its own LXC container
<apuimedo> neutron -> midonet-plugin -> midonet-api -> Zookeeper
<jamespage> by fragmenting into containers, you get the ability to scale each layer independently
<jamespage> as sizing requires
<apuimedo> jamespage: but then it would not be reachable from other machines, would it?
<apuimedo> I remember there was some limitation on lxc communication
<jamespage> apuimedo, hrm - yes it would
<apuimedo> maybe it was between lxc on different machines?
<jamespage> apuimedo, Juju LXC containers are directly network addressable
<jamespage> across machines
<apuimedo> so I don't remember what it was
<apuimedo> well, so you mean putting midonet-api in an lxc container
<apuimedo> neutron-api and midonet-agent I'd still prefer to run on the metal
<jamespage> apuimedo, juju deploy --to lxc:3 midonet-api
<jamespage> +1
<apuimedo> running ovs-like bridges on lxc makes me uneasy
<jamespage> apuimedo, that's exactly the point I'm making
<jamespage> apuimedo, neutron-api is currently containerizable in all use-cases
<jamespage> apuimedo, neutron-gateway has all the code you need to do dhcp/metadata etc...
<jamespage> and is designed to go on the bare-metal
<mnk0> juju / kubernetes/ aws elastic beanstalk
<mnk0> how to choose
<jamespage> mnk0, well that first one is pretty nice imho
<jamespage> ;-)
<jamespage> mnk0, you know juju can deploy kubernetes right?
<mnk0> yeah i want to use juju but  im getting confused about how to actually use it
<mnk0> :/
<apuimedo> jamespage: would you approve of a neutron-gateway that does not run nova-api-metadata but that instead goes to nova-cloud-controller and pulls midonet-agent as subordinate?
<apuimedo> I'm not sure how many things I'll have to disable
<mnk0> yeah ive found some interesting information about juju for kubernetes
<jamespage> apuimedo, midonet-agent as a sub - no problemo
<mnk0> but again still newbie
<apuimedo> it seems a bit more troublesome than just adding a couple of services to neutron-server
<jamespage> apuimedo, I don't see the need to use nova-cc for the api-metadata service tho?
<jamespage> apuimedo, trust me - its minimal - I'll even work a diff for that if you like :-)
<apuimedo> jamespage: ok, I'll take another look at it
<apuimedo> the nova-cc thing is for my sanity
<apuimedo> it's what we always have on the field
<jamespage> apuimedo, actually I have an inflight for something similar - let me dig it out
<jamespage> apuimedo, https://code.launchpad.net/~sdn-charmers/charms/trusty/neutron-gateway/ovs-odl/+merge/265237
<jamespage> that SDN option still makes use of l3 and other bits, but that's a typical impact on the gateway charm
<jamespage> including unit tests to validate
<jamespage> apuimedo, you would need to trim down the list of packages and config files, so the diff should be even more minimal
<apuimedo> jamespage: alright, I'll give it a shot
<apuimedo> I'll let you know later ;-)
<apuimedo> jamespage: against neutron-gateway/next, right?
<jamespage> yah
<beisner> jamespage, re: rmq.  what is the minimal scenario that i can expect rmq to form a cluster?   (i'm reworking the amulet tests)
<beisner> jamespage, cluster-relation-joined hook is where that seems to happen, but just deploying multiple rmq units doesn't seem to trigger that.
<jamespage> hmm
<jamespage> I'd expect just multiple units to form a cluster
<beisner> jamespage, that's how the rmq amulet test is written, but it's failing those tests because two rmq units are two separate rmq clusters.  cluster_status on each unit shows a 1-node cluster.
<jamespage> urph
<jamespage> that sounds bad
<beisner> jamespage, but if cluster-relation-joined|changed hooks fire (ie when pulling hacluster or ceph into the picture), rmq forms a cluster
<jamespage> beisner, current stable charm is ok
<beisner> jamespage, so wolsen and i have been t-shooting those rmq tests in next (the tests have logic errors in cluster status checks in that they just check for exit 0 on cluster_status check, instead of actually checking that each unit is in the cluster)
<beisner> jamespage, and in that process, have decided a test rewrite a la the other os-charm tests is in order.
<jamespage> beisner, ok - so I grabbed /next and did a 3 unit deploy
<jamespage> beisner, looks ok to me
<beisner> jamespage, tests consistently show this.  @L261, 283  each unit has its own 1-node cluster  http://paste.ubuntu.com/12008073/
<beisner> jamespage, just trying to determine broken test vs broken charm, suspect the former.
<jamespage> beisner, do the tests use hacluster and ceph?
<beisner> jamespage, jstat as of moment of fail:  http://paste.ubuntu.com/12008087/
<beisner> jamespage, jstat long version http://paste.ubuntu.com/12008092/
<jose> Odd_Bloke: had to leave for university, but I'll be back home in a couple hours. I'll check by then. Sorry about the delay!
<Odd_Bloke> jose: Longest breakfast and coffee ever. ;)
<Odd_Bloke> jose: (No worries, there's no urgency on it ATM)
<jose> hehe
<jamespage> beisner, oh - wait in the that configuration, we don't form a native cluster
<beisner> jamespage, i eventually got that test to pass by adding some waits.   but this scenario fails even if i wait forever:
<beisner> http://paste.ubuntu.com/12008126/
<beisner> ie. cluster_status on each unit shows that 1-node cluster.
<jamespage> urgh
<jamespage> beisner, the dreaded wait
<jamespage> anyway I really need to eod - ttfn
<beisner> jamespage, ack thanks.  o/
<apuimedo> lazyPower: which is the best way to add a repo/ppa to my juju/maas environment?
<lazyPower> apuimedo: add-apt-repository is how i generally do it
<apuimedo> on which machine?
<apuimedo> (so that it is available when some charm is installed)
<apuimedo> This is for testing the neutron-api charm deployment while I still don't have one package in Ubuntu repos
<apuimedo> lazyPower: ^^
<lazyPower> apuimedo: why not add the repository to the charm?
<lazyPower> that way it adds the repo and updates the apt cache consistently until it makes it into the distro
<apuimedo> lazyPower: the charm does not currently have an option to add a repo
<lazyPower> hmm, i'm not following
<lazyPower> is this a charm thats outside your control?
<apuimedo> it belongs to the openstack-charmers team
<apuimedo> I'm not sure how they feel about adding a config option to add repos
<apuimedo> jamespage: gnuoy ^^
<lazyPower> ah, typically i fork and publish to my namespace, use that until it's deprecated
<apuimedo> but for the moment I can add it
<apuimedo> cool
<apuimedo> that's what I was thinking of doing
<apuimedo> ;-)
<apuimedo> oops
<apuimedo> Gotta run to catch the last bus
<apuimedo> talk to you tomorrow
<apuimedo> thanks lazyPower
<lazyPower> cheers apuimedo
<beisner> jamespage, i know you're past eod - just observed that with next and stable, rmq x 3, cluster happens as expected.  test code just needs love.
<lazyPower> marcoceppi: are you still around?
<marcoceppi> lazyPower: I am
<lazyPower> 1 sec, let me create a multi-file pastebin. i need your eyes for a second on a deployer bug that i cant seem to track down
<lazyPower> https://gist.github.com/chuckbutler/7b5d724eee5d4b5b6c08
<lazyPower> do you see anything obvious with the bundle that i've missed?
<marcoceppi> lazyPower: otp, 2 mins
<lazyPower> marcoceppi: i think i found it actually. missing charm in the store API that's referenced in this bundle
<lazyPower> wait no, its there
<lazyPower> marcoceppi: yeah i'm stumped, if you have any ideas i'm open to them
#juju 2015-08-06
<beisner> o/ good morning
<beisner> jamespage, ack on trusty icehouse, juno, kilo @ proposed, will kick those shortly
<beisner> gnuoy, woot re: initial tempest charm!
<gnuoy> beisner, I have some questions............... :)
<beisner> gnuoy, sure.  let me look at it real quick.
<gnuoy> beisner, when you have a sec have you seen http://paste.ubuntu.com/12013477/ before ?
<gnuoy> It looks like it's looking up the 'orchestration' service in keystone, which doesn't exist
<gnuoy> do we normally disable those tests somehow?
<gnuoy> :q
<beisner> gnuoy, no we run all of the tests for all of the services.  we just did some tuning to the template surrounding heat issues.  one remains:
<beisner> bug 1431013  ... appears to be a heat package issue.  the /etc/heat/templates/ dir is awol.
<mup> Bug #1431013: Resource type AWS::RDS::DBInstance errors <amulet> <canonical-bootstack> <openstack> <uosci> <heat (Ubuntu):Fix Committed> <heat (Ubuntu Vivid):New> <heat (Ubuntu Wily):Fix Committed> <heat (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1431013>
<beisner> gnuoy, this template, gives us just 1 tempest fail, and it's that ^ bug.  http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/templates/tempest/tempest.conf.template
<gnuoy> beisner, fantastic, thanks
<beisner> gnuoy, can we please add charm config options for two installation use case scenarios:
<beisner> gnuoy, http://paste.ubuntu.com/12013518/
<gnuoy> beisner, hmm, that template you pointed me at is the one I'm using
<beisner> gnuoy, ^ that is in the interest of making installation options flexible
<beisner> gnuoy, here is a t-k/next tempest result from yesterday:  http://10.245.162.77:8080/view/Dashboards/view/OpenStack%20Deploy/job/deploy_with_deployer/10381/artifact/tempest-allresults.10381/*view*/
<gnuoy> beisner, I need to nip out for ~1 hour but I think scenario one is already covered
<beisner> gnuoy, ok thanks again
<gnuoy> beisner, I've added pip proxy config option. I don't know where to go with my tempest failure though.
<beisner> gnuoy, i suspect tempest-lib - i may be on an older version of that
<marcoceppi> lazyPower: I think it's all the nulls
<beisner> jamespage, trusty icehouse|juno|kilo @ proposed deploy tests complete;  tempest results== distro/updates runs (ie. looks OK to me).
<jamespage> beisner, awesome-o
<gnuoy> beisner, I'm cloning tempest rather than tempest-lib is that wrong?
<beisner> gnuoy, there may be more than one way to work this.  i've generally had to pip install tempest-lib and keep it updated, as well as gitting tempest itself.
<gnuoy> beisner, I'm happy to encapsulate stuff in the charm but I don't think I'm a hardened enough tempest warrior to work out what the charm should be doing. It just seems like a horrific pip mess
<beisner> gnuoy, it will be
<beisner> gnuoy, all sorts of opportunity for package vs pip version conflicts
<beisner> gnuoy, tldr, what uosci does:  install all of the clients as listed in o-c-t's configure script;   then pip install tempest-lib;   then git tempest and do the things in ./configure.
<gnuoy> beisner, so I'm running the run_tempest with the please-create-a-venv flag and it's pulling in tempest-lib into that venv by the looks of it
<beisner> gnuoy, that's great.  we want that instead i think.
<beisner> so prob tempest-lib 0.7.0
<gnuoy> yep, it is
<redelmann> marcoceppi, hi
<redelmann> marcoceppi, if i run somthing like this in charm "hookenv.log('--------')"
<redelmann> marcoceppi, debuglog show this: "logger.go:40 error: flag provided but not defined: --------"
<redelmann> marcoceppi, could be a bug?
<marcoceppi> redelmann: what version of Juju are you using?
<marcoceppi> OH
<marcoceppi> I know why
<redelmann> marcoceppi, 1.24.3
<redelmann> marcoceppi, client and agent
<marcoceppi> it's being literally passed to juju-log as `juju-log ----------` so juju log thinks it a paramter
<marcoceppi> if you used anything else other than a -, like =
<redelmann> marcoceppi, y suppose that
<marcoceppi> you wouldn't get that error
<redelmann> y = i
<marcoceppi> because the error message says "flag provided but not defined"
<redelmann> marcoceppi, yes it's
<redelmann> marcoceppi, I assumed that
<marcoceppi> redelmann: so if you use any other character other than '--------', say '=========
<marcoceppi> ' it'll work
<redelmann> marcoceppi, ok, thank for you time
<marcoceppi> we can look into patching charmhelpers to escape this properly
<marcoceppi> but to unblock you I'd just change it for now
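The failure mode marcoceppi describes can be sketched in plain Python. This is an illustrative toy model, not the actual juju-log or charmhelpers source, and the `--` end-of-options marker shown is the conventional Unix escape for this problem, not necessarily the fix charmhelpers later adopted.

```python
# Toy flag parser mimicking how `juju-log --------` goes wrong: any
# dash-prefixed argument is taken as a flag, so a log message made
# entirely of dashes triggers "flag provided but not defined".

def parse_args(argv):
    """Split argv into (options, positional args), treating '-'-prefixed
    tokens as flags unless they follow an end-of-options '--' marker."""
    opts, positional = {}, []
    it = iter(argv)
    for arg in it:
        if arg == "--":          # end-of-options: everything after is text
            positional.extend(it)
            break
        if arg.startswith("-"):  # dash-prefixed token is parsed as a flag
            if arg not in ("-l", "--log-level"):  # hypothetical known flags
                raise ValueError("flag provided but not defined: " + arg)
            opts[arg] = next(it)
        else:
            positional.append(arg)
    return opts, positional

# hookenv.log('--------') effectively runs: juju-log --------
try:
    parse_args(["--------"])
except ValueError as e:
    print(e)                     # flag provided but not defined: --------

# With an explicit "--" separator the same message is plain text:
_, msg = parse_args(["--", "--------"])
print(msg[0])                    # --------
```

This is why redelmann's message works with any non-dash padding character: only a leading `-` puts the token on the flag-parsing path.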
<mwenning> jamespage, also https://bugs.launchpad.net/charms/+bug/1431445 , thx for any help
<mup> Bug #1431445: Support for Calico in OpenStack: bird <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1431445>
<pmatulis> with maas provider and 'juju add-unit --to <blah>' , is <blah> always a host ID or can it be a hostname too?
<apuimedo> is it already possible to do quickstart of a bundle with charms on launchpad?
#juju 2015-08-07
<stub> marcoceppi: juju-log issues are covered by https://bugs.launchpad.net/juju-core/+bug/1274460
<mup> Bug #1274460: juju-log vs. command line length limits <juju-log> <juju-core:Triaged> <https://launchpad.net/bugs/1274460>
<stub> (and there are similar bugs for the other tools, which share the problems, apart from relation-get which has been fixed already)
<apuimedo> Good morning
<apuimedo> Does somebody here use the juju digital ocean plugin?
<apuimedo> https://github.com/kapilt/juju-digitalocean
<apuimedo> I saw some comment from lazyPower on one of the issues
<apuimedo> for some reason `juju ssh` does not work in my digital ocean environment
<axino> apuimedo: what does it do ?
<apuimedo> axino: from poking around
<apuimedo> I saw that it sshs to the bootstrap machine
<apuimedo> and from there does `nc`
<apuimedo> the first part worked, I can ssh to the bootstrap env
<apuimedo> but the nc part would just leave me waiting forever
<axino> apuimedo: can you reach the second instance from the bootstrap node ?
<apuimedo> axino: I can ssh to it
<apuimedo> but I don't know its pass
<apuimedo> axino: are you using it frequently?
<axino> apuimedo: I'm not :)
<apuimedo> axino: what do you use, an OSt env?
<axino> apuimedo: yes, OS and a bit of AWS
<axino> apuimedo: can you run "juju --debug ssh $whatever_you're_sshing_to ?
<apuimedo> axino: how well does the AWS one work for deploying OSt?
<axino> apuimedo: I'm sorry, I don't get your question
<apuimedo> axino: I mean, does the AWS environment work well for deploying OpenStack charms?
<axino> apuimedo: I never deployed OpenStack charms to AWS
<apuimedo> ah, ok
<apuimedo> jamespage: have you?
<apuimedo> gnuoy: I guess you use the internal OSt to launch openstack charms, right?
<gnuoy> apuimedo, ost?
<apuimedo> OpenStack
<gnuoy> apuimedo, we use ost on ost and then ost on baremetal
<apuimedo> cool
<sebas5384> jcastro: Ubucon LatAm almost starting!
<jcastro> glad to see you made it!
<jcastro> people were, uhhh, worried you didn't make the flight, heh
<sebas5384> yeah!! I was looking for the ride
<sebas5384> but didn't find any sign with my name
<sebas5384> don't know what happen :P
<sebas5384> but hey! I'm here in one piece
<sebas5384> jose: is doing an awesome job :)
<urulama> jamespage: hey. frankban just released new deployer and also new juju-gui, which now allows you to deploy v4 bundles (so the new format you've been working on with Makyo). Bundle deploy almost works; it fails due to a hook/config problem, no longer machine placement.
<urulama> jamespage: and v3 -> v4 translation has landed, but that requires new charmstore release, which should happen next week
<frankban> urulama, jamespage: also quickstart fixes to deploy v4 bundles with placement will land next week, for now the deployer should work, v0.5.1 released on Pypi and the juju stable PPA
<apuimedo> marcoceppi: ping
<sebas5384> someone is having problems with python dependencies? http://pastebin.com/qX0c98F0
<sebas5384> I have almost the same problem with every charm deploy
<sebas5384> help?
<sebas5384> http://pastebin.com/qX0c98F0 help ?
<marcoceppi> sebas5384: looking
<sebas5384> marcoceppi: thanks o/
<marcoceppi> sebas5384: can you ssh into the machine and run this:
<marcoceppi> sudo apt-get -y install python-apt python-launchpadlib python-tempita python-yaml
<marcoceppi> sebas5384: apt-get is having a problem installing one of those packages
<sebas5384> yes
<sebas5384> let me see
<sebas5384> hmmm yeah couldn't even download the mysql for the mysql charm
<sebas5384> http://pastebin.com/DaztURCf
<sebas5384> marcoceppi: ^
<sebas5384> it seems there's definitely an apt problem
<marcoceppi> sebas5384: well, there ya go. Looks like you've got a proxy or networking issue
<marcoceppi> is this on local provider or a cloud?
<sebas5384>  403  Forbidden
<sebas5384> local
<marcoceppi> something is up with the networking wherever you are
<marcoceppi> the archive is reachable here
<sebas5384> maybe it's something related to firewalls
<sebas5384> yep, it definitely is that
<sebas5384> marcoceppi: I'm going to do a demo tomorrow in the ubuconLA
<sebas5384> and I wanted to show the local provider
<marcoceppi> sebas5384: is this at the conference's wifi? or the hotel?
<marcoceppi> or somewhere else?
<sebas5384> conference wifi
<marcoceppi> eek
<sebas5384> i think i must deploy locally in the hotel
<marcoceppi> jose: if you're around (you're probably mad busy) can you help sort this?
<sebas5384> marcoceppi: jose is at the stage in this very moment
<sebas5384> hehe
<sebas5384> i'll talk with him
<marcoceppi> sebas5384: haha, well he should be able to help hopefully
<sebas5384> definitely!
<sebas5384> thanks for your help marcoceppi :)
<sebas5384> marcoceppi++
<sebas5384> we should have some karma bots here :P
<beisner> ping wolsen
<wolsen> pong beisner
<beisner> wolsen, hey so in kilo and later, rmq doesn't allow guest amqp connections unless coming from localhost
<beisner> wolsen, so i've got tests that do all sorts of cool validation ... for combos up to kilo.
<wolsen> beisner, mmm was it just kilo? I thought that was a change in rmq version 3.something iirc
<wolsen> beisner, ahh right
<beisner> wolsen, then i have a work-around in the tests to just force-enable guest from anywhere at kilo and later.
<beisner> wolsen, but...
<beisner> wolsen, that clobbers ssl configs, since it's all in the same funky lil erlang stanza
<beisner> wolsen, ref:  https://www.rabbitmq.com/access-control.html
<beisner> wolsen, tldr:  the rmq charm needs to grow a config option to set loopback_users to [], while preserving all of the mangling happening for ssl configs in that file.
<beisner> wolsen, in order for me to include ssl tests that is.
<wolsen> beisner, what about granting access for a specific user? oh those are made over the relations right?
<beisner> wolsen, it looks to me like that file is only currently touched by the charm when ssl is on.
<wolsen> beisner, oh
<beisner> wolsen, yep, rock:hardplace
<wolsen> beisner, yep... and enabling the option risks security :/
<beisner> wolsen, right, but the charm is untestable (across our whole support matrix) without either adding that option
<beisner> or
<beisner> doing more serious mangling of configs in the test, outside of charm config options
<beisner> which i try not to have to do
<wolsen> beisner, that's fair
<beisner> wolsen, the new test is this:
<wolsen> beisner, fwiw, I'm not opposed to adding the option, just need to have the appropriate amount of "do not use this in the real world"
<beisner> deploy precise-icehouse through vivid-kilo
<beisner> 3 rmq units
<beisner> send amqp msg to each unit, 1 at a time, and check that the message can be read from all other units
<beisner> check that all units are represented IN all units cluster_status
<beisner> so it's more of a core cluster/messaging functional check that works on the whole horizon of combos (sans ssl).
<beisner> i have a whole load of helpers coming out of this too, which will make it easy to add the other scenarios
<beisner> wolsen, i wonder if we can borrow creds from something else that does have perms to talk over the wire?
<beisner> and use that in pika instead of guest
<wolsen> beisner, it sure would be nice to be able to set a mock relationship here - so you could create it that way!
<wolsen> beisner, we probably could borrow them if you really wanted to - seems like quite a kluge though
<wolsen> beisner, I think its probably better to enable the guest account for what is being tested
<beisner> wolsen, i was just running with the guest@guest pika thing the other tests were using.   i'll look at adding a separate user explicitly who can talk.  but if that involves that funky stanza thing, we'll still be clobbering ssl charm activities.
<beisner> wolsen, the charm isn't plumbed for that file currently, except for in the ssl enablement pieces, so it doesn't look like that will be trivial.
<beisner> ie. the ssl charm config options are a bit antisocial wrt the file we need to touch
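The "funky lil erlang stanza" under discussion looks roughly like the fragment below (an illustrative /etc/rabbitmq/rabbitmq.config, not the charm's actual template). Because `loopback_users` and the ssl listener options live in the same `rabbit` proplist, naively rewriting the file to open up guest access clobbers the charm's ssl settings, which is exactly the conflict beisner describes.

```erlang
%% Illustrative /etc/rabbitmq/rabbitmq.config fragment.
%% loopback_users defaults to [<<"guest">>], restricting the guest account
%% to localhost; setting it to [] re-allows remote guest logins
%% (insecure - test environments only).
[
  {rabbit, [
    {loopback_users, []},
    %% the charm's ssl options live in this same proplist:
    {ssl_listeners, [5671]}
  ]}
].
```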
<sebas5384> marcoceppi: the mediawiki scalable bundle is giving errors in the haproxy and mediawiki install hooks http://pastebin.com/8gXkdhKz
<sebas5384> deploying in GCE
<sebas5384> I'm gonna test it in aws later
<marcoceppi> sebas5384: It looks like that ppa doesn't exist anymore
<sebas5384> hmmm
<sebas5384> so the charm is not updated?
<sebas5384> maybe because it's precise/
<beisner> wolsen, got it sorted.  the test is just adding a testuser with necessary privs, should work independently of that file and the ssl erlangy stuff.
<wolsen> beisner, sweet - that's even better
<hazmat> tvansteenburgh: was there any clarity on how frankban's branch fixed the memory issue?
 * beisner => eow
<mwenning> hi juju team, how do I set proxy's on bootstrap nodes?  I tried 'http_proxy:  blah' in environments.yaml and it said "unknown config"
<marcoceppi> mwenning: what version of juju?
<marcoceppi> hazmat: tvansteenburgh is out for the next week
<mwenning> marcoceppi, 1.24.3-trusty-amd64
<marcoceppi> mwenning: try http-proxy
<marcoceppi> and https-proxy
<marcoceppi> mwenning: you can also set it on an existing environment with juju set-env
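The dashed keys marcoceppi points to look roughly like this in a juju 1.x environments.yaml. The proxy host, port, and environment name below are placeholder values, not taken from the log:

```yaml
# environments.yaml fragment (juju 1.x); keys use dashes, not underscores
environments:
  maas-env:
    type: maas
    # placeholder proxy values:
    http-proxy: http://proxy.example.com:3128
    https-proxy: http://proxy.example.com:3128
    no-proxy: localhost,127.0.0.1
```

On an already-bootstrapped environment, the same settings can be changed with e.g. `juju set-env http-proxy=http://proxy.example.com:3128`.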
<mwenning> marcoceppi, thx .   in 30 minutes I'll know if that fixed it ;-)
<mwenning> marcoceppi, is there a way to tell juju to just abort everything and quit?
<marcoceppi> mwenning: ctrl + c?
<mwenning> marcoceppi, nope, juju says something like 'waiting for bootstrap to finish' and keeps right on going until all the timeouts finish.
<marcoceppi> mwenning: you could maybe do juju destroy-environment --force
<mwenning> marcoceppi, nope, juju ignores that if it's in the middle of a bootstrap
<marcoceppi> \o.
<mwenning> \o/ , Bootstrapping Juju machine agent
<marcoceppi> mwenning: huzzah!
#juju 2015-08-08
<jose> marcoceppi: re: seb's prob. I'm gonna take a look tomorrow early morning as soon as I get there. should be an issue with apache and the local repo we have, I should be able to fix it real quick :)
<jose> sebas5384: I'm gonna check the local repo tomorrow morning 8am. if you can be there 8:30 to test it would be amazing.
<jose> sebas5384: I believe it's an issue with apache2, so should be easy to fix
<jose> anyways, on my checklist
<sebas5384> jose: great! but I think I'll deploy some wordpress
<jose> sebas5384: using cloud resources?
<jose> or local?
<sebas5384> jose: I think we wont be able to deploy openstack
<sebas5384> actually
<sebas5384> let me try with gce
<jose> sebas5384: so, you've got the third slot. what about if we meet at the speakers lounge during the first two slots so we can figure things out, see if they work and everything, and then maybe you will feel a bit more confident? :)
<sebas5384> jose: I'll be there around 9am
<jose> sebas5384: sounds good. make sure to check in to get your food tickets
<sebas5384> already arrange with elizabeth
<jose> woot woot
<sebas5384> jose: good one
<jose> feel free to ping me if you need anything or lmk if you wanna test before your session
<sebas5384> thanks man
<jose> no worries!
<sebas5384> I'll test the openstack here
<sebas5384> with gce
<sebas5384> lets see what happen
<jose> got your free trial?
<jose> beware, 8 cpus limitation on free trial
<sebas5384> yeah
<sebas5384> jose: hmmm
<sebas5384> didn't know about that
<sebas5384> would it be better if we got some aws credits
<jose> aaand... you should be all set.
<sebas5384> i feel its more stable
<jose> no worries, whatever you feel more comfortable working on!
#juju 2016-08-08
<blahdeblah> Charmers: a little birdy told me that there is currently no charm upgrade story from a non-reactive to a reactive charm; can anyone confirm or deny?  And if confirm, give a bit of background - i.e. is this fixable?
<jrwren> blahdeblah: it's not true. charm upgrade could always work; however, charms would have to be written to do so. very few charms have transitioned from non-reactive to reactive.
<jrwren> Yes, this is absolutely fixable.
<blahdeblah> jrwren: So I think I'll be wanting to do that in the quasi-near future; got any more details about what is required?
<blahdeblah> beisner: ^ FYI - re discussion about NTP charm
<jrwren> blahdeblah: i'm not the best to ask, but remember it is just scripts which execute. Test and make sure things don't break and upgrades will work fine.
<blahdeblah> jrwren: hmm - OK; know of anyone else who has dealt with the issue in more depth?
<jrwren> blahdeblah: lazyPower marcoceppi and some others who appear offline
<blahdeblah> jrwren: Cool - thanks
<jrwren> blahdeblah: for example, iirc many openstack charms are transitioning to reactive
<kjackal> Hello Juju World!
<babbageclunk> hi kjackal!
<jackweirdy> I know it's a mountain of a feature, but is it realistically possible in the long term to use docker instead of lxc for the local provider? It would give you local provider support for mac and windows for free
<beisner> hi blahdeblah, jrwren - possible, but not at all simple.  https://github.com/juju-solutions/charms.reactive/issues/77
<beisner> jrwren, we have no classic openstack charms rewritten in reactive for this reason.  new charms, definitely reactive :-)
<jrwren> beisner: ah! I was confused by the new ones. Thanks for the correction.
<lazyPower> jrwren - i forget the specifics of what it's not doing, however i know cory_fu has been looking into this
<stokachu> anyone run into this:
<stokachu> https://www.irccloud.com/pastebin/MYC9neDg/
<stokachu> beta 14 attempting to destroy a beta13 controller
<lazyPower> stokachu - well thats a fun bug :)
<lazyPower> looks like something certainly did get an upgrade that beta13 isn't responding to
<stokachu> :D
<stokachu> yea
<rick_h_> stokachu: lazyPower betas don't get upgrades
<stokachu> i can't even force destroy it with kill-controller
<stokachu> rick_h_: i know upgrades aren't supported which is why im attempting to destroy the controller and start fresh
<lazyPower> rick_h_ - we need to adjust your keyword scoring mechanism
<stokachu> rick_h_: at least we should be able to do that between version
<lazyPower> stokachu - i can get you a revision of charmbox thats beta-13 if you need a temporary hand
<lazyPower> s/hand/downgrade/
<stokachu> so i was able to manually terminate the controller in aws
<stokachu> and now it's attempting to destroy through provider :\
<stokachu> lazyPower: thanks though
<lazyPower> stokachu - welcome to the pain of having stale machine data in your config
<stokachu> lazyPower: ugh, yea lemme get that charmbox rev so i can try to destroy that way
<stokachu> rick_h_: are you saying that if i install beta14 and attempt to bring down a beta13 environment in order to be compatible with beta14 that this is expected?
<rick_h_> stokachu: yes, this is why we say 2.0 isn't prod ready. each beta is an island
<rick_h_> stokachu: we're not expected to support long running controllers until we hit RC
<stokachu> well that's fun
<rick_h_> stokachu: yea :(
<lazyPower> stokachu docker pull jujusolutions/charmbox:devel@sha256:fe61e24d903890502048a18783dc84c6e6d970efa32c1d97cffd22cf7af9fc92
<stokachu> hah installing docker to destroy a juju controller
 * lazyPower rubs his hands together evilly
<lazyPower> yessss, let the hate flow through you
<rick_h_> stokachu: normally we suggest just shutting down using the cloud tools
<rick_h_> stokachu: with juju unregister
<lazyPower> rick_h_ - what is this juju unregister you speak of?
<stokachu> rick_h_: ah much easier than installing docker
<rick_h_> lazyPower: say you get a controller because of a juju share + juju register
<lazyPower> :O
<rick_h_> lazyPower: you need a way to get rid of that controller from your juju controllers output
<lazyPower> Details:
<lazyPower> Removes local connection information for the specified controller. This command does not destroy the controller.  In order to regain access to an unregistered controller, it will need to be added again using the juju register command.
<rick_h_> lazyPower: and unregister does that
<lazyPower> time to mail the list!
<stokachu> so i killed the instance in aws and did a juju unregister
<stokachu> that should be all i need?
<rick_h_> stokachu: yea
<stokachu> ok cool
<rick_h_> stokachu: unless you used storage/networking bits in which case you need to clean up the cloud bits used for that
<stokachu> nah nothing fancy
<rick_h_> stokachu: yea, just being clear in case folks are watching :)
<lazyPower> and list=pinged
<lazyPower> stokachu - i'll get you to install docker one way or another ;)
<stokachu> :D
 * D4RKS1D3 Hi to everyone
<lazyPower> o/ D4RKS1D3
<D4RKS1D3> Hi lazyPower :)
<D4RKS1D3> Does anyone know if it is possible to configure a service with two network interfaces?
<lazyPower> D4RKS1D3 - depends on the charm and how its coded to handle networking. Most charms just leverage public-address/private-address according to how the charm author intended
<lazyPower> some charms have extra-network-bindings
<D4RKS1D3> But if I create a new charm, does juju have the capability to use more than one NIC?
<D4RKS1D3> And is this capability available in 1.25.3 or 2.0?
<lazyPower> D4RKS1D3 - seems like we need to get more docs around the feature - but here's a bug to track https://github.com/juju/docs/issues/1163
<lazyPower> D4RKS1D3 - and network spaces surfaced preliminary in 1.25, its being expanded in 2.0.
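The extra-bindings feature lazyPower mentions looks roughly like the metadata fragment below. The charm name and space names are hypothetical, and the exact schema should be checked against the juju 2.0 docs (the tracking bug above notes docs were still missing at the time):

```yaml
# metadata.yaml fragment for a hypothetical charm "my-service":
# extra-bindings declare additional network endpoints, beyond relation
# endpoints, that can each be bound to a different network space.
name: my-service
extra-bindings:
  public:
  internal:
```

At deploy time (juju 2.0 syntax), each endpoint can then be placed on its own space, e.g. `juju deploy my-service --bind "public=dmz internal=private-space"`.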
<zeestrat> rick_h_: Are you looking at supporting upgrades for the RC's or only 2.0 proper?
<rick_h_> zeestrat: once we hit RC you should be able to upgrade a deployment from RC to RC and to 2.0 GA
<zeestrat> rick_h_: Cool. Do you have a set number of betas and RCs that you are planning to go through? I see beta15 and rc1 in the timeline on LP.
<rick_h_> zeestrat: we're waiting to make a couple of backward incompatible changes before going to RC
<rick_h_> zeestrat: so not set
<rick_h_> zeestrat: it's more we try to do weekly betas with what's in
<rick_h_> zeestrat: and once we get the work items we know will cause issues for folks landed those betas with that work should be toward the end of our beta line
<zeestrat> rick_h_: Gotcha. So we're probably looking at Q4 for some of the later RC's and hopefully the GA?
<rick_h_> zeestrat: we're really hoping the next 8 weeks for sure
<zeestrat> rick_h_: Good to know. Thanks :)
<Spads> hi
<Spads> a while ago someone here helped me with a command that dumped debug state for reactive relation stuff
<Spads> while in debug-hooks
<lazyPower> Spads - sounds like charms.reactive get_states
<Spads> I run this from the shell?
<lazyPower> in the hook context during debug-hooks, yep
<Spads> cool thanks, confirmed it says the right thing... so now why is my code not running!
 * Spads debugs
<lazyPower> Spads - welcome to the story of my life :)
<lazyPower> Spads - whats even scarier is when you say "Why does this code work?"
<Spads> hahaha
<Spads> lazyPower: okay I think I need help here
<Spads> my @when('juju-info.available') function never seems to run
<Spads> despite:
<Spads> 'juju-info.connected': {'conversations': ['reactive.conversations.juju-info.global'], 'relation': 'juju-info'}, 'juju-info.available': {'conversations': ['reactive.conversations.juju-info.global'], 'relation': 'juju-info'}}
<Spads> like I put hookenv.log calls in the function and it's simply never run
<Spads> I'm using the juju-info interface layer
<Spads> What am I doing wrong?
<lazyPower> Spads - oh hey, sorry i stepped away from my desk. /me reads backscroll
<lazyPower> that sounds correct. the state is raised when the relationship is established...  is this on a requires or a provides side of the relationship? (hint: should always be on requires)
<Spads> yeah, I just realised that @when is a synonym for @when_all
<Spads> I assumed it would be an or!
<lazyPower> @when_any
<Spads> so my attempts to be all-encompassing faaaailed
<Spads> yeah it's cool
<lazyPower> it does happen :)
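The AND-vs-OR distinction Spads hit can be sketched with a toy model of reactive flag matching. This is a simplified illustration, not the real charms.reactive implementation:

```python
# Toy model of charms.reactive state matching, to illustrate why a handler
# decorated with @when('a', 'b') only fires when BOTH states are set.
# Simplified sketch; the real library tracks states in a KV store.

def when_all(required, active):
    """@when / @when_all semantics: every required state must be active."""
    return set(required) <= set(active)

def when_any(required, active):
    """@when_any semantics: at least one required state must be active."""
    return bool(set(required) & set(active))

active_states = {'juju-info.connected', 'juju-info.available'}

# AND semantics: fires only if all listed states are set
assert when_all(['juju-info.available'], active_states)
assert not when_all(['juju-info.available', 'db.ready'], active_states)

# OR semantics: fires if any one of the listed states is set
assert when_any(['juju-info.available', 'db.ready'], active_states)
```

So a handler listing several states under a single `@when` never runs until all of them are raised; for "run on any of these", `@when_any` is the one to reach for.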
<Spads> I narrowed it back down to the surgical event again
<Spads> and now I have other people to blame
<lazyPower> nice!
 * Spads 🔥 to bare except: statements
<lazyPower> dont charm-blame us ;)
<Spads> hahahaha
<lazyPower> i guess in this context, it would be charm-layers us
<lazyPower> i should add that as an alias for the lulz
<Spads> well the reactive layers stuff is kind of horrible in the sense that there are all these magic strings that aren't very well documented or inspectable
<Spads> if you get one of those wrong...
<lazyPower> magic strings?
<Spads> yeah
<Spads> all the '{provides:foo}-derp-{lerp,dorp}'
<Spads> and then working out what you're supposed to key off of on the other side
<Spads> and the docs for the layer just say {{name}}
<Spads> which, uh, thanks?
<lazyPower> well, thats a byproduct of how juju implements relationships
<Spads> like stuff just happens as strings
<lazyPower> an interface is bound to a relationship name. We need a way to bind on that. so templating the reactive state was the middle of the road
<Spads> no type checking, no logging of what it's doing
<lazyPower> all the typechecking and etc should be handled by the interface itself
<Spads> yeah, I just wish it had a way of telling me precisely what it's doing and why I'm failing
<Spads> because we've been SO CONFUSED about the interface name vs the relation name vs the remote service name...
<Spads> and you just try them all and waste hours when something doesn't work
<Spads> because it's all just strings!
<lazyPower> Spads - i really want you to file bugs about stuff like that
<Spads> ok
<Spads> will do
<lazyPower> if it's been causing you headaches, we need to get bugs filed against our charm developer docs to make them easier for the world at large. I'm certain you're not the only one who's run into this.
<lazyPower> relationships were the most contentious document prior to the reactive refactoring, so it's no surprise to me that they still remain largely mysterious and misunderstood
<Spads> https://bugs.launchpad.net/charm-helpers/+bug/1611024
<mup> Bug #1611024: magic strings in reactive relations lead to confusion <Charm Helpers:New> <https://launchpad.net/bugs/1611024>
<lazyPower> Need a quick review on https://github.com/juju-solutions/interface-juju-info/pull/3
<lazyPower> Thanks Spads, i broke that up into the respective places i think it targets.
<Spads> thanks
<mattrae> hi, i'm attempting to upgrade an environment from juju 2.0 beta 7 to beta 14. after installing the beta 14 juju packages, juju status is asking me to log in. when i try to log in it hangs. i'm hoping i can upgrade and not have to redeploy the environment. https://pastebin.canonical.com/162628/
<lazyPower> mattrae - 2.0 are listed as betas and treated as islands, we wont support controller upgrades until 2.0 hits -rc
<lazyPower> there will be upgrade paths from release-candidates forward
<lazyPower> the current method is to destroy the controller and re-bootstrap on the new beta
<mattrae> lazyPower: thanks for the details about upgrades. we'll see if re-bootstrapping is a possibility
<mayuri_> hello everyone..!  Good morning
<lazyPower> o/ mayuri_
<mayuri_> I am writing a juju charm and need help for the same
<lazyPower> mayuri_ - Have you read the developer docs? https://jujucharms.com/docs/stable/developer-getting-started   this is the best resource to get started writing a charm.
<mayuri_> yes
<mayuri_> i have read these docs
<mayuri_> i have a specific query..
<kwmonroe> fire away mayuri_
<cholcombe> what's the juju2 equiv of destroy-machine --force?  I have some stuck units i can't remove
<mayuri_> can you please go through this question?
<mayuri_> http://askubuntu.com/questions/808638/how-to-get-ip-addresses-of-all-units-in-a-service-in-juju-charm
<kwmonroe> cholcombe: remove-machine --force
<cholcombe> kwmonroe, cool thanks
<mayuri_> I am not able to get ip addresses of all units which are participating in a relation.
<mayuri_> can anybody help me with that?
<mbruzek> kwmonroe: Prabakaran wants to check in his layer code on launchpad, but got an error. What should be the recommended location for the layer code?
<kwmonroe> mbruzek: i think lp:~ibmcharmers/layers/layer-<charm>/trunk
<kwmonroe> mbruzek: but i'm not sure if /layers/ needs to be a project first or not
<kwmonroe> i know you have reservations that layers is too generic, but i'm not too worried about that
<Prabakaran> kwmonroe: shall i try with this branch lp:~ibmcharmers/layers/layer-<charm>/trunk
<mbruzek> kwmonroe: I am worried that "layers" may be too generic a name to create a project for.
<kwmonroe> oh, i see mbruzek.. is that because LP would require that to be unique across all namespaces?
<mbruzek> kwmonroe: That is my concern, I am chatting with sinzui right now about this.
<kwmonroe> unfortunately mbruzek, i don't think many are using LP for their layered source.. i just checked all the big data branches and all our layered stuff is in github
<kwmonroe> mbruzek: Prabakaran, just so i'm clear.. the problem here is because pushing to lp:~name/charms requires a series to be in the branch, and multiseries charms break that?
<kwmonroe> iow, you can't push a trusty/xenial multiseries foo charm to lp:~name/charms/foo/trunk?
<Prabakaran> ya my charm is supported on both xenial and trusty
<Prabakaran> i tried this bzr push lp:~ibmcharmers/layer-ibm-spectrum-symphony/trunk ....
<Prabakaran> is not working
<Prabakaran> k
<mbruzek> Prabakaran: Yes the reason for that is there is no project named layers-ibm-spectrum-symphony
<mbruzek> Prabakaran: So you could create said project but that would be tedious to do every time you have a new layer.
<Prabakaran> let me try like this lp:~name/charms/<charm name>/trunk
<mbruzek> Prabakaran: no that really isn't the right location for _layer_ code.
<mbruzek> Prabakaran: kwmonroe and I are looking into creating a project named "layers" if that makes sense
<Prabakaran> so i will have to push built charm only to these branches right?
<Prabakaran> ok that would help
<Prabakaran> its time for my bed..if anything please mail me on this. i will try and come back to you. thanks mbruzek kwmonroe for your help on this
<mbruzek> Prabakaran: we will continue with an email
<Prabakaran> thanks mbruzek
<kwmonroe> have a good night Prabakaran!
<kwmonroe> mbruzek: wasn't there a time when lp:~name/charms/<series>/<charm>/[trunk | layer] was a thing?
<mbruzek> kwmonroe: I am not clear on that.
<kwmonroe> so the project was still /charms/, with layered source in the ./layer branch and built charm in ./trunk
<mbruzek> I am worried about people confusing .../charms/ with __charm__ code and not layer code
<kwmonroe> ack.. it's a bit moot here anyway since that won't work for multi-series charms.  i think <series> was required in the /charms/ project.
<mbruzek> maybe my concern is unfounded, but I have had to explain a few things like this recently
<mbruzek> kwmonroe: It seems like we have discussed this before
<kwmonroe> yeah mbruzek, who's the expert here?  seems clear that you and i don't know what we're doing, and i loathe going round-and-round with you.
<mbruzek> kwmonroe: I am trying to remember _when_ we talked about this last.
<mbruzek> kwmonroe: I loathe you too buddy
<kwmonroe> mbruzek: it would have been right around the time ibm switched to layered charms, so early 2016.  and i'm fairly sure we went with ./layer vs ./trunk for the branch name precisely because we couldn't find a common root project to replace charms and we didn't want every charm to be a new project.
<mbruzek> kwmonroe: It looks like you created lp:~ibmcharmers/layer-ibm-base/trunk on 03-23-2016 and modified it on 06-02-2016
<cory_fu> tvansteenburgh just showed me ngrok.io for letting other people access services run with local provider and it's the coolest thing I've seen all week
<cory_fu> Just wanted to mention it here, in case it's new to anyone else
<magicaltrout> not month though
<magicaltrout> can't be that great ;)
<magicaltrout> hmm
<magicaltrout> interesting
<bdx> oh the possibilities
<bdx> cory_fu, tvansteenburgh: thanks for sharing
#juju 2016-08-09
<bjf> i'm trying to bootstrap a juju 2.0 controller behind a firewall. how do i specify a proxy to use?
<bjf> the bootstrap is stuck trying to get the tools (i think)
<thumper> bjf: there is config for proxies, http_proxy, https_proxy, ftp_proxy no_proxy
<thumper> etc
<thumper> also apt_http_proxy
 * thumper looks for docs
<thumper> bjf: https://jujucharms.com/docs/2.0/juju-misc#configure-proxy-access
<bjf> thumper, juju 2.0 doesn't recognize "juju set-env"
<bjf> thumper, if i set that in the environments.yaml file do i set it under "environments:" or under the next level "maas:" ?
<thumper> bjf: set-model now
<thumper> bjf: also there is no environments.yaml for 2.0 now
<thumper> bjf: but clouds, accounts etc
<bjf> thumper: so that doc is pretty much completely wrong :-)
<thumper> wallyworld: do you know where the docs are now?
<thumper> bjf: damn...
<bjf> thumper, meh, it's fast moving .. i'm sure it will get fixed up
<bjf> thumper, so, i have a fairly recent 2.0 install on this system, never a 1.x and yet there is a .juju/environments.yaml file
<thumper> hmm...
<thumper> weird
<bjf> thumper, there is no set-model either
 * thumper looks
<thumper> set-model-config
<thumper> `juju help commands` is helpful
<bjf> thumper, ok, lots more there than i thought
<bjf> thumper, that can only be used after i already have a controller
<thumper> bjf: look at `juju help bootstrap`
<thumper> you need to pass in some config yaml
<bjf> thumper, and that config.yaml can just be name=value pairs?
<thumper> look like --config=somefile.yaml works
<thumper> and in there you have you keys and values, like:
<thumper> default-series: xenial
<thumper> apt-http-proxy: http://10.250.171.1:8000
<thumper> etc
<bjf> thumper, ack, thanks, will give it a try
<thumper> kk
<thumper> let us know how it goes
<bjf> will do
<bjf> bootstrapping .. will take a while
<bjf> thumper, i must not have it right. it's been sitting at "Fetching tools" for some time
<bjf> my config file:
<bjf> $ cat juju-bootstrap-config.yaml
<bjf> http-proxy: http://squid.internal:3128
<bjf> https-proxy: http://squid.internal:3128
<bjf>  
<bjf> the command line: juju bootstrap kernel-controller kernel --constraints tags=juju-controller --debug --config=~/juju-bootstrap-config.yaml
<thumper> hmm....
<thumper> wallyworld: thoughts ^^^^?
<wallyworld> um
<bjf> thumper, ok, may be my problem
<wallyworld> looks ok, assuming there's a custom cloud called kernel
<wallyworld> if the proxies are not working as expected, then tools will not be fetched
<bjf> wallyworld, thumper yeah, i think i don't have the proxy right .. debugging
<bjf> thumper, wallyworld thanks for the help. got it bootstrapped. had nothing to do with the proxy, maas config issue i hadn't noticed before
<thumper> bjf: sweet
<wallyworld> great
<kjackal> Hello Juju World!
<babbageclunk> Morning!
<lazyPower> Mornin #juju o/
<magicaltrout> ahoyu
<magicaltrout> -u
<lazyPower> ohaiyu magicaltrout
<lazyPower> speaking of... We still need to do that sync
<magicaltrout> yeah we do
<magicaltrout> next week would actually be good because i'll be post op eye surgery and pretty incapacitated but with a bunch of spare time for talking
<lazyPower> oh!
<lazyPower> i scheduled for friday, any specific day next week? I'm happy to move
<magicaltrout> yeah friday is actually op day, so that wont be happening ;)
<magicaltrout> tuesday or wednesday would be good
<lazyPower> lol! how dare you take care of yourself
<lazyPower> same approx time work? I wanted to catch the UK/US overlap as best i could
<magicaltrout> 3pm UK would suit
<lazyPower> Updated for Tuesday, 8/16
<lazyPower> thanks tom :)
<magicaltrout> no probs
<magicaltrout> should be good
<lazyPower> just lmk if you need to resched. eye surgery... oi
<lazyPower> i've thought about going down that path to get rid of the spectacles... i'm still iffy about having someone cut on my eye
<magicaltrout> should be 5 days unable to see properly cause i have to wear protective contacts post op because of a thin cornea or something
<magicaltrout> so i have a bit more recovery time and annoyance, but should be good to get rid of the glasses
<lazyPower> I'm going to live vicariously through you, and then goad myself into doing the same
<magicaltrout> they said the op takes about 30 seconds an eye?!
<lazyPower> assuming you dont grow eye gremlins
<lazyPower> if you grow eye gremlins all bets are off
<magicaltrout> lol
<magicaltrout> yeah well i might go blind
<magicaltrout> you never know
<lazyPower> i sincerely hope thats not the case
<magicaltrout> hehe
<lazyPower> i'm sure you'd pick up TTD pretty quick, but i digress
<lazyPower> i'd rather not test the state of accessability in the world
<magicaltrout> well everyone i spoke to has said it was the best thing they've had done
<magicaltrout> so I figured what the heck
<lazyPower> Looking forward to derailing the meeting over the download from the experience :)
<lazyPower> #destroyallmeetings
<magicaltrout> hehe
<magicaltrout> dynamic UID and GID on containers where root access isn't allowed
<magicaltrout> in docker
<magicaltrout> what a bloody hack
<lazyPower> yup
<lazyPower> the new security paradigm in docker confuses me
<magicaltrout> we have a bunch of NFS backed containers and the UID/GID for dev/staging/prod have to be different which is fair enough, so I had a semi hack with gosu in place
<magicaltrout> but that doesn't stop exec /bin/bash giving you a root shell
<magicaltrout> which makes security folks sad
<magicaltrout> admittedly the person would already have broken into your host, but thats beside the point
<magicaltrout> so now I have a hack that creates a user in the docker file, does its stuff, then in the entrypoint sudo's(sadface) modifies itself to set the new UID/GID, chowns a bunch of stuff then removes its own root access.....
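The entrypoint dance magicaltrout describes can be sketched roughly like this. All names here (`appuser`, `APP_UID`/`APP_GID`, the mount paths) are hypothetical, and the script is only written out and syntax-checked rather than run, since it needs root inside the container:

```shell
# Sketch of the pattern: start as root, align the app user's UID/GID with
# whatever the NFS export for this environment expects, chown the data,
# then permanently drop root before handing off to the real process.
cat > /tmp/entrypoint-sketch.sh <<'EOF'
#!/bin/sh
set -e
# APP_UID / APP_GID come in via the environment and differ per dev/staging/prod
usermod  -u "${APP_UID}" appuser
groupmod -g "${APP_GID}" appuser
chown -R appuser:appuser /srv/app /mnt/nfs-data
# remove the escape hatch so a later `docker exec` can't trivially regain root
rm -f /etc/sudoers
# hand off to the unprivileged user (gosu avoids su's extra process/TTY quirks)
exec gosu appuser "$@"
EOF
chmod +x /tmp/entrypoint-sketch.sh
sh -n /tmp/entrypoint-sketch.sh && echo "syntax ok"
```

It doesn't stop someone who already owns the host, as noted above, but it keeps `docker exec /bin/bash` from landing in a root shell.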
<lazyPower> https://www.minio.io/  -- neat
<D4RKS1D3> Hi people, we have in juju some funny command to change the public ip in one charm?
<lazyPower> D4RKS1D3 - not sure what you mean by "some funny command to change the public ip"
<lazyPower> are you asking if you can change the units public-address?
<D4RKS1D3> yes lazyPower
<lazyPower> D4RKS1D3 - i may be incorrect, but as i understand it, the public-address is auto-discovered by the agent, and is fed by metadata from your cloud. You could reasonably remote into the unit and manually change this in the agent config, but i dont know that I would advise doing that, as i'm not sure what unintended side effects it may have
<lazyPower> D4RKS1D3 i do believe that our openstack provider has the notion of a floating ip. and I would urge you to mail the list about that
<D4RKS1D3> I have my lxc working to run openstack
<D4RKS1D3> I mean this lxc public ip
<D4RKS1D3> these*
<balloons> rogpeppe, any concerns with migrating https://launchpad.net/gnuflag over to using git instead of bzr? I'm asking because we're encountering a bzr issue trying to build a juju snap using launchpad.
<balloons> rogpeppe, you can see the details here: https://launchpadlibrarian.net/277983664/buildlog_snap_ubuntu_xenial_amd64_juju_BUILDING.txt.gz
<rogpeppe> balloons: is that the only remaining bzr dependency?
<balloons> rogpeppe, if godeps could pull from a git source we would be a-ok. It's one of 2 remaining bzr depends. Interestingly enough the other is tomb, which I think could be updated to point to the git repo
<balloons> so, it may very well be the only one
<balloons> rogpeppe, it's interesting you still need a fork for the package after all these years
<rogpeppe> balloons: i'll take a look. i don't think there'll be any problem moving gnuflag.
<balloons> rogpeppe, awesome. I'm happy to make the change in the dependencies.tsv and do a PR once it's done, or you can propose
<rogpeppe> balloons: sure
<rogpeppe> balloons: and it looks like we could move the tomb dep to gopkg.in/tomb.v1 without probs too
<rogpeppe> balloons: i've created https://github.com/juju/gnuflag
<rogpeppe> balloons: have fun :)
<balloons> rogpeppe, ty! :-)
<beisner> tvansteenburgh, got a bit of a hot one here.  seeing test leaks (false passes) over this guy:  https://github.com/juju-solutions/bundletester/issues/54
<beisner> and a patch to accompany:  https://github.com/juju-solutions/bundletester/pull/55
<tvansteenburgh> beisner: merged
<beisner> tvansteenburgh, thx sir
<x58> wg 15
<bdx> kwmonroe: I'm going to be charming up a few grails apps over the next few days/weeks
<bdx> kwmonroe: I'll be making use of openjdk charm quite a bit it looks like :-)
<lazyPower> bdx - how did your ES/Kibana deployment go? get them all charm upgraded?
<bdx> errr .. making use of the openjdk *layer
<magicaltrout> don't trust anything written by a texan
<bjf> https://jujucharms.com/docs/2.0/authors-charm-writing seems to be completely out of date for JuJu 2.0 .. can anyone point me at more up-to-date docs for writing my first charm?
<bdx> lazyPower: I barked up that tree for a while ... the app is being sold to a customer .... our project managers don't think it makes sense for our devs to spend cycles there I guess
<bdx> at least I tried
<bdx> :-(
<lazyPower> bjf - wow you found a really crufty doc, have a look here
<lazyPower> https://jujucharms.com/docs/2.0/developer-getting-started
<bdx> magicaltrout: are you tex-ist?
<magicaltrout> bjf: https://jujucharms.com/docs/2.0/authors-charm-building
<kwmonroe> cool bdx!!  let me know if you run into things that you'd like.  eg, we could make jvm params configurable and pass them on the relation.. so charms that required java could do @when(java.ready), function start_me_up(), java {java.get_tuning} myJar
<lazyPower> bdx - no worries :) was just curious since we spent some time troubleshooting a really ancient tutorial
<lazyPower> er, s/tutorial/installation
<magicaltrout> no bdx just kwmonroe-ist ;)
<bjf> lazyPower, magicaltrout thanks
<lazyPower> magicaltrout - stop linking to the author docs, they're basically deprecated at this point.
<lazyPower> bjf - feedback / bugs / et-al welcome on the developer guide
<lazyPower> bjf - if you do encounter any head scratchers - https://github.com/juju/docs/issues
<magicaltrout> lazyPower: i just googled charm layer writing
<magicaltrout> and that was top
<magicaltrout> surely that one can't be that out of date
<lazyPower> i cannot change google :( i'm sadly not a panda
<bdx> magicaltrout: hes going to get you for that one
<bdx> like the boogieman
<magicaltrout> at a scan it looks current lazyPower what am I missing?
<lazyPower> magicaltrout - it sure can, the author docs were the developer guide before we went through and re-wrote them as the dev guide. They linger for purposes unknown to me
<magicaltrout> bdx: he's too lazy to get any body
<bjf> magicaltrout, i got that by googling "juju charm tutorial"
<kwmonroe> i feel like i should get somebody.. but i'm just gonna stay on the couch instead.
<magicaltrout> lazyPower: you say that but
<magicaltrout> https://jujucharms.com/docs/2.0/developer-layers
<magicaltrout> then you click on the Getting started link at the bottom
<magicaltrout> and it 404s :)
<lazyPower> because it has a hard coded /devel link in there
 * lazyPower sighs
 * lazyPower resigns
<magicaltrout> hehe
<kwmonroe> heh
<magicaltrout> adios lazyPower ! ;)
<kwmonroe> noooooes
<magicaltrout> got my talk switched so i'm not clashing now lazyPower
<magicaltrout> amsterdam on the 31st
<magicaltrout> london on the 1st
<magicaltrout> eye op on the 12th
<magicaltrout> what could go wrong?! ;)
<lazyPower> you could remind me that we have more work to do in the docs thanks to decisions that were made outside of anyone else knowing?
<lazyPower> some cowboy out there owes us an explanation
<magicaltrout> don't forget you have more  work to do on the docs due to decisions that were made outside of anyone else knowing
<magicaltrout> I think some cowboy out there owes you an explanation
<magicaltrout> kwmonroe: ?
 * lazyPower sets off explosions and walks away
<lazyPower> cool guys never watch the explosion
<magicaltrout> hehe
<lazyPower> they're too busy walking away from it
<magicaltrout> i've seen that in films
<magicaltrout> so it must be true
<kwmonroe> anyone know how to rewrite jujucharms.com/docs commit history?  asking for a friend.
<magicaltrout> hehe
<magicaltrout> here you go if you want to see a treat http://pastebin.com/F7mP83AB
<magicaltrout> the biggest hack ever to get the security i need in my docker entrypoint
<magicaltrout> jesus its taken all afternoon to figure out the correct order of hackage
<lazyPower> kwmonroe - first time caller, long time listener -  I hear its as simple as git rebase -i revno && git push upstream master --force
<kwmonroe> heh magicaltrout, "/bin/rm -rf /etc/sudoers", what could possibly go wrong?
<kwmonroe> i'm gonna giggle when /etc/sudoers is a symlink to /
<magicaltrout> lol i know
<magicaltrout> well i can clearly refine parts, but as I don't want sudo access any more
<magicaltrout> i'm just like "f-it, blow it away"
<lazyPower> magicaltrout  - <<jackie face meme>>
<lazyPower> y u do dis
<kwmonroe> yeah magicaltrout, you not only said "f-it", you said "recursively f-it"
<magicaltrout> recursion is my life
<kwmonroe> you should totally do "rm -rf /${SUDOERS_FILE} || rm -rf /etc/sudoers".  for security.  make sure you get that leading slash in there.
<magicaltrout> i think this is more  security by obscurity
 * magicaltrout doesn't do what kwmonroe says anyway :P
<kwmonroe> anyway bdx, sorry for this trout spam, but i am really curious about your interactions with the java interface and openjdk layer.  we could go really nuts with (making jvm params configurable, emitting a changed state so principal charms know they need to restart to get new jvm bits, etc).  i feel it's a bit silly atm because it's just "install java... and done i guess."
<magicaltrout> you promised that when i said i was using jdk charm
<magicaltrout> never happened ;)
<kwmonroe> i remember magicaltrout.  but you were just learning python, so i didn't want to make things too complicated.  bdx can handle it.  :P
<magicaltrout> hehe
<kwmonroe> you were all like "pyth... what comes next?"  and i was all like "on dude, just type o-n!"
<magicaltrout> sad times
<kwmonroe> :)
<bdx> kwmonroe: I think the elastic stack services could greatly benefit from those mods
<bdx> kwmonroe: for example elasticsearch - could greatly benefit from the configurability of JAVA_OPTS
<bdx> ES_HEAP_SIZE=10g
<bdx> that currently defaults to 2g
<kwmonroe> glad you think so bdx, because that entitles me to ask you questions about the implementation.  so.. if a principal service is running, would you (as the operator) be upset if some admin changed an openjdk config value and it restarted your service?
<kwmonroe> or would you rather that simply trigger a status update that sets the openjdk status to "settings changed, please restart the principal service"?
<kwmonroe> because, for example, restarting datanodes in a big data environment just because the heap size changed may not be a good idea for in-flight stuff.
<kwmonroe> on second thought.. it would be up to the principal charm author to decide how to react to java states.. so i guess it's not a big deal.  java should tell/emit states and let whomever cares about it to deal with it.
<magicaltrout> they could check the config change
<magicaltrout> and see if its worth a reboot
<kwmonroe> yup, and the "worth" determination would be presented as a status change.. so the juju status would report "jdk settings changed; do something (or not)".  i think i like that.
<kwmonroe> still, that's not on openjdk (or any java provider) to determine that.  the principals that consume java would wire that in.
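The "signal, don't restart" policy kwmonroe lands on can be sketched as a toy: the java provider only raises a state, and the principal decides whether a restart is worth it. Everything here (the stub `status_set`, the heap/in-flight parameters) is hypothetical illustration, not the openjdk charm's actual code:

```python
# Toy sketch: the java layer signals a settings change; the principal charm
# owns the policy of restarting now, deferring, or ignoring it.

status_log = []

def status_set(level, message):
    # Stand-in for the status-set hook tool / hookenv.status_set
    status_log.append((level, message))

def on_java_settings_changed(old_heap, new_heap, inflight_jobs):
    """Principal's policy: restart only when the change matters and it's safe."""
    if old_heap == new_heap:
        return 'noop'
    if inflight_jobs:
        # e.g. a datanode with in-flight work: surface it, don't bounce it
        status_set('active', 'jdk settings changed; restart when idle')
        return 'deferred'
    return 'restart'

assert on_java_settings_changed('2g', '2g', inflight_jobs=0) == 'noop'
assert on_java_settings_changed('2g', '10g', inflight_jobs=3) == 'deferred'
assert on_java_settings_changed('2g', '10g', inflight_jobs=0) == 'restart'
```

The java provider stays dumb; each principal wires its own answer to the same signal, which is exactly the division of labor described above.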
<bdx> kwmonroe: so, people are writing java web apps because they want the security of the strongly typed/compiled app (majorly) right
<bdx> kwmonroe: I would say putting each app in a silo would be optimal
<bdx> kwmonroe: or each unit of an application
<bdx> I see how that makes implementing shared javaops difficult though
<kwmonroe> yeah bdx, but we can't do each unit of an app.. or even each app in a silo.  if you have ES, tomcat, and some big data stuff all related to the 'openjdk' charm, and you update the openjdk config, that's going to update for *all* apps related to java.
<kwmonroe> which is not great, i agree.  you'd almost need to deploy openjdk as "es-jdk" and another for "bigdata-jdk" and another for "foo-jdk" and tweak the config as needed for each of those to get the silo'd config.
<bdx> kwmonroe: yea .. I see that ... I think using it as a layer seems more reasonable in that scenario
<bdx> but also slightly defeats the purpose
<kwmonroe> yeah, interesting.. i hadn't considered the java layer being a base layer for specific java apps.
<kwmonroe> i think i see what page you're on now.  note, we're not on the *same* page, but at least i see yours over there. ;)
<bdx> yes!
<bdx> someone understands me
<bdx> baha
<bdx> it's lonely over here at creativedrive .... surrounded by php and rails devs
<kwmonroe> haha
<bdx> im the only python guy in the company
<bdx> bringing it
<kwmonroe> +1 bdx, i have faith in you!
<bdx> kwmonroe: thx, I did too ... until I realized what one of our software projects was
<bdx> -> https://www.morpheusdata.com/
<mayurisapre> hello everyone..
<mayurisapre> i am writing a charm and need help for the same
<kwmonroe> mayurisapre: is this related to https://askubuntu.com/questions/808638/juju-charm-how-to-get-ip-addresses-of-all-units-in-a-service-in-a-charm-hook-i/810156#810156?
<mayurisapre> yes
<mayurisapre> hi kwmonroe..
<mayurisapre> i read your answer, but still facing the same issue
<kwmonroe> mayurisapre: relation-list -r {id} should be returning all units in that relation. where are you calling relation-list?
<mayurisapre> i am calling it from myCharm
<mayurisapre> relation-changed hook
<kwmonroe> mayurisapre: what's the relation name that you're calling?
<kwmonroe> mayurisapre: relation-ids {name} <-- what's {name}?
<mayurisapre> halbaas
<bdx> icey, kwmonroe: possibly you could hear me out on layer-consul
<bdx> https://github.com/jamesbeedy/layer-consul
<mayurisapre> I have also tried this with mysql and wordpress
<mayurisapre> with db relation
<mayurisapre> in that case as well i got the same result
<bdx> icey, kwmonroe: I imagine layer-consul could/should be used for the base layer for charm-consul, and subordinate charm-consul-agent
<bdx> the primary differences in charm-consul and charm-consul-agent is https://github.com/jamesbeedy/layer-consul/blob/master/templates/consul.json.tmpl#L12
<bdx> would be true for consul, and false for consul-agent
<bdx> consul-agent wouldn't need to do any custom config on consul-relation-joined, it would be the consul server(s) that would run the `consul join <consul-agent-ip-address>` on consul-agent-relation-joined
<bdx> right?
<bdx> here, let me finish whipping this together, then I'll ping you
<kwmonroe> mayurisapre: i deployed mysql and 3 wordpress charms and got this: http://paste.ubuntu.com/22850696/
<kwmonroe> mayurisapre: the important bit is that last relation-list call that shows 3 values that correspond to the 3 wordpress units connected to the mysql charm
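The shape of the answer to the askubuntu question, as a self-contained sketch. The canned data stands in for the real `relation-ids`, `relation-list` and `relation-get` hook tools, which a real charm would shell out to:

```python
# Toy model of collecting every related unit's private-address from a hook.
# In a real hook context these three helpers would invoke the hook tools;
# here they return fake data so the flow is visible end to end.

FAKE_RELATIONS = {
    'db:1': {
        'wordpress/0': '10.0.0.11',
        'wordpress/1': '10.0.0.12',
        'wordpress/2': '10.0.0.13',
    },
}

def relation_ids(name):
    # relation-ids {name}  -> e.g. ['db:1']
    return [rid for rid in FAKE_RELATIONS if rid.startswith(name + ':')]

def relation_list(rid):
    # relation-list -r {rid}  -> all remote units on that relation id
    return sorted(FAKE_RELATIONS[rid])

def relation_get(rid, unit, key='private-address'):
    # relation-get -r {rid} {key} {unit}
    return FAKE_RELATIONS[rid][unit]

def all_unit_addresses(relation_name):
    """Walk every relation id, then every unit on it, collecting addresses."""
    addrs = {}
    for rid in relation_ids(relation_name):
        for unit in relation_list(rid):
            addrs[unit] = relation_get(rid, unit)
    return addrs

assert all_unit_addresses('db') == {
    'wordpress/0': '10.0.0.11',
    'wordpress/1': '10.0.0.12',
    'wordpress/2': '10.0.0.13',
}
```

The key point from the paste above is the two-step walk: `relation-ids` gives the relation id(s) for a name, and only `relation-list -r` on each id enumerates all connected units, regardless of which unit's hook happens to be firing.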
<mayurisapre> this is juju 2.0 right?
<kwmonroe> correct
<mayurisapre> i am currently using 1.25
<mayurisapre> do you think this is causing the issue?
<kwmonroe> i don't think so mayurisapre, but i'll deploy on 1.25 and check
<kwmonroe> bdx: is charm-consul a principal and charm-consul-agent a subordinate?
<kwmonroe> bdx: if not, could you use leadership so the leader in a multi consul deployment set server: true and the rest false?
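kwmonroe's leadership idea can be sketched like this: only the elected leader renders `"server": true`, mirroring the field in consul.json.tmpl. The `is_leader` flag stands in for the real leadership check (e.g. charmhelpers' `hookenv.is_leader`), and the config shape is a guess at what the template would render:

```python
import json

def render_consul_config(is_leader, bootstrap_expect=1):
    # Hypothetical sketch: the leader runs in server mode, everyone else
    # renders an agent config. Field names mirror consul.json.tmpl loosely.
    return json.dumps({
        'server': bool(is_leader),
        'bootstrap_expect': bootstrap_expect if is_leader else 0,
    }, sort_keys=True)

leader_cfg = json.loads(render_consul_config(is_leader=True))
follower_cfg = json.loads(render_consul_config(is_leader=False))
assert leader_cfg['server'] is True
assert follower_cfg['server'] is False
```

That would let one charm cover both roles from a shared base layer, instead of maintaining a separate consul and consul-agent charm that differ on a single template line.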
<kwmonroe> mayurisapre: what does this return for you? juju run --unit mysql/0 'relation-ids db'
<mayurisapre> i just tried it again
<mayurisapre> with juju run --unit mysql/0 'relation-ids db' it listed all three unit
<mayurisapre> but in debug mode
<mayurisapre> in db-relation-changed it listed just 1 unit
<kwmonroe> mayurisapre: how are you invoking debug mode?
<mayurisapre> with debug-hooks command
<kwmonroe> mayurisapre: how are you calling debug-hooks?  juju debug-hooks mysql/0?
<mayurisapre> yes
<kwmonroe> mayurisapre: when you do that, the db-relation-changed hook will fire once for each connected unit.  so it's possible you're only seeing the 1st wordpress unit.
<mayurisapre> but if I don't use debug mode even then in execution i get same results..
<mayurisapre> i checked that from logs
<kwmonroe> mayurisapre: if "juju run --unit mysql/0 'relation-list -r db:x'" is showing multiple results (one for each connected unit), then it's working as it should
<kwmonroe> in a debug-hooks terminal, you'd have to run ./hooks/db-relation-changed, then exit, then let the debug-hook proceed to the next db-relation-changed hook for a subsequent unit
<mayurisapre> yes..things are clear to me now.
<mayurisapre> thanks a lot for your time.
<mayurisapre> i really appreciate it.
<kwmonroe> np mayurisapre, i'm glad you got it figured out!  let us know if you have any other questions.
<mayurisapre> yes. It was really helpful.
<mayurisapre> thanks a lot again.
#juju 2016-08-10
<kjackal> Hello Juju World!
<xnox> Hello, i'm on xenial and i have juju-2.0 beta14
<xnox> i'm trying to bootstrap local / lxd provider
<xnox> and that fails with a tools info mismatch error
<xnox> 2.0-alpha1-xenial-amd64, 2.0-alpha2-xenial-amd64
<xnox> what am I supposed to do?
<lazyPower> xnox - does it work correctly if you issue the following:  juju bootstrap lxd-test lxd --upload-tools ?
<lazyPower> you *shouldn't* need to do that, as we've updated the streams, but i'm unsure of what actually failed without log output
<arosales> fyi, jujuchcharms.com firewall is being updated if folks notice any intermittent deploy failures from the store or jujucharms.com unavailable
<petevg> @cory_fu: you're right. Grabbing the patch from the merged commit is cleaner: https://github.com/juju-solutions/layer-apache-bigtop-base/pull/37
<kwmonroe> suchvenu: are you still having trouble with charm build?
<suchvenu> yes
<kwmonroe> suchvenu: what does your layer.yaml look like?
<suchvenu> http://pastebin.ubuntu.com/22927683/
<kwmonroe> suchvenu: looks ok to me: http://paste.ubuntu.com/22928082/
<suchvenu> ya.. it used to work !
<kwmonroe> suchvenu: will you pastebin the output from "charm build -l DEBUG --no-local-layers"?
<suchvenu> kwmonroe : I edited some things under deployable charm folder under trusty and running the test... Will do this once that is over
<xnox> lazyPower, juju bootstrap --config agent-stream=devel localhost localhost -> works
<xnox> lazyPower, my guess is that tools in "released" stream are not up to date with latest beta of 2.0
<lazyPower> xnox - interesting... i know beta-14 has published tools. It may have been due to the upgrade of the firewall, i'm not entirely certain though.
<lazyPower> we had some intermittent service outages this morning, that may have been the culprit
<xnox> lazyPower, ok, when i rebootstrap the environment next, hopefully things will work fine. thanks =)
<lazyPower> kwmonroe mbruzek tvansteenburgh aisrael - https://github.com/juju-solutions/charmbox/pull/45   -- this affects you if you're actively using charmbox on a regular basis
<cmars> do we have an influxdb charm that's any good?
<cmars> hmm, https://jujucharms.com/u/chris.macnaughton/influxdb/4 looks the most recent, i'll try it out
<kwmonroe> lazyPower: mbruzek: nice work (PR 45)!
<lazyPower> thanks kwmonroe
<lazyPower> fyi - https://github.com/juju-solutions/charmbox/pull/48  -- since it rewrote history it was never going to cleanly apply
<lazyPower> i went ahead and pushed it up and confirmed that the branches are now a 2 line diff
<cory_fu> lazyPower: Three lines if you count whitespace.  ;)  https://github.com/juju-solutions/charmbox/compare/devel
<lazyPower> oooo
<lazyPower> busteddd
<cory_fu> But yeah, it looks awesome
<cory_fu> Now we just have to keep it nice and clean
<lazyPower> cory_fu - thanks for starting this
<lazyPower> i was inspired by your work back in march
<cory_fu> :)
<bdx> marcoceppi: reminder to clear my stale aws controllers
<marcoceppi> bdx: are they still there?
<marcoceppi> I thought I did
<bdx> http://paste.ubuntu.com/22954686/
<bdx> marcoceppi: ^
<marcoceppi> bdx: cleaning up
<bdx> thx
<cholcombe> anyone else seeing a large number of errors about failed downloads and sha mismatches in their debug logs?
<cholcombe> i'm using juju 2 beta14 and i'm seeing tons of them
<magicaltrout> there is a  firewall  upgrade today cholcombe
<cholcombe> magicaltrout, ahh that'd do it haha
<magicaltrout> aye
<cholcombe> so i take it i'm sol for awhile?
<magicaltrout> i think the  general  consensus is keep trying
<cholcombe> ok
<lazyPower> cholcombe - additionally that happens with charms across models when they don't match the revision on the controller
<lazyPower> i ran into this last week
<lazyPower> its fix committed, but not released yet
<cholcombe> lazyPower, dang i really need that fix.  it's happening constantly :(
<cholcombe> all i'm doing is saying deploy what i have locally and it freaks out
<lazyPower> cholcombe - try juju upgrade-charm until it sticks
<cholcombe> ok
<lazyPower> thats the current work-around
<cholcombe> well at least it's something
<cholcombe> lazyPower, thanks!  I can test again :)
<lazyPower> nice
 * lazyPower hattips
 * magicaltrout isn't convinced that's one word
#juju 2016-08-11
 * lazyPower sticks his finger in magicaltrout's ear
<Murf> hi all, can juju talk directly to a dockerd api or via the dockerd socket? I would like to use it but don't want to do it with cloud provider accounts etc, just want it to orchestrate my local existing docker environment (sort of like the way rancher connects to dockerd)
<kjackal> Hello Juju World!
<randlema1> ello
<lazyPower> Murf - juju does not have any notion of talking directly to the docker socket, no
<lazyPower> Murf - what you can do, is either use KVM and provision those, or with some manual hacking (as simple as applying a profile) you can use lxd.
<rick_h_> Murf: it's an interesting idea, but we've always had a problem in that juju needs flexibility to work the way it does.
<rick_h_> Murf: to respond to relations, scale out, etc. While to do that with docker you have to rebuild images with different config/properties/etc
<magicaltrout> docker makes me want to cry currently :)
<webmichael> anyone had a dying charm they can't destroy? the lxc is gone. even after a restart juju still shows life: dying
<webmichael> destroyed the service first
<lazyPower> webmichael - i have encountered this before yes, it's typically when an extant relationship is in error state.
<jcastro> balloons: your email says you pushed beta14 to the snap store but I'm still on beta13, snap refresh shows no changes
<balloons> jcastro, and you are attempting to install juju?
<balloons> snap install juju --beta --devmode?
<jcastro> I have it installed already
<aisrael> any known issues with MAAS 1.9 and Juju 2 beta 14?
<aisrael> particularly with machines failing to provision?
<jcastro> balloons: oh, iirc you didn't publish in the beta channel before right?
<balloons> jcastro, you might have juju-nskaggs installed
<jcastro> I do
<balloons> and right, I pushed that one to stable
<balloons> there's an actual juju package now, not my namespaced version
<jcastro> ok so what do I need to do to get on the right track?
<balloons> snap install juju --beta --devmode
<jcastro> do I remove juju-nskaggs?
<balloons> it's not needed no
<balloons> it's my own personal juju I can rev and play with
<jcastro> ok
<jcastro> alright so I can now just ignore the ppa version and use this modulo bugs right?
<jcastro> balloons: ok one last thing, shouldn't /snap/bin/juju take precedence over /usr/bin/juju?
<jcastro> I've installed it but I think I am missing something wrt the path
<balloons> jcastro, by design it does not
<jcastro> ok so I need to explicitly remove the old package then?
<balloons> you can change your PATH if you want snaps first, but they are second by default
<balloons> you can run /snap/bin/juju if you wish
<balloons> if you don't want competition for `juju`, uninstall the debian packages
 * jcastro nods, uninstalls
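The precedence balloons describes comes down to PATH ordering: whichever directory appears first wins. A small self-contained demo of that rule, using temp directories as stand-ins for /snap/bin and /usr/bin (the fake "juju" scripts here are illustrative, not real binaries):

```shell
#!/bin/sh
# Demonstrate PATH precedence with two stand-in "juju" commands.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/snapbin" "$tmp/usrbin"
printf '#!/bin/sh\necho snap-juju\n' > "$tmp/snapbin/juju"
printf '#!/bin/sh\necho deb-juju\n'  > "$tmp/usrbin/juju"
chmod +x "$tmp/snapbin/juju" "$tmp/usrbin/juju"

# Default-style ordering: the deb dir comes first, so it wins.
PATH="$tmp/usrbin:$tmp/snapbin" juju     # prints deb-juju

# Put the snap dir first (or uninstall the deb) and the snap wins.
PATH="$tmp/snapbin:$tmp/usrbin" juju     # prints snap-juju

rm -rf "$tmp"
```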
<jcastro> I will push for the future with you
<jcastro> balloons: hey so tldr, when can we start telling people to use it this way, aka. what's left to not need --devmode?
<balloons> jcastro, :-). Your questions should be answered here; https://github.com/juju/juju#building-juju-as-a-snap-package
<balloons> it's upstreamed now :-)
<jcastro> hah, of course, bash completion
<jcastro> because I just got bash completion
<jcastro> the balloons giveth, the balloons taketh away.
<jcastro> oh, that's for ubuntu-core
<balloons> jcastro, bash completion is something that is not likely anytime soon from a snappy perspective. It's inherently insecure
<bjf> i'm trying to "juju deploy ubuntu --to fozzie.maas" and nothing appears to be happening. i'm using juju 2.0 and maas 2.0 is my "cloud".
<balloons> jcastro, ohh lookey there. We need a better bug, that's a wishlist
<bjf> juju says it is deploying the charm but i can see the system is not deployed
<bjf> also, juju debug-log is showing no output at all
<balloons> jcastro, as far as telling people things, I'm glad you mentioned running juju on gentoo, arch, etc. There's no reason it shouldn't "just work". People can and should adopt snaps with 2.0 - that means now
<balloons> jcastro, please push adoption; I'm happy to work on any issues you find. --devmode is acceptable for now, especially on classic systems. Nothing to grimace about imho
<petevg> marcoceppi: ping. Specifically, I'm pinging you about wordpress, which you are listed as the maintainer for. I reviewed and approved https://code.launchpad.net/~jamesbeedy/charms/trusty/wordpress/apache2_trusty_fix/+merge/297720, but the charm still lives in the ~charmers namespace.
<petevg> Did you have plans for moving it elsewhere?
<petevg> Also, do we have a ticket open about the failing tests?
<cory_fu> marcoceppi: Also, on an unrelated note, can we query the store for https://jujucharms.com/zookeeper/trusty/1 to verify that the only use of it is https://jujucharms.com/openstack-midonet-liberty/bundle/0/ ?
<cory_fu> (actual deploys, not just relations)
<magicaltrout> how acceptable is it to reply to people trying to sell you "lists", with a simple one line reply "F*** off"
<lazyPower> magicaltrout - *waves hand* these are not the scumbags you are looking for
<magicaltrout> well they clog up my inbox
<magicaltrout> i think i've had 4 today
<jcastro> magicaltrout: heya what's the title of your talk at mesoscon?
<magicaltrout> jcastro: http://sched.co/7n8J
<jcastro> thanks
<magicaltrout> thats not up to date though jcastro so it'll be on wednesday
<magicaltrout> and pentaho london meetup @ bluefin on thursday
<magicaltrout> busy week :P
<jcastro> \o/
<marcoceppi> cory_fu: we should be able to
<marcoceppi> petevg: I'll take it over
<cholcombe> lazyPower, did the consul agent relation get removed ?
<lazyPower> wut?
<mbruzek> cholcombe: I don't think we removed it
<cholcombe> mbruzek, hmm ok.
<cholcombe> mbruzek, trying to figure out why i can't relate vault to it
<mbruzek> and you could relate them before?
<cholcombe> it has a consul:consul-agent relation
<cholcombe> yeah
<cholcombe> but this was prob 6 months ago
<lazyPower> cholcombe - were you using icey's consul charm at the time?
<cholcombe> lazyPower, i don't think he has a consul charm
<icey> lazyPower: I've never published a consul charm
<icey> maybe I should have
<lazyPower> "never published" - that doesn't mean you weren't working on one
 * lazyPower raises an eyebrow
<mbruzek> cholcombe: This was a charm written originally by hazmat
<icey> lazyPower: I was poking at a fork of the consul one
<mbruzek> cholcombe: https://github.com/mbruzek/consul-charm
<lazyPower> in either case... https://github.com/ChrisMacNaughton/consul-charm/blob/master/metadata.yaml
<lazyPower> neither have a consul-agent relation
<mbruzek> cholcombe: I checked the history of metadata.yaml; I see no such consul-agent relation
<cholcombe> :(
<cholcombe> alright maybe i'm smoking crack.  i swear this worked before haha
<mbruzek> https://github.com/ChrisMacNaughton/juju-interface-consul
<mbruzek> That one has a relation
<mbruzek> cholcombe: So you were not using the promulgated consul charm 6 months ago.
<cholcombe> icey, tsk tsk haha
<icey> heh it's an interface, not a charm ;-P
<icey> bdx: could you talk to cholcombe about your work on consul + vault?
<cholcombe> if you add juju-gui last after adding all your other applications (services) can it not find any of them?
#juju 2016-08-12
<kjackal> Hello Juju World!
<jamespage> tvansteenburgh, hey - I was just trying out juju-deployer with juju 2.0 and some of our openstack bundles in oct
<jamespage> hit upon this
<jamespage> 2016-08-12 08:35:31 [ERROR] deployer.env: Command (juju deploy -m jamespage:default --constraints mem=1G --series xenial xenial/ceilometer ceilometer) Output:
<jamespage>  ERROR path "xenial/ceilometer" can not be a relative path
<jamespage> i've fixed locally by ensuring that abspath is used - but I'm a little unfamiliar with the codebase so suspect my change will break everything else!
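The fix jamespage describes amounts to normalizing the charm path before handing it to `juju deploy`, since juju 2.0 rejects relative local-charm paths like "xenial/ceilometer". A sketch of that idea (a hypothetical helper, not the actual juju-deployer code):

```python
import os

def charm_deploy_path(path):
    """Return a path juju 2.x will accept for a local charm.

    juju 2.0 errors on relative paths ('path "xenial/ceilometer" can not
    be a relative path'), so expand to an absolute path first.
    Hypothetical helper for illustration only.
    """
    if os.path.isabs(path):
        return path
    return os.path.abspath(path)

print(charm_deploy_path("xenial/ceilometer"))  # e.g. /home/user/oct/xenial/ceilometer
```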
<kjackal> Hey, is there something wrong with the zookeeper charm on the store? juju deploy zookeeper fails here while  juju deploy cs:trusty/zookeeper-1 is fine!
<kjackal> For trusty ^
<MonsieurBon> Hi all
<MonsieurBon> I have deployed neutron-gateway and connected it to rabbitmq-server but still it shows 'Missing relations: messaging'. What am I missing?
<MonsieurBon> I have tried both relation types amqp and amqp-nova. Same behaviour
<shruthima> Hello Team We have pushed IBM-HTTP charm for review , but it is not reflecting in the review queue. Bug-link: https://bugs.launchpad.net/charms/+bug/1612535 Please suggest if anything we are missing.
<mup> Bug #1612535: New Charm: IBM HTTP Server <Juju Charms Collection:New> <https://launchpad.net/bugs/1612535>
<aisrael> Anyone seen this in beta 15?
<aisrael> ERROR unknown object type "ModelConfig" (not implemented)
<aisrael> after trying to deploy a charm or bundle
<lazyPower> juju add-unit hello-juju-world
<lazyPower> aisrael - checking now, any specific charm or is this anything you attempt to deploy?
<aisrael> any charm, but also juju upgrade-juju
<valeech> Good morning charmers. I have never written a charm and have decent experience with shell scripting. I do have a good amount of experience with other languages. My question is, would attending the juju summit be worthwhile?
<aisrael> beta13, after upgrading to 15
<lazyPower> aisrael - are you attempting to upgrade a beta-14 controller ot beta-15?
<aisrael> I'm re-bootstrapping now
<lazyPower> aisrael - upgrades aren't supported :(
<lazyPower> thats likely the culprit
<aisrael> lazyPower: well, that'd be the problem then
<marcoceppi> valeech: absolutely
<marcoceppi> valeech: in addition to presentations from the community, we have charm experts you can pair up with who know bash, python, etc to work on a charm with
<shruthima> ya we are facing ERROR unknown object type "ModelConfig" (not implemented) issue in beta 14
<valeech> marcoceppi: Great! I just don't want to be that guy who shows up and holds everybody back during the labs because he hasn't met the prerequisites. We've all experienced that :)
<shruthima> Hello Team We have pushed IBM-HTTP charm for review , but it is not reflecting in the review queue. Bug-link: https://bugs.launchpad.net/charms/+bug/1612535 Please suggest if anything we are missing.
<mup> Bug #1612535: New Charm: IBM HTTP Server <Juju Charms Collection:New> <https://launchpad.net/bugs/1612535>
<marcoceppi> valeech: understood, thanks for checking! you won't be holding anyone back :)
<tvansteenburgh> jamespage: it won't break anything, abspath is required for local charms when using deployer with juju2
<MonsieurBon> Hi, if I can't get glance to work with swift using juju, is this the right channel to ask a question about that?
<xnox> charms are hard.
<xnox> is there an example of a subordinate charm, which is minimal.
<xnox> looking at nrpe charm... it's 60 files big.
<kjackal> xnox: openjdk seems simple, https://github.com/juju-solutions/layer-openjdk
<kjackal> xnox: here is the charm from the store https://jujucharms.com/openjdk/
<xnox> kjackal, that's much better! thanks. And sort of, what i'm trying to do.
<xnox> just provision generic blobs, onto any instance/machine/charm.
<kjackal> nice!
<xnox> what's this? https://github.com/juju-solutions/layer-openjdk/blob/master/layer.yaml
<xnox> reading https://jujucharms.com/docs/1.25/authors-charm-building
<marcoceppi> xnox: this is probably a better starting poing: https://jujucharms.com/docs/stable/developer-getting-started
<xnox> marcoceppi, what version is "stable" ?
<xnox> is that 1.25 or 2.0?
<marcoceppi> 2.0, but charm development doesn't change between versions
<xnox> that's odd, 2.0 is not stable
<marcoceppi> xnox: again, charm development and juju development are two different things
<marcoceppi> you can use the latest charm development guide which will make charms that work on 1.25 or 2.0
<xnox> ok
<xnox> *sigh*
<xnox> charm build -> bzr ERROR: Not a branch
<xnox> ... convert my charm from git -> bzr
<xnox> charm build -> fatal: Not a git repository
<xnox> so my layer/charm should be.... both?!
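For reference, a minimal layer.yaml of the kind xnox is after might look like the following (the repo URL and layer name are hypothetical; `layer:basic` is the usual base layer, and the `repo:` key mentioned later in this exchange quiets charm-tools' VCS detection):

```yaml
# layer.yaml - minimal, hypothetical example for a small charm layer
includes:
  - 'layer:basic'
repo: 'https://github.com/example/layer-my-subordinate'
```

For a subordinate charm specifically, the `subordinate: true` flag and a container-scoped relation live in metadata.yaml, not in layer.yaml.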
<Guest_84847> Allah is doing
<Guest_84847> sun is not doing Allah is doing
<Guest_84847> moon is not doing Allah is doing
<cholcombe> can someone ban ^^
<D4RKS1D3> Any admin in the room?
<cholcombe> apparently no admins are here :(
<cholcombe> marcoceppi, do you have admin powers?
<marcoceppi> cholcombe: no, I don't
<D4RKS1D3> is not chanserv in this server?
<lazyPower> niemeyer - ping
<niemeyer> lazyPower: Hi
<lazyPower> niemeyer - can we get some whitelisted admins on mup?
<lazyPower> it would be nice to have channel guardians when you're not looking
<niemeyer> lazyPower: What would it do?
<lazyPower> there's a fair bit of spam up above, this is the fourth time in 4 months
<lazyPower> niemeyer - well, say we !warn user, and !ban user. help police the channel from spam bots
<lazyPower> it seems like a nicer way to manage delegation of concerns to well known and respected members of the community to help self police
<niemeyer> lazyPower: That's a nice idea
<lazyPower> xnox - that seems wrong, can you get me a paste with the output of charm build and links to your layers?
<niemeyer> lazyPower: We'll need a new plugin.. doesn't feel too complex
<lazyPower> niemeyer - i'm happy to collab/help on that. we need to do something if this keeps up :( we're getting targeted for spam sporadically
<niemeyer> lazyPower: Have you considered using chanserv?
<niemeyer> lazyPower: It has that sort of delegation built in
<lazyPower> well, i cant? I'm not a room op?
<niemeyer> lazyPower: Well, the point is precisely to have some of those
<lazyPower> niemeyer - who would i petition to recommend room ops?
<lazyPower> afaik you're the only one i'm aware of. i'm sure you're not the *only* one, but you're the only one to me <3
 * niemeyer hugs lazyPower
<niemeyer> lazyPower: So, I don't know either.. who are the most active people here in the channel, which tend to be around responding to passers by?
<lazyPower> myself, magicaltrout, kwmonroe , mbruzek, marcoceppi   stand out in my mind
<lazyPower> jose is around sporadically
<jose> me what?
<lazyPower> jose a special little turtle :)
<mbruzek> hi
<jose> niemeyer: actually, what is mup running on? supybot?
<jose> if so, you'd need to give channel access to mup and then user access to us
<niemeyer> jose: https://github.com/go-mup/mup/
<niemeyer> jose: I don't think we need any bots for this.. delegation of permissions is builtin into the freenode infra
<jose> I assume it has channel moderation features?
<jose> yeah
<jose> I mean, some prefer to do it via a bot, some prefer to do it via chanserv
<jose> as you wish
<xnox> lazyPower, so i had a bug in my layer.yaml
<xnox> but with either .git or .bzr repository, one gets a warning / error about the other repo type.
<xnox> I don't have "repo:" key in my layer.yaml
<xnox> my layer is private. I can reproduce something public later.
<lazyPower> xnox - ok, it would be good to get a bug filed. those error messages look less than informative, and a bug would let us get that patched
<xnox> ack.
<niemeyer> Hmm.. the kickban support doesn't seem to have been compiled into Freenode's chanserv :/
<mbruzek> xnox: https://github.com/juju/charm-tools/issues is the place to file that bug
<niemeyer> lazyPower: See if it's working for you.. try to /msg chanserv flags #juju niemeyer!*@* +b
<lazyPower>  /msg chanserv flags #juju niemeyer!*@* +b
<lazyPower> doh
<niemeyer> :)
<lazyPower> [11:28:49] -ChanServ-	You are not authorized to execute this command.
<lazyPower> do i need to logout/back in with nickserv?
<niemeyer> No, I don't think so
<lazyPower> didn't think so either, but its been a bit since i've had channel responsibilities ;)
<niemeyer> lazyPower: Just to be sure, try to do that on someone else.. it might be preventing on me specifically
<lazyPower> same when i attempt against mbruzek
 * mbruzek waves
<mbruzek> Are you trying to kick me?
<lazyPower> well, trying to ban to be specific.
<niemeyer> lazyPower: Ok, unfortunate, but it's what I expected.. freenode's chanserv didn't have the kickban support compiled in
<niemeyer> lazyPower: mup might be the better option
<niemeyer> lazyPower: I'll have a look at that, if someone else doesn't get there before I do
<lazyPower> ok, thanks niemeyer for taking a look
<niemeyer> I can't promise to do that soon enough, though, so here is your immediate solution ^
<mgz> niemeyer: thanks
<mgz> niemeyer: can I beg for some in #juju-dev as well?
<niemeyer> mgz: done
<lazyPower> sgtm, thanks niemeyer
<niemeyer> Added thumper as well, so there's coverage on the other side
<marcoceppi> niemeyer: thanks
<jose> to all Canonical people around. can we please avoid the use of paste.canonical.com for public bug reports? makes it hard to read/understand.
<jose> jcastro: ^
<mgz> ideally bug reports should have log files uploaded, rather than linked, anyway
<mgz> the other issue is our CI bug reports often have private links in, which is sadly required but should have context beyond that
<mayurisapre> hello everyone..
<mayurisapre> what is the good way to share any variable between hooks in a particular charm?
<mskalka> mayurisapre, there is: using relations
<mskalka> however if you want to access relation data outside of a relation hook there's some special tomfoolery involved
<mayurisapre> i want to share a variable between charms install, stop and relation-* hooks
<mskalka> mayurisapre, check this page out: https://jujucharms.com/docs/stable/authors-relations
<mskalka> there's also a good page with the relation-get and set info there, let me see if I can dig it up
<mskalka> here we go: https://jujucharms.com/docs/stable/authors-hook-environment
<mayurisapre> relation-get /relation-set hook tools can share variable between 2 charms which have relation
<mayurisapre> but i want to share data within a single charm
<mskalka> ahhh, misinterpreted your question then
<mayurisapre> between say install hook and stop hook
<mskalka> that I have no idea about, sorry :/
<mayurisapre> ohh okay..
<mayurisapre> np
<lazyPower> mskalka - unitdata
<lazyPower> 1 sec getting a link
<lazyPower> mskalka mayurisapre  - https://pythonhosted.org/charmhelpers/api/charmhelpers.core.unitdata.html
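The unitdata module lazyPower links is essentially a small key-value store persisted in a sqlite database on the unit, which is why values survive from the install hook to later hooks. A self-contained sketch of the same pattern using only the standard library (this mimics the idea, not the real charmhelpers API):

```python
import json
import sqlite3

class KV:
    """Tiny key-value store, similar in spirit to charmhelpers'
    unitdata.kv(). With a real file path, values persist across separate
    hook invocations; ":memory:" is used here only for the demo."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def set(self, key, value):
        # JSON-encode so structured values round-trip cleanly.
        self.db.execute("REPLACE INTO kv (k, v) VALUES (?, ?)",
                        (key, json.dumps(value)))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else default

# The install hook stores a value...
store = KV()
store.set("db-password", "s3cret")
# ...and a later hook (stop, relation-changed, etc.) reads it back.
print(store.get("db-password"))  # s3cret
```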
<mayurisapre> hey thanks..
<mayurisapre> this will help me..
<lazyPower> mayurisapre - no problem :) let me know if there's anything else
#juju 2016-08-13
<Murf> lazyPower, rick_h_ thanks for the answers re juju/docker, I will have a play with kvm
<opennode> Hi! I'm not able to get network spaces working with the MAAS 2.0rc4 provider and Juju 2.0beta12 - LXD containers always get an address from 10.0.0.0, no matter what I specify for --bind. I'm wondering if I'm doing something wrong, or it's a bug, or there are some undocumented configuration changes needed… Can anyone shed a light on this?
#juju 2016-08-14
<opennode> HI! Is multiple nics/networks scenario supported with juju 1.25.6 and MaaS provider - for unit LXC container deployment?
#juju 2017-08-07
<jamespage> tvansteenburgh: hey [  i
<ak_dev> kjackal: kjackal_ : Here is the final bundle (should work except for some of the pods restarting)
<ak_dev> https://usercontent.irccloud-cdn.com/file/BuWwI0Ic/bundle.yaml
<ak_dev> as discussed here https://github.com/AakashKT/ovn-kubernetes-charm/issues/3 , it is an OVN issue
<kjackal_> thanks ak_dev will give it a look tomorrow
<ak_dev> kjackal_: yeah sure :-)
<ak_dev> good night
#juju 2017-08-08
<D4RKS1D3> Hi, I want to know why when I put the command "juju add-machine <machinenameinmaas>" is too slow
<D4RKS1D3> when I had the old version of juju it worked much quicker!!
<D4RKS1D3> Any suggestion?
<kwmonroe> cory_fu: i seem to recall you having an issue with travis.. was it by chance calling proof?  i'm not sure what i'm missing here:  https://travis-ci.org/juju-solutions/layer-apache-bigtop-base/builds/262305210?utm_source=github_status&utm_medium=notification
<kwmonroe> in that scenario, my travis.yml is calling make sysdeps lint, which works fine until it gets to proof on line 24: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/9fca22db0e3f77921dcfa5058c3402d32e5ea1cb/Makefile#L19
<cory_fu> kwmonroe: I saw that error with the charm-tools snap when it was unconfined because there was a version conflict between the library provided by the system and one of the CT deps
<cory_fu> kwmonroe: It looks like sysdeps isn't using the snapped version of charm-tools
<kwmonroe> cory_fu: how'd you fix it?
<cory_fu> The charm-tools snap is now confined, so it doesn't care what libs are installed at the system level
<kwmonroe> cory_fu: "now confined" as in the edge version?
<cory_fu> I'd recommend switching to the charm-tools snap
<cory_fu> Pretty sure it's released to stable
<cory_fu> Let me check
<kwmonroe> hm, i got this when i tried to snap install the snap without --classic:  https://travis-ci.org/juju-solutions/layer-apache-bigtop-base/builds/262285702?utm_source=github_status&utm_medium=notification
<cory_fu> Hrm.  Yeah, apparently it wasn't released to stable
<kwmonroe> oooooh.. charm-tools vs charm?  i didn't know there were 2 things
<cory_fu> No, it should just be charm, it seems
<kwmonroe> ack, charm edge ftw.
<cory_fu> marcoceppi: What's the process for moving that up to stable?
<kwmonroe> marcoceppi: you in charge of charm snaps moving....
<kwmonroe> :)
<cory_fu> heh
<marcoceppi> cory_fu: which revision?
<cory_fu> marcoceppi: 17 is currently in edge, and that's the fully confined one we came up with in Warsaw
<cory_fu> stable is currently on 15
<marcoceppi> cory_fu: does it currently correspond to a release in the charm-tools repo? or is it all snap fixes?
<cory_fu> Just snap changes, as far as I can recall
<cory_fu> It does double the size of the snap, though, unfortunately
<cory_fu> We could investigate paring it back down
<marcoceppi> cory_fu: I just moved it to candidate to bake for a day or two
<cory_fu> k
<hloeung> marcoceppi, cory_fu: so is the charm snap the recommended path forward now, with the deb package no longer supported?
<hloeung> marcoceppi, cory_fu: I'm using the snap (currently 15) but just thought I'd ask
<marcoceppi> hloeung: yes, the deb is being deprecated
<hloeung> marcoceppi, cory_fu: also, any ideas if there will be a bundletester snap?
<marcoceppi> hloeung: good question, cory_fu would be best to field that question
<marcoceppi> well, cory_fu and tvansteenburgh
<hloeung> ah thanks. It's just a bit of work to get an environment set up for charming. "Install some of these packages", "install some snaps", "pip install bundletester"
#juju 2017-08-09
<aisrael> Have there been any notable changes to `juju debug-log`? If I run it with --tail, it will sometimes exit on its own.
<kwmonroe> aisrael: i've noticed that too, but it only seems to happen when tailing a log from a jaas controller.  is that the same for you?
<kwmonroe> might also be that debug-log uses an ssh connection that shuts down after x mins of inactivity based on client config.  i haven't dug too deeply into it yet.
<kwmonroe> ^^ speculating on the ssh bits there.. i really don't know how debug-log gets its data
<aisrael> kwmonroe: Nope, this is a local controller. I haven't poked around the controller machine's logs yet, but that'll probably be the next step
<Zic> hi here, I have my kubernetes-worker in "Error", and when I looked at "juju debug-log", I saw this:
<Zic> unit-kubernetes-worker-1: 16:37:21 ERROR juju.worker.dependency "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-kubernetes-worker-1/charm: open /var/lib/juju/agents/unit-kubernetes-worker-1/charm/metadata.yaml: too many open files
<Zic> I can "cat" this file on the kubernetes-worker-1 unit
<Zic> so I don't understand why it returns a "too many open files" :(
<kwmonroe> Zic: you running on lxd?  i've seen "too many open files" on large(ish) lxd deployments and have fixed them with this guide:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
<kwmonroe> specifically the limits/sysctl changes ^^
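The limits/sysctl changes kwmonroe points to are plain config-file edits on the LXD host. A sketch of the relevant fragments (values taken from the linked LXD production-setup guide; verify them against the current version of that doc before applying):

```
# /etc/sysctl.conf -- raise inotify limits for many containers
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576

# /etc/security/limits.conf -- raise the open-file limit behind
# "too many open files"
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
```

Apply the sysctl changes with `sysctl -p`; the limits changes take effect on a fresh login session.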
#juju 2017-08-10
<D4RKS1D3> hi, i am trying to deploy machines massively in one cluster but it is not working in parallel
<D4RKS1D3> Does someone know the reason?
<D4RKS1D3> Could it be changed somewhere?
<magicaltrout> hello folks if i want a purely manually provisioned setup
<magicaltrout> what do i bootstrap?
<rick_h> magicaltrout: https://jujucharms.com/docs/2.2/clouds-manual
<magicaltrout> yeah just found it
<magicaltrout> thanks
<rick_h> magicaltrout: <3
<magicaltrout> whats the deal with k8s and gpus?
<magicaltrout> do i need to do anything special these days?
<rick_h> magicaltrout: hmm, think there's a tutorial for that.
<rick_h> magicaltrout: oh hmm, it's a link from the tutorial to sam's blog post https://medium.com/intuitionmachine/how-we-commoditized-gpus-for-kubernetes-7131f3e9231f
<magicaltrout> yeah thats old school though
<magicaltrout> i bumped in to chuck in IAD a few months ago who said run the edge snaps
<magicaltrout> but i think they are mainline now
<rick_h> magicaltrout: doh, maybe marcoceppi or tv<tab dammit not in the room> have more info
<marcoceppi> magicaltrout: don't need to do much but just deploy
<marcoceppi> 1.7.0 (and 1.6.0) will support 'em out of the box
<magicaltrout> fair enough, i have a cluster coming up, i'll deploy a test container and see what happens thanks
<magicaltrout> rick_h: did you get my response re thursdays vs tuesdays?
<rick_h> magicaltrout: yes, checking with other folks that want to be in there about their schedules
<magicaltrout> ah right
<magicaltrout> no probs
<magicaltrout> i got my openldap snap basics done today so i'm hoping to get that into a charm and connected to our ranger snap pretty sharpish to complete our base deployment
<magicaltrout> I don't think we have too much more work to get through, although juggling with about 1000 other things means it just ebbs and flows a bit
<jhobbs> is there a way to use maas system_id instead of hostname as a placement directive with juju bootstrap? juju bootstrap maas2 --to <system_id> doesn't work as it expects a hostname
<jhobbs> which is a bit awkward as maas doesn't use hostnames as primary keys for nodes, it uses system ids
<magicaltrout> hey SaMnCo, good blog, the US Government is currently mining coins on your behalf! ;)
<SaMnCo> magicaltrout: wait for the upcoming one then...
<SaMnCo> Upcoming next week or so, I just need to rebuild my cluster post move
<SaMnCo> Almost done :)
<magicaltrout> nice
<magicaltrout> I got asked to test GPU support via openstack on K8S, your blog was top of my google ;)
<wpk> jhobbs: apparently not, file a bug please (maybe --to @instanceid)?
<jhobbs> wpk thanks, bug filed https://bugs.launchpad.net/juju/+bug/1709995
<mup> Bug #1709995: Can't use system id as a placement directive for MAAS <cdo-qa> <juju:New> <https://launchpad.net/bugs/1709995>
<wpk> jhobbs: I'd do it but I'm on my vacations and I promised myself not to work ;)
<jhobbs> wpk: aww get out! thanks for looking into it, though :)
#juju 2017-08-11
<braziercustoms> Ubuntu 16.04 conjure-up. I keep getting a failure when it gets to glance..  I have searched and cannot find a solution anywhere. http://paste.ubuntu.com/25287893/
<ak_dev> kjackal: kjackal_ : https://jujucharms.com/u/aakashkt/kubernetes-ovn/
<ak_dev> Thanks for all your help :-)
<ak_dev> oh yeah, do tell me how I can fix the diagram on that page
<kjackal_> hi ak_dev to fix the diagram you will need to specify the exact position of each service in the graph. This is done in an annotations section in bundle.yaml. Have a look here: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
<magicaltrout> hey kjackal_ i've not had time to delve back into it yet
<magicaltrout> but i did notice https://docs.google.com/document/d/1z55a7tLZFoRWVuUxz1FZwgxkHeugtc2nHR89skFXSpU/edit# lxc hit the mesos working group roadmap
<kjackal_> Wow! This is awesome magicaltrout. Not sure what the timetable for that is. What does "it is in our roadmap" mean?
<kjackal_> I see it is probably towards the end of the year
<magicaltrout> they have a bunch of volunteers on the working group meetings, I doubt LXC comes anywhere high up the list of important stuff, but at least it got added
<magicaltrout> I'll swing by one of those meetings at some point to delve further
<kjackal_> magicaltrout: just pinged this student that is interested in working on this. Lets see if he is going to come forward and volunteer
<magicaltrout> cool, i should have some bandwidth from late september going forward
<magicaltrout> to help
<magicaltrout> you gonna be in NYC kjackal_ ?
<kjackal_> magicaltrout: yeap I will be seeing you there!
<magicaltrout> cool
<ak_dev> kjackal_: hey, thanks, will have a look at it
<ak_dev> kjackal_: the connection are done the same way? via annotations?
<kjackal_> ak_dev: the connections are inferred by the relations among charms
<ak_dev> kjackal_: oh
<braziercustoms> Ubuntu 16.04 conjure-up. I keep getting a failure when it gets to glance.. it fails to get the charm from the charm store.. see error: http://paste.ubuntu.com/25287893/
<ak_dev> kjackal_: it does not seem to pick up the right relations then, since it is way different from what is shown
<kjackal_> ak_dev: It could be that some lines overlap
<braziercustoms> Anyone can help at all? I'm kinda new to this and want to understand better but I can't get it to work properly.. http://paste.ubuntu.com/25287893/
<ak_dev> kjackal_: thanks, its done now :-)
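For reference, the mechanism kjackal_ points at is a per-application `annotations` block in bundle.yaml; the `gui-x`/`gui-y` values position each application in the diagram. A sketch only — the application names and coordinates below are illustrative, not taken from ak_dev's bundle:

```yaml
# bundle.yaml fragment: diagram placement via annotations.
# gui-x / gui-y are pixel-style coordinates, quoted as strings.
applications:
  kubernetes-master:
    charm: cs:~containers/kubernetes-master
    num_units: 1
    annotations:
      gui-x: "800"
      gui-y: "850"
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker
    num_units: 1
    annotations:
      gui-x: "100"
      gui-y: "850"
```

The linked canonical-kubernetes bundle.yaml shows the same structure at full scale.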
<rahworkx> notice an issue when removing a user on version 2.2.2.. after i remove user and attempt to add the exact same username it fails with message "ERROR failed to create user: username unavailable"
<stokachu> cory_fu: braziercustoms is running into http://paste.ubuntu.com/25287893/
<stokachu> braziercustoms: can you `pastebinit ~/.local/share/juju/bootstrap-config.yaml`
<cory_fu> braziercustoms: Is outbound network access from the openstack-novalxd deployment restricted?  Specifically, it looks like the controller can't connect to api.jujucharms.com.
<braziercustoms> @stokachu sure @cory_fu it started with a fresh ubuntu install.  I installed xubuntu desktop and openssh.  The only thing I changed before conjure-up was giving the machine a static ip on my network.
<braziercustoms1> @stokachu http://paste.ubuntu.com/25292705/
<stokachu> braziercustoms1: is this for all charms or just a few of them?
<braziercustoms1> @stokachu It lists a different charm each time I try to conjure-up
<cory_fu> I imagine it's just reporting the first one it happens to try to deploy, with the order being undetermined
<braziercustoms> @cory_fu that's what I was thinking.. but I will say if you don't quit when it pops up the install appears to go on and I end up with running machines
<cory_fu> Very strange
<cory_fu> stokachu: Is it possible that there's some delay before the lxd bridge is ready?
<stokachu> yea it's possible
<braziercustoms> @cory_fu @stokachu am I the only one with this problem? Why me?  Is it my hardware? I'm playing on a dell poweredge 6950.  16 cores 64GB ram. SAS drives..
<stokachu> that error is not related to hardware
<stokachu> braziercustoms: i dont really have an answer as to why this is happening atm, i would need to do some more digging
<braziercustoms> @stokachu anything I can do to help?
<stokachu> braziercustoms: i guess open an issue at https://github.com/conjure-up/conjure-up/issues/new and make sure to provide the data requested in the template
<stokachu> so i can try to reproduce
<braziercustoms> @stokachu ok.
<aisrael> Hey, anyone run into an issue where packages in wheelhouse/ aren't being installed by a reactive charm? End result is hook/install fails because charmhelpers isn't installed.
<BrazierCustoms> @stokachu I think its a complete issue with the bridge. I just tried the --edge version and it asked which interface to create the bridge for and the release version did not..
<BrazierCustoms> so maybe the release version never created the bridge?
#juju 2017-08-12
<BrazierCustoms> @stokachu Thank you! and I'm sorry, I tried the --edge version days ago because you had directed it to me before, but it did the same thing. this was a week or so ago.
<BrazierCustoms> not days
<BrazierCustoms> idk the days were starting to run together.
#juju 2018-08-06
<veebers> wallyworld: fyi https://github.com/juju/charmstore-client/pull/175
<wallyworld> veebers: not sure about the extra dep introduced
<veebers> wallyworld: as per pr comment, it's a transient dep from charmstore
<wallyworld> ok
<wallyworld> normally if not needed to build we can ignore, eg if it's just a charmstore test dep
<veebers> ack, it was needed, the build failed without it
<thumper> https://github.com/juju/worker/pull/5
<thumper> anyone...
 * anastasiamac looking excitedly \o/
 * anastasiamac stopped looking :(
<thumper> why?
<anastasiamac> thumper: well, i liked the code but without the tests and sample output m scared to +1 :) commented on PR
<jam> thumper: testing that I'm connected
<thumper> jam: you are
<babbageclunk> thumper: not that I'm looking because I should be looking at wallyworld's humongodiff, but it seems weird/upside-down to be implementing a juju/juju interface (worker/dependency.Reporter) in juju/worker? What about exposing some way to iterate over the workers instead?
<thumper> It doesn't import it, it just mentions it
<babbageclunk> sure, but still
<thumper> and I'm planning to move all that code into the worker package
<thumper> catacomb, dependency and some common workers
<babbageclunk> ok, if dependency.Reporter lives there then I'm happy.
<thumper> it will...
<thumper> I'm looking at removing the tomb.v1 dep in worker package
<kelvinliu> thumper, thanks for reviewing the PR, good point for the win os issue, just updated the PR, would you mind taking a look again?
<thumper> it is used in the tests
<thumper> kelvinliu: ok
<thumper> is there a doc somewhere about what needs to change for tomb.v1 -> tomb.v2?
<thumper> veebers: do you remember the tomb changes?
<veebers> thumper: there is no doc I don't think. I recall the changes
<thumper> I've worked it out I think
<thumper> but thanks :)
<veebers> thumper: sorry was on a BS errand :-\
<thumper> all good
<thumper> anastasiamac, babbageclunk test added
 * thumper is done for today
<icey> I've noticed a problem with containers on 2.4.1 represented on: https://pastebin.ubuntu.com/p/bHmBydJ4Hx/ ; basically, "Container 'juju-5099ed-1-lxd-1' already exists"
<icey> ah, looks like it's fix-committed: https://bugs.launchpad.net/juju/2.4/+bug/1779897
<icey> unfortunately, in the edge snap, I can't launch a container with: `0/lxd/0  down                pending  bionic           missing Cancel not valid`
<magicaltrout> if anyone has any good ideas
<magicaltrout> er irc fail
<stickupkid> does anybody know how this has made its way through go vet (and the CI)?
<stickupkid> caas/kubernetes/provider/k8s.go:1495: NotValidf format %d has arg v of wrong type k8s.io/apimachinery/pkg/api/resource.Quantity
<veebers> stickupkid: if it's the same error babbageclunk saw, I think he cleared out his pkg/ dir and it wasn't triggered again (although I think it is an actual error?)
<stickupkid> veebers: k, i'll try that
<stickupkid> veebers: the %d is totally wrong though, it should be %v
<stickupkid> veebers: that worked, for some reason `make clean` didn't fix it
<veebers> stickupkid: odd. I wonder why having stuff in pkg/ triggers it but not a clean run (even though it's a legit error)
<stickupkid> veebers: yeah, i don't really know
<stickupkid> thought I'd ease back in and fix it, the error message doesn't make sense anyway https://github.com/juju/juju/pull/9022
<stickupkid> manadart: got a second?
<manadart> stickupkid: Yep.
<stickupkid> HO?
<rick_h_> stickupkid: after you settle your email and such let me know so we can sync up please
<stickupkid> rick_h: now?
<rick_h_> stickupkid: sure
<rick_h_> stickupkid: meet you in the standup HO
<manadart> stickupkid: Gave you a bit of a bum steer earlier. I was thinking host rather than series.
<manadart> You just need to change this: https://github.com/juju/juju/blob/832d4e53b51fa10a902a2376b03f776bd2b813af/juju/version/version.go#L11
<manadart> ^ ... thinking *arch* rather than series.
<stickupkid> manadart: if I change that, i wonder what would break :S
<rick_h_> stickupkid: bwuahahaha, everything! :P
<stickupkid> rick_h_: exactly, I think I'll get anastasiamac to review that, feels dangerous
<rick_h_> stickupkid: :)
<stickupkid> i'll do some testing locally, see what's the damage
<rick_h_> stickupkid: make sure to give it a few tries from a couple of different series/etc. Also check local lxd vs a cloud for instance
<rick_h_> stickupkid: +1
<manadart> externalreality: Got a sec for a HO?
<rick_h_> hml: ok, second try at working that up. At least raises the idea of an optional-hint as close to the start of the line as I can get it
<hml> rick_h_: sorry, i'm not following, lost context
<rick_h_> hml: the wording for the CA cert question in add-cloud
<hml> rick_h_: ah, ty!
<rick_h_> hml: and looks like jsonschema supports file: paths via uri, but relative paths can be tricky
<hml> ack
<rick_h_> hml: but tracking google cloud credential use is probably best bet there anyway
<hml> rick_h_: unfortunately add-credential works very differently from add-cloud
<hml> rick_h_: much easier to validate when you read in the entire thing, rather than query the user for each piece
<hml> :-/
<rick_h_> hml: ? I don't follow
<hml> rick_h_: add-cred doesn't use pollster, which is causing the "fun" with add-cloud
<rick_h_> hml: ah crap
<hml> rick_h_: there might be a piece i can use, checking it out
<knobby> does anyone know of an example bundle that uses storage? I'm trying to make a ceph bundle that uses storage to allocate ebs from aws. I have a deploy line that works, but I want to put this into a bundle.yaml
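Bundles do accept a per-application `storage` map mirroring the `juju deploy --storage` constraint syntax (`<pool>,<size>,<count>`). A hedged sketch of the ceph-on-EBS case knobby describes — charm names, sizes, and counts are illustrative:

```yaml
# bundle.yaml fragment: EBS-backed OSD volumes on AWS.
# The pool/size/count values are examples, not recommendations.
applications:
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    storage:
      osd-devices: ebs,32G,1
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
relations:
  - ["ceph-osd:mon", "ceph-mon:osd"]
```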
<veebers> Morning all o/
<hml> morning veebers
<anastasiamac> veebers: hml: o/
<hml> anastasiamac: o/
<veebers> Hows things hml, anastasiamac o/
<anastasiamac> stickupkid: saw ur PR (9023) will take a look today... i would have wanted to also change some tests that ensure that we r bootstrapping on the latest lts (which is now bionic and was xenial before)
<hml> veebers: not bad, you?
<veebers> can't complain too much :-) A bit cold this morning, but that's what the fire is for
<anastasiamac> hml: saw u had some mad weather... r u k?
<anastasiamac> veebers: u have live fire? (m so jealous!)
<hml> anastasiamac: yeah.  just went by, didn't cause damage at my house luckily
<anastasiamac> hml: oh good :D did ur cats sense it was coming?
<hml> anastasiamac: i don't think so, my girls don't seem to be bothered by storms.  though some friends have a dog who is very frightened by them.  the dog even managed to get behind the washing machine one time, no idea how she managed that
<anastasiamac> hml: :)
<veebers> anastasiamac: well, it's contained in a fireplace but yes :-)
<anastasiamac> veebers: niiice
<veebers> hml: oh wow, glad you're all ok and the house is fine!
<hml> veebers: all good.
<thumper> babbageclunk, anastasiamac: updated... https://github.com/juju/worker/pull/5/files
<babbageclunk> thumper: sorry, just reviewing something else, I'll look at this next
<anastasiamac> thumper: looking too
<anastasiamac> thumper: and un-looking since babbageclunk won that race
<babbageclunk> (:
<anastasiamac> babbageclunk doesn't like winning races?..
 * anastasiamac likes when babbageclunk wins review races :)
<babbageclunk> (:
#juju 2018-08-07
<wallyworld> anastasiamac: here's the huge PR https://github.com/juju/juju/pull/9024
 * anastasiamac squints at it
<babbageclunk> anastasiamac: aw man, swap?
<anastasiamac> babbageclunk: nuh
 * babbageclunk sighs
<anastasiamac> also babbageclunk, i won that one as already lgtm'ed :)
 * babbageclunk snost and lost
<anastasiamac> tra-lah-lah
<babbageclunk> wallyworld: reviewed 9006
<wallyworld> awesome ty
<wallyworld> will look after these cmr bugs
<thumper> babbageclunk: https://github.com/juju/juju/pull/9025, I need to go add a few tests to show things hooked up...
<thumper> wallyworld: ^^
<wallyworld> will look soon
<thumper> sure, np
<thumper> it still needs some changes
<babbageclunk> thumper: 1st sentence of the description seems like it's truncated?
<thumper> uh... yeah
<thumper> probably matches my thought process while typing
<thumper> babbageclunk: so writing the extra tests I felt that I really wanted the Reporter interface defined in the worker package, so ...https://github.com/juju/worker/pull/6
 * thumper wonders if the spam attack on freenode is done yet
<thumper> seems not
<wallyworld> babbageclunk: i just glanced at review, ty. quick comment - params.IsCodeNotFound(err) sadly does barf if err is nil :-(
<wallyworld> i found out the hard way
<thumper> wallyworld: https://github.com/juju/worker/pull/6 ??
<thumper> it is very small
<wallyworld> thumper: lgtm
<babbageclunk> wallyworld: oh stink! Weird, reading it it doesn't seem like it should. Maybe a typed nil is getting in there?
<anastasiamac> babbageclunk: wallyworld is not in the channel?...
<babbageclunk> ha, I started replying to him before he left
<babbageclunk> wallyworld: oh stink! Weird, reading it it doesn't seem like it should. Maybe a typed nil is getting in there?
<anastasiamac> :)
<wallyworld> babbageclunk: yeah, i think that's what it is
<babbageclunk> wallyworld: oh, probably because it's a *params.Error explicitly on the struct.
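babbageclunk's typed-nil guess is easy to reproduce: a nil `*SomeError` pointer stored in an `error` interface is not `== nil`, which is how a nil-looking error can still make a check like `params.IsCodeNotFound` barf. A self-contained illustration (`paramsError` is a stand-in type, not juju's actual `params.Error`):

```go
package main

import "fmt"

// paramsError stands in for a *params.Error-style field on a result struct.
type paramsError struct{ Code string }

func (e *paramsError) Error() string { return e.Code }

// call mimics an API result whose explicit error field is a nil pointer:
// returning it through the error interface yields (type=*paramsError, value=nil).
func call() error {
	var e *paramsError // typed nil pointer
	return e
}

func main() {
	err := call()
	fmt.Println(err == nil) // prints "false": the interface itself is not nil
}
```

The usual guard is to check the concrete pointer for nil before assigning it into the interface, rather than relying on `err == nil` afterwards.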
<thumper> I think this one is ready now... https://github.com/juju/juju/pull/9025
<wallyworld> looking
<thumper> wallyworld: there are even some worker import cleanups for ya
<wallyworld> i know!
<wallyworld> thumper: looks good to me, all seems fairly straight forward
<thumper> wallyworld: yeah, it isn't complex
<thumper> just joining dots
<wallyworld> yup
<thumper> wallyworld: this is needed to move catacomb, dependency, and workertest out of the juju repo into the worker.v1 https://github.com/juju/testing/pull/139
<wallyworld> ok
<wallyworld> lgtm
<thumper> where is the bot on this?
<thumper> wallyworld: https://github.com/juju/worker/pull/7/files
<thumper> this just moves the catacomb, dependency and workertest packages to worker.v1
<wallyworld> ok
<wallyworld> +4000!!!
<babbageclunk> lol
<wallyworld> thumper: did we really want to add the dep engine to the worker package. i guess it can go there. but i think maybe the intent was to keep the work package purely as a way to manage go routines nicely
<wallyworld> should the dep engine be a separate repo
<thumper> wallyworld: it is in a package
<thumper> wallyworld: the engine runs workers
<wallyworld> ok
<thumper> it is quite tightly connected
<wallyworld> lgtm
<stickupkid> anastasiamac: I'm working through the tests atm, but I was more wondering if you think there is any potential issues in changing this, that I might want to test ?
<anastasiamac> stickupkid: if u look for hardcoded tests with "xenial" in them, the test names will give u an indication of whether they now should be "bionic" :)
<drcode> hi all
<drcode> I have installed MAAS, does the juju charm have support for ubuntu 18.04?
<stickupkid> does anyone know how the provider tests work around uploading binary versions? bit of background, I'm bumping the LTS from xenial to bionic and it looks like the right versions are being uploaded, but i get an error message stating that there are no versions available...
<stickupkid> otherwise i'll start digging to see how the mock layer works... etc - this seems to hit aws-ec2 and joyent
<anastasiamac> stickupkid: commented on the PR but the info feels kind of obvious ;) u've got it under control
<stickupkid> anastasiamac: perfect - thank-you
<anastasiamac> stickupkid: as for the version q above, are u sure ur local copy gets uploaded?
<stickupkid> https://pastebin.canonical.com/p/cYsSJ2KGHw/
<stickupkid> just reading the test log it would seem so, but then again, i've just started digging into how the test is set up.
<anastasiamac> stickupkid: ah, ignore my question.. i thought u were testing live...
<stickupkid> live works, tests doesn't :p
<anastasiamac> stickupkid: this i can help u dig for :) gimme a sec
<anastasiamac> stickupkid: m still digging but my output from ec2 TestStartStop is different to urs - https://pastebin.ubuntu.com/p/Xqp4k73PmZ/
<anastasiamac> stickupkid: i have stuff related to availability zones and storage before image search is hit... did u change something else on ur system?.
<anastasiamac> stickupkid: also, i have played with new images coming into testing, but this looks like a place to make sure u have bionic equivalents for the xenial entries... https://github.com/juju/juju/tree/develop/environs/imagedownloads/testdata
<anastasiamac> stickupkid: or mayb not, as they seem to be only used for kvm and wrapped command testing...
<axino> bdx: hi
<axino> bdx: did you deploy cs:~omnivector/sentry recently ?
<stickupkid> anybody think I should spend the time on cleaning up LTS - https://github.com/juju/juju/pull/9023#issuecomment-411081657
<hml> stickupkid: clean up is good, it's hard to get to later and we'll have forgotten the pain by the next LTS.  my 2 cents.  :-)
<hml> rick_h_:  i think i found a flaw in the way the EnvVar are used in juju add-cloud, they donât get validated.  :-/
<stickupkid> hml: aren't those things validated in interactive mode?
<rick_h_> hml: they're not validated once they're loaded as the default values and sent through the schema work?
<hml> stickupkid: not if i stick the env var in as a default prompt value
<rick_h_> hml: is it because default values are not validated at all?
<hml> rick_h_: i think that's it
<rick_h_> well, file a bug I guess. that's a bummer
<hml> rick_h_: i'm still working on that code so i can fix it.
<hml> just annoying.
<hml> must be something with how default and defaultprompt work.
<rick_h_> hml: ok, if it's easy. otherwise let's try to move on.
<rick_h_> we've got 2.4.2 freeze friday and that's not the most important bug for 2.4.2
<hml> rick_h_: this code isn't going into 2.4 branch anyways
<rick_h_> hml: right, I just mean that the hope was to address some issues for 2.4.2 this week.
<hml> rick_h_: ack
<hml> wallyworld: ping
<wallyworld> hey, just about to have meeting, can we catch up in say 30 at release call?
<hml> wallyworld: sounds good
<veebers> wallyworld: I built your mariadb charm, but I get the error on deploy: from charms.layer.basic import pod_spec_set. I suspect I need to update that somehow? (I'm using the snap-installed charm command for that)
<wallyworld> veebers: in a meeting, but if you pull the latest source code it will work
<wallyworld> i updated the charms in my repo last night
<wallyworld> to use the latest layer stuff
<veebers> ack
<veebers> ah I see
<wallyworld> hml: rick_h_: i'm still in k8s meeting, will be late
<hml> wallyworld: release call is over, ping me when you're available pls
<thumper> wallyworld: I have a small review for you as well... https://github.com/juju/juju/pull/9032
 * wallyworld adds it to queue, still in k8s calls
<thumper> 395 files changed, +591 −4,900
<thumper> babbageclunk, veebers, hml, doesn't have to be wallyworld to review
<thumper> while large, the branch only touches imports, and only touches worker.v1 imports
<thumper> (and updates dependency.tsv for the imports)
 * veebers looks
<veebers> thumper: why is this going into 2.4, is it for a bug fix?
<thumper> it is just code cleanup, and I prefer to keep things clean???
<veebers> thumper: ack
<veebers> thumper: with the worker.v1 dep change, is there much difference with the latest and what we release in 2.4.1?
<veebers> looks like there is only a couple hours between shas in that diff, not sure off the top of my head which sha we released with
<wallyworld> veebers: can you join us https://hangouts.google.com/hangouts/_/canonical.com/weekly-caas
<veebers> wallyworld: omw
<thumper> veebers: it is exactly the same
<thumper> veebers: well, I took the packages from 2.4 branch and moved them to worker.v1
<thumper> then updated deps
<thumper> no code changes due to this move
<veebers> ack
<wallyworld> hml: jump back in release HO?
<hml> wallyworld: omw
<veebers> wallyworld: I can't trigger the deps error you saw, I do a godeps in juju develop, go to charmstore-client, make deps and make build and it's all hunkydory
<veebers> (this is me saying "builds for me" :-))
<cory_fu> wallyworld: Found the resource type validation spot.  :)
<wallyworld> yay
<veebers> cory_fu: nice!
<babbageclunk> thumper: hmm, I wonder where those 4000 lines of code went?
<thumper> babbageclunk: into the worker.v1 package
<thumper> repo
<thumper> babbageclunk: as a different pr yesterday
 * babbageclunk forgot sarcasm tag
<thumper> the sarcasterix
<thumper> sarcasm never works well over writtern communication
<thumper> some birds have worked out how to use our nectar feeder now
<thumper> hazaah
<babbageclunk> I can't believe unicode doesn't have the irony mark in it! https://en.wikipedia.org/wiki/Irony_punctuation
<babbageclunk> hmm, hang on, maybe it does
<babbageclunk> ⸮ boom ?
<kwmonroe> great babbageclunk, you've crashed my irc client.
<veebers> thumper: done
<babbageclunk> kwmonroe: sorry!
<veebers> hahah
<veebers> thumper: nice! You expecting a lot of Tui to hit it? I love it, our neighbors have a couple set up, we get so many Tui around
 * veebers watches his charm push crawl to a halt pushing a docker image
 * babbageclunk was tempted to make that sorry sarcastic but it seemed churlish.
<kwmonroe> no worries babbageclunk, i've been meaning to upgrade my quassel for a while.  this is great motivation :)  i also recently realized i can't send a comment with two hash tags without it hanging..
<babbageclunk> Oh, I was wondering how were still talking after your client crashed, but I guess the quassel intermediary is still running fine?
<babbageclunk> gah, menno'd it - ...how you were...
<kwmonroe> yeah babbageclunk, the server is on something ridiculous like precise (maybe older).  it doesn't care about much.  my client is like quassel version 0.pre-alpha.scary on bleeding edge osx and it hangs if i look at it funny.
<kwmonroe> the good news (maybe?) is that i can fall back to the android client to say things like "great babbageclunk" when my desktop quassel crashes ;)
<babbageclunk> kwmonroe: heh
<veebers> cory_fu: would love to compare notes when you've sorted the snap build parts; see if there is room for improvements for how we do things currently
<cory_fu> veebers: Definitely
<cory_fu> wallyworld, veebers: PR for tracking master and fixing the proof errors: https://github.com/juju/charm-tools/pull/434
<wallyworld> cory_fu: wow, quick, looking in a sec
<cory_fu> wallyworld, veebers: ugh.  test is looking for the exact proof error.  I'll have to fix that, but please review anyway.
<cory_fu> I'll follow-up on the PR tomorrow, but you're welcome to use that branch for a local build in the meantime.  Heading out for now.  o/
<wallyworld> see ya
<veebers> cory_fu: have a good one o/
<wallyworld> kelvinliu: i'm free now for 1:1. man so many meetings this morning. entering hour 3
<kelvinliu> wallyworld, yup, cu in HO
<thumper> veebers: hoping to get some tui, we did have one around the other day before we put the feeder up
<veebers> thumper: we have so many around here they are almost pests ^_^  There will be 4-8 Tui in the tree beside the driveway. Love it
<veebers> wallyworld: do you have your mariadb charm pushed and published to the staging charmstore, I cannot get the image pushed with the charm :-|
<wallyworld> veebers: just in 1:1 with kelvin. but no because i couldn't get charm built
<wallyworld> will try again today
<veebers> ok, I'll keep trying. I haven't seen the error you mentioned earlier about blank resources, trying to repro
<veebers> With Discourse does anyone else feel less likely to 'like' a post because it's a heart and that seems just a little too much for a post?  as in "I like your comment, but I don't like-like it."
<rick_h_> veebers: share the love dude
<veebers> rick_h_: ^_^
<babbageclunk> thumper: do you have any utilities for making testing code that uses pubsub a bit more convenient?
<wpk> veebers: I believe the proper wording is "I'm not ready for a relationship with your comment"
<veebers> wpk: hah ^_^
<veebers> wpk: How goes things ?
<wpk> veebers: good, good. Spent some nice time with wallyworld, thumper, and jam in Montreal a few weeks ago :)
<veebers> wpk: oh, very cool!
<wpk> veebers: coincidentally, IETF102 was at the same time, at the same place, as Canonical management sprint
<veebers> wpk: excellent timing
<wpk> veebers: it would be even nicer if people from Canonical would come to IETF meetings (people from RH come), but still ;)
<veebers> :-)
<thumper> babbageclunk: like what?
<babbageclunk> thumper: not sure really - just something to make all the goroutines and channels a bit simpler. I might have to roll my own anyway, since it's more request/responsey.
<thumper> babbageclunk: perhaps we could chat after lunch, I found some patterns that might be useful
<babbageclunk> ok, that would be cool
<kwmonroe> babbageclunk: do you want to sub and test off juju core pubs, or charm pubs?
<babbageclunk> kwmonroe: I mean testing code that uses the pubsub hub - https://github.com/juju/pubsub
<babbageclunk> not sure about those other pubs.
<veebers> kelvinliu: had a quick look, everything seems good with that ci-run change, I've rebuilt that unit test job out of interest, see what it does this time around
<kelvinliu> veebers, awesome, thanks
<veebers> nw
<kwmonroe> babbageclunk: aaaahhh. ok then, go ahead and crash my irc client at will.  i thought you were referring to a generic pub/sub testing model.  eg, you have a bundle that includes something like nagios (which you don't control), yet you want your bundle re-tested and published any time nagios is updated.
<kwmonroe> ^^ that's obviously charm specific and has nothing to do with juju/pubsub.  but by golly, i could tell you a lot about it.
<babbageclunk> kwmonroe: ah, right! Yeah, I could see that being pretty difficult.
 * thumper merges 2.4 into develop
<veebers> wallyworld: I can't imagine this is a good thing to see: WARNING juju.workers.caasunitprovisioner k8s event watcher closed, restarting
#juju 2018-08-08
<wallyworld> veebers: that happens and we just recreate the watcher. don't know why we get dropped
<veebers> wallyworld: ok. I haven't yet reproed the error you saw in your email, gonna leave things running, go to lunch and see if anything pops up
<wallyworld> veebers: the error happens immediately on deploy for me
<veebers> wallyworld: do you do anything more than just: juju deploy cs:~veebers/caas-mariadb ?
<wallyworld> you need storage
<wallyworld> either static pvs or dynamic
<wallyworld> or you can comment out the storage bit to test without
<wallyworld> edit metadata.yaml
<veebers> wallyworld: ah, I commented out storage. So I need to setup storage somehow to repro?
<wallyworld> to repro what?
<veebers> wallyworld: the error you see in the logs
<wallyworld> which one?
<wallyworld> there were a couple
<veebers> wallyworld: or was your comment about needing storage just regarding deploying that charm
<wallyworld> yeah, was just about deploying
<veebers> wallyworld: There was only one in your email, let me grab it real quick
<wallyworld> sorry, haven't got it paged in
<veebers> wallyworld: unexpected error: resource name may not be empty
<veebers> wallyworld: I can deploy that charm (and others) and see no errors at all :-|
<wallyworld> not even this one? ERROR juju.worker.dependency "caas-unit-provisioner" manifold worker returned
<wallyworld> unexpected error: resource name may not be empty
<veebers> (except for WARNING juju.workers.caasunitprovisioner k8s event watcher closed, restarting which I see every 6-8 minutes)
<veebers> wallyworld: nope, none.
<wallyworld> hmmm
<wallyworld> i'll see if i can reproduce
<veebers> juju debug-log -m controller --replay | grep ERROR comes back clean
<veebers> wallyworld: oh, which k8s cluster were you using? That shouldn't matter though
<wallyworld> k8s deployed on top of aws
<kelvinliu> wallyworld, veebers would you take a look at this PR when u got time? https://github.com/juju/juju/pull/8997/files  thanks
<wallyworld> kelvinliu: the jenkins 2.3 and 2.4 builds work?
<veebers> can do
<veebers> kelvinliu: the re-run of the unit tests for your test run passed this time around, so the failure seems like an intermittent test, not something your build changes have introduced
<kelvinliu> wallyworld, i think the build was working fine.
<kelvinliu> veebers, awesome!
<wallyworld> kelvinliu: i left a few comments
<veebers> kelvinliu: once you've fixed the operator image job, do a 'resume build' on the latest 2.3 commit to get a full run through
<kelvinliu> veebers, sure
<kelvinliu> wallyworld, thanks. looking now
<thumper> sweet, my branch merged into develop
<thumper> all that 2.4 goodness
<thumper> veebers: seems like the merge job for juju/clock is also not talking to github
<veebers> thumper: aye, they will all need redeployed
<thumper> ok... are you sorting that?
<veebers> thumper: I am, I pinged you the PR for the changes, but I'm just going to go ahead and re-deploy it now anyway as that's how I roll
 * thumper nods, looking now
<veebers> I'm def going to template/use a project for those jobs on Friday, those 30-ish files could be 2 files
<veebers> thumper: try again now on the clock PR
<thumper> veebers: build noticed
<veebers> sweet
<thumper> I'll wait for that to complete
<thumper> (just to be sure)
<thumper> then merge
<kelvinliu> veebers, http://localhost:18080/view/ci-run/job/ci-run/1004/console  2.3 buildjob is passing
<wallyworld> kelvinliu: more dep changes just landed in develop
<kelvinliu> wallyworld, yes, I saw thumper 's comments. I m fixing a 2.3 build issue now, but it's not related to the dep change.
<wallyworld> np
<kelvinliu> thumper, wallyworld veebers I just synced the toml file from the dev branch dependencies.tsv, please take a look again when u got time, I m going for lunch now, be back soon.
<thumper> I trust that the build will fail should we be missing things
<jam> hey wpk, nice to see you around
<wallyworld> kelvinliu: one thing we will need to do is figure out how to allow people to switch branches, eg git checkout 2.4 will leave the vendor dir in place. and that will mess up the 2.4 build
<wallyworld> maybe a make target to move it out of the way / rename. we don't want to have to download the whole lot again when we switch back to develop
<anastasiamac> kelvinliu: veebers: wallyworld: with ^^ in mind, if m working on 2.3 (or 2.4) do i now need to run 'make dep' instead of 'godeps -u dependencies.tsv' locally?
<wallyworld> anastasiamac: 2.3 and 2.4 will retain use of godeps and you can use via the make target if you want as the makefile is checked in code
<wallyworld> the issue is how to move the vendor dir out of the way
<anastasiamac> wallyworld: nice. thnx
<wallyworld> as that will mess with the 2.3 and 2.4 builds
<kelvinliu> wallyworld, the dependencies are cached on local, so rm -rf vendor then run make dep will not be too slow
<anastasiamac> wallyworld: i see. m guessing u saw rog's suggestion in the doc on how to deal with it
<kelvinliu> anastasiamac, if u work on 2.3 or 2.4, u can just follow previous workflow, either use make dep/godeps or godeps -u dependencies.tsv
<anastasiamac> kelvinliu: yep, got it :)
<kelvinliu> anastasiamac, all should be working fine
<wallyworld> i haven't seen the doc update yet
<kelvinliu> anastasiamac, actually 2.3 2.4 is still using godeps.
<anastasiamac> kelvinliu: when u say "still" do u mean, they will keep use it? :D
<kelvinliu> anastasiamac, yes. unless we want to migrate 2.3 and 2.4 to dep
<anastasiamac> kelvinliu: ack
<wallyworld> jam: thumper: so rog has suggested a bash script to move vendor dirs from upstream repos away before doing a build. http://paste.ubuntu.com/p/fVBJXMnbdf/ But that seems rather error prone to me and is working against the grain. I think we are better off just embracing upstream tooling rather than coming up with a workaround to compound the issues with using a homegrown solution. Agree?
<jam> wallyworld: so that is 'for everything in $GOPATH hide vendor/ when building and then restore when we're done' ?
<wallyworld> everything in dependencies.tsv
<wallyworld> every upstream in there
<wallyworld> you may have other stuff in gopath
<jam> wallyworld: and this is *instead* of using dep ?
<wallyworld> yeah
<wallyworld> i think he's trying to find a workaround to not switch to it
<wallyworld> i haven't tried the script, i only just noticed it as a comment on the google doc
<wallyworld> i can see though ways things could mess up, eg you moved the vendor dirs, then updated dependencies.tsv; you can end up with your upstream repos messed up if stuff is removed from dependencies.tsv etc. just more moving parts to go wrong
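The pasted script isn't reproduced in the log, but a vendor-hiding workaround of the sort rog proposed might look roughly like this. This is a sketch only: the assumption that repo paths sit in the first tab-separated column of dependencies.tsv, and the `.vendor-hidden` name, are mine, not rog's actual script.

```shell
# Hide the vendor/ dir of every upstream repo named in dependencies.tsv so a
# godeps-based build ignores them, then restore afterwards. Sketch only.
hide_vendor_dirs() {
    gopath_src="$1"; tsv="$2"
    cut -f1 "$tsv" | while read -r repo; do
        if [ -d "$gopath_src/$repo/vendor" ]; then
            mv "$gopath_src/$repo/vendor" "$gopath_src/$repo/.vendor-hidden"
        fi
    done
}

restore_vendor_dirs() {
    gopath_src="$1"; tsv="$2"
    cut -f1 "$tsv" | while read -r repo; do
        if [ -d "$gopath_src/$repo/.vendor-hidden" ]; then
            mv "$gopath_src/$repo/.vendor-hidden" "$gopath_src/$repo/vendor"
        fi
    done
}
```

As wallyworld notes, the failure mode is exactly the moving parts here: if dependencies.tsv changes between hide and restore, a repo's vendor dir can be left stranded under the hidden name.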
<kelvinliu> veebers, i think the branch check condition issue has been solved. http://localhost:18080/view/ci-run/job/ci-run/1014/  http://localhost:18080/view/ci-run/job/ci-run/1010/
<veebers> kelvinliu: ok will check shortly, just truing to get this streams stuff sorted
<kelvinliu> veebers, thx
<wallyworld> kelvinliu: did you see my question about how we deal with switch branches?
<kelvinliu> wallyworld, yes, I did. sorry, I replied to anastasiamac only
<kelvinliu> wallyworld,  if u work on 2.3 or 2.4, u can just follow previous workflow, either use make dep/godeps or godeps -u dependencies.tsv
<wallyworld> kelvinliu: oh sorry, you included me, i missed it
<wallyworld> what i meant was not that
<kelvinliu> ah, typo, make godeps but no make dep
<wallyworld> but when you git checkout the 2.4 branch, the vendor dir will interfere with the build
<wallyworld> so it needs to be moved out of the way
<wallyworld> when you say things are cached locally, where is that?
<kelvinliu> wallyworld, in this case, we either gitignore vendor in 2.3/2.4 or rm -rf vendor. the dependencies are cached on local, so rm -rf vendor then run make dep will not be too slow
<wallyworld> will it really be fast to rm -rf vendor and then later run dep ensure again?
<wallyworld> gitignore vendor will not do anything
<wallyworld> as the dir will still be there right?
<wallyworld> it will just be untracked in git
<kelvinliu> wallyworld, ll $GOPATH/pkg/dep
<wallyworld> so if that is the case and dep ensure is fast, we need a make target to remove vendor or something
<wallyworld> and that will need to be in the 2.3 and 2.4 makefiles
<kelvinliu> wallyworld, vendor is just files copied from this cache dir, not git repos. so dep caches the git repos in there to do version ensuring
<wallyworld> and we'll need to document that workflow
<wallyworld> ok, we were fearful that dep ensure would download everything from upstream
<wallyworld> after the vendor dir was removed when switching to 2.4 and back to develop
<kelvinliu> wallyworld, ic. i am thinking currently we need to run make godeps to ensure the correct revisions after switch branches because 2.3 and develop branches could have different dependencies.tsv.
<veebers> thumper, wallyworld would you mind giving this a quick eyeball and a +/- 1? This is an update to the scripts jerff uses: http://paste.ubuntu.com/p/vmPJvP2nMx/
<wallyworld> kelvinliu: indeed we will and that is what we do now each time. but when switching from develop we will *also* need to move the vendor dir
<kelvinliu> wallyworld, if we have vendor gitignored on 2.3, 2.4 and develop branches. it will solve this problem?
<wallyworld> how?
<wallyworld> gitignore doesn't remove the dir when switching branches does it?
<wallyworld> if there's a vendor dir there, it will still be there after git checkout 2.4
<kelvinliu> wallyworld, if it's ignored, we don't need remove it
<wallyworld> what stops go build from using it?
<wallyworld> surely not gitignore?
<kelvinliu> wallyworld, ah, u r right..
<kelvinliu> wallyworld, unless we commit the vendor, but it would be too big for us
<wallyworld> right, hence the suggestion to adjust the make targets for godeps in 2.3 and 2.4 make files to rm -rf vendor
<kelvinliu> wallyworld, well, we can just rm -rf vendor in make godeps target
<wallyworld> that way the workflow will be the same across all branches
<wallyworld> so we'll need a 2.3 and 2.4 PR
<wallyworld> the assumption is that dep ensure a second time is very fast
<wallyworld> assuming stuff is cached like you say
<kelvinliu> wallyworld, yes, it's fast
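The change the two settle on — having the 2.3/2.4 godeps make target remove any vendor dir left behind by a dep-based develop checkout — reduces to something like this. The function name is mine; what makes the clean-up cheap is dep's local cache under $GOPATH/pkg/dep, which keeps a subsequent `dep ensure` at roughly the 10s figure quoted below.

```shell
# Sketch of the clean-up step: before building 2.3/2.4 with godeps, remove
# any vendor/ dir left over from a dep-managed develop checkout, since go
# build would otherwise prefer it over the godeps-pinned $GOPATH sources.
clean_vendor() {
    repo="$1"
    if [ -d "$repo/vendor" ]; then
        rm -rf "$repo/vendor"
        echo "removed $repo/vendor"
    fi
}
```

A make target would just invoke this (or an inline `rm -rf ./vendor`) before running `godeps -u dependencies.tsv`, which is the PR against 2.3 and 2.4 discussed here.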
<veebers> wallyworld, thumper if you have no issues with it I'll push and get hatch to try again (it's late for him and I need to get dinner sorted for the boy :-))
<wallyworld> veebers: looks ok at first glance
<veebers> actually, this diff now as I sorted out 2 small issues http://paste.ubuntu.com/p/XvMm34HKVF/
<veebers> wallyworld: sweet, I'll push it up and try it, if it fails I'll revert it and try again later (tonight or tomorrow)
<wallyworld> sgtm, no harm if there's an issue
<wallyworld> veebers: you happy that we are good with the dep stuff?
<kelvinliu> wallyworld, well, it's fast if we think ~10s is not too slow
<wallyworld> that's reasonable, it takes godeps a few seconds sometimes
<wallyworld> is that on an SSD?
<anastasiamac> could I get a review plz on https://github.com/juju/juju/pull/9035 - tiny check to ensure local charms have hooks at deploy
<kelvinliu> wallyworld, yes. half of the time was for dep to figure out the revision to copy to
<wallyworld> i can live with 10s personally
<anastasiamac> wallyworld: thumper: this PR (9035) will b heading into 2.3 and 2.4 if we can/like
<veebers> wallyworld: yep, it looks like kelvinliu has the ci-run bits sorted
<wallyworld> ok, gr8 ty
<wallyworld> kelvinliu: since it's EOD soon let's land tomorrow so we can get discourse post etc sorted out
<wallyworld> and not rush
<kelvinliu> wallyworld, yup
<wallyworld> anastasiamac: done, ty, IS wil be happy
<anastasiamac> wallyworld: re: error and suggestion to build charm... we rarely make suggestions in errors coming from api (which this one is)... and digging into the deploy command will be too snowflaky... so to compromise, we could rename this error to say "poorly built charm:.." instead of "invalid charm:.."
<anastasiamac> what do u think?
<wallyworld> invalid charm is probably ok, cause that's what it is
<anastasiamac> ack
<anastasiamac> i was tossing and turning btw one and the other all day... at the end went with "invalid" coz it felt more definitive, final almost.. whereas "poorly built" implied "hey, u've tried to build it but did such a bad job of it, we have to b polite here"
<wallyworld> invalid works for me
<anastasiamac> +1
<veebers> wallyworld, thumper: after a couple iterations need to make this change: http://paste.ubuntu.com/p/rsKTJh5XMF/
<veebers> wallyworld, thumper FYI the fixes worked, the gui release/streams is sorted
<wallyworld> veebers: thank you
<naturalblue> Hi Everyone. I asked this question over on #maas but was redirected to ask it here. I hope someone can help.
<naturalblue> It's about maas with juju lxds and maas's builtin proxy feature
<naturalblue> I have setup maas and juju. I have multiple VLANS and all of them have a gateway address on the maas controller. I have enabled builtin proxy but when I juju deploy an lxd container the proxy settings are not configured and i have to manually enter the lxd and create/add the maas proxy settings into apt.d/proxy.conf file
<naturalblue> i have been told that possible juju is not setting the inherited maas proxy setting sent down by maas. Any help would be appreciated.
<wallyworld> jam: ^^^^ ? i can't recalloff hand the supported behaviour
<jam> naturalblue: juju has model-config for apt-proxy and http-proxy. You need to set them for Juju to set it on the containers it creates. We don't auto-populate the values from maas.
<naturalblue> jam: ah i see. What happens if you have some lxds that need 1 proxy because they only have some bindings, and others that have different bindings? Is it possible to set an apt-proxy setting for a specific lxd on deployment
<jam> naturalblue: we don't support per-machine/per-container proxy settings atm.
<naturalblue> jam: so if i have different containers that have different bindings I will need to make manual changes to apt/apt.d/juju-proxy.conf to reflect this. It's not so bad
<naturalblue> jam: 1 last thing. Is there a way to see the network settings for all units in a model? I wish to see if all machines/units/lxds have 1 common binding that i could use in the model-config and circumvent having to change multiple apt config files
<jam> naturalblue: juju status --format=yaml I believe lists bindings for applications. But that would be newly introduced in 2.4, IIRC.
<naturalblue> i am on 2.4 so i will give it a go
<naturalblue> jam: that's a cool command, unfortunately it only shows the public-addresses and none of the others. Still handy to have though. Thanks
<naturalblue> jam: I added the proxy settings into the model-config and redeployed the lxd units. it passed down the https and ftp into the 95-juju-proxy-settings file but for some reason didn't pass the http setting.
<jam> naturalblue: can you give the output of "juju model-config" for me ?
<naturalblue> ok
<naturalblue> will i paste it here or pastebin
<naturalblue> jam: will i paste it here or pastebin
<jam> pastebin is good
<naturalblue> cool
<naturalblue> jam: https://pastebin.com/PbpmtTuK
<jam> naturalblue: you didn't set apt-http-proxy or http-proxy
<naturalblue> theres something not right there. i did set them and when i ran the command earlier it showed them all being set.
<naturalblue> that paste also shows that https is not set but it was and has actually been passed down to the lxd containers
<naturalblue> i set them using the command "juju model-default apt-http-proxy=http://172.30.100.1:8000". i did this for apt-http/https as well as juju-http/https
<naturalblue> for some reason all the config settings for http & https have gone but the ftp has stayed
<naturalblue> i must have run a command somewhere that set them back to the default rather than the one i wanted. i am still not sure of the difference between juju model-config, juju model-default and juju model-defaults
<naturalblue> it would seem that although there are 2 commands juju model-default and juju model-defaults they are both identical in what they do
<naturalblue> jam: https://pastebin.com/rTcSL9PT This shows all the settings correct (juju model-defaults command) but the other command (juju model-config) only shows the ftp settings
<manadart> jam: externalreality approved it yesterday, but I've made some additions. Mind taking another look at https://github.com/juju/juju/pull/9029 ?
<jam> manadart: will do
<jam> naturalblue: you'll need to "juju model-config --reset apt-https-proxy" etc. The value from model-defaults is snapshotted when the model is created, and only used again if you reset the value
<naturalblue> jam: Yeah thanks. I worked it out and removed all the excess proxys using juju model-default --reset juju-http-proxy,juju-ftp-proxy etc
<naturalblue> then i redeployed the model. at first i removed the http-proxy etc instead of the juju-http-proxy but found everything just got stuck on initialising agent, so tried again with http-proxy instead of juju-http-proxy and it's working away now. i'll see how it goes
<jam> naturalblue: if you set 'juju-http-proxy' then juju will set a JUJU_HTTP_PROXY value for the charms to use, but it *won't* set HTTP_PROXY
<jam> naturalblue: its been a discussion since often charms actually need to enable and disable the proxy because NO_PROXY doesn't work the way you would like
<jam> (all of the C libs only support exact IP matches, not CIDR, which means you can't just say "don't proxy everything in the local datacenter")
<stub> cory_fu: Can we release a charms.reactive update? PostgreSQL charm is failing under trusty, fixed by https://github.com/juju-solutions/charms.reactive/commit/328ae1b2f69f07059b17766a9db0586c3243cc14
<stub> I think I can bump the version number and push it to pypi if we are good
<stub> I can release just up to my commit if the series upgrade stuff needs more baking
<naturalblue> jam: Thanks for the help. Everyday's a schoolday :) It's working now and the deploymets have worked for me. Now to debug the openstack services themselves.
<naturalblue> Thanks again
<jam> naturalblue: happy to help. I'm glad you got it working.
<cory_fu> stub: +1, let's do it
<stub> cory_fu: ta. HEAD or just up to that commit?
<cory_fu> stub: Up to HEAD, I think.  I don't think the upgrade series flags will affect anyone who doesn't want to start testing them.
<cory_fu> stub: Also, you'll want to update the VERSION file and changelog
<stub> cory_fu: Shall I call that 'preliminary support for operating system series upgrade'?
<stub> (in the changelog)
<cory_fu> Yeah, that sounds good
<stub> Technically that will be a new feature, so 0.7.0. Or do I get to call this 1.0.0 ?
<cory_fu> stub: :)  I'm not really sure why we've been holding off on calling it 1.0.0, other than potentially reserving that for breaking changes.  I'm not averse to making the switch.
<cory_fu> stub: Oh, I think layer:basic might be range-pinned to <1.0.0
<cory_fu> stub: Nope, it's charms.reactive>=0.1.0,<2.0.0
<cory_fu> stub: GTG if you want to pull that trigger.  :)
<stub> k
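The version arithmetic behind "a new feature, so 0.7.0" is plain semver minor-bumping; a throwaway sketch (the function name is mine, not part of the charms.reactive tooling):

```shell
# Bump the minor component of a MAJOR.MINOR.PATCH version string and reset
# the patch to 0, per the "new feature => 0.7.0" reasoning above.
bump_minor() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    echo "$major.$((minor + 1)).0"
}

bump_minor 0.6.1   # -> 0.7.0
```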
<cory_fu> stub: Speaking of layer:basic, did you see my reply to the issue you opened about the import stuff.  Would you like to discuss that further?  I'm still leaning toward it being better than not but if it's causing problems or seems like really bad form, I'd like to try to resolve it
<stub> cory_fu: I was going to think further and do some tests.
<cory_fu> stub: Ok, sounds good.  Let me know what you find.
<stub> cory_fu: I can come up with some rationalizations, but they might just be me not liking stuffing everything into charms.layer in the first place.
<stub> cory_fu: I submitted an issue on layer:status, which looks like it is relying on naming convensions, with a way of avoiding that need.
<cory_fu> stub: Fair enough.  It suits my mental organizational scheme better but it was always intended as a convention and not a hard rule.
<cory_fu> I honestly thought that more people would use external libraries rather than every single layer embedding a small lib.
<cory_fu> But I've never done that myself, either, so...
<stub> With layers and interfaces requiring separate repos, there is already too much to juggle.
<cory_fu> Yeah
<stub> And if we could stuff them together in shared repos, you don't need external libraries because they can share code already
<stub> So when interface:pgsql can point to a subdir of the pg charm source, I stop needing an external library to share my helpers like the ConnectionString class.
<stub> https://github.com/juju-solutions/charms.reactive/pull/187
<stub> cory_fu: ^^ (tests pass locally, but I'll see what travis says too)
 * stub twiddles formatting
<hml> stickupkid: Iâm not having issues with localhost and develop:  https://pastebin.canonical.com/p/cbHKgszbz2/
<hml> stickupkid: which version of the lxd snap are you using?
<drcode> hi all
<drcode> I got strange error:  bootstrap.go:538 failed to bootstrap model: cannot start bootstrap instance: unexpected: ServerError: 400 Bad Request
<drcode> any idea?
<stickupkid> hml: 3.3 as well, i'll restart :p
<drcode> I am using MAAS + juju in virtualmachine
<drcode> juju command: juju bootstrap maas-cloud maas-cloud-controller --bootstrap-series=bionic
<stickupkid> hml: thanks for this, was about to go down a rabbit hole
<hml> stickupkid: did a restart work?
<stickupkid> hml: i'll do it in a bit
<hml> stickupkid: my lxd config has been up and running for a long time, haven't re-setup since then
<stickupkid> hml: k, thaknk
<stickupkid> s/thaknk/thanks/
<drcode> what am I doing wrong?
<drcode> How's it going?
<rick_h_> drcode: try with --debug to see what it's trying to do. That feels like a cloud api call that got told no for some reason
<rick_h_> drcode: so we would need more info on what cloud/etc you're doing
<drcode> can I past is here?
<rick_h_> drcode: I'd suggest using a pastebin like https://paste.ubuntu.com/
<drcode> thank you
<drcode> https://paste.ubuntu.com/p/G2pZZwn6Rd/
<anastasiamac> drcode: is ur maas install configured to have bionic images?
<stickupkid> manadart: you got 5 minutes, in about 30 minutes, to pick your brains before end of day for you?
<manadart> stickupkid: Sure, just say when.
<stickupkid> hml: worked after restart
<drcode> I will check it , is it recommended to use bionic?
<hml> stickupkid: nice
<drcode> do I need to setup NAT in MAAS , or juju can use MAAS proxy?
<anastasiamac> drcode: u r asking for it according to ur msg: [23:38:39] <drcode> juju command: juju bootstrap maas-cloud maas-cloud-controller --bootstrap-series=bionic
<drcode> ok
<drcode> I think I had problem with NAT
<drcode> Can I tell juju to use MAAS proxy?
<manadart> hml: I was able to clear the test Neutron model networks in the end; see what you think.
<hml> manadart: ack, i'll take a look
<stickupkid> manadart: now? HO?
<manadart> stickupkid: Ja.
<hml> rick_h_: when you get a chance, i think i have the pr comments resolved if you could pls take a 2nd look.  ty!  https://github.com/juju/juju/pull/9015/commits/462a644b24ab064b4ae9172e328867db428d2d96
<rick_h_> hml: cool ty looking
<hml> rick_h_: it helped once i stopped the square peg round hole problem  :-)
<rick_h_> hml: good to hear
<rick_h_> hml: do we still test if the cert is parse-able?
<rick_h_> hml: or am I missing that in there?
<hml> rick_h_: yes… - it's checked when the filename is given
<hml> rick_h_: pollster now allows a VerifyCertFile to be provided
<rick_h_> ah doh, yea right after it's read. missed it
<hml> rick_h_: not intuitive just reading the code.  ;-)
<rick_h_> hml: it's not bad, I just missed it.
<veebers> Morning all o/
<hml> veebers: morning
<veebers> hey hml o/ hows things this morning (also known as afternoon in some places ;-))
<hml> veebers: good.  and u?
<veebers> hml: all good, it's a bit cool but have the fire going
<thumper> fyi, https://bugs.launchpad.net/juju/+bug/1786099 in 2.4.2 proposed
<thumper> I'm aware and looking at it
<thumper> it is a bit icky...
<thumper> let's just say there is an easy and wrong fix
<veebers> kelvinliu__: did you happen to push your ci-run changes up?
<thumper> where's wally?
<kelvinliu__> veebers, yes
<kelvinliu__> veebers, i did
<veebers> kelvinliu__: sorry as in merge it to master?
<kelvinliu__> veebers, ah, im going to merge if u r happy
<veebers> kelvinliu__: yes please
<veebers> I'm going to be doing updates to ci-run so don't want to overwrite your changes
<kelvinliu__> veebers, do it now, thx
<kelvinliu__> veebers, done.
<veebers> mean, cheers
<wallyworld> thumper: want a teddy bear still?
<anastasiamac> wallyworld: if thumper does not have ur attention, can i?
<wallyworld> guess so
<wallyworld> if you insist
<anastasiamac> yes, m dragging u into a corner.. standup?
<wallyworld> ok
#juju 2018-08-09
<wallyworld> kelvinliu__: before we land the dep changes we need to update the godeps make target in 2.3 and 2.4 to rm -rf ./vendor
<kelvinliu__> wallyworld, yes, will do
<kelvinliu__> wallyworld, https://github.com/juju/juju/pull/9039 https://github.com/juju/juju/pull/9040 mind taking a look at these tiny PRs? thx
<wallyworld> kelvinliu__: yep, looking
<thumper> wallyworld: I thought we were going to try for a move?
<thumper> rather than rm
<wallyworld> thumper: considered it. but more moving parts. by having to run make ensure-dependencies anyway (just like godeps -u ...) it will regenerate it from the local cache and only takes approx 10s
 * thumper nods
<thumper> ok
<thumper> wallyworld: https://github.com/juju/juju/pull/9041
<veebers> kelvinliu_, wallyworld: query about the dep change, it only ever looks in the vendor/ dir right? So for it to consider a dep you need to "dep ensure -add ..."
<veebers> also, I love that some of our tools use --arg and others use -arg :-|
<wallyworld> veebers: right
<wallyworld> that -add will do a complete operation to update toml and lock, copy shit to vendor etc IIANM
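The dep commands under discussion, for anyone following along, are standard dep usage on the develop branch (2.3/2.4 keep godeps); the package path and version here are placeholders, not real juju dependencies.

```shell
# Sync vendor/ with Gopkg.toml and Gopkg.lock. Fast once the local cache
# under $GOPATH/pkg/dep is warm, per the ~10s figure discussed above.
dep ensure

# Pull in a new dependency: updates Gopkg.toml, Gopkg.lock and vendor/ in
# one operation. The package and version are placeholders.
dep ensure -add github.com/some/dep@v1.2.3
```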
<veebers> ack cheers
<veebers> wallyworld: FYI https://github.com/juju/juju/pull/9042, only fixes the one error message I saw, couldn't repro what you saw
<wallyworld> veebers: lgtm, i'll see if i can repro the other error
<veebers> wallyworld: you want to catch up real quick re: working on the cloud container status work, or should I move onto '--resource' taking a file path for oci-image details, oh or actually I could look real quick at the operator-image gen for edge snaps
<wallyworld> let's do the operator image gen so we don't break stuff again
<veebers> ack, I'll hit it now. Hopefully a quick fix
<veebers> charms don't need to be in a series named directory nowadays do they? It's all sorted in the metadata.yaml?
<babbageclunk> I think that right veebers
<babbageclunk> that's
<veebers> cool, cheers babbageclunk
<veebers> wallyworld, thumper we're unlikely to push a 'stable' version number to edge snap right? (i.e. \d.\d.\d)
<wallyworld> wouldn't think so
<thumper> correct
<veebers> I'm wanting to put an extra check in the operator image step so nothing released gets overwritten
<veebers> cool thanks !
<kelvinliu_> wallyworld, github.com/juju/juju/dependencies_test.go this test ensures tsv does not include spaces. i am going to delete this test file together with the dependencies.tsv
<wallyworld> sgtm
<veebers> wallyworld, kelvinliu_ operator image for edge snap should be under jujusolutions namespace right (as opposed to our ci namespace)
<wallyworld> yup
<veebers> mean, on it
<kelvinliu_> `jujusolutions/caas-jujud-operator` veebers
<veebers> kelvinliu_: ack, the awesome person who wrote the image upload job made only the name and version needed ;-)
<kelvinliu_> veebers, sorry for hiding the details inside the job....
<veebers> kelvinliu_: no no, it's great, I'm saying you've made it really easy
<kelvinliu_> veebers, :)
<veebers> wallyworld: that's an odd message on the pr job failure
<wallyworld> veebers: it really didn't like talking to github. the last one timed out
<veebers> wallyworld, kelvinliu_ seems it's having trouble within the lxd container?
<veebers> wallyworld: hmm, interesting
<veebers> yeah, sat there and timed out after 60 minutes
<wallyworld> each fetch of a dep took waaay too long
<wallyworld> much better now
<wallyworld> seems to be working again
<veebers> yeah it's already at dep 96, failed on 20 or so last time
<manadart> For review - basic end-to-end worker for upgrade-series: https://github.com/juju/juju/pull/9017
<manadart> Uses a pattern for testing the worker that I think could be used more widely.
<stickupkid> nice easter egg manadart:
<stickupkid> https://github.com/juju/juju/pull/9017/files#diff-1730051dd986d1473fe1e5ef57af3985R144
<stickupkid> manadart: you've got a gometalinter issue on that PR
<stickupkid> "worker/upgradeseries/worker_test.go:1::warning: file is not goimported (goimports)"
<manadart> stickupkid: Is that just the ordering/grouping? I fixed it and it has proceeded past there now.
<stickupkid> hml: code review done
<hml> stickupkid: ty
<hml> rick_h_: should you be able to bootstrap by cloud type?
<rick_h_> hml: sorry? I don't understand
<hml> rick_h_:  e.g. juju bootstrap lxd without defining any additional clouds
<hml> just having the default ones
<rick_h_> hml: no, because you have to setup the remote endpoint.
<rick_h_> hml: juju bootstrap localhost should work
<stickupkid> juju bootstrap lxd should work
<stickupkid> lxd and localhost are interchangeable
<stickupkid> \o/
<hml> stickupkid: thatâs what iâm seeing
<stickupkid> it should work though, is it not?
<hml> stickupkid: but it doesn't make sense, since it doesn't work with any other cloud, i hope
<hml> stickupkid: i didn't expect it to work… and am confused why a bug thought it should
<stickupkid> erm... i just kept it around, so I didn't break backwards compatibility
<hml> i get the same results with bootstrap localhost and bootstrap lxd
<rick_h_> stickupkid: hml oh....ummm...that doesn't seem right.
<hml> rick_h_: +1
<stickupkid> turns out some people use `juju bootstrap lxd` aka jam :p
<hml> rick_h_: iâm seeing this in 2.4.1, so for backwards compat?
<rick_h_> orly? hmmm
<hml> yuck
<rick_h_> ugh
<hml> rick_h_: came across it in my bug and thought it was an error
<rick_h_> yea, I mean you can't juju add-cloud with google/aws/etc so I wouldn't expect the add-cloud to be the same as the thing you bootstrap
<rick_h_> as jam says "lxd is 'special'" hmmm
<stickupkid> if there was a differentiator with lxd and localhost, that would make life a bit easier in certain places, but in 2.4 you can use lxd
<stickupkid> and thumper told me not to break stuff :D
<rick_h_> yea, not breaking stuff is good...
<rick_h_> so then the question is do we make the add-cloud something not lxd...
<rick_h_> lxd-remote
<stickupkid> what's the bug?
<rick_h_> hml: stickupkid so what if I asked we s/lxd/lxd-remote in this? https://pastebin.canonical.com/p/2WpmnYBvzy/
<hml> rick_h_: it might be confusing unless we use lxd-remote everywhere. since lxd and lxd-remote are the same, yes?… but even then, given that folks bootstrap lxd.
<hml> would we end up with localhost, lxd and lxd-remote?
<rick_h_> hml: so we'd allow lxd but remove it from the docs/help
<stickupkid> i'm fine with lxd there
<rick_h_> hml: and have localhost and lxd-remote
<stickupkid> should we HO?
<pmatulis> one can still add a local 'lxd' type cloud manually so it makes sense to keep it 'lxd' with add-cloud
<pmatulis> then prompt for 'local' or 'remote'
<rick_h_> pmatulis: what is a local lxd type cloud locally that's not localhost?
<stickupkid> a local cluster
<stickupkid> s/cluster/remote/
<pmatulis> rick_h_, 'localhost' is the *name* of a local 'lxd' type cloud
<pmatulis> maybe we should be changing the name, not the type
<pmatulis> "lxd-local"?
<rick_h_> ugh, I knew that half hearted lxd thing would bite us
<rick_h_> fine, that's a lxd-remote where the IP addr is what you're on, or one of the other nodes in that cluster.
<stickupkid> this was why i was originally saying we should split providers between local and remote
<stickupkid> but i'm still thinking this way is actually good, tbh
<rick_h_> here's the thing. If you add-cloud a lxd what happens to the built in "localhost"
<rick_h_> you can't use both
<rick_h_> so you show-clouds and localhost and something else you named is just the same reference?
<hml> well this is awesome:  https://pastebin.canonical.com/p/QGjnqNvG86/
<pmatulis> bootstap
<stickupkid> you can, if you use `juju set-default-credential`
<rick_h_> stickupkid: not following there
<stickupkid> HO?
<rick_h_> yea, omw to standup
<stickupkid> manadart: CR please https://github.com/juju/juju/pull/9045
<manadart> stickupkid: K.
<hml> rick_h_: stickupkid  i forgot to add an lxd group and put ubuntu in it.  :-/
<stickupkid> ah of course, forgot about that
<rick_h_> hml: ah gotcha
<hml> rick_h_: does the juju snap install the lxd snap too?
<hml> is that a thing
<rick_h_> hml: I don't think so as we expect lxd to be on the system
<rick_h_> hml: and if we did that if you had the deb lxd we'd introduce a lxd in a snap the user isn't expecting
<stickupkid> wow we get a lot of failures in CI :|
<pmatulis> hml, snaps cannot add other snaps afaiu. if a snap needs something it has to drag in the source code somehow. for instance, you don't install the deb 'zfsutils-linux' when you use the lxd snap
<pmatulis> even though those tools are a deb, i'm pretty certain that it can't install a snap either
<hml> rick_h_: pmatulis if lxd is apt installed, a snap install gets juju only.  but if the apt lxd is purged, a snap install of juju installs lxd, it appears:  https://pastebin.canonical.com/p/WHbxVqV2Qs/  or am i crazy again today.  ;-)
<pmatulis> hml, i've never seen lxd and juju influencing each other ito installing, regardless of how each is installed
<pmatulis> has something changed?
<pmatulis> i sure hope not
<pmatulis> hml, lemme know if you want me to test something
<hml> pmatulis: iâm curious if you can reproduce
<pmatulis> hml, it appears that the lxd snap will always be installed when installing the juju snap, provided that the lxd deb is not installed
<pmatulis> (tested on Bionic)
<hml> pmatulis: interesting, i saw the same, yup
<hml> ty
<pmatulis> that's definitely new behaviour
<hml> pmatulis: whatâs the landscape project to specify for a doc bug?  or add juju docs to a bug?
<pmatulis> hml, docs team hasn't done anything with landscape for a very long time. can you give me details on the issue you see?
<pmatulis> and not sure what you mean by your second question
<hml> pmatulis:  we need a doc change related to https://bugs.launchpad.net/juju/+bug/1784018 - where it talks about setting up lxd, there is mention that ipv6 is not supported,
<hml> pmatulis: lxd init configures ipv6 by default
<hml> pmatulis: i mean the lxd setup for juju in the docs, does NOT mention that IPv6 is not supported.  :-)
<pmatulis> i don't see landscape in any of that (?)
<hml> pmatulis: i didnât mean that the bug was a aginst landscapeâ¦ just want to add juju docs as a project that needs to fix something as part of the bug resolution
<pmatulis> hml, you want to file a juju docs bug you mean?
<pmatulis> you can't do that within LP
<hml> pmatulis: sure, that's another way to accomplish the same thing
<pmatulis> (can't add the juju docs project within LP i mean)
<pmatulis> the easiest is to simply file a docs bug and then link to it in an LP comment
<pmatulis> anastasia does this fairly often
<pmatulis> https://github.com/juju/docs/issues/new
<hml> pmatulis: ty
<pmatulis> rick_h_, fyi, https://bugs.launchpad.net/juju/+bug/1786324
<rick_h_> pmatulis: k, ty
<bdx> hello all, maas storage question
<bdx> I filed this a few months ago https://bugs.launchpad.net/juju/+bug/1765959
<bdx> looks like it was marked a duplicate of https://bugs.launchpad.net/juju/+bug/1765959
<bdx> oops, duplicate of https://bugs.launchpad.net/juju/+bug/1691694
<bdx> I'm wondering if its really a duplicate ....
<bdx> my bug, #1765959 has nothing to do with provisioning storage via bundle
<bdx> here is the workflow I'm trying https://paste.ubuntu.com/p/5qfqkDYyhn/
<bdx> it seems #1765959 (and #1691694) are still valid bugs
<bdx> from what I can tell, I'm provisioning the storage pool and attaching the storage correctly (my nodes raid disks are tagged with 'raid')
<bdx> is storage for the maas provider just totally borked right now?
<bdx> per ^ bugs
<bdx> I feel like ^bugs would have had more priority on them if its true that maas storage is borked
<bdx> thats why I feel it must be me
<bdx> but I also feel my workflow checks out
<bdx> either way
<bdx> any insight here would be appreciated
<bdx> thx thx
<bdx> copied ^ over to discourse
<Guest18678> kaniini has invited you to join #litepub
<veebers> rick_h_: is bionic supported by 2.4 and 2.3? (I ask re: ci-tests)
<veebers> wallyworld: I'm seeing your merge failures, I'm not sure what to make of it yet though
<wallyworld> yeah, there's been a couple of different scenarios
<boser28> kaniini has invited you to join #litepub
<rick_h_> veebers: should just be 2.4 I think
<rick_h_> veebers: since we needed to get things to work in bionic
<veebers> rick_h_: ack, the jenkins change stickupkid suggested will break things for older branches, I'll counter propose something that should work for all branches
<rick_h_> veebers: ah gotcha yea I bet he didn't think of that
<veebers> yeah, the ci stuff is a big beast
<bitch17> kaniini has invited you to join #litepub
 * thumper sighs
* thumper changed the topic of #juju to: https://jujucharms.com, general chat on https://discourse.jujucharms.com, this channel is going to require registered users while we deal with spam bot (see https://freenode.net)
<thumper> I thought I could clear the registration requirement
<thumper> but three people in the last 20 minutes is too much
<veebers> wallyworld: so, jumping on the pr lxd machine ~/.config is owned by root, I don't know why yet, but that's why we see the perms denied thing, as git it trying to check it's config. I'm looking into it now
<wallyworld> veebers: in meeting, ty for looking
<veebers> I think the pr jobs need a bit of a revamp, but still haven't solved this specific issue ;-)
<veebers> "make lxd-setup" introduces the ~/.config/lxc dir owned by root, scripts/setup-lxd.sh is run via sudo
<veebers> if the file exists before hand it's fine. is it possible the xenial lxd images changes recently so .config isn't created by default?
<veebers> fg
<veebers> wallyworld: I'm testing out a fix or two for different things, please don't re-merge your PR as I'm using it as a test bed :-)
<wallyworld> ok :-)
<babbageclunk> hey vinodhini
<cory_fu> veebers: Hey, so apparently builds.snapcraft.io already supports triggering builds on updates to parts, so it looks like we won't actually need to do anything other than watch master on charmstore-client.  See: https://forum.snapcraft.io/t/further-automation-of-build-snapcraft-io/2926
<cory_fu> veebers: And, along those lines, would you mind taking a look at https://github.com/juju/charm-tools/pull/435
<veebers> cory_fu: oh cool, seems nice and easy then
<veebers> cory_fu: sure can
<anastasiamac> kelvinliu_: just to be 100%... when I run 'make dep' and get back only message that says 'skipping dep', it's because i have all i need and dep does not need to do anything?
<kelvinliu_> anastasiamac, there is a env var flag to turn on/off this target. JUJU_MAKE_DEP
<veebers> anastasiamac, kelvinliu_: wallyworld has a change that will invert that (i.e. will always do deps
<anastasiamac> kelvinliu_: so i need to set  JUJU_MAKE_DEP= true before running 'make dep'?
<veebers> once it land
<veebers> s
<anastasiamac> veebers: yes... but until it lands :)
<kelvinliu_> anastasiamac, so u can run JUJU_MAKE_DEP=true make dep,
<veebers> But I'm holding it up a little bit as it's my testbed for pr changes
<veebers> anastasiamac: right, yeah that was more of a PSA than "this will help you" :-)
<wallyworld> anastasiamac: the behaviour here hasn't changed - make never used to run godeps -u dependencies.tsv by default
<anastasiamac> kelvinliu_: thnx
<anastasiamac> wallyworld: i never used make before :)
<wallyworld> but we are swapping tht around now
<kelvinliu_> anastasiamac, np
<wallyworld> yeah exactly - so people can use make we are making it work
<wallyworld> by default
<anastasiamac> wallyworld: i ran godeps directly... yep, +1 on making ppl trust and use make !!
#juju 2018-08-10
<anastasiamac> thumper: wallyworld: m pretty sure that this is not intentional but with the change of default behavior of create backup in 2.4, it's possible to not download a copy and not store it remotely
<anastasiamac> https://bugs.launchpad.net/juju/+bug/1786253/comments/1
<anastasiamac> i'd like to put a stop to that... what's the point of running backup if there is no output
<wallyworld> well that doesn't look very reasonable
<wallyworld> +1 to fix
<anastasiamac> also, is it really practical for us to allow --verbose here?
<wallyworld> in 2.3.9 even
<wallyworld> if the behaviour was first done there
<anastasiamac> who would want the contents of backup from command output as stdout
<anastasiamac> i'll check 2.3
<wallyworld> --verbose should be for extra status info etc
<anastasiamac> k, yes verbose is just metadata
<anastasiamac> and confirmed that the change in behavior only went in from 2.4-beta2
<anastasiamac> oh no.... ran backup unit tests and got all this test backup archive files left behind...
 * anastasiamac adding proper filesystem cleanup to test suite
<veebers> Sorry for all the spam you're probably getting wallyworld, I'm close to a fix. I'll just push the oneliner the fix for your issue then propose a PR for the re-do of the merge jobs etc.
<wallyworld> no worries
<babbageclunk> kelvinliu_: dep seems to work pretty great for me thanks! I had one hiccup where I normally work in a symlinked dir ~/juju, rather than in $GOPATH/src/github.com/juju/juju, and dep complains about that.
<babbageclunk> Do you know whether there's any way around it?
<anastasiamac> veebers: u r only sorry that wallyworld is spammed? :(
<veebers> anastasiamac: hah I hadn't realised others would too, ah because you commented on it right? Sorry to you too
<babbageclunk> kelvinliu_: also, do you mind if I turn off the -v on dep ensure? It makes building pretty noisy - if I'm just changing a leaf package I'd normally only see a few lines of output.
<anastasiamac> veebers: i was just picking on u :) in fact, it improves my blood flow as everytime email arrives, my machine pings and I startle/jump
<veebers> ^_^
<babbageclunk> (The symlink thing isn't much of a problem for me, I normally build through a wrapper script anyway so I just change to the directory that works.)
<kelvinliu_> babbageclunk, sure, i think the -v could be just for debugging.
<babbageclunk> kelvinliu_: yeah, I figured that might be it.
<babbageclunk> kelvinliu_: oh, the other noisy thing is that we now list all of the packages in the go install line rather than just "github.com/juju/juju/...". Is that needed to avoid the vendored dirs?
<babbageclunk> kelvinliu_: I'm tempted to hide it, if that's ok.
<anastasiamac> veebers: is it k to propose against develop or will it make ur current task worse?
<kelvinliu_> babbageclunk, awesome, great to see u tempted to hide the annoying long list of packages,
<kelvinliu_> babbageclunk, the purpose of listing packages was to exclude vendor
<veebers> anastasiamac: go nuts, if the check/merge fails for you let me know. I have a quick fix for the immediate problem, I'm working on restructuring the whole job system as as separate but related task
<babbageclunk> yeah, that makes sense
<kelvinliu_> babbageclunk, thanks for doing the enhancement.
<wallyworld> anastasiamac: here's that PR :-D https://github.com/juju/juju/pull/9048
<anastasiamac> wallyworld: yep, Awesome \o/ looking
<wallyworld> ty, i owe you a beer
<wallyworld> it's not too big actually
<anastasiamac> beer?
 * anastasiamac stops reviewing
 * babbageclunk is sorely tempted.
<babbageclunk> hey axw!
<babbageclunk> kelvinliu_: any idea why the digest is 0: for golang.org/x/crypto in the committed Gopkg.lock? I just updated my first dependency in dep-world, it was fine except it's also updated the digest for the crypto package despite my not having touched it.
<kelvinliu_> babbageclunk, i didn't know the digest was 0 for crypto.  needs to investigate
<babbageclunk> kelvinliu_: cool, thanks
<vinodhini>  wallyworld: could u please take a look at the PR : https://github.com/juju/juju/pull/9049
<wallyworld> sure
<vinodhini> who is that ?
<vinodhini> wallyworld: the ca-cert for now i havent taken into the 2.4.2 - to me it looks that it is out of scope of this bug fix.
<vinodhini> Le t me know ur inputs
<wallyworld> sure. i wasn't intending to take the whole ca cert thing, just the small refactoring of certain parts of the code that were common to this change
<wallyworld> i'll look at the pr
<anastasiamac> wallyworld: reviewd but m raising ur beer offer to sparkly, at least...
<wallyworld> ok :-)
<wallyworld> ty
<wallyworld> next week
<wallyworld> monday even
<anastasiamac> mayb...
<anastasiamac> it's a short week in bne and am saving myself for partying
<wallyworld> i'm too old to party
<babbageclunk> kelvinliu_: oops, my computer died.
<anastasiamac> wot? my 99 yo granny still parties... u r not that old, wallyworld :) altho they do say that u r as old as u feel
<wallyworld> sometimes i feel 21 again, mostly not
<anastasiamac> :) early 20s tend to be crazy - taste of freedom and all that....
<kelvinliu_> babbageclunk, i just found that digest updated from "0:" to "1:xxxx" as well. don't know why it was 1 tho.
<babbageclunk> kelvinliu_: <shrug> I don't think it matters too much. I'll just leave it.
<kelvinliu_> babbageclunk, yeah, let's see if it happens again in the future
<babbageclunk> kelvinliu_: can you review this please? https://github.com/juju/juju/pull/9050
<wallyworld> vinodhini: looks pretty good but needs tests. also don't forget to remove the template from the PR description
<babbageclunk> veebers: I'm getting a weird build failure - can't install go snap. Is this a known thing or something new? http://ci.jujucharms.com/job/github-check-merge-juju/2898/console
<veebers> babbageclunk: let me have a quick look
<babbageclunk> thanks!
<veebers> babbageclunk: no, that's some weird failure that I've seen happen twice now. a re-run should go through properly
<veebers> babbageclunk: hah, in fact my test job (which shouldn't have triggered) is running that pr branch fine; http://ci.jujucharms.com/job/veebers-github-check-merge-juju/27/console
<veebers> unfortunately it won't count
<babbageclunk> veebers: ok, kicking it off again, ta
<veebers> babbageclunk: I'm really not sure what the error is due to, will have another look at some point, but it seems udev is unhappy within the container
<vinodhini> wallyworld: thats why its WIP - i am working on unit test part
<wallyworld> vinodhini: oh, doh! sorry, i didn't see that
<wallyworld> maybe time for glasses
<babbageclunk> too much partying
<kelvinliu_> babbageclunk, LGTM, thanks!
<anastasiamac> well, m reading wallyworld's "glasses" as "friday drinks" too
<wallyworld> i wish
<babbageclunk> kelvinliu_: cheers
<babbageclunk> veebers: This one looks bad too... http://ci.jujucharms.com/job/github-check-merge-juju/2899/console
 * veebers looks
<babbageclunk> doh, not on the vpn after rebooting
<veebers> babbageclunk: hmm, yeah that might be a cloud-init issue, I've seen that recently before too. I've spoke briefly to someone about it, let me poke around a bit
<babbageclunk> Thanks
<babbageclunk> veebers: hah, your test job against my PR did count!
<veebers> babbageclunk: really? oops, luckily it's just the check and not an erroneous merge
 * veebers deletes that job
<babbageclunk> true that
<babbageclunk> aww, what happened to the veebers-rulez container?
<veebers> babbageclunk: I've just aborted that job that was stuck
<veebers> babbageclunk: hah you on that machine? :-)
<babbageclunk> yeah, once I remembered to start the vpn
<veebers> babbageclunk: We had a sick machine that was gumming up the cleanup jobs, so things weren't being cleaned up. I think too many old lxd machines on there does something to the cloud-init on those jobs
<babbageclunk> ah right. hey, did you just kill my merge job?
<veebers> babbageclunk: so in short, I killed that machine, re-ran the cleanup to get grumpig cleaned up, killed that stuck job, You need to re-build that pr
<veebers> hopefully it'll get through this time
<veebers> babbageclunk: um, sorry yes
<babbageclunk> ha, no worries
<veebers> babbageclunk: it wasn't going to go anywhere. I'm hoping to get more insight on it, but I think the cleanup will sort it for the immediate future
<babbageclunk> oops, think I've managed to schedule multiple merges - cleaning them up now.
<anastasiamac> an easy review anyone? https://github.com/juju/juju/pull/9051 adds uuid back for compat
<vinodhini> wallyworld: have a min ? i want to discuss abt this goose lib
<wallyworld> ok
<vinodhini> i wud like to HO please.
<wallyworld> sure, i'm there
<anastasiamac> babbageclunk: any chance u could have a look since apparently u r my patner in the original crime...?^^^
<babbageclunk> anastasiamac: well, ok, as long as it's easy!
<anastasiamac> babbageclunk: when is it ever?
<anastasiamac> but PR is small
<babbageclunk> anastasiamac: why have both? Is that going to be more confusing than just having uuid?
<anastasiamac> babbageclunk: to ensure backward compatibility... and a way forward... we've been discussing it in standup the last couple of days :)
<anastasiamac> babbageclunk: also, see thumper comments in the linked bug ;)
<babbageclunk> ok, reading more
<anastasiamac> babbageclunk: thank you :)
<babbageclunk> anastasiamac: approved!
<anastasiamac> babbageclunk: \o/
<babbageclunk> anastasiamac: hey, how was I your partner in the original crime?!
<anastasiamac> u were reviewer when i've renamed 'uuid' to be 'controller-' or 'model-'
<anastasiamac> lol... how else?
<babbageclunk> whoa, totally blanked that from my mind.
<veebers> babbageclunk: I'm not sure that argument would hold up in court . . .
<anastasiamac> kelvinliu_: m doing something wrong.. teething problem mayb? m was off develop locally, switched to 2.4 but running 'make install' or 'go install ./...' does not work
<kelvinliu_> anastasiamac, u will need to run `make godeps` after switched to 2.3 or 2.4
<anastasiamac> kelvinliu_: i have
<anastasiamac> before trying to install
<anastasiamac> i'll try again
<kelvinliu_> anastasiamac, because we need to rm -rf ./vendor then ensure dependencies via `godeps` for 2.3/2.4
<anastasiamac> kelvinliu_: sorted... yes... i was not running the make target but godeps directly... old habits die hard... i'll b better by monday i promise
 * anastasiamac winces
<kelvinliu_> anastasiamac, cool! hope the change doesn't impact ur workflow too much.  : )
<anastasiamac> kelvinliu_: so far, pretty seamless !!! so kind of awesome
<anastasiamac> and another awesome and laconic PR for review, plz - https://github.com/juju/juju/pull/9052 - help for create-backup and removing test artifacts
<anastasiamac> wallyworld: any chance u could PTAL ^^ 2.4 one after all..
<wallyworld> sure
<anastasiamac> \o/
<wallyworld> anastasiamac: done with a request to reject incompatible cli args
<anastasiamac> wallyworld: so, the problem is that they are not incompatible. it's totally okay to say --keep-copy --no-download...
<wallyworld> sure, so check the value
<wallyworld> if keep==false and no-download then complain
<anastasiamac> wallyworld: even if the user says '--keep-copy=false --no-download', we will ignore --keep-copy
<anastasiamac> ic
<wallyworld> that's when we should error
<wallyworld> make sense?
<anastasiamac> -keep-copy is false by default
<anastasiamac> i.e. when we read a flag we supply 'false' as default...
<anastasiamac> how do u know if that false is user-supplied vs the default value?
<anastasiamac> wallyworld:
<wallyworld> doesn't matter how the true/false value gets there. if c.Keep == false and c.NoDownload == false then it's an error
<wallyworld> NoDownload==true
<wallyworld> ie we don't want to allow the user to accidentally ask for a no-op
<anastasiamac> what u describe is undetectable
<anastasiamac> wallyworld: really made me sweat for it :D but found it... i'll update the pr.. want to review it before i land?
<wallyworld> nah, all good
<anastasiamac> ack
<wallyworld> as long as there's a test
<anastasiamac> yes, of course :) this is how i know it works too :D
<wallyworld> :-)
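The flag-default problem wallyworld and anastasiamac settle above (a boolean flag whose built-in default equals a value the user may also pass explicitly) is a generic CLI issue. Below is a minimal sketch of the sentinel-default technique, in Python with argparse purely for illustration: juju's real CLI is Go with gnuflag, and the flag spellings here mirror but do not reproduce the actual create-backup options.

```python
import argparse

def parse_backup_args(argv):
    """Distinguish "user disabled keep-copy" from "flag left at default".

    Illustrative only: juju's real CLI is Go/gnuflag, and these flag
    spellings are not the actual create-backup options.  A None default
    acts as the sentinel for "not supplied by the user".
    """
    parser = argparse.ArgumentParser(prog="create-backup")
    parser.add_argument("--keep-copy", dest="keep_copy",
                        action="store_true", default=None)
    parser.add_argument("--no-keep-copy", dest="keep_copy",
                        action="store_false", default=None)
    parser.add_argument("--no-download", action="store_true", default=False)
    args = parser.parse_args(argv)
    # Explicitly disabling keep-copy while also skipping the download would
    # be a no-op backup (no local file, no remote copy), so reject it.  A
    # keep_copy left at the None sentinel with --no-download can instead be
    # treated as "implied keep".
    if args.no_download and args.keep_copy is False:
        parser.error("--no-download cannot be combined with --no-keep-copy")
    return args
```

The sentinel sidesteps the "how do u know if that false is user-supplied" question entirely: unspecified is None, never False.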
<veebers> stickupkid: I realise now that I said I was going to counter propose your ci test changes but never did, I might try hit that real quick for you now
<stickupkid> ta
<veebers> stickupkid: FYI current-parameters
<veebers> heh
<veebers> stickupkid: actually FYI https://github.com/CanonicalLtd/juju-qa-jenkins/pull/73
<stickupkid> ah, that solves it better than mine
<stickupkid> CR your PR :D
<manadart> Anyone got insight into what can cause repeated dependency engine errors like this?
<manadart> ERROR juju.worker.dependency engine.go:587 "api-address-updater" manifold worker returned unexpected error: connection is shut down
<jamespage> odd question but if I need to use pip to install a workload from a reactive charm, how do you break out of the virtualenv being used for the charm hook itself?
<jamespage> cory_fu: ^^ ?
<icey> jamespage `deactivate; pip install $MY_PACKAGE` ?
<icey> probably break all kinds of other stuff though
<cory_fu> jamespage: What icey suggested might work in a subshell.  I think you could also filter the environ dict passed to subprocess (maybe just pass an empty dict)
<jamespage> cory_fu: is it VIRTUAL_ENV that causes pip to install to the venv rather than globally?
<cory_fu> jamespage: There are a few env vars that get modified.  PATH, possibly PYTHON_PATH, maybe some others, I'm not sure
<jamespage> cory_fu: I think if they fully path /usr/bin/pip it will dtrt - the issue is that the pip in the venv is being used
<jamespage> or maybe not...
 * jamespage puzzled
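A sketch of the environ-filtering approach cory_fu suggests: spawn the system pip with the virtualenv variables stripped and the venv's bin directory removed from PATH, and call pip by full path so the venv's shim is never consulted. The exact set of variables a virtualenv modifies can vary, and `/usr/bin/pip` as the system pip path is an assumption.

```python
import os
import subprocess

def clean_env(environ):
    """Return a copy of *environ* without virtualenv-related variables.

    VIRTUAL_ENV / PYTHONPATH / PYTHONHOME are the usual suspects; a venv
    may touch others, so treat this list as an assumption.
    """
    env = {k: v for k, v in environ.items()
           if k not in ("VIRTUAL_ENV", "PYTHONPATH", "PYTHONHOME")}
    venv = environ.get("VIRTUAL_ENV")
    if venv:
        # Drop the venv's bin directory (and anything under the venv) from PATH.
        env["PATH"] = os.pathsep.join(
            p for p in environ.get("PATH", "").split(os.pathsep)
            if not p.startswith(venv))
    return env

def pip_install_global(package):
    """Install *package* with the system pip, outside the hook's venv."""
    subprocess.check_call(["/usr/bin/pip", "install", package],
                          env=clean_env(os.environ))
```

As jamespage notes, fully pathing `/usr/bin/pip` does most of the work on its own; scrubbing the environment just keeps pip from resolving into the venv via the inherited variables.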
<hml>  stickupkid: pr reviewed, lgtm with a few questions
<manadart> stickupkid: Quick HO?
<stickupkid> sure
<stickupkid> rick_h_: you got 5 minutes for a quick HO?
<rick_h_> stickupkid: for you, sure thing
<magicaltrout> rick_h_: to your knowledge if jujucharms.com having any issues today?
<magicaltrout> oop well thats funky
<pmatulis> works here
<magicaltrout> yeah its been a bit weird today
<magicaltrout> one of my guys earlier was complaining about auth issues
<magicaltrout> then i was searching for stuff i know exists that didn't show up
<magicaltrout> then 5 minutes later, did
<magicaltrout> weird
<pmatulis> IT maintenance maybe, db upgrade?
#juju 2018-08-11
<drcode> hi all
<drcode> after I bootstrap and got "default  maas-cloud-controller  maas-cloud    2.4.1    unsupported "  I can use "juju deploy mysql"?
<pmatulis> drc...
#juju 2018-08-12
<veebers> Morning all o/
<thumper> morning
<thumper> veebers: do you know where our jenkins top level unit test jobs collection went?
<veebers> thumper: no, let me have a look at what you're seeing
<veebers> thumper: what link are you looking at?
<thumper> veebers: as in hangout and screen share?
<thumper> http://10.125.0.203:8080/
<thumper> not logged in
<thumper> logged in don't see it either
<veebers> thumper: I don't understand what you're asking
<thumper> veebers: we used to have a tab along the top, like the functional tests, ci-run etc, that said unit tests
<thumper> and it was the latest unit test runs
<thumper> so we could see general good/bad
<veebers> thumper: I never used the views, I haven't changed anything. The views aren't managed by JJB, so are edited manually
<thumper> hmm...
<veebers> thumper: it's possible that a recent jenkins/plugin update changed things
<thumper> I'm sure we used to have one there...
 * thumper nods
<veebers> thumper: If I go to the ci-run tab I can see the top level jobs there (unit, build etc.) and that takes me to the unit jobs.
<thumper> that takes me to the multijob list though
<thumper> we used to have a view that had a graph...
<thumper> remember that?
<veebers> thumper: only vaguely that you used something like that sorry
<thumper> that's ok
<veebers> thumper: I suspect a recent update changed things
<thumper> I'll poke around
<veebers> sweet, I can't offer much help sorry I haven't played to much with views
<veebers> I'm pretty sure you can't break anything though so go nuts with what you try :-)
<thumper> Added a view...
<thumper> looks like our windows unit tests haven't been running for a month
<veebers> thumper: seriously ugh /me looks too
<veebers> thumper: ugh, zero errors or messages too; yay
<veebers> oh wait, there kind of is 'scp ... operation in progress'. Have I seen that before? /me checks notes
<veebers> yeah I think I have
<veebers> thumper: I'll reboot that machine (and maybe check if it needs cleanup)
<thumper> ok
<veebers> thumper: FYI I log in and see "Low disk space" warning dialog. I might be crazy, but we might be running low on disk
<thumper> heh
#juju 2019-08-05
<achilleasa> can I get a quick CR on https://github.com/juju/juju/pull/10482? (removal of cmr-bundle ff from dev)
<stickupkid> achilleasa, done
<achilleasa> stickupkid: tyvm
<stickupkid> rick_h, first PR :D https://github.com/juju/jsonschema-gen/pull/1
<rick_h> stickupkid:  ok, I'll defer to someone else because I don't get what "embedded" means in this context and what's going on
<stickupkid> rick_h, jam back tomorrow?
<rick_h> stickupkid:  maybe? he crossed an ocean so should get 2 days...but he's also on holiday for a week so guess not
<rick_h> stickupkid:  actually he's on holiday for 2 weeks
<rick_h> so wheeee
<stickupkid> i'll speak to manadart, see what he thinks
<rick_h> stickupkid:  cool
<rick_h> sorry, just getting caught up my head isn't ready for it yet
<stickupkid> yeah, nps at all
 * rick_h bounces around from topic to topic to topic
<manadart> stickupkid: OK to talk tomorrow? I literally walked in the door, dropped bags and started working today. Have to sort myself out.
<stickupkid> manadart, of course, hence why i've not ping'ed :D
<stickupkid> manadart, speak to you tomorrow
<gnuoy> Hey rick_h, just looking into https://github.com/juju/python-libjuju/issues/333. I'm being dumb, I don't follow the suggestion of trying a delay,  a delay between what and what ?
<rick_h> gnuoy:  sorry, after the add-model call. I think, if it's the same bug, there's a race between adding a model, and juju status being able to see it from the cache.
<gnuoy> rick_h, its the add-model call that errors
<gnuoy> let me try adding a sleep in libjuju itself
<rick_h> gnuoy:  ah, yea so it looks like libjuju is trying to see the new model that was added.
<hml> gnuoy:  rick_h: what does await do here?
<rick_h> hml:  make the async call
<rick_h> gnuoy:  right, so the delay would be before this line https://github.com/juju/python-libjuju/blob/8cb8d75e217d7162bb662dcaef915db840007bbf/juju/controller.py#L296
<gnuoy> rick_h, adding a sleep here https://github.com/juju/python-libjuju/blob/master/juju/controller.py#L295 fixes it
<gnuoy> haha, same idea
<rick_h> gnuoy:  right, exactly
<rick_h> gnuoy:  :( so there's a fix that's landed, not sure on next release timeline atm
<rick_h> gnuoy:  will work with team in the release call tonight I guess and see what I can find out post-sprint
<gnuoy> ack, ok. thanks rick_h
<gnuoy> beisner ^
<beisner> ack thx rick_h gnuoy
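The workaround gnuoy and rick_h converge on above — pausing after add-model so the controller cache can see the new model — can also be expressed as a bounded retry instead of a single fixed sleep. A hedged sketch: `controller.add_model` is python-libjuju's coroutine, but this wrapper, its attempt/delay parameters, and its broad error handling are illustrative, not the fix that actually landed.

```python
import asyncio

async def add_model_with_retry(controller, model_name,
                               attempts=5, delay=1.0):
    """Retry add_model with a short pause between attempts.

    Works around the race between adding a model and the API being able
    to see it.  Real code should catch the specific libjuju API error
    rather than bare Exception.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return await controller.add_model(model_name)
        except Exception as err:  # simplification; narrow this in real code
            last_err = err
            await asyncio.sleep(delay)
    raise last_err
```

Compared with an unconditional sleep inside controller.py, the retry only pays the delay when the race is actually lost.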
#juju 2019-08-06
<timClicks> babbageclunk, et al.: have any time to take a look at https://github.com/juju/juju/pull/10461 before standup?
<babbageclunk> timClicks: sorry, just looking briefly now
<timClicks> babbageclunk: re the vm template's name
<timClicks> perhaps it could simply be "juju-unit-template"?
<timClicks> because it will be hosted in a directory that's unique to each controller
<timClicks> or "juju-{series}-template"?
<timClicks> we could use the SHA256 string, but that feels unnecessarily complicated
<babbageclunk> timClicks: I know in the vmdk code it had some extra structure to disambiguate... just looking hang on
<babbageclunk> I think series was one part, was there anything else?
<timClicks> the series is still there, fwiw - was part of the directory
<timClicks> used to be /{controller uuid}/{series}/{sha265}.vmdk I think
<timClicks> PR is currently using /{controller uuid}/{series}/juju-{model uuid}-template
<babbageclunk> Ah, right - I think the rationale for the hash was so that you'd get a new image if it was released. But is that right? Wouldn't it just use it if it was already there?
<babbageclunk> timClicks: jump into standup?
<timClicks> ok
<timClicks> babbageclunk: can you test the internal vsphere instance? it doesn't seem to be responsive
<babbageclunk> timClicks: trying
<babbageclunk> timClicks: works for me (tm)
<babbageclunk> maybe vpn?
<timClicks> babbageclunk: hrm, it appears that turning things on and off again isn't helping
<timClicks> nm
<timClicks> looks like changing to the us-based VPN is getting me through
<timClicks> babbageclunk: hey final thought on template naming
<timClicks> is there any need for it to be namespaced by the controller uuid?
<babbageclunk> timClicks: sorry, missed this! Dumb mouse ran out of batteries
<babbageclunk> timClicks: if it's not by controller ID then we don't know when to clean it up.
<timClicks> that's a good point
<babbageclunk> (without implementing some kind of reference counting but I don't know how we'd do that)
<achilleasa> can I please get a CR for https://github.com/juju/juju/pull/10484?
<manadart> achilleasa: I am having a look.
<stickupkid> achilleasa, man, they don't sort the bundle changes in pylib
<stickupkid> achilleasa, :|
<achilleasa> stickupkid: does it even work for more complicated bundles?
<stickupkid> achilleasa, i have no idea, probably not
<achilleasa> stickupkid: I remember we were hitting issues with relations not being created in the correct order when we were working on the bundle cmr stuff
<stickupkid> achilleasa, guess where i'm at now :D
<stickupkid> haha
<achilleasa> aha!
<stickupkid> achilleasa, i really don't want to copy the same sort method, but I guess I have to?
<stickupkid> achilleasa, i.e. i don't like it's non-terminal if things go wrong
<stickupkid> achilleasa, anyway, i'm going grabbing some food, so i'll think over lunch
<achilleasa> stickupkid: yeah... I wonder if you can somehow make it bail out if it exceeds a large number of comparisons (of course, finding a safe value for that is definitely not trivial)
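A termination-safe alternative to capping the number of comparisons, as discussed above, is to order bundle changes with a topological sort that fails fast on cycles (Kahn's algorithm). This is an illustrative sketch only — the `changes`/`requires` shape here is assumed and is not python-libjuju's actual change representation.

```python
from collections import deque

def sort_changes(changes, requires):
    """Topologically sort bundle change ids so each change's requirements
    come first; raise instead of looping forever on a dependency cycle.

    *changes* is a list of change ids; *requires* maps an id to the ids
    it depends on.  (Hypothetical names, not libjuju's API.)
    """
    pending = {c: set(requires.get(c, ())) for c in changes}
    dependants = {c: [] for c in changes}
    for c, deps in pending.items():
        for d in deps:
            dependants[d].append(c)
    # Start with the changes that have no unmet requirements.
    ready = deque(c for c, deps in pending.items() if not deps)
    ordered = []
    while ready:
        c = ready.popleft()
        ordered.append(c)
        for dep in dependants[c]:
            pending[dep].discard(c)
            if not pending[dep]:
                ready.append(dep)
    if len(ordered) != len(changes):
        raise ValueError("dependency cycle in bundle changes")
    return ordered
```

The terminal-failure property is the point: a cycle surfaces as an immediate error rather than a hang, which addresses the "non-terminal if things go wrong" worry without picking an arbitrary comparison limit.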
<achilleasa> manadart: can you take a look at this one too? https://github.com/juju/charm/pull/289 (this one is for fixing the relative bundle issues in the develop branch)
<manadart> achilleasa: Sure.
<manadart> achilleasa: Approved.
<achilleasa> tyvm
<achilleasa> can I get a CR for https://github.com/juju/juju/pull/10487? (it's the new and improved fix for resolving relative charm paths - for dev)
<achilleasa> rick_h: this one ^^ also works with overlays
<rick_h> achilleasa:  yay
<stickupkid> achilleasa, i'll have a look soon, fighting with circular dependencies of deps :(
<stickupkid> achilleasa, approved
<achilleasa> stickupkid: tyvm
<stickupkid> rick_h, so that worked https://github.com/juju/juju/pull/10471 - we just need to fork this into juju namespace and we're good to go https://github.com/SimonRichardson/rpcreflect
#juju 2019-08-07
<wallyworld> kelvinliu: hpidcock: i need food so will ping you guys in a bit
<kelvinliu> yep
<wallyworld> hpidcock: forgot to mention - that podspec set commit fix should also land in 2.6
<hpidcock> ok
<wallyworld> kelvinliu: hpidcock: is now ok for hangout?
<hpidcock> give me a sec
<wallyworld> np just ping when ready
<kelvinliu> im ok, we will wait hpidcock
<hpidcock> wallyworld kelvinliu
<timClicks> babbageclunk, wallyworld: vSAN PR be good; once you're happy there is some unit test churn to fix then I'll rebase
<wallyworld> awesome, i'll let babbageclunk +1 it
<timClicks> ok
<timClicks> *should be good
<babbageclunk> timClicks: looking now
<hpidcock> wallyworld: https://github.com/juju/juju/pull/10488 not sure if this is how you would backport or if I should give the PR the same description as the one that landed in develop.
<babbageclunk> timClicks: reviewed - the scoping thing's a bit hard to explain, let me know if you want more detail
<timClicks> babbageclunk: could you please take a look to see if I've fixed that?
<babbageclunk> yup yup
<babbageclunk> timClicks: you don't need to assign to err on line 253 - just the normal return will set err for the deferred func
<babbageclunk> but other than that yes that'll fix it
<timClicks> babbageclunk: should be fixed
<wallyworld> hpidcock: we can go either way. sometimes a copy of the description is useful
<hpidcock> ok cool, thank-you
<timClicks> babbageclunk: could you please click resolve convo if you're happy with that error scoping fix
<babbageclunk> wilco
<babbageclunk> timClicks: sorry, otp but will straight after
<timClicks> np
<babbageclunk> timClicks: approved
<timClicks> whaaaat
<hpidcock> https://github.com/juju/juju/pull/10492 merge 2.6 into develop, no changes, just forward-porting a merge commit that makes merging confusing for the next person.
<wallyworld> kelvinliu: there's a few things to look at in the actions PR. could you go through the comments from me and harry and fix those you agree with and we can talk about the rest
<kelvinliu> sure
<kelvinliu> wallyworld: now?
<wallyworld> kelvinliu: we can chat now if you want - have a read of the comments and ping me?
<kelvinliu> wallyworld: i think we can go through now if you r free
<wallyworld> ok
<kelvinliu> and hpidcock
<hpidcock> oops sorry, was in the zone
<hpidcock> wallyworld: the juju.io/cloud node annotations used for detecting substrate, it appears compileK8sCloudCheckers() doesn't have rules to match openstack
<wallyworld> hpidcock: yeah, i can believe that. i can't recall offhand what the solution is there. i think we'll need to check the charm source code to see what it sets up
<hpidcock> already on it
<wallyworld> hpidcock: i think there may even be something in CDK kube-master charm which when related does something
<wallyworld> i can check with cory tomorrow
<hpidcock> Looks like it's done inside the kubernetes-worker charm and only for gce, ec2 and azure
<wallyworld> it was on the todo list
<wallyworld> i'll follow up
<hpidcock> thank-you
<stickupkid> anyone around for a CR https://github.com/juju/jsonschema-gen/pull/3
<achilleasa> stickupkid: looking ^^^
<achilleasa> stickupkid: done
<stickupkid> achilleasa, whilst you're at it https://github.com/juju/juju/pull/10471
<achilleasa> stickupkid: sure. will look in 10min if that's OK with you
<stickupkid> achilleasa, fine with me
<stickupkid> achilleasa, manadart approved the now closed develop version https://github.com/juju/juju/pull/10467 if that helps
<achilleasa> stickupkid: done. sorry for the delay
<stickupkid> achilleasa, nps, ty
<stickupkid> CR for merging 2.6 into develop https://github.com/juju/juju/pull/10493
<achilleasa> stickupkid: I 've noticed that hpidcock also has a forward merge PR (https://github.com/juju/juju/pull/10492) for his changes (which are included in your PR as well).
<rick_h> achilleasa:  with john and joe out can you please give thumper some time on https://github.com/juju/juju/pull/10491
<achilleasa> rick_h: sure thing
<stickupkid> rick_h, ah, i'll merge his and then redo mine
<stickupkid> achilleasa, correct 2.6 into develop after hpidcock was merged https://github.com/juju/juju/pull/10495
<achilleasa> stickupkid: approved
<stickupkid> achilleasa, ta
<rick_h> stickupkid:  ok'd the one, started to look at the other but it needs updating based on the first so going to hold off then if that's cool
<stickupkid> rick_h, yes of course
<pmatulis> is there a bug whereby a model cannot be removed if CMR is in use?
<rick_h> pmatulis:  https://bugs.launchpad.net/juju/+bug/1768682 I think
<mup> Bug #1768682: cross model breakage <juju:Triaged> <https://launchpad.net/bugs/1768682>
<pmatulis> rick_h, thank you
<pmatulis> rick_h, in the help for remove-relation there is the example 'juju remove-relation 4' . where is the ID of '4' exposed/visible?
<rick_h> pmatulis:  hmm, good question. So there's a relation id that's used in hooks/etc. I'd have to poke around to see if that's exposed in the normal UX in any way
<pmatulis> rick_h, alright. i'm having quite a hard time recovering from trying to remove this model. looks like i have to wipe out the controller
#juju 2019-08-08
<hml> quick pr for someone?  https://github.com/juju/juju/pull/10500
<stickupkid> hml, achilleasa am I going crazy or does this not do what people think it does with the error https://github.com/juju/juju/blob/develop/apiserver/stateauthenticator/modeluser.go#L161-L187
<stickupkid> hml, i can look
<stickupkid> achilleasa, got a sec
<achilleasa> stickupkid: sure. you are talking about err1 right?
<rick_h> knobby:  cory_fu do you know if anyone on your end can peek at https://discourse.jujucharms.com/t/cant-create-pod-with-container-from-a-custom-registry/1910 sometime please?
<rick_h> looks like someone looking for the magic incantation to use their CDK for their custom images
<knobby> rick_h: I was having trouble with this exact thing yesterday...I'll see if we can find some answers
<rick_h> knobby:  <3 my hero
<knobby> rick_h: your hero will probably be joedborg, not me ;)
<rick_h> my hero-ship is transferable for sure
<timClicks> it looks like there's another option in town for CI/CD https://github.blog/2019-08-08-github-actions-now-supports-ci-cd/
<timClicks> it's nice to see github working on claiming back some feature parity with gitlab
<thumper> trivial PR to review to fix an intermittent failure: https://github.com/juju/juju/pull/10502
<hpidcock> thumper: LGTM
<thumper> cheers hpidcock
<hpidcock> timClicks: microsoft also have https://azure.microsoft.com/en-au/services/devops/pipelines/ which I imagine is what GitHub is natively integrating
#juju 2019-08-09
<thumper> does anyone have familiarity with windows ?
<thumper> I'm trying to get our windows CI machine to have go installed
<thumper> as for some reason it seems to have lost it
<thumper> I'm currently struggling with two things:
<thumper> when I ssh from goodra the windows shell doesn't scroll, so after a page of output it just overwrites the last line
<thumper> I managed to download the go 1.11.12 msi
<thumper> but I can't seem to get it to install...
<hpidcock> thumper: maybe this? https://powershellexplained.com/2016-10-21-powershell-installing-msi-files/
<thumper> in my frustration I rebooted the machine
<thumper> I hope
<thumper> hpidcock: let me try that...
<thumper> hmm... nope, that didn't seem to help
<thumper> man I hate windows
 * thumper emails crew
<hpidcock> thumper: were you trying to fix the RunUnittests-win2012 job?
<thumper> yep
<thumper> and failing
<hpidcock> ok I'll check it out this morning
<thumper> thanks hpidcock
<thumper> hpidcock: thanks for fixing the go part. curious, what did you need to do?
<thumper> I notice that mongo doesn't appear to be installed as all the tests failed
<hpidcock> I just fixed mongo
<thumper> but they ended up looking for /usr/local/bin/mongo
<thumper> ah sweet
<thumper> so what was the go problem?
<thumper> was it actually installed and somehow out of the path?
<hpidcock> thumper: for some reason it never installed, I had to use that guy's powershell script to install it
<hpidcock> probably would have been easier in the end to use RDP to do it via the gui
<hpidcock> also to work around the ssh issue I did `mode con lines=9999` to fool the windows conhost.exe into thinking the window height is 9999
<wallyworld> thumper: trivial https://github.com/juju/juju/pull/10505
<hpidcock> also had to fix up all the env vars, they were wrong and weird
<wallyworld> babbageclunk: timClicks: how goes the vsphere cloud-init fun and games?
<thumper> wallyworld: looking
<timClicks> have figured out how to customise the new instances' hostnames
<timClicks> testing to see if that's sufficient
<wallyworld> timClicks: ok, good to unblock if it works, but it would be preferred to follow up with a bespoke cloud init per instance
<thumper> wallyworld: thanks, lgtm
<wallyworld> thumper: the race wasn't even in test code so it seems it could occur in production
 * thumper nods
<wallyworld> not sure of the effect etc
<thumper> wallyworld: to be honest, unlikely to be a big problem
<thumper> however, good catch
<wallyworld> we'll need to be careful adding new entities if we also need to use a hash cache for those
<thumper> wallyworld, timClicks: how can the hostname impact the delivery of the full cloud init user data?
<timClicks> thumper: the import process and the vm cloning process have different args
<wallyworld> thumper: it's more so that all cloned instances had the same hostname
<wallyworld> because cloud init data was set once only in the template and cloned
<thumper> hmm..
<thumper> ok
<wallyworld> so setting hostname properly is necessary but not sufficient
<wallyworld> still need to sort out cloud init properly
<wallyworld> but it may unblock
<wallyworld> looking for quick win to start with
<wallyworld> small steps, just fix one thing that is feasible
<wallyworld> thumper: i *think* andrew's approach also had the cloud init issue (not 100%) but the hostname was not an issue - that was introduced by the alternative workflow
<wallyworld> thumper: do we really care about maas 1.9 now that trusty is esm? perhaps we can remove that ci test. we don't really have the means to properly run it anyway IIANM
<thumper> No... I don't think we care about maas 1.9 any more
<wallyworld> yay, one less red dot
<thumper> we know that juju 2.3 and 2.4 work fine and can be a jump point if necessary
<wallyworld> yup
<thumper> we should remove the 1.9 code paths from juju for 3.0
<wallyworld> yep, on the metaphorical list
<babbageclunk> hey timClicks, how's it going?
<timClicks> babbageclunk: hey sorry, was outside
<babbageclunk> timClicks: no worries/same
<babbageclunk> wallyworld: approved
<wallyworld> awesome ty
<wallyworld> kelvinliu: trivial forward port from 2.6 if you are around
<wallyworld> https://github.com/juju/juju/pull/10507
<kelvinliu> yep looking
<kelvinliu> wallyworld: done! thanks!
<wallyworld> ty
#juju 2019-08-11
<thumper> easy PR - merging 2.6 -> develop https://github.com/juju/juju/pull/10509
<thumper> timClicks: thanks
<timClicks> thumper: np
<timClicks> thumper: hey I was considering submitting a Juju tutorial for LCA 2020 on the gold coast
<thumper> go for it
#juju 2020-08-03
<manadart_> achilleasa: Need a review for the AWS VPC re-use.
<manadart_> https://github.com/juju/juju/pull/11872
<achilleasa> manadart_: looking in 5'
<achilleasa> manadart_: are you using venv to test ^^^ ?
<manadart_> achilleasa: Yes.
<achilleasa> we will need to bump the requirements: https://pastebin.canonical.com/p/JQg2k6T7c8/
<achilleasa> I will try to get it up & running and send you a diff for requirements.txt
<achilleasa> looks like it's only some of the azure bits...
<achilleasa> manadart_: https://pastebin.canonical.com/p/6ghJG7gC2q/ not sure if removing these will break winrm.py...
<achilleasa> manadart_: are the 400 responses at the end of the test the known cleanup issues that you mention in the PR description?
<manadart_> achilleasa: For the security groups?
<achilleasa> yeap
<manadart_> achilleasa: Have not yet looked at those. I always get a destroy-model failure.
<manadart_> achilleasa: https://github.com/juju/juju/pull/11873 merges that single patch forward.
#juju 2020-08-04
<thumper> https://github.com/juju/juju/pull/11874 for the stopping and starting of unit workers
<thumper> wallyworld, kelvinliu, hpidcock, tlm: ^^^
<hpidcock> thumper: looking
<hpidcock> wallyworld: might need to look at it too
<tlm> looking
<wallyworld> looking too
<kelvinliu> looking 2
<thumper> I feel blessed
<wallyworld> thumper: i had a question/concern about the behaviour of the start/stop handler not waiting for a response
<thumper> ok, will look later, brain is in the next thing just now
<achilleasa> can someone take a look at https://github.com/juju/juju/pull/11877?
<achilleasa> hml: can you take a look when you have some time? ^^^
<mcayland> hi everyone - i've been trying to upgrade to a newer version of juju to run the latest octavia plugin
<mcayland> i've done the model upgrade but it seems like the juju agents themselves don't get upgraded
<mcayland> i can see lots of messages like "no matching agent binaries available" in the log files
<mcayland> can anyone provide any pointers to help fix this?
<hml> achilleasa:  sure
<petevg> mcayland: Did you upgrade the controller model first, then upgrade the models housing the applications? And have you taken a look at https://juju.is/docs/help-model-upgrades? Let me know if you have any specific questions, but that page might help you get started troubleshooting.
<mcayland> petevg: yes indeed, i followed that page quite closely
<mcayland> petevg: the other thing i had to do was remove the old juju apt packages and switch to a snap version instead
<mcayland> i did a 2 stage upgrade: first upgrade to the latest deb packages which got me to juju 2.6
<mcayland> this looks like it is reflected correctly in the agent.conf files
<mcayland> then i installed a new snap juju 2.7.8 package - "juju status" looks fine
<mcayland> "juju upgrade-model" gives me "ERROR some agents have not upgraded to the current model version 2.7.8"
<mcayland> and lists most of the agents
<mcayland> petevg: i managed to run "juju upgrade-model --agent-stream 2.7.8 --debug"
<mcayland> but 2.7.8 isn't listed in the "available agent binaries" list. 2.7.7 is there, so is 2.8.1. i guess that's why i'm failing...
<mcayland> oooh there is an "--ignore-agent-versions" switch. let me try that...
<mcayland> "started upgrade to 2.8.2" i think that's promising
<petevg> mcayland: Fingers crossed, but I think that you're on the right track. There aren't deb based binaries for some of the latest juju releases, which is probably the issue you were running into. Getting everything onto the snap should fix it ...
<petevg> hml: do you have any other advice for mcayland? (Starting w/ a deb based controller, upgrading to a snap based one.)
<hml> petevg: looking
<mcayland> petevg: i can see that some of the agents have now upgraded, lots of activity
<mcayland> petevg/hml: things have started to settle down now. looks like 3 services have errors...
<hml> mcayland:  the controller is running 2.7.8, yes? and the model has now upgraded to 2.7.8?
<mcayland> hml: there were no 2.7.8 binaries present and --ignore-agent-versions seems to have updated all the agents to 2.8.2
<hml> mcayland: whatâs the controller running?
<mcayland> hml: looks like controller is also at 2.8.2
<hml> mcayland: what version of the snap?
<hml> mcayland: what snap channel for the juju snap, to be more specific
<mcayland> hml: i went for the basic "sudo snap install juju --classic"
<hml> mcayland: so you should have been upgrading to 2.8.1, afaik with that snap.  might be why you couldn't get to 2.7.8.
<hml> mcayland: juju version shows 2.8.1?
<mcayland> hml: yes it does
<mcayland> i'm just looking to see why designate failed
<mcayland> "TypeError: sequence item 0: expected str instance, NoneType found"
<hml> mcayland: looks like using the --ignore-agent-versions flag upgraded you to a release that is in process right now, but not yet official: 2.8.2
<mcayland> return ';'.join(self.relation.client_ips())
<hml> 2.8.2 is the latest/candidate channel right now
<mcayland> hml: hmm okay - i hope there's nothing too unstable there(!)
<hml> mcayland: it's in final test right now; if that's good, it will be released
<mcayland> hml: great, thanks! any thoughts on the designate-bind error above? it looks like it's missing some config?
<hml> mcayland: can you give me more context around the TypeError, looks like a python error, so would be related to the charm, not juju itself
<mcayland> hml: that's the tail end of juju unit log
<mcayland> self.relation.client_ips() makes me think it's trying to update some config
<hml> mcayland:  thatâs from the charm itself, and data it gets for relations to other charms
<mcayland> hml: i can pastebin the whole backtrace if it helps?
<hml> mcayland: trying to find someone who knows more about the designate charm.  i'm on juju itself.  :-)
<mcayland> hml: https://pastebin.ubuntu.com/p/54BKYY8hrs/
<hml> mcayland: looks like the issue is at ln 30
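The TypeError in the pastebin ("sequence item 0: expected str instance, NoneType found") reproduces in isolation: `str.join` raises when the sequence contains `None`. A minimal sketch, assuming `client_ips()` returned a list with unset addresses (the helper name and guard below are illustrative, not the designate charm's actual code):

```python
def join_client_ips(ips):
    """Hypothetical guard for the pattern in the traceback: skip
    unset (None) addresses instead of letting str.join raise."""
    return ";".join(ip for ip in ips if ip is not None)

# the failure mode: a relation member with no address yet yields None,
# and ";".join([None]) raises the TypeError seen in the unit log
try:
    ";".join([None])
except TypeError as exc:
    print("reproduced:", exc)

print(join_client_ips([None, "10.0.0.7", None]))
```

Whether skipping the address or erroring out with a clearer message is correct depends on what the relation data should look like once all units have joined.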
<beisner> hi mcayland, hml - I may be able to help with the designate charm issue.
<hml> beisner:  the pastebin is just above
<beisner> First, I need to determine if the charm itself is at the latest stable release.
<hml> beisner:  ty
<beisner> mcayland ^
<mcayland> beisner: thanks! juju status tells me it is jujucharms 29 ubuntu
<beisner> ok thx, mcayland.  what all has been tried here?  ie. `juju resolved <unit-in-error>` etc?  Are there any other units in an error state?
<mcayland> beisner: it's basically a cluster i've upgraded via 2.6 to 2.8.2
<mcayland> i've not used "juju resolved" at all
<mcayland> after the upgrade there are 3 units not running: nova-cloud-controller, designate-bind and octavia-dashboard
<beisner> mcayland: mind sharing a `juju status` output?  (take care to review/sanitize for private info)
<beisner> https://pastebin.ubuntu.com/
<mcayland> beisner: https://pastebin.ubuntu.com/p/qyZRXVqx8n/
<beisner> thx mcayland, looking
<mcayland> beisner: also thanks to everyone here :)  just trying to figure out why nova-cloud-controller is unhappy...
<pmatulis> vault will need unsealing for sure
<pmatulis> not sure why it's sealed though. was this a working cloud?
<mcayland> pmatulis: the vault was sealed before i tried to do the upgrade, it's a private cloud that has been moved to a new home
<beisner> mcayland: can you tell me more about that move?  ie. Any network, infra, storage changes with that?
<mcayland> beisner: new storage and compute resources were added, but that was after the move and before i upgraded juju
<beisner> mcayland: I see, nova-compute/5 and 7 most likely.  Was everything functioning normally after the move, and after the scale-out of compute?
<beisner> ps. still looking at logs and at a local deployment.
<mcayland> beisner: yeah afaict it was all fine - the juju upgrade i needed because i was getting a lbaas config error which i traced to needing a later octavia charm
<mcayland> fwiw that error has now gone, but i should probably figure out why nova-cloud-controller isn't running though :/
<mcayland> i've also tried to unseal the vault and it fails. something tells me another admin has done some upgrades since i was last looking after it...
<beisner> mcayland: ok so designate-bind is active/ok?
<pmatulis> there is both a mysql unit and a percona-cluster unit, that's odd. can we have the relations as well? bottom part of 'juju status --relations'
<beisner> hmm, yeah, good catch pmatulis.  they're both the percona-cluster charm, at differing revs.
<mcayland> beisner: no, that's broken too - see the pastebin above at https://pastebin.ubuntu.com/p/54BKYY8hrs/
<beisner> mcayland: oh... you were talking about the lbaas error being gone.  got it.
<mcayland> pmatulis: https://pastebin.ubuntu.com/p/SFMKk9WSnX/
<mcayland> beisner: yeah :)  well i guess that at least made the upgrade worthwhile... ;)
<beisner> ok, so one db for vault; one db for openstack.  that's fine.
<beisner> db cluster, that is.
<mcayland> beisner: and here's the output of the nova-cloud-controller log: https://pastebin.ubuntu.com/p/q2VSPtdRrQ/
<mcayland> Generating template context for amqp  - followed by - update-status ERROR no relation id specified
<beisner> mcayland: so re: designate-bind, the only thing I can suggest is to try `juju resolved designate-bind/0` which will retry the last operation.  I'd be curious if that brings it to an active/ready state or not.
<mcayland> beisner: okay let me see...
<beisner> mcayland: In this topology, it doesn't look like vault can play a role in the success of your nova-c-c, but barbican may not be happy until you work out the vault bits.
<beisner> mcayland: on nova-c-c, you could either do a somewhat blind attempt at `resolved` there too;  or go into the unit and attempt to start each of the services; or start with inspecting each of the payload logs on the system unit to diagnose their startup issues.
<mcayland> beisner: i don't think the juju resolved for designate-bind has helped :/
<mcayland> "Failed to start apache2.service: Unit apache2.service is masked."
<beisner> mcayland: what is the current unit error state on designate-bind after the `resolved` run?
<mcayland> beisner: it still says error with "hook failed: "install"
<mcayland> how do i get the default relations?
<mcayland> there are no manual "juju add-relation" commands in https://jaas.ai/nova-cloud-controller
<mcayland> so presumably it has some defaults?
<beisner> mcayland: the "reference" minimal example is @ https://jaas.ai/openstack-base/bundle/ (see bundle.yaml).  bear in mind that is all the latest on Ubuntu 20.04 with OpenStack Ussuri.   BTW, what versions are you running?  I see it's Bionic, but I'm not sure which version of OpenStack.
<mcayland> beisner: it's bionic-stein
<beisner> ack
<beisner> mcayland: anyway, if you're wondering about relations, my suggestion is to do a stare & compare against that minimal bundle.
<beisner> mcayland: Beyond that, again, good old-fashioned system service diags on the host is next up in order to see what's not going right on the n-c-c unit.
<pmatulis> one big difference will be that the new bundle will contain mysql-router and mysql-innodb-cluster, instead of just 'mysql'. the placement charm will also be new
<mcayland> beisner: what about the "ERROR no relation id specified" message under "Generating template context for amqp"?
<mcayland> i can see the amqp relation there...
<pmatulis> i guess this is the bundle to compare with:
<pmatulis> https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-bionic-stein/bundle.yaml
<mcayland> pmatulis: thanks - i think this is going to have to wait until tomorrow now, it's getting late...
<mcayland> beisner, hml, pmatulis: thanks for all the suggestions. i guess this is going to take some time to figure out :(
<beisner> you're welcome, mcayland - feel free to drop into freenode channel #openstack-charms as well.
#juju 2020-08-05
<manadart_> achilleasa: Just gonna eat something, but I need a quick chat with you about the instance-poller worker.
<achilleasa> manadart_: sure; ping when back
<manadart_> achilleasa: Kickin' it in Daily.
<bthomas> Just an FYI: First time I tried to install juju using snap I got the following error --  "Run hook connect-plug-peers of snap "juju" (run hook "connect-plug-peers": error: cannot communicate with server: timeout exceeded while waiting for response)". Snap list showed that the juju snap had not installed. On attempting to install Juju a second time it worked seamlessly. Seems like a race condition. I have not tried to debug/reproduce
<bthomas> it. One way to reproduce it may be to stop microk8s, then start it and soon after try installing juju snap.
<Chipaca> bthomas: #snappy may also be interested in that fwiw
<Chipaca> i'm not there so maybe you already told them
<Chipaca> (i'm not there because if i were i'd work on snappy stuff :-p)
<Chipaca> jam: I think the only sane way out of this dispatch mess we've dug ourselves into is to look at the juju version, like what we did with has_app_data
<Chipaca> jam: which means main needs to parse the juju version, and so does model (for app data)
<Chipaca> jam: DRY would mean doing that once in main and then passing it in to model
<Chipaca> is that worth it?
<Chipaca> wait, wrong channel
 * Chipaca facepalms
<manadart_> bthomas: Can confirm. I removed/re-installed Microk8s around it, and it worked.
<bthomas> manadart_: Hmmm. If I see it again I will try and dig a bit deeper.
<manadart_> achilleasa: Forward merge: https://github.com/juju/juju/pull/11880
<achilleasa> manadart_: any conflicts?
<achilleasa> ah, nevermind
<manadart_> achilleasa: It's mine, plus a one-liner from Ian.
<gnuoy> When I upgrade my 2.8.1 controller to 2.8.2 it never seems to register that the upgrade has succeeded. list-controllers still reports it is on 2.8.1. show-controller reports that  controller-model-version is 2.8.2 but agent version is 2.8.1.
<gnuoy> I am upgrading it with "juju upgrade-controller --agent-stream=devel"
<gnuoy> My env is totally disposable fwiw
<manadart_> gnuoy: Let me try to repro.
<gnuoy> I am trying to reproduce a bug reported by a user who some how managed to get themselves on 2.8.2
<gnuoy> thanks manadart_
<gnuoy> manadart_, fwiw the machine-0.log on the controller contains a lot of:
<gnuoy> 2020-08-05 12:45:59 ERROR juju.worker.dependency engine.go:671 "upgrader" manifold worker returned unexpected error: no matching agent binaries available
<manadart_> gnuoy: The devel stream doesn't have them. We have not been publishing there for some time it appears. Try `--agent-stream=proposed`.
<gnuoy> manadart_, "no upgrades available"
<gnuoy> manadart_, I can recreate the 2.8.1 and try again with proposed
<manadart_> gnuoy: Hmm. OK, I'll look in a mo'.
<gnuoy> manadart_, fwiw redeploying and using `--agent-stream=proposed` resulted in the same outcome; controller-model-version at 2.8.2 but agent version at 2.8.1 and the same "no matching agent binaries available" entries in the controller log. Although I see no reference to 2.8.2 in machine-0.log at all
<manadart_> gnuoy: OK, OTP presently, but I'll suss it out immediately after.
<gnuoy> Is it expected that an upgrade to Juju 2.8.2 will trigger a charm's install hook? That appears to be what mcayland observed yesterday (and it highlighted a bug in the nova-cloud-controller charm). I cannot reproduce that behaviour when upgrading to 2.8.1. I haven't managed to upgrade to 2.8.2 yet. Upgrading to 2.8.1 from <2.8 causes relation hooks to fire but I haven't seen the install hook fire.
<pmatulis> gnuoy, after controller upgrade try 'juju controllers --refresh'
<manadart_> achilleasa: See comment regarding install hook from gnuoy ^
<gnuoy> pmatulis, yeah, I was doing the refresh
<manadart_> gnuoy: https://bugs.launchpad.net/juju/+bug/1890452
<mup> Bug #1890452: Juju allows upgrade based on stream, but fails to find agent binaries <juju:New> <https://launchpad.net/bugs/1890452>
<gnuoy> thanks manadart_
<manadart_> gnuoy: The work-around to force it into action is `juju model-config agent-stream=proposed`.
<gnuoy> ah, great, thanks manadart_
<manadart_> For the controller ^
 * manadart_ nods.
<achilleasa> can I get a review on https://github.com/juju/juju/pull/11881?
<achilleasa> hml: you might want to take a look ^ it's the next step from the PR you reviewed yest
<hml> achilleasa:  ack
<achilleasa> petevg: can you also do a QA and doc check (I changed the docs for the three hook tools) for ^^^
<petevg> achilleasa: will do!
#juju 2020-08-06
<jam> morning all
<achilleasa> manadart_: can you point me to a test where we set up a real space topology?
<achilleasa> I have a *state.Machine and want to add some subnets and add addresses in each one to the machine
<manadart_> achilleasa: As in "integration" test with a real Mongo? Not sure we have such an example.
<achilleasa> yeah...
<achilleasa> :-(
<achilleasa> I should just mock the call and move on (it's in the api/client bits)
<achilleasa> otherwise it's add subnets, add LLDs, add addresses...
<manadart_> achilleasa: See the tests for `GetNetworkInfoForSpaces`; that's pretty close.
<manadart_> achilleasa: `BridgePolicy` tests should have mock examples.
<achilleasa> given that the test is essentially testing the unserialization logic (I have separate tests for the facade side) I will just inject a payload in the response... pretty sure I've seen a few tests like that
<achilleasa> manadart_: can you also look into this one as well? https://github.com/juju/juju/pull/11882 (it looks large because I had to add mock bits in various places for the tests)
<manadart_> achilleasa: Yep. Gimme a few.
<hml> achilleasa:  can you please review: https://github.com/juju/juju/pull/11883
<achilleasa> hml: looking
<achilleasa> hml: btw, any idea why is RefreshClientSuite.TestLiveRefreshManyRequest trying to connect to api.snapcraft.io? Shouldn't we be mocking that in the test? Got a test failure due to a DNS timeout
<hml> achilleasa:  iâm not sure what is going on with that test.  thumper pointed it out too last night.
<hml> achilleasa:  stickupkid wrote the test to be run both with mocks and live.  perhaps we're running the live version sometimes?
<hml> itâs interesting tha TestLiveInstallRequest is passing
<hml> i meant TestLiveRefreshRequest
<achilleasa> we might be using build tags for that
<achilleasa> (for the live ones I mean)
<hml> achilleasa:  the other thing is that the api being pinged is under development so something might have changed.  dunno yet.
<achilleasa> wouldn't a live test be part of the bash suite?
<manadart_> achilleasa: Left a few suggestions on your patch. I have to head home now, but I will follow up in the morning.
<achilleasa> manadart_: thanks!
<achilleasa> hml: the QA steps don't seem to completely match the expected output: https://pastebin.canonical.com/p/WBCWQtDCVy/
<achilleasa> did the migration happen too quick?
<hml> achilleasa:  looking
<hml> achilleasa:  what command did you run for the output at line 1?
<achilleasa> it was the 'juju migrate seven migrate-from ; juju deploy mysql'
<hml> achilleasa:  did you run the juju switch migrate-to:admin/seven?
<hml> for line #9, you need to run quickly after the migrate, perhaps that's why you get a slightly different message there
<hml> i may have missed a juju switch in there at the end
<hml> looks like before the upgrade-controller, need a juju switch
<hml> did a few tweaks to the qa instructions
<achilleasa> hml: trying again. brb
<achilleasa> hml: we really need to do something about those bogus error messages :D
<achilleasa> hml: hmmm shouldn't it stop after complaining that the precheck failed?
<achilleasa> it actually went ahead with the migration even though it complained... then it just errors that the model has already been migrated
#juju 2020-08-07
<icey> hey, I'm seeing a new error: ERROR cannot add relation "prometheus:target prometheus-snmp-exporter:snmp-exporter": establishing a new relation for prometheus:target would exceed its maximum relation limit of 1 (quota limit exceeded)
<icey> any suggestions for how to resolve it?
<achilleasa> icey: passing --force will allow the relation to be established. Can you point me to the charm versions that you are using?
<icey> achilleasa: prometheus (prometheus2): 18
<icey> prometheus-snmp-exporter: 1
<icey> I've removed the snmp-exporter and re-added it
<icey> (testing something out)
<achilleasa> FYI: juju 2.8 enforces relation limits as specified in charm metadata; there has been a related bug (https://bugs.launchpad.net/juju/+bug/1887095) but we have not been able to reproduce
<mup> Bug #1887095: Default relation limit of 1 prevents adding relations <juju:Incomplete by achilleasa> <https://launchpad.net/bugs/1887095>
<icey> $ juju add-relation --force prometheus:target prometheus-snmp-exporter:snmp-exporter
<icey> ERROR option provided but not defined: --force
<icey> weird that, apparently, removing an application doesn't reduce that count back to zero :)
<icey> achilleasa: also, doing a `juju deploy $bundle --force` shows the same error about not being able to add the relation
<achilleasa> icey: interesting. can you please add a comment to the above bug? the relation count should certainly not be 1 if you remove the application
<achilleasa> icey: just curious, did you happen to deploy prometheus to a 2.7.x model and then upgraded to 2.8.x?
<icey> I did
<icey> (this lab of mine is a lovely source for finding weird bugs :-P )
<achilleasa> icey: ok... I see what happened here... the charm parser in 2.7 juju used to set a default relation limit of 1 (even if not present in the metadata). That got persisted to the controller and triggers the error after the upgrade
<icey> haha wow
<icey> even if the application is removed...
<icey> I can remove the application and try again, if you think it's worth it
<achilleasa> please do
<icey> (alternately, db surgery to update it?)
<achilleasa> this explains why we couldn't reproduce the problem
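The parser difference achilleasa describes can be sketched as follows. The function names and the treatment of 0 as "unlimited" are illustrative assumptions, not juju's actual code; the point is that metadata persisted by a 2.7 controller carries the defaulted limit into 2.8's enforcement:

```python
def effective_limit(declared_limit, parser="2.8"):
    # illustrative: juju 2.7's charm parser persisted a default
    # relation limit of 1 when metadata declared none; here 0 is
    # taken to mean "no limit"
    if declared_limit:
        return declared_limit
    return 1 if parser == "2.7" else 0

def can_add_relation(existing_count, declared_limit, parser="2.8"):
    # 2.8 enforces whatever limit is stored with the charm metadata
    limit = effective_limit(declared_limit, parser)
    return limit == 0 or existing_count < limit

# a charm with no declared limit and one existing relation (e.g. telegraf
# already consuming prometheus:target): metadata persisted by 2.7 blocks a
# second relation, while a fresh 2.8 parse would allow it
assert can_add_relation(1, None, parser="2.7") is False
assert can_add_relation(1, None, parser="2.8") is True
```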
<icey> https://pastebin.ubuntu.com/p/TDPzsjghJr/
<achilleasa> manadart_: this will probably cause issues when attempting to add relations after a 2.7->2.8 upgrade. any thoughts on how we can fix it? We could play it safe and add an upgrade step to reset the limit in the stored charm metadata
<icey> achilleasa: let me redact those ipv6 addresses and I'll post that on the bug as well
<achilleasa> icey: thanks
<icey> achilleasa: in the mean time, any suggestions for db surgery that I can do to get my snmp-exporter working again?
<achilleasa> icey: sure; give me a min to write a fix query
<icey> achilleasa: and, I suspect that the issue I'm hitting isn't related to the app getting removed, it's that there are other consumers of the prometheus-exporter relation (telegraf)
<icey> achilleasa: thanks :)
<achilleasa> icey: in the meantime, is that the latest version of the charm? If not, doing an upgrade-charm --force should reset the limit
<icey> achilleasa: it seems to be - what if I do an upgrade-charm --force on it anyways?
<icey> achilleasa: my bundle just has: charm: cs:prometheus2
<icey> well that's cool: $ juju upgrade-charm --force prometheus
<icey> ERROR already running latest charm "cs:prometheus2-18"
<icey> interestingly, my first attempt to redeploy this bundle on the model should have upgraded the _other_ consumer of the exporter relation :-P
<manadart_> achilleasa: Need to discuss it with you. Later this afternoon?
<achilleasa> manadart_: sure
<achilleasa> icey: can you try this? https://pastebin.canonical.com/p/fb9W78NYRV/
<achilleasa> I am assuming that this is a disposable model, right?
<icey> achilleasa: heh well
<icey> it's my controller model :)
<icey> as well as where I've stuck my monitoring
<icey> but it is in my lab
<icey> so, while I'd rather keep it around, it is rebuildable :-D
<achilleasa> it just resets the limit in the charm metadata so it should be a pretty harmless change
<icey> yeah, I did read the query before looking at how to actually connect to juju's mongo :)
<icey> bah, not on the master
 * icey retries
<icey> well that's odd
<icey> https://pastebin.ubuntu.com/p/vKB8yQMYZd/
<icey> same thing worked on the first unit I tried to connect
<icey> weird, has a different password saved, other one works
<achilleasa> errr not sure about that one
<icey> achilleasa: tried https://paste.ubuntu.com/21232100/
<icey> the three units (who doesn't run juju in HA?) had different passwords stored, apparently
<icey> anyways, query seems to have worked, and I can still talk to juju
<achilleasa> does add-relation work now?
<icey> achilleasa: I did successfully redeploy that bundle :)
<icey> so, yes!
<achilleasa> I will add this as a quick fix solution to the bug and chat with manadart_ about the best way to fix this
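The fix query itself is behind a private pastebin, but its described effect (resetting the persisted relation limit in the stored charm metadata) can be sketched against an in-memory stand-in for the charm document. The document layout, field names, and interface value below are all assumptions for illustration; real juju keeps this in mongo:

```python
# hypothetical stand-in for a stored charm document
charm_doc = {
    "_id": "cs:prometheus2-18",
    "meta": {
        "provides": {"target": {"interface": "http", "limit": 1}},
        "requires": {},
        "peers": {},
    },
}

def reset_relation_limits(doc):
    """Reset any persisted limit of 1 back to 0 ('no limit'),
    mirroring what the fix query reportedly did for this charm."""
    for section in ("provides", "requires", "peers"):
        for endpoint in doc["meta"].get(section, {}).values():
            if endpoint.get("limit") == 1:
                endpoint["limit"] = 0
    return doc
```

Blanket-resetting every limit of 1 is only safe when the charm never declared a limit in the first place, which is why an upgrade step (rather than manual db surgery) was being discussed as the proper fix.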
