#juju 2013-07-22
<pavel> looks like juju-core 1.11.2 was broken today, I had to update to 1.11.3
<melmoth> i have installed openstack folsom with juju on precise boxes. all charms have cloud:precise-folsom as origin.
<melmoth> now, would it be possible to "upgrade" this to grizzly with some juju set charmname origin=cloud:precise-grizzly  followed by a juju charm-upgrade ?
<melmoth> or is this a wrong expectation ?
<marcoceppi> melmoth: if I understand the charms correctly, you'll only need to juju set <all-your-openstack-charms> origin=cloud:precise-grizzly to kick off the upgrade
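A rough sketch of that config-driven upgrade as commands; the service names below are illustrative (use whatever your OpenStack charms were deployed as), and the option key is the origin= setting melmoth mentions, so check each charm's own config before relying on it:

    # assumed service names; adjust to your deployment
    for svc in keystone glance nova-cloud-controller nova-compute; do
        juju set "$svc" origin=cloud:precise-grizzly
    done
    juju status    # watch the units run their config-changed/upgrade hooks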
<melmoth> ok
<marcoceppi> upgrade-charm is typically independent of the service version, though not always. It's really designed to upgrade the actual charm files
<melmoth> so by "kicking off" the upgrade you meant sshing into the box and running apt-get dist-upgrade ?
<marcoceppi> melmoth: well, you shouldn't need to do that, the charm should take care of all that for you
<melmoth> oh.. ok.
<marcoceppi> melmoth: each charm is different though, the readme should cover how to move between versions, I'm fairly confident I've seen the OpenStack charms "just upgrade" via the config change only though. I vaguely remember sabdfl demoing that live somewhere
<melmoth> yep, but there was a landscape layer on top of that, which i dont have here.
<melmoth> but yeah, i saw the demo too :)
<marcoceppi> IIRC landscape was driving juju, but my memory in general isn't all that good, let alone this early in the morning
<melmoth> i was planning to try that today..turns out i have been given another bone to play with...
<melmoth> (a less funny one though)
<marcoceppi> Let me know how it works out for you. I've been scraping together some spare servers in order to play more with Juju and MAAS
<AskUbuntu> Can i install juju zookeeper on maas server itself? | http://askubuntu.com/q/323000
<arges> hi. I left juju 0.7 w/ lxc running over the weekend on my ecryptfs home part, and now my disk is full. I think this is a log file problem, 1) where are the log files so I can delete them, 2) is this a new bug that needs to be filed?
<marcoceppi> arges: I've run in to this before, I thought it was fixed in 0.7 but it might not be
<marcoceppi> arges: if they're not in /var/log/juju they'll be in ~/.juju/local/
<arges> marcoceppi: yea i saw bug 958312
<_mup_> Bug #958312: Change zk logging configuration <verification-done> <juju:Fix Released by hazmat> <juju (Ubuntu):Fix Released> <juju (Ubuntu Precise):Fix Released> <https://launchpad.net/bugs/958312>
<arges> looking
<marcoceppi> arges: the logs for both juju and zk can be VERY verbose in pyjuju
<arges> marcoceppi: both are cleared and still disk is full
<arges> marcoceppi: i'll keep looking doing a du on my home dir first to see whats going on
<marcoceppi> is your ecryptfs partition relatively small? You might be able to use `du` to find where the large files are.
<arges> : )
<marcoceppi> It's been ages since I've run 0.7 so I don't know off the top of my head
<arges> yea its 1TB , and i know i had at least 500GB free when i left my house on friday
<arges> ok
<arges> thanks
<marcoceppi> arges: juju-core 1.11.3 just landed this morning which brings better local support in to juju-core. If you were holding back upgrading to juju-core for local provider
<arges> marcoceppi: yup planning on testing once i dig myself out of this
<marcoceppi> arges: sorry couldn't be of more help!
<arges> np thanks for the tips anyway
<arges> marcoceppi: ok false alarm. Juju wasn't the issue.
 * marcoceppi wipes brow
<arges> Hi. are there any juju-core/lxc docs available? Moving from 0.7 to current version and not sure if I need a new environments.yaml
<arges> n/m got it
<marcoceppi> arges: you'll have to destroy the environment, there's no upgrade path for local provider
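In other words, the move from pyjuju 0.7 local to juju-core local is tear-down-and-rebuild. Roughly, assuming juju-core is already installed:

    juju destroy-environment         # with the old 0.7 environment
    juju init -w                     # write a fresh ~/.juju/environments.yaml for juju-core
    sudo juju bootstrap -e local     # juju-core's local provider wants sudo here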
<arges> cool, and found this which helps : https://juju.ubuntu.com/get-started/local/
<marcoceppi> arges: that's for 0.7, we're working (well, evilnickveitch is) on getting juju-core local provider docs out
<arges> ah gotcha.
<marcoceppi> arges: I made a quick blog post, if you're looking for a reference: http://marcoceppi.com/2013/07/compiling-juju-and-the-local-provider/
<arges> nice, i'll use this
<arges> thanks
<evilnickveitch> arges, let me know if you have any difficulties
<arges> $ sudo juju bootstrap -e local
<arges> error: no reachable servers
<arges> So i have 1.11.3 installed, and setup exactly from juju init with those two lines
<arges> Also I'm on precise
<arges> After a juju -v status, I see the following:
<arges> 2013-07-22 14:44:22 ERROR juju open.go:89 state: connection failed, will retry: dial tcp 10.0.3.1:37017: connection refused
<evilnickveitch> hmmm, marcoceppi ? ^ ^
<marcoceppi> arges: so, the bootstrap is an interesting problem
<marcoceppi> arges: run sudo juju destroy-environment
<marcoceppi> It's either one of two things
<marcoceppi> try `sudo service mongodb stop`
<marcoceppi> then try another bootstrap
<arges> marcoceppi: mongodb is not installed
<marcoceppi> arges: there's the problem
<arges> so i need those deps even if i'm not compiling : )
<marcoceppi> sudo apt-get install mongodb-server
<marcoceppi> yeah, I guess it's not in the packaging
<marcoceppi> You'll only need mongodb-server, the rest are directly for compiling
 * marcoceppi takes note
<arges> Ok installing
<marcoceppi> then run sudo service mongodb stop; sudo juju bootstrap -e local and you should be good to go. If you get an error during bootstrap let me know
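Pulling marcoceppi's steps together, a minimal sketch of the local-provider setup being attempted here (as the rest of this session shows, 12.04's mongodb-server turns out to lack SSL support, so precise additionally needs a backported mongodb):

    sudo apt-get install lxc mongodb-server
    sudo service mongodb stop          # juju starts its own mongod for the state server
    sudo juju destroy-environment      # clear any half-bootstrapped state
    sudo juju bootstrap -e local
    juju status -e local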
 * marcoceppi updates blog post
<arges> hmm
<arges> marcoceppi: different error now
<arges> marcoceppi: http://pastebin.ubuntu.com/5900827/
<marcoceppi> arges: that's not an error
<marcoceppi> let it loop for a while
<arges> ah
<arges> its waiting for the lxc container to start i take it
<ahasenack> mongo, actually
<marcoceppi> wait until the command exits. Behind the scenes juju bootstrap installs two upstart services: one that starts mongodb for the state server and another that acts as the provisioning agent
<marcoceppi> so it loops for about a minute waiting for mongodb to come online
<arges> marcoceppi: i get 'no reachable servers' at the end again
<marcoceppi> arges: what does `sudo status juju-db-${USER}-local` show?
<arges> juju-db-arges-local stop/waiting
<arges> marcoceppi: ^^
<marcoceppi> also, pastebin /var/log/juju-db-${USER}-local.log
<marcoceppi> also, pastebin /var/log/upstart/juju-db-${USER}-local.log
<arges> nothing in /var/log/juju-*
<arges> http://pastebin.ubuntu.com/5900852/
<ahasenack> arges: which version of mongodb-server did you get installed?
<ahasenack> maybe the one in precise is not recent enough for juju-core
<arges> ahasenack: 1:2.0.4-1ubuntu2.1
<marcoceppi> arges: interesting, I've got 1:2.2.4 on raring
<ahasenack> arges: you don't have a /usr/bin/mongod, or that was an old log?
<ahasenack> marcoceppi: juju-core requires mongo with ssl, I don't know if the precise one has that
<arges> ahasenack: that's old the mongodb-server dep wasn't installed
<ahasenack> arges: ok, can you tell on which port mongodb is listening? Maybe sudo netstat -anp|grep mongod
<marcoceppi> ahasenack: I don't have any LTS machines around, but I think the later SSL errors are the key
<ahasenack> marcoceppi: ah, right, hadn't seen that
<arges> ahasenack: netstat -anp | grep mongo returns nothing
<marcoceppi> arges: try installing libssl0.9.8 libsnmp-base snmpd libsnmp15 snmp
<marcoceppi> then juju destroy-environment, clear the upstart log, bootstrap
<arges> ok
<marcoceppi> in the meantime I'll set up a VM with 12.04
<arges> marcoceppi: same results.
<arges> and same log output sslOnNormalPorts
<arges> as an unknown option
<marcoceppi> arges: ack, setting up 12.04
<arges> cool
<marcoceppi> arges: I think I have a solution for you on precise, but it involves using a source outside of ubuntu archives
<marcoceppi> not sure if you're interested or not
<arges> marcoceppi: i'm interested! want to make sure this works on precise as well
<marcoceppi> arges: hum, nevermind. This might be trickier than I expected
<arges> marcoceppi: can we file a bug to track this?
<marcoceppi> arges: yes, definitely a good idea
<marcoceppi> an easy solution is to just put the raring version of mongodb in to the ppa for precise, but I'm not sure if that's the best solution
<jamespage> thumper, nice work on the local provider btw
<jamespage> works real sweet
<bloodearnest> yeah, really good work, y'all :)
<marcoceppi> arges: opened bug and poked the core team
<arges> marcoceppi: thanks
<arosales> jamespage, marcoceppi, m_3, negronjl, bbcmicrocomputer, hazmat: reminder to be using http://manage.jujucharms.com/review-queue for the review queue
 * arosales will try to update the topic
* arosales changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || Review Calendar: http://goo.gl/uK9HD || Review Queue: http://manage.jujucharms.com/review-queue || http://jujucharms.com || Reviewer: m_3
<bbcmicrocomputer> arosales: cool, thanks for the tip :)
<arosales> bbcmicrocomputer, sure np.  You guys may have already been using it, but just wanted to confirm with the shiny new charm browser out there now. :-)
<bbcmicrocomputer> arosales: :)
<AskUbuntu> Is virtual maas removed? | http://askubuntu.com/q/323070
<arges> marcoceppi: whats the bug# so i can subscribe to it? thanks
<arges> of the precise juju-core/lxc issue
<marcoceppi> arges: https://bugs.launchpad.net/juju-core/+bug/1203795
<_mup_> Bug #1203795: mongodb with --ssl not available in precise <juju-core:Confirmed> <https://launchpad.net/bugs/1203795>
<arges> thanks
<kirkland> marcoceppi: jcastro: yo -- I'm working on a blog post and trying to link to some juju documentation in the process
<marcoceppi> kirkland: hey
<kirkland> marcoceppi: jcastro: I'm not finding much of what I need in this new jujucharms.com web frontend
<marcoceppi> kirkland: what are you looking for?
<kirkland> marcoceppi: specifically, I'd like to link to the official documentation on the mount interface and the shared-fs relation
<kirkland> marcoceppi: the SEO on these aren't great right now on google
<marcoceppi> kirkland: We don't have any documentation on interfaces
<kirkland> the top hit is https://jujucharms.com/~craig.magina/precise/openmpi/hooks/shared-fs-relation-changed/
<marcoceppi> Well, we have a few old ones, but they really don't exist
<kirkland> which plunks me down on an empty canvas
<marcoceppi> kirkland: yeah, we just had a new jujucharms roll out a few hours ago
<rick_h> kirkland: and the page that you're linking to there is http://manage.jujucharms.com/~craig.magina/precise/openmpi/hooks/shared-fs-relation-changed
<rick_h> kirkland: s/jujucharm.com/manage.jujucharms.com for things as of a couple of hours ago
<rick_h> will take time for the bots out there to catch up
<kirkland> rick_h: ah, okay, thanks
<kirkland> marcoceppi: jcastro: http://blog.dustinkirkland.com/2013/07/johnjohn-scalable-juju-charm-tutorial.html
<kirkland> marcoceppi: jcastro: thanks for your help with that!
<marcoceppi> kirkland: great post!
<kirkland> marcoceppi: thanks!
<kirkland> marcoceppi: thank YOU for all your help
<marcoceppi> kirkland: no problem!
<kirkland> marcoceppi: I'll leave it to you guys, if this is something you want in the charm repo or not
<kirkland> marcoceppi: I'm happy to clean it up, if you want it in there;  but if it sends the wrong signals about Juju, I certainly understand that too
<marcoceppi> kirkland: I don't think we would bar it for any reason. I'm one to say "Submit it!" and I love the diversity of the charm store
<kirkland> marcoceppi: cool
<marcoceppi> If anything it's a great example for writing a charm
<kirkland> marcoceppi: there's a couple of things in there that could serve as nice runes for other massively parallel charms
<kirkland> marcoceppi: the use of NFS and keep-one-running are kinda cool
<kirkland> (if I do say so myself)
<m_3> arosales: ack
<arosales> m_3, thanks
<hazmat> mysql charm is failing to deploy on core..
<AskUbuntu> Automate PC set up | http://askubuntu.com/q/323096
<ahasenack> hazmat: I deployed it a few days ago
<ahasenack> :(
<lovesharepc> hello, I'm new to juju. when I create juju I have a problem
<lovesharepc> I saw here https://juju.ubuntu.com/docs/getting-started.html it says Configuring for Amazon AWS
<lovesharepc> what does this mean??
<lovesharepc> does this mean creating juju inside AWS? or creating juju on my own server with a service like AWS?
<sarnold> lovesharepc: juju creates 'new computers' using cloud providers
<sarnold> lovesharepc: the getting-started page is discussing how to use amazon AWS as your cloud provider
<marcoceppi> hazmat: works with local provider this morning
<sarnold> lovesharepc: juju can also be used to create your own cloud provider -- this makes things much more complicated :) -- but you can create a cloud environment with one juju environment, and "within" that cloud environment use juju again to manage services
<lovesharepc> OK I understand     thank you :D
<sarnold> lovesharepc: have fun! :)
<hazmat> marcoceppi, hmm.. yeah its just failing consistently in my unit tests.
<hazmat> seems to work okay standalone
<marcoceppi> hazmat: interesting
<lovesharepc> sarnold: so..... when I use maas to create a server, is this my cloud ?
<sarnold> lovesharepc: yes. you can also use openstack.
<lovesharepc> sarnold: OK. thank you~~~ you solved the biggest question in my mind :D ubuntu is very fun !!
<sarnold> lovesharepc: cool! :)
<arges> marcoceppi: so i was able to bootstrap juju-core/lxc on precise with a backported mongodb. Now it seems I can't ssh into the bootstrap node because permission is denied. I have id_rsa.pub in the standard location. Anything to look for?
<marcoceppi> arges: how are you sshing?
<arges> marcoceppi: well first i was just trying 'juju debug-log' and that failed with permission denied
<arges> then 'juju ssh 0' also fails with permission denied (public key)
<marcoceppi> arges: that doesn't work. use juju ssh service/unit-num like WordPress/0
<marcoceppi> it's a known issue
<arges> marcoceppi: juju ssh postgresql/0 works (in my test),  but juju debug-log still fails
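So the working pattern at this point is to address units as service/number rather than by machine number, for example (service name taken from arges's test):

    juju ssh postgresql/0      # works
    juju ssh 0                 # known issue: permission denied (public key)
    juju debug-log             # also broken on the lxc provider, see the bug below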
<ahasenack> I think I filed a bug about debug-log
<arges> bug 1197968
<_mup_> Bug #1197968: debug-hooks are not available for juju-core, should be removed from docs <juju-core:Invalid> <https://launchpad.net/bugs/1197968>
<arges> ahasenack: so is there a way for me to see how my charm is working when I do juju deploy * ? what commands should i use with juju-core
<ahasenack> arges: juju status is a start
<arges> hmm...
<ahasenack> arges: about juju ssh with lxc, the bug is https://bugs.launchpad.net/juju-core/+bug/1202682
<_mup_> Bug #1202682: debug-log doesn't work with lxc provider <juju-core:New> <https://launchpad.net/bugs/1202682>
<arges> ahasenack: perfect i'll mark that as affecting me
<ahasenack> arges: but, you can check the logs in this case locally, they are in $JUJU_HOME/<environment>/logs or something
<ahasenack> one per machine
<arges> ahasenack: ah that will help though! thanks
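A sketch of where to look, per ahasenack's pointer; the directory name depends on the environment name and the juju release, so these paths are illustrative:

    ls ~/.juju/local/log/ 2>/dev/null || ls ~/.juju/local/logs/   # one log per machine/unit
    tail -f ~/.juju/local/log/unit-postgresql-0.log               # hypothetical unit name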
#juju 2013-07-23
<kpepple_> Ubuntu announced a MapR charm back in April (http://www.eweek.com/database/mapr-puts-hadoop-on-ubuntu-source-code-on-github/) but I can't find it anywhere. did this get released ?
<kpepple_> I looked thru https://jujucharms.com/ but don't see any sign of it
<faisal> hello :)
<tom95> is it possible to use nginx instead of apache as http server with juju?
<mgz> tom95: there's a somewhat involved answer, but yes
<tom95> mgz: I haven't used it yet, I just had a look at juju and decided I like it, so I don't know much about the technical side yet. Are there downsides on using nginx compared to apache or is there basically no difference?
<mgz> it's flexible, and can be decided at the charm or deployment level
<mgz> some web frameworks prefer one or the other or could support both
<mgz> you can write your charm (or use existing ones) that would use either
<tom95> ok, thanks!
<gotwig> hello world
<gotwig> https://bugs.launchpad.net/charms/+bug/994699
<_mup_> Bug #994699: Charm Needed: Nginx <Juju Charms Collection:Confirmed for imbrandon> <https://launchpad.net/bugs/994699>
<gotwig> can you do something here?
<marcoceppi> gotwig: what is there to do?
<marcoceppi> That charm was never approved for the charm store
<gotwig> marcoceppi: why
<gotwig> how can it get approved
<marcoceppi> gotwig: All charms go through a review process before making it in to the charm store. This one was in the process of being reviewed but the author never finished the charm
<gotwig> oh
<gotwig> tom95: ^
<gotwig> marcoceppi: what is missing
<marcoceppi> gotwig: read the bug, these "Charm Needed" bugs are not stating that the charms are missing, but are part of the charm submission process
<marcoceppi> gotwig: a charm doesn't NEED to be in the store for you to deploy it. It's in a "personal" charm branch right now. You can fork the branch and work on it if you're interested in using it (and deploy from local repository) or you can deploy it as is with juju deploy cs:imbrandon/nginx
<marcoceppi> gotwig: however, based on the feedback it doesn't look 100% ready, so it's deploy at your own risk when using a personal deployment branch
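The two options marcoceppi lays out, sketched as commands; the launchpad branch path follows the usual personal-charm convention and may differ:

    juju deploy cs:imbrandon/nginx     # deploy the unreviewed personal branch as-is
    # or fork it, hack on it, and deploy from a local repository
    bzr branch lp:~imbrandon/charms/precise/nginx/trunk charms/precise/nginx
    juju deploy --repository=charms local:nginx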
<gotwig> marcoceppi: does it work with juju gui?
<marcoceppi> Given that this charm hasn't been touched in over 6 months, if you're interested in using it, you might want to take over development of it
<marcoceppi> gotwig: https://jujucharms.com/fullscreen/search/~imbrandon/precise/nginx-7/?series=precise&text=nginx yes
<gotwig> marcoceppi: this doesnt work
<marcoceppi> gotwig: You can deploy "personal" branches from the GUI, you just need to make sure you unselect "Reviewed Charms" from the search
<marcoceppi> gotwig: You'll need to provide more details than "doesn't work"
<gotwig> tom95: ^
<gotwig> marcoceppi: I dont see it in the link
<gotwig> you posted
<gotwig> oh
<gotwig> dat checkbox, yeah..
<marcoceppi> gotwig: When searching, it'll default to only showing reviewed charms, to get all charms - including personal UNREVIEWED branches unselect that checkbox
<gotwig> nginx devs go mad on me :>
<jamespage> wow - I just found "add-unit --to"
<jamespage> nice
<jamespage> hulk smash builtin
<jamespage> negronjl, racedo: ^^ deploy and add-unit both have a '--to' option in the latest juju-core release which covers the co-location/hulk smash challenges we talked about last week
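A minimal sketch of that co-location option; the machine numbers are whatever juju status reports in your environment:

    juju deploy mysql                  # suppose this lands on machine 1
    juju deploy --to 1 wordpress       # put wordpress on the same machine
    juju add-unit --to 2 wordpress     # or grow the service onto another existing machine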
<negronjl> jamespage, cool ... thx.  I'll start playing with it
<racedo> jamespage: awesome!
<racedo> jamespage: btw the virtme charm is great, i couldn't use it in precise though, I believe that if the charm pulled libvirt from cloud-archive it would work well in precise
<jamespage> racedo, yeah - I've only been using on raring
<jamespage> racedo, I suspect it could be fixed up
<racedo> jamespage: i suspect it's just libvirt not having support for openvswitch type of bridge
<jamespage> racedo, feel free to fixup and propose a merge :-)
<racedo> jamespage: i still need to get my hands dirty charming stuff
<racedo> jamespage: negronjl is the charms master so i can relax there ;-)
<jhf> hey m_3 - you there?
<marcoceppi> jhf: he's on the west coast for the week
<jhf_> well m_3 if so - I told you something wrong yesterday - my website-relation-joined hook is still hard coded. I'm fixing it now.
<marcoceppi> jhf: He brought this up earlier actually, I think he patched it already
<jhf> oh.. ok, thanks for doing that - I was thinking about my start hook where I did make the port dynamic, but he was talking about the website-relation-joined and I confused the two. thanks!
<marcoceppi> jhf: https://code.launchpad.net/~charmers/charms/precise/liferay/trunk
<marcoceppi> np! I think he's using it during a demo, so it was quickly patched
<jhf> cool.. should I fix it in my branch too?
<marcoceppi> jhf: you can just bzr pull lp:charms/liferay to get the patch
<jhf> k thx
<m_3> jhf: thanks!
<m_3> jhf: btw, now having some sort of issues with liferay behind haproxy
<jhf> oh?
<m_3> jhf: not sure what the problem was, but don't have time to debug it now
<jhf> k
<m_3> had the java exceptions, but I think I wiped them
<jhf> ok. liferay can produce lots of innocuous stack traces :)
<m_3> this was just one mysql, one liferay, and one haproxy node all wired up
<m_3> then hit the haproxy and got "An unexpected system error occurred.
<m_3> java.lang.NullPointerException"
<m_3> via the web interface
<jhf> ok. sounds un-haproxy-related to me, but if/when you see it again I should take a look.
<jhf> ill be there around 8:30 or a bit before
<m_3> jhf: k, thanks!
<freeflying> bootstrapping a local environment needs to be done as root with latest juju-core?
<marcoceppi> freeflying: bootstrapping and destroying need to be done with root
<marcoceppi> freeflying: this is expected behavior
<freeflying> marcoceppi: I recall in the juju python version, sudo would be invoked by juju itself; now we need to run sudo juju bootstrap?
<marcoceppi> freeflying: correct, this is a similar but different implementation of the local provider. Instead of having sudo called during bootstrap (requiring someone to watch the command) it's just required for the bootstrap and destroy-environment command outright. All other commands can be run as a normal user
<freeflying> marcoceppi: cool, thanks, but by running sudo juju bootstrap, I got error: no reachable servers
<marcoceppi> freeflying: What version of ubuntu are you using?
<freeflying> marcoceppi: 1.11.3--ubuntu1
<marcoceppi> freeflying: `lsb_release -r` ?
<freeflying> marcoceppi: saucy, sorry, the package was installed from archive
<marcoceppi> freeflying: okay, good. You're probably missing a few dependencies. First, `juju destroy-environment`, then install lxc, mongodb-server, mongodb; then run `sudo service mongodb stop`, bootstrap again
<freeflying> marcoceppi: yep, I forgot to install mongodb, thanks again for pointing it out
<marcoceppi> freeflying: np, I seriously think it should be a recommends for juju-core packaging, but until then the docs will just need to mention it
<marcoceppi> at least a suggests
<marcoceppi> freeflying: http://bit.ly/1366OtX
<freeflying> marcoceppi: if we can update the doc, might be more helpful, since most users will dig into documents firstly
<marcoceppi> freeflying: docs for local provider are landing soon
<freeflying> marcoceppi: in terms of packaging, we may have a meta package for users who want to try the local provider, to get the dependencies installed
<freeflying> marcoceppi: that sounds great
<marcoceppi> freeflying: that's a good idea as well, juju-core juju-core-local-provider or something like that
<freeflying> marcoceppi: will file a wishlist for it :)
<pavel> I've updated documentation for rack charm v2 https://github.com/Altoros/rack
<arosales> pavel, looking good. I know m_3 is at OSCON this week so if any Rails guys are there I am sure your charm may come up.
<pavel> arosales, tomorrow's meeting will be on schedule?
<arosales> pavel, yes marcoceppi and I (at a minimum) will be there
<pavel> arosales, cool
<freeflying> marcoceppi: after installing and running mongodb-server, I still run into problems: freeflying@ultra:~$ sudo juju bootstrap
<freeflying> error: no reachable servers
<marcoceppi> freeflying: make sure mongodb service is off
<marcoceppi> then destroy-environment again and bootstrap once more. If you're still having an issue let me know
<freeflying> marcoceppi: yes, tried both, same issue
<marcoceppi> freeflying: what does /var/log/upstart/juju-db-*.log have in it?
<freeflying> marcoceppi: only has /var/log/upstart/juju-db-freeflying-local.log
<marcoceppi> freeflying: no output in the log?
<freeflying> http://paste.ubuntu.com/5904546/
<marcoceppi> freeflying: can you destroy-environment again, then run sudo juju bootstrap -v and pipe the output to a pastebin? It'll be pretty verbose.
<freeflying> 2013-07-23 15:53:40 ERROR juju open.go:89 state: connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
<freeflying> marcoceppi: http://paste.ubuntu.com.cn/251140
<marcoceppi> freeflying: what does `sudo status juju-db-freeflying-local` say?
<freeflying> juju-db-freeflying-local stop/waiting
<marcoceppi> freeflying: run `sudo start juju-db-freeflying-local` then run status again
<freeflying> marcoceppi: same, http://paste.ubuntu.com/5904578/ log from syslog FYI
 * freeflying 's bed time, see you later
<marcoceppi> freeflying: I'm not sure, it looks like a bug. I've not tried in saucy yet though
<mhall119> mramm: will you be able to join the weekly engineering webcast in 20 minutes to update everybody on juju?
<mramm> mhall119: I'm in the charm school at OSCON, so I won't be able to be there
<mhall119> and no jorge either :(
<mramm> perhaps antonio can cover... let me see
<mhall119> okay, we'll get a Juju update next week then, you guys have fun (but not too much fun)!
<mramm> arosales: you around?
<arosales> mramm, yes
<mramm> would you be interested in covering for me/jorge on the weekly engineering update webcast?
<mramm> it's not critical, we can just do the update next week
<mramm> actually, let's just do it next week
<arosales> mramm, let me check on the conflicting meetings I have atm
<mramm> arosales: don't worry about it, next week is fine
<arosales> mramm, ok totally your call
#juju 2013-07-24
<adam_g> hazmat`, still around?
<sarnold> there's no 13.04 under "Ubuntu series" here: https://jujucharms.com/fullscreen/search/?series=precise&series=quantal&text=openafs&type=approved
<marcoceppi> sarnold: do we have any raring charms? I didn't think we did
 * marcoceppi checks
<sarnold> marcoceppi: oh! hah
<marcoceppi> So, apparently we have 7 raring charms. All of them are in personal branches though
<marcoceppi> To the juju-gui bugs page!
<sarnold> "please make the gui less intelligent to match sarnold"? :)
<marcoceppi> :P nonsense
<arosales> Hello we will be starting our weekly Charm Meeting here in a few
<arosales> For folks wanting to participate the G+ URL is:
<arosales> https://plus.google.com/hangouts/_/e50446517e80e3543f63267dedfed239170e337c?authuser=0&hl=en
<arosales> marcoceppi, is also setting up the ubuntu on air link for viewing
<marcoceppi> Ubuntu on air updated!
<arosales> pad at http://pad.ubuntu.com/openid/+login
<arosales> sorry @ http://pad.ubuntu.com/7mf2jvKXNa
<pavel> does 1.11.4 supports lxc provider now?
<mgz> pavel: YEAH, SEE THE RELEASE NOTES
<mgz> er...accidental caps
<pavel> mgz, ha, I thought you are mad at me for a stupid question :D
<mgz> it did rather come across that way... >_<
<jamespage> lol
<arosales> Any folks interested in being part of the charm school panel on charm testing?
<arosales> sidnei, ^
<arosales> gary_poster, any folks from the gui ^
<sidnei> arosales: when?
<gary_poster> benji or bac, ^^^ any interest?
<arosales> 17:00 UTC Friday (July 26)
<gary_poster> could ask frankban too
<gary_poster> arosales, I'll send out a note and cc you.  when do you need to know?
<arosales> gary_poster, 16:00 UTC July 26
<gary_poster> arosales, :-) cool thanks
<arosales> gary_poster, basically be prepared to learn about charm testing which marcoceppi is going to teach.
<arosales> gary_poster, thanks
<benji> gary_poster: I'm interested if no one else is more interested
<gary_poster> arosales, ack thanks.  is this a google hangout live broadcast kind of thing??
<gary_poster> benji, awesome
<gary_poster> marcoceppi, do frankban and teknico already know all this stuff? :-)
<arosales> gary_poster, yes it will be google on air
<gary_poster> cool thanks again
<arosales> marcoceppi, fyi benji is interested in the charm school this friday for charm testing
<arosales> benji, marcoceppi will ping here on Friday, but feel free to catch up before then too
<benji> arosales: sounds good
<arosales> benji, thanks
 * lostinstack is looking for assistance with juju bootstrap problem on private openstack
<lostinstack> can't seem to get around the
<lostinstack> error: cannot start bootstrap instance: no "precise" images in RegionOne with arches [amd64] problem
<lostinstack> anyone have much experience with juju and a private deployment of OpenStack?
<sidnei> ahasenack: ^ i think this is one for you
<lostinstack> Does anyone know if it is normal for juju to try and retrieve its "provider-state" 30+ times during the bootstrap process?
#juju 2013-07-25
<chance> does juju-core take pull requests on github? or can someone point me in the right direction?
<den_sheleh> Hi. I have a little problem. Juju 1.11.4 should work with LXC. But when I try to bootstrap the environment I get an error: "error: no reachable servers". What is wrong ?
<jamespage> den_sheleh, do you have the mongodb-server package installed?
<jamespage> and which Ubuntu release are you running on?
<jamespage> raring+ is good for mongodb-server - for 12.04/12.10 you need to use a PPA right now
<jamespage> jcastro, fyi - http://javacruft.wordpress.com/2013/07/25/juju-put-it-there-please/
<den_sheleh> Ubuntu 12.04 LTS
<jamespage> den_sheleh, ok - lemme dig out the PPA
<jamespage> the 12.04 LTS mongodb-server does not support SSL
<jamespage> den_sheleh, https://launchpad.net/~juju/+archive/experimental
<jamespage> den_sheleh, I'm working on getting that into official backports in Ubuntu itself
<den_sheleh> Thanks a lot
<den_sheleh> Thank you, James =)
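Roughly, the 12.04 workaround jamespage is pointing at (the PPA is the one linked above; the rest follows the earlier local-provider discussion):

    sudo add-apt-repository ppa:juju/experimental    # mongodb-server built with SSL
    sudo apt-get update
    sudo apt-get install mongodb-server lxc
    sudo juju bootstrap -e local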
<gotwig> hey there
<gotwig> is there a need for a charm deploying to meteor?
<privateStack> I am looking for help with bootstrapping juju 1.11.4-raring-amd64 on a private Grizzly OpenStack install
<marcoceppi> privateStack: what's your question?
<privateStack> Hi marcoceppi
<privateStack> I get an error, which I think you have even commented on in the boards
<marcoceppi> hi!
<privateStack> error: cannot start bootstrap instance: no "precise" images in RegionOne with arches [amd64 i386]
<privateStack> I have created the public container and have it configured
<marcoceppi> privateStack: okay. there is an answer for this. one second
<privateStack> that would be amazing
<privateStack> banging head on wall for a few days
<marcoceppi> privateStack: you mean t the image data container?
<marcoceppi> err bucket
<privateStack> I have a control bucket "juju-control" which is not public
<privateStack> I also have a public bucket "juju-fist"
<privateStack> juju-dist that is
<marcoceppi> what does the content of that bucket look like? juju dist
<privateStack> On network captures I can see juju hitting the juju-control first, then hitting juju-dist/streams/v1/index.sjson, next it hits juju-dist/streams/v1/index.json
<gotwig> how can I use JuJu with local support
<privateStack> the process finally dies when juju tried to pull status 30 times from the control bucket
<sidnei> marcoceppi: i think he needs to sync simplestreams or something?
<marcoceppi> gotwig: I have a blog post on it. one sec, I'll dig it up
<gotwig> marcoceppi: the new juju?
<privateStack> I have not configured keystone for any simplestreams
<marcoceppi> sidnei: I think so too. which is why I wonder what the contents of the bucket is
<marcoceppi> gotwig: http://marcoceppi.com/2013/07/compiling-juju-and-the-local-provider/
<gotwig> yeah, thx
<gotwig> *compiling* oh noes ;P
<privateStack> sorry.  the public bucket, juju-dist contains files I generated with juju metadata command.  /juju-dist/streams/v1/index.json and /juju-dist/streams/v1/imagemetadata.json
<marcoceppi> no no! you don't need to compile. it's a dual post gotwig
<gotwig> ok
<gotwig> marcoceppi: nice blog, btw
<marcoceppi> privateStack: could you pastebin those files?
<privateStack> sure. give me a sec to figure out how
<gotwig> marcoceppi: so all this can be found in the devel ppA?
<marcoceppi> gotwig: yup!
<gotwig> marcoceppi: some dude should update the getting started juju post for sure!!
<gotwig> not everyone has clouds at home.. lol
<privateStack> ok cool.. that was easy
<privateStack> http://pastebin.com/kV2m4qH4 this is the first file
<marcoceppi> gotwig: it will be updated before mercy l next week
<privateStack> here is the second, the metadata file
<privateStack> http://pastebin.com/RenVerjf
<gotwig> marcoceppi: this is all so complex to me :/ I like juju gui...
 * marcoceppi is actually writing it
<privateStack> looking at network dumps juju never requests the metadata file
<gotwig> marcoceppi: do I have to manually edit the config YAML files to local?
<privateStack> juju only requests /juju-dist/streams/v1/index.sjson and then /juju-dist/streams/v1/index.json
<marcoceppi> privateStack: what does juju bootstrap -v look like?
<marcoceppi> gotwig: you need to create a local environments.yaml file if you don't have one already just run `juju init -w`
<marcoceppi> then you can juju bootstrap -e local
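Put together, and remembering that bootstrap/destroy for the local provider still need sudo per the earlier discussion:

    juju init -w                  # writes a boilerplate ~/.juju/environments.yaml
    sudo juju bootstrap -e local
    juju status -e local          # or set "default: local" in environments.yaml and drop -e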
<marcoceppi> gotwig: what version of Ubuntu are you using?
<gotwig> marcoceppi: 13.10 :>
<marcoceppi> gotwig: also you can deploy the juju-gui to local once you get it bootstrapped!
<gotwig> is the admin secret a md5 hash?
<marcoceppi> gotwig: no, it's just a random string of chars
<privateStack> marcoceppi: I will put in paste bin
<marcoceppi> it can be whatever you like
<marcoceppi> privateStack: thanks
<gotwig> marcoceppi: can I do anything with juju gui?
<privateStack> marcoceppi: Here is juju -v sync-tools showing basic function http://pastebin.com/bUVLf21k
<marcoceppi> gotwig: anything you can do from cli you should be able to do in GUI
<gotwig> error: environment has no access-key or secret-key
<gotwig> when I want to deploy something
<marcoceppi> gotwig: use -e local when deploying
<privateStack> marcoceppi: Here is the juju -v bootstrap --debug http://pastebin.com/XMwRrcW6
<marcoceppi> gotwig: or change ~/.juju/environments.yaml default: to 'local'
<marcoceppi> privateStack: I thought you were using 1.11.4?
<gotwig> marcoceppi: do you think there is a need for deploying to MeteorJS? I really like this thing :>
<gotwig> talking about writing a charm for deploying to meteorJS ;X
<gotwig> its similar to nodeJS
<marcoceppi> gotwig: go for it! we can always use more charms
<privateStack> marcoceppi: Sorry. I have tried a few versions on a few nodes.  The dumps were made from a precise node running juju 1.11.4-precise-amd64
<privateStack> marcoceppi: But yes, I have been running 1.11.4
<gotwig> works perfectly
<gotwig> marcoceppi: thank you
<marcoceppi> why is all the metadata 1.10.0?
<gotwig> what do I have to setup to use the nodeJS charm
<gotwig> a github repo, a mongodb relation?
<gotwig> *git repo
<gotwig> I could just read the readme xD
<privateStack> marcoceppi: here are the http request that juju is making during bootstrap http://pastebin.com/HL1mnLt2
<privateStack> marcoceppi: Not sure why. I ran the juju sync-tools command and that is what was deployed
<privateStack> marcoceppi: In the past I have run the --upload-tools during the bootstrap and it will build and update newer tools.
<marcoceppi> privateStack: hum.
<privateStack> marcoceppi:  It seems strange that the metadata file is never requested.
<marcoceppi> privateStack: yeah, is there a juju-dist directory or is /streams the root directory?
<privateStack> marcoceppi:  Here is where I am weak.  I have not messed much with swift so all I know I learned for this.
<privateStack> marcoceppi: I created a swift container like so "swift post juju-dist"
<privateStack> marcoceppi: then I created a local directory /streams/v1/ and placed the two files in the directory, next I used "swift upload streams" to upload the directory tree, and finally I set the permissions on the container to allow anonymous access
<privateStack> marcoceppi: I confirmed through network dumps that /streams/v1/index.json is being returned to juju during bootstrap.
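privateStack's container setup, approximately, as commands; the public-read ACL syntax can differ between swift releases, so treat it as a sketch:

    swift post juju-dist              # create the container
    swift post -r '.r:*' juju-dist    # allow anonymous reads
    swift upload juju-dist streams    # objects land as juju-dist/streams/v1/index.json etc.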
<gotwig> marcoceppi: how can I save the settings in the gui, there is no save button for changing the Config.yaml >_>
<gotwig> marcoceppi: the gui on jujucharms is much better than the one in the official juju charm >_>
<marcoceppi> gotwig: oh, deploy cs:~juju-gui/juju-gui instead of just "juju-gui"
<privateStack> marcoceppi: Is it normal for juju to request /releases/streams/v1/index.json from cloud-images.ubuntu.com after my local site has been requested?
<gotwig> marcoceppi: thx
<gotwig> marcoceppi: thats just like with normal ubuntu >_> you all the time have to use extra repos or branches.. nothing official that is "modern"?
<privateStack> marcoceppi: any direction?  Next step for me is digging through code or admitting defeat :)
<marcoceppi> privateStack: one sec. I'm checking how I did this on HP Cloud
<privateStack> marcoceppi: Thanks so much.. OB1 you're my only hope
<marcoceppi> privateStack: I think your problem is all the metadata is using 1.10.0, so juju-core can't match the version and doesn't go past the v1/index.json
<privateStack> marcoceppi: Ok. Do you recommend that I use the --upload-tools option?
<privateStack> marcoceppi: Or perhaps build the chain from source?
<marcoceppi> privateStack: so delete everything in the juju-dist bucket, using 1.11.4 (juju version) build the image-metadata again, upload to juju-dist, then bootstrap and upload-tools again
<marcoceppi> also, paste the new image-metadata again once it's re-generated
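As a sketch of that suggestion; the metadata-generation subcommand and flags vary between juju releases, so the generate step is a placeholder to be checked against the installed version's help:

    swift delete juju-dist                          # drop the stale 1.10.0 metadata
    swift post juju-dist && swift post -r '.r:*' juju-dist
    juju metadata generate-image ...                # placeholder; regenerate with the installed juju's metadata tool
    swift upload juju-dist streams
    juju bootstrap --upload-tools -v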
<privateStack> marcoceppi: Doing this now
<marcoceppi> privateStack, I'm also doing this again with region 3 of HP Cloud which has no simplestreams release data
<privateStack> marcoceppi: my tools are actually being uploaded to my control bucket not juju-dist
<marcoceppi> privateStack: Right, so juju-dist is just for image metadata (which is why it pulls from clouds.ubuntu.com which has all the public cloud information)
<marcoceppi> your control bucket is where the tools go
<privateStack> marcoceppi: Here is the output of "juju -v bootstrap" --debug and "juju -v bootstrap --debug --upload-tools=true" http://pastebin.com/GruGjEa7
<marcoceppi> privateStack: and the new image-metadata files?
<privateStack> marcoceppi: Here are the metadata files http://pastebin.com/LKdp8diR
<marcoceppi> privateStack: Those look good, just comparing to my index.json in hpcloud
<marcoceppi> privateStack: for comparison here is my index.json file for HP cloud az-3 http://paste.ubuntu.com/5911507/
<marcoceppi> privateStack: can you strip sensitive information from your environments.yaml and paste that?
<gotwig> marcoceppi: yo, what is the name of the place for the daily juju-gui thing?
<gotwig> I am now on windows :/ so I use instantserver.io
<gotwig> they offer ubuntu servers for free, for 30 minutes X
<marcoceppi> cs:~juju-gui/juju-gui
<gotwig> thx
<marcoceppi> so `juju deploy cs:~juju-gui/juju-gui`
<gotwig> marcoceppi: when does this become the normal juju-gui? does it have to get reviewed?
<baudtack> are there juju packages for debian sid anymore? looks like they are gone
<marcoceppi> gotwig: it's technically the proper place for the juju-gui, that ~juju-gui team is maintaining the charm, I believe there is just a bug in the charmstore where it defaults to only ~charmers as being reviewed sources. I'll need to check on that though
<marcoceppi> baudtack: I believe we removed juju (0.7) from debian sid as it's being depricated. I imagine when the core team is ready a juju-core version will be uploaded to sid
<AskUbuntu> isnt there any descriptive step by step guide for MAAS + juju + Ceph + Openstach HA deployment? | http://askubuntu.com/q/324470
<baudtack> marcoceppi: thanks. guess i can install it by hand like a barbarian until then :-p
<marcoceppi> baudtack: there shouldn't be much difference from the version in the ppa, if you need compile instructions let me know :P
<privateStack> marcoceppi: For a goof I installed juju 1.13.0-raring-amd64, removed all buckets, then tried to bootstrap again.
<privateStack> marcoceppi: same problem as before, I will send logs
<baudtack> marcoceppi: i just found out about juju today and have been using lxc for a long time. my only complaint being they were a pain in the rump when it came to network config
<baudtack> sounds like juju handles all that kind of crap
<baudtack> i also hate it when i want to hack on some new framework or language or something so i spend like 3 hours setting up a dev env. by that time i just don't care anymore heh.
<marcoceppi> baudtack: it certainly does. So we have a local provider which will spin up lxc locally (there's still a bit more work there) then we also have containerization on actual clouds which hasn't landed just yet but is close
<gotwig> marcoceppi: I cant paste the tilde (~), any idea what I can do :/?
<marcoceppi> gotwig: I don't understand
<gotwig> marcoceppi: I cant use the tilde on command line ;P
<baudtack> marcoceppi: it's almost enough to make me want to run ubuntu instead :-p
<marcoceppi> baudtack: ;)
<marcoceppi> gotwig: I'm really not understanding how you can't use the tilde, this perplexes me
<gotwig> well, some javascript stuff xD
<gotwig> its a javascript terminal
<privateStack> marcoceppi: Here is the metadata from my build of 1.13.0-raring-amd64 http://pastebin.com/LnRRRqLH
<privateStack> marcoceppi: The log from juju -v bootstrap --debug --upload-tools=true is at the bottom of this post http://pastebin.com/bR12pDNh
<noodles775> hazmat, wedgwood: When you've time, here's a branch adding ansible support to charm-helpers: https://code.launchpad.net/~michael.nelson/charm-helpers/ansible-support/+merge/176973
<hazmat> noodles775, sweet!
<wedgwood> noodles775: Thanks! hey, I see some conflicts in the diff
<noodles775> wedgwood: let me fix that...
<wedgwood> noodles775: no hurry, unfortunately. I won't be able to have a look at it today.
<noodles775> wedgwood: hah - it was because the prereq branch just landed... thanks for that!
 * noodles775 merges again.
<noodles775> wedgwood, hazmat: MP is clear again for whenever you get to it (no rush from my side... one of the great things about the charm-payload is not being dependent on your version of charm-helpers landing :-) ).
<privateStack> marcoceppi: is there anything else you can think of that I should try?
<marcoceppi> privateStack: I'm at a loss, unfortunately. To prevent you from waiting around and banging your head some more, email the juju list (juju@lists.ubuntu.com) with pretty much everything you've told me, the pastebins, the bootstrap output, etc. Someone else not online right now might be able to help
<marcoceppi> This is just a reminder I need to setup MAAS and private openstack at my place to play with this more
<privateStack> marcoceppi: thanks for the help. I will do
<gotwig> marcoceppi: the gui still looks like the old one
<marcoceppi> gotwig: that's odd, I've used cs:~juju-gui/juju-gui to get the new one. Screenshot?
<gotwig> marcoceppi: it says cs:~juju-gui/precise/juju-gui-77 this is wrong?
<gotwig> how can I "upgrade"?
<gotwig> I removed the environment, made it new, etc :/
<marcoceppi> gotwig: that's it -77 is the latest. Screenshot?
<gotwig> of juju-status?
<marcoceppi> gotwig: of the gui
<gotwig> imgur.com/cLjkUA2
<gotwig> marcoceppi: ^
<marcoceppi> gotwig: that looks like the recent version
<marcoceppi> what are you comparing it to?
<gotwig> marcoceppi: jujucharms.com
<marcoceppi> gotwig: try https://10.0.3.212/fullscreen/
<marcoceppi> you should see the jujucharms.com interface
<gotwig> in jujucharms.com
<gotwig> there is a side pane for the properties
<gotwig> not a whole new view, which hides all other things
<gotwig> hm
<gotwig> I think it is like it is..
<gotwig> but I dont like this
<marcoceppi> gotwig: try https://10.0.3.212/sidebar/ for the sidebar view
<gotwig> there is no scroll button
<gotwig> is it broken?
<marcoceppi> gotwig: could be, one sec let me deploy
<gotwig> ok, now comes my question
<gotwig> when I go to the charm
<gotwig> go into the charm properties
<gotwig> there is no save button I can see to save my changes
<gotwig> and no scroll button...
<gotwig> I have to use arrows to navigate
<gotwig> marcoceppi: no idea?
<marcoceppi> gotwig: sorry, I was on a call
<gotwig> the GUI is far away from elegant :/
<gotwig> after I reloged into the GUI, all my services are away :/
<gotwig> even juju status says they  are running
<gotwig> now I see them again
<gotwig> got it running :>
<gotwig> nodeJS
<gotwig> thanks for your help
<gotwig> marcoceppi: where can I talk about juju design?
<gotwig> Juju-GUI, launchpad?
<marcoceppi> #juju-gui on freenode
<gotwig> marcoceppi: do you use the GUI?
<gotwig> I g2g, thanks. I am going to report a new bug, regarding responsive web design
<privateStack> marcoceppi: I have been debugging on the code level and think I might have found something interesting
<privateStack> marcoceppi: when I don't specify a public url in the config, juju tries to create one using my user's authentication.
<privateStack> marcoceppi: This bucket creation process fails for the public bucket with the error: MakeServiceURL: no endpoints known for service type: juju-tools
<privateStack> marcoceppi: If I specify a public url this error does not occur, which makes me wonder if juju is expecting juju-tools to be under the public bucket
<arosales> hazmat: looking for some charmers to help out with the review queue, got any spare cycles for reviews?
 * arosales will also email ~charmers
<gotwig> do charms always have to use debian packages
<gotwig> I like to use some custom stuff
<gotwig> downloading and setting up via curl
<sidnei> gotwig: you can do it, no problem. not a recommended practice, but you're free to do anything you want.
<gotwig> sidnei: I want that it becomes an official charm some day
<sidnei> gotwig: then you might want to add an option to use debian packages and toggle to curl with a flag
<sidnei> i think the policy might say something about official charms not going over the network
<AskUbuntu> can we have multiple juju config files? | http://askubuntu.com/q/324604
<privateStack> Does anyone know if juju works with openstack quantum?  Having problems during bootstrap with assigning a floating ip
<privateStack> Juju appears to use the nova api to assign addresses
<privateStack> marcoceppi: I figured out my original bootstrap problem.  The metadata index.json had the wrong port listed.
<gotwig> can I use juju deploy with my custom bzr branch?
<sidnei> gotwig: yes you can
<gotwig> and how, Obama :D?
<sidnei> gotwig: if you have a dir 'charms', and your branch is under 'charms/precise/mycharm' you can do 'juju deploy --repository=charms mycharm'
<gotwig> somehow this doesnt work for me
<gotwig> I recheck
<gotwig> error: cannot get latest charm revision: charm not found: cs:precise/meteor-app
<gotwig> it tries to use the official sources, not my custom directory
<gotwig> do  I have to use version control or so?
<sidnei> gotwig: try juju deploy --repository=charms local:mycharm
<gotwig> my command is juju deploy --repository=eduard local:meteor-app -e local
<gotwig> eduard is my charms dir
<gotwig> error: cannot get latest charm revision: charm not found in "/home/julia/eduard": local:precise/meteor-app
<gotwig> the charm dir name is meteor-app  and the charm name is meteor-app
<sidnei> gotwig: does eduard/precise/meteor-app exist? what's in eduard/precise/meteor-app/metadata.yaml?
<gotwig> no
<gotwig> I have meteor-app in eduard
<sidnei> gotwig: ah, that's it, it needs to be in a dir named 'precise' (really, whatever you have in default-series iirc)
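So the repository layout sidnei is describing looks roughly like this, with 'eduard' being gotwig's repository directory and the series subdirectory matching default-series:

    # eduard/
    #   precise/
    #     meteor-app/
    #       metadata.yaml
    #       config.yaml
    #       hooks/install, hooks/start, ...
    juju deploy --repository=eduard local:meteor-app -e local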
<gotwig> success :>
<gotwig> now lets check juju-gui
<gotwig> actually writing charms is really easy
<gotwig> no, red line :D
<gotwig> hook failed install... hm
<gotwig> sidnei: where can I specify the icon for my charm?
<thumper> jcastro: hey
<thumper> jcastro: not sure if you have tried rebooting a running local environment yet, but I did yesterday (quite by accident)
<thumper> jcastro: all came back up fine
<thumper> so... yay \o/
<gotwig> how do I remove a service
<sidnei> gotwig: i think the icon is just icon.svg in the root
<sidnei> gotwig: juju destroy-service?
<gotwig> no
<gotwig> destroy-service doesnt remove it
<sidnei> what you mean by remove then?
<bradm> fwiw I just submitted LP#1205130 for a charm for bip I wrote, any feedback would be great.
<_mup_> Bug #1205130: New charm: bip <Juju Charms Collection:New> <https://launchpad.net/bugs/1205130>
#juju 2013-07-26
<Guest24204> is anyone familiar with setting up node-app on AWS.  I am hitting a wall with the config.yaml for some reason
<sarnold> Guest24204: any chance you can pastebin the errors you're getting?
<sarnold> (I'm not familiar with the charm, and only slightly familiar with node, but you never know :)
<Guest24204> :)
<Guest24204> Description of issue: I am trying to set the config first with --config, had issues with that, so I tried to upload my own charm with my own --repository and have an issue there too. Here is what I get:
<Guest24204> with --config:   juju deploy --config config.yaml node-app mongonode-app
<Guest24204> error response
<Guest24204> error: no settings found for "mongonode-app"
<Guest24204> have the config file under http://github.com/TheMindCompany/mongonode-app but as mongonode-app.yaml
<Guest24204> then with --repository:  juju deploy --repository ~/charms local:node-app mongonode-app -v
<Guest24204> error: cannot get latest charm revision: charm not found in "/home/brandon/charms": local:precise/node-app
<Guest24204> i followed mark mim's tutorial at for the --repository at http://markmims.com/cloud/2011/09/07/node-mongo-ubuntu.html
<Guest24204> my issue is simply getting the app up to aws.  If i set it up with "juju deploy node-app mongonode-app" it will launch the instance but with no app files.
<Guest24204> hope that was thorough enough :)  and thank you for at least attempting to help me in advance
<sarnold> Guest24204: hrm, the charm README suggests it is looking for a config/config.js .. yours has "name": "mongonode", but you've called the app "mongonode-app" on the command line and the config.yaml ...
<sarnold> Guest24204: if that's not it, maybe try changing "app_name: " to "app_name: mongonode-app"  in the config.yaml...
<Guest24204> that is for the mongo module in node.js.  I do have this setup and use it in my node server file
<Guest24204> you think I should remove the sub properties 	type: string
<Guest24204> default: mongonode-app
<Guest24204> description: Application Name  or just add that too?  Well trying both :)
<Guest24204> nope neither way worked :x
<Guest24204> <racking brain here < bang - bang - banging head against cement wall
<sarnold> Guest24204: maybe? (sorry...)
<Guest24204> I know.  Thank you for trying though.
<sarnold> weblife: there's a fair number of questions about juju on askubuntu.com -- perhaps you can find some inspiration there?
<weblife> been looking everywhere not many people have used juju with node.js.  Plenty on MongoDB.  Can't find any related error issues either :(  I am going to ask Mark Mims in hopes he might know.
<sarnold> m_3: ^^^ guest24204 / weblife has some node charm questions I'm useless with.. :)
<hazmat> thumper, is local with restart supposed to restart mongo and machine agent?
<hazmat> hmm looks like it should
<thumper> hazmat: yes it does
<weblife> I posted my issue in a more detailed format at http://askubuntu.com/q/324672/88546 .  Anyone think they can help me out with launching a node-app instance on AWS?
<AskUbuntu> Juju node-app charm on AWS | http://askubuntu.com/q/324672
<weblife> Thats cool.  Didn't know there was a bot in here for that stuff.
<weblife> yeahhhhhhh!!!!!   It was damn white spaces.
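For the record, the file passed to juju deploy --config is a YAML map keyed by the service name, which is also why stray whitespace breaks it. A hypothetical version for the deploy command above; option names beyond app_name are whatever the node-app charm's own config.yaml defines:

    cat > config.yaml <<'EOF'
    mongonode-app:
      app_name: mongonode-app
      # ...other options exposed by the node-app charm
    EOF
    juju deploy --config config.yaml node-app mongonode-app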
<weblife> I just launched a node-app-22 from the charms repo and am getting the following: http://ec2-54-226-121-230.compute-1.amazonaws.com/  --- Any ideas before I get back to it in the morning?
<weblife> First guess is it isn't loading my git repo.  Check that out first I guess.
<mectors> Is there any way to import the shift-D generated script with juju-core?
<marcoceppi> mectors: yes, with a tool called juju-deployer
<mectors> how do you install juju-deployer with juju-core
<marcoceppi> mectors: you can also deploy the juju-gui to your environment and drag-n-drop the deployment file to the juju-gui
<marcoceppi> mectors: you'll need to add a ppa to your system to install it. The final location for the deployer ppa hasn't been locked down yet
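Once the package is installable, usage looks roughly like this; the file name is whatever the GUI's shift-D export produced, and -c is juju-deployer's option for the deployment/bundle file:

    sudo apt-get install juju-deployer      # after adding whichever PPA ends up hosting it
    juju-deployer -c exported-stack.yaml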
 * marcoceppi finds email post
<theJujuGuy> Hi, I'm trying to get juju working with maas. I read the MaaS Juju Quickstart Guide and ran into problems. If I try to get the juju status with "juju status" I get an error: ERROR Unexpected HTTP 401 trying to GET http://myip:80/MAAS/api/1.0/files/?filename=provider-state&op=get
<theJujuGuy> is this a common error? any hints?
<codybum> marcoceppi: Are you online?
<marcoceppi> codybum: yes
<marcoceppi> theJujuGuy: Do you have a successful bootstrap? What version of juju are you using (juju version or juju --version)? Do you see at least one node in use in the maas master dashboard?
<codybum> marcoceppi: Yesterday my nick was <privateStack>.  I changed nick to be the same as other ones, given I will likely be around for a while.
<marcoceppi> codybum: welcome back o/
<codybum> marcoceppi:  I figured out the problem you were helping me with yesterday.  It was user error. Specifically the index.json had the wrong port number.
<marcoceppi> codybum: Oh no, glad you got that resolved!
<codybum> marcoceppi:  I basically tore apart juju code and debugged until I noticed the constraints didn't match, and it was the port.
<theJujuGuy> marcoceppi: i use juju version 0.7; "juju bootstrap" gave the same error. In my dashboard there is 1 "ready" node and 2 allocated to me
<codybum> marcoceppi: I ran into some other strange things that you might know about.
<marcoceppi> codybum: yes please!
<marcoceppi> theJujuGuy: If bootstrap fails then status will also fail, could you run `juju destroy-environment` then run `juju bootstrap -v` and pastebin the output of the bootstrap command?
<codybum> marcoceppi:  It looks like juju, when set to use floating ip, uses the nova API to set addresses.  I am using Quantum and unfortunately the floating addresses managed by quantum don't show up under the os_floating_ip api call for nova.
<marcoceppi> codybum: I think this is a known issue with juju-core, that the openstack provider does not work well with quantum yet. Let me check the bugs for you real quick to confirm
<codybum> marcoceppi:  Basically on bootstrap if floating_ip = true the process fails due to an openstack error.  I would expect most people, especially HP, use Quantum.  So I wonder if it is something strange on my side.
<marcoceppi> codybum: Maybe I'm confusing the openstack charms with juju then. Let me double check
<marcoceppi> codybum: could it be related to this? https://bugs.launchpad.net/juju-core/+bug/1188126
<_mup_> Bug #1188126: Juju unable to interact consistently with an openstack deployment where tenant has multiple networks configured <canonistack> <juju:New> <juju-core:Triaged> <https://launchpad.net/bugs/1188126>
<codybum> marcoceppi:  That might be related since I do have two networks associated with the tenant.
<theJujuGuy> marcoceppi: http://goo.gl/xn559h
<codybum> marcoceppi:  I was actually able to bootstrap without using a floating_ip and have a bootstrap node running.
<marcoceppi> theJujuGuy: Sorry, you'll need to run `juju -v bootstrap`, juju 0.7 is very picky about the order of commandline options.
<marcoceppi> codybum: awesome, it's a bummer about the quantum issue you're experiencing though. If you think that bug is affecting you, you can subscribe to it and follow its progress
<codybum> marcoceppi:  I was able to deploy wordpress and mysql charms, and new VMs were created.  However, I was never able to successfully add a relation.
<marcoceppi> codybum: did add-relation throw an error?
<theJujuGuy> marcoceppi: http://goo.gl/oXZeAk
<codybum> marcoceppi:  That is the question, what logs should I be reading once the system is bootstrapped
<codybum> marcoceppi:  on the bootstrap node /var/log/juju/* contains few logs and nothing that I can see about node deployment
<marcoceppi> codybum: each unit has its own unit log. One simple way is to just check `juju status`, if you see a relations: key with a list of the relations added then juju knows about it. If not, add-relation failed somehow
<marcoceppi> Otherwise /var/log/juju/unit-*.log on each node will contain the output from the charm hooks
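Concretely, something like the following, with the service names from this deployment:

    juju status | grep -A5 relations                  # did juju record the relation at all?
    juju ssh wordpress/0                              # then, on the unit:
    tail -n 50 /var/log/juju/unit-wordpress-0.log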
<marcoceppi> theJujuGuy: Sorry, 401 means it can't authorize. There's a chance that the maas-oauth isn't set correctly
<marcoceppi> theJujuGuy: check the MAAS master to verify you've got the right key entered in to the environments.yaml file
<codybum> marcoceppi:  Great Tip.. Let me rebuild the environment and come back :)  thanks
<marcoceppi> codybum: also, if that bug is affecting you, feel free to subscribe to it on the right side of the page to get updates for it directly
<theJujuGuy> marcoceppi: what about the user? is the user identified by the maas-oauth?
<marcoceppi> theJujuGuy: So, when I did this I used the same user as my local username. So my local user account name is "marco" and I just created a "marco" account in the MAAS master, allotted units to that user, copied that MAAS key to my environments.yaml and used that
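A minimal sketch of the matching pyjuju 0.7 environments.yaml stanza, with placeholder values; the key names are the 0.7 maas provider's as best remembered here, and maas-oauth is copied from that user's preferences page in the MAAS web UI:

    cat > ~/.juju/environments.yaml <<'EOF'   # or merge by hand into an existing file
    environments:
      maas:
        type: maas
        maas-server: 'http://myip:80/MAAS'
        maas-oauth: '<key-from-maas-preferences>'
        admin-secret: 'any-random-string'
        default-series: precise
    EOF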
<theJujuGuy> marcoceppi: thank you very much
<theJujuGuy> marcoceppi: i am a step further
<marcoceppi> theJujuGuy: Excellent. We really don't have the most stellar MAAS documentation, so as you work through MAAS and Juju, if you find any points of contention (like this one), please record them and send your experience to the mailing list (juju@lists.ubuntu.com) - it'll really help us improve our documentation!
<theJujuGuy> i tried to install juju with the Quick Start; now I get a new error from "juju -v bootstrap": ERROR SSH authorized/public key not found.   http://goo.gl/YojmDj I inserted my key on the dashboard site, and I also tried deleting it and letting bootstrap generate the key pair. The error is the same.
<marcoceppi> theJujuGuy: Do you have an ssh keypair in your ~/.ssh folder?
<theJujuGuy> marcoceppi: thx again. why must I set my pub key in the dashboard when it's needed in the folder?
<marcoceppi> theJujuGuy: I think it's because MAAS doesn't have a mechanism to seed an ssh key during "provisioning"; most clouds allow you to push keys to a machine during spin-up. So it needs the same key in the MAAS master as in ~/.ssh, so juju can drive the machine after provisioning by MAAS is done
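A quick way to check for (or create) that keypair before bootstrapping, as a sketch (the key comment is arbitrary):

    ls ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -b 2048 -C "juju-maas"
    # then paste the contents of ~/.ssh/id_rsa.pub into the MAAS web UI under Preferences -> SSH keys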
<theJujuGuy> who is maintaining the juju quick start from the maas website?
<marcoceppi> theJujuGuy: not sure, though we'll probably want to move it to the juju doc
<marcoceppi> evilnick would know, but he's not online atm
<weblife> @marcoceppi Doesn't node-app make a copy of the git repo that you set in your config.yaml to an S3 bucket?
<weblife> or anyone else familiar with the charm.
<marcoceppi> weblife: no, it simply clones it to the unit
<marcoceppi> weblife: the charm itself is cached prior to deployment, but that cache won't have the git repo in it. Also, not all cloud providers have s3 or comparable object stores available. So we try to make sure charms rely on as little of the cloud infrastructure as possible
<weblife> ahh. Okay thank you.  I'm getting a 503: No server is available to handle this request.
<marcoceppi> weblife: at what point of the deployment?
<weblife> after I have completed all the relations and exposed the service.
<weblife> No server is available to handle this request.
<weblife> oops
<weblife> ec2-54-226-121-230.compute-1.amazonaws.com
<weblife> Any idea why? Before I start my witch hunt.
<marcoceppi> weblife: what service is this?
<marcoceppi> node-app?
<weblife> Sorry using ec2 with aws
<marcoceppi> weblife: right, but what charm is this? node-app?
<weblife> revision-22? using the following git repo @ https://github.com/TheMindCompany/mongonode-app
<weblife> @marcoceppi that what you mean
<weblife> yes node-app
<weblife> :)
<marcoceppi> weblife: Okay, so there's a good chance that the node application itself didn't start. 503 means nginx can't connect to the backend.
<den_sheleh> I'm trying to bind my own charm and logstash-agent. As far as I can see, logstash-agent uses the juju-info interface for the connection, but when I try to declare that (logstash-agent:   interface: juju-info   scope: container) in the metadata of my charm I get the error: "error: charm not found". Is it a bug or am I doing something wrong?
<marcoceppi> What you can do is `juju ssh node-app/0` (replace node-app if it's deployed under an alias) and look to see if the process is running, check logs for failures (/var/log/juju/unit-*.log, /var/log/node-app)
<marcoceppi> weblife: the node-app charm makes quite a few assumptions, so it might not work with every node application. If there are changes needed to the charm to make it work with your application then you can patch and run the charm locally or submit it to be merged into the official charm, making it better for everyone :)
<marcoceppi> den_sheleh: can you pastebin the steps you're taking? subordinates and logstash definitely worked last I checked
<weblife> I am currently launching locally, but if I can fix this I will submit a patch.  Thank you for your help.
<marcoceppi> weblife: np, if you need more help debugging let me know!
<dannf> would it be possible to get ppa:charmers/charm-helpers populated for raring?
<marcoceppi> dannf: I don't see why not, let me take a look
 * dannf is testing on a maas cluster of hw that wasn't supported in previous releases, charms like wordpress fail when they can't find a compatible tools release
<dannf> marcoceppi: cool
<marcoceppi> dannf: Oh, you're talking about old charm-helpers
<den_sheleh> marcoceppi: after adding the section logstash-agent: interface: juju-info and trying to deploy my local charm I get the error described earlier
<marcoceppi> den_sheleh: can you paste your metadata.yaml file to paste.ubuntu.com?
<marcoceppi> dannf: I've kicked off the builders, should build in a few hours
<dannf> marcoceppi: you rock, thanks!
<den_sheleh> marcoceppi: http://pastebin.com/fYnXLtQW
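For reference, a juju-info subordinate like logstash-agent can usually be related without touching the principal charm's metadata.yaml at all, since every charm implicitly provides the juju-info interface; a sketch with illustrative service names:

    juju deploy logstash-agent
    juju add-relation my-charm logstash-agent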
<arosales> note Charm School "Infrastructure" is being deferred this week
<arosales> jcastro, was your OSCON charm school video'ed?
<jcastro> yes
<jcastro> we still have to be invoiced though
<jcastro> they told me "a couple of days"
<weblife> @marcoceppi I took your advice and re-installed nodejs.  I did it from source though, not Chris Lea's PPA; I always seem to have issues with his.  It is up and running but I get a 504: Gateway Time-out.  Node.js debug gives me a connection error:  [Error: failed to connect to [localhost:27017]].   I think it may be my app code now because that is the mongodb port setting from config/config.js.  Before I mess with my code, would you possibly concur with this thought?
<weblife> Also npm says I am missing the repository setting, but I have never needed this on my local machine; is this specific to node-app?
<hazmat> SpamapS, still on sup?
<SpamapS> hazmat: yeah, but I'd love to find something better at what it does.
<SpamapS> I'd swear it is getting slower and slower.. :-/
<hazmat> SpamapS, there's https://github.com/pazz/alot
<SpamapS> hazmat: that might be worth a shot
<SpamapS> all the notmuch frontends were horrible a year ago
<weblife> Okay,  not liking mongodb-cloud with nodejs.
<weblife> @marcoceppi in your next screencast session, do you think you can cover a little how node.js, mongodb, and haproxy work and communicate together?  I have my node app running with mongodb but only in a single instance.  I am trying to finish up a tutorial series on how to work with node and mongo on ubuntu and then deploy what you make locally to aws with my sample app: http://ec2-54-226-87-136.compute-1.amazonaws.com
<weblife> I want to say that what ever you build is scalable.
<weblife> but cant at this moment with a single instance.
<weblife> never mind, you kinda do talk about it in your last one, minus haproxy
<weblife> @marcoceppi I believe you're on the charm-tools team.  I installed it earlier and it messed with my Juju version from the juju/devel PPA.  So I removed it and reinstalled it; it works fine now, but charm-tools isn't creating all the files (i.e. config.yaml, icon.svg, probably more)
<weblife> I can deal but thought you should know.  I'm on 13.04
<marcoceppi> weblife: did you install from ppa? that problem was good but not back ported yet
<marcoceppi> was fixed*
 * marcoceppi needs to open a back port request
<weblife> @marcoceppi Cool, thank you.  Did you get my message earlier about possibly using wget against nodejs.org and building from source?  It would make things more universal, especially with 13.04 servers.
<marcoceppi> yes. away from computer again but ping me any time after today and we can talk about the logistics of doing such a thing
<marcoceppi> it would make a great conduit option weblife
<marcoceppi> config*
<ahasenack> hi, any charm reviewers lurking? A simple two-line one: https://code.launchpad.net/~ahasenack/charms/precise/rabbitmq-server/rabbitmq-server-juju-core-compat/+merge/177231
<ahasenack> marcoceppi: can you review that? It's simple, I promise
<ahasenack> same fix already existing in cinder
<ahasenack> and I just deployed with two units of rabbit, worked
#juju 2013-07-27
<weblife> Woohoo!!!  Using mongoosejs instead of mongodb with the nodejs app on juju: http://ec2-50-112-224-192.us-west-2.compute.amazonaws.com/   I have a full length tutorial coming on how to do it too!  11 pages so far...
<weblife> I've broken it down into a three-part tutorial though
<weblife> I have not programmed in python much.  I am trying to set up a patch for node-app to build nodejs from source but want to make the 'make' command use as many cores ('-j') as possible when building it.  Anyone know what I can do to auto-fill this?
<weblife> Got it.  It will use as many available to it with just 'make -j',  don't need to fill it I guess.
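If you do want to pin it to the visible core count instead, a common pattern (assuming coreutils' nproc is available) is:

    make -j"$(nproc)"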
#juju 2013-07-28
<AskUbuntu> juju bootstrap not working for MAAS | http://askubuntu.com/q/325749
<jose> marcoceppi: hey, is there a possibility I can get the postfix charm on the store before the 1st?
<jose> jcastro: ^
<jcastro> jose: yeah it shouldn't take long
<jcastro> jose: the contest is through 1st october, so you have plenty of time
<jose> jcastro: darn, skipped august when counting days, sorry about that!
 * jose should stop working
#juju 2014-07-21
<william_home> Has anyone set up a local mirror of the cloud-images repository for internal use?
<niedbalski> wallyworld, are you around?
<wallyworld> somewhat :-)
<mthaddon> if you have a primary charm and number of subordinate charms on a unit, is there some way of getting the name of the primary charm from within one of those subordinate charms?
<lazyPower> mthaddon: just whats sent over the wire as relation data, and the relation-id, but that still may not give you what you're looking for since most subordinates use juju-info
<mthaddon> ok, thx
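A sketch of poking at that relation data from the subordinate side (relation and unit names are illustrative):

    # inside a hook on the subordinate, or via `juju run --unit my-subordinate/0 '...'`
    rid=$(relation-ids juju-info | head -n1)          # e.g. juju-info:3
    principal=$(relation-list -r "$rid" | head -n1)   # e.g. wordpress/0 -- the principal unit
    relation-get -r "$rid" - "$principal"             # dump whatever the principal sent over the wire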
<data> hey, I have done some manual deploys, but lost the state on the original controller. How do I get juju to not see the nodes as already provisioned?
<bbcmicrocomputer> hmm, if I do 'juju add-machine' followed by 'juju deploy', the deploy ends up on the added machine in 1.20.1 now, whereas in 1.18 the deploy went into a new machine... is this expected behaviour now?
<data> ok, resolved my question above.
<data>         agent-state: error
<data>         agent-state-info: 'hook failed: "config-changed"'
<data> juju resolved --retry keystone/0
<data> ERROR cannot set resolved mode for unit "keystone/0": already resolved
<data> hi, I have a problem with an openstack deployment. everything went smoothly except for a few hiccups like wrong hosts in the DB, but now I can not connect to any of the services from horizon, i.e., it gives me a 401 and doing it manually achieves about the same
#juju 2014-07-22
<Sh3lc0k> Is there a way to statically map MAC and IP addresses in the DHCP server of the MAAS?
<data> hey, I have a problem. One of the hooks failed, but it is not being reexecuted, and juju resolved --retry just gives me: ERROR cannot set resolved mode for unit "rabbitmq-server/0": already resolved
<schegi_> jamespage?
<schegi_> got a problem with juju and the maas dns. bootstrapped a new juju node within an environment using maas-dns. bootstrapping worked well, but now juju ssh 0 won't connect to the server because it cannot resolve the hostname using the dns. what did i do wrong? it can't be the solution to add all hosts to /etc/hosts when running dns.
<jamespage> schegi_, hello
<schegi_> jamespage, hi.
<jamespage> schegi_, maas-dns should be managing that - it parses the dhcp leases file in /var/lib/maas/dhcp and generates dns zone files in /etc/bind/maas
<jamespage> schegi_, can you check that the dns server running on the maas server is being used - might be your dhcp renewed or suchlike?
<schegi_> got the error. i got an external ip from the university's intranet on my first interface and it provides me with two nameservers. they are on top in my resolv.conf and not aware of my local dns server. if i put my local one on top i can connect to my nodes but not externally any more. i think i have to configure bind so that the local bind only resolves addresses in my domain and forwards the rest to the external dns servers, or something
<schegi_> like that
<schegi_> jamespage, ok added forwarders to  /etc/bind/maas/named.conf.options.inside.maas and now it works fine
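For anyone hitting the same thing, the change amounts to a forwarders block inside the options { } section of that file (addresses below are placeholders for the upstream/university DNS servers):

    forwarders {
        192.0.2.1;
        192.0.2.2;
    };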
<jamespage> schegi_, excellent
<schegi_> just a short question: to enable additional package sources i have to add a file somewhere under /etc/apt/... where??
<schegi_> for deb sources added to sources.list i always get a 403 forbidden. i have to add something somewhere in /etc/apt but i can't remember what and where. can someone help?
<schegi_> ok, solved my problem by adding the source's domain to /usr/share/maas/conf/99-maas, but in this case i had to restart the squid-deb-proxy. i know there is another solution where you add a file with the domain somewhere under /etc/apt but couldn't remember where, so if someone knows.
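The /etc/apt location being hinted at is a drop-in file under sources.list.d, e.g. (archive URL and suite are placeholders):

    echo 'deb http://archive.example.com/ubuntu precise main' | sudo tee /etc/apt/sources.list.d/extra.list
    sudo apt-get update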
<sante> Hi all, I'm trying to configure a new openstack juju environment. I receive this error http://pastebin.com/CTiFbFPN I suspect it is related to tools-metadata-url and image-metadata-url; can anyone point me to some documentation explaining how to set them?
<kentb> https://juju.ubuntu.com/docs/howto-privatecloud.html
<sante> kentb: thanks!
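Those two keys go into the environment's section of environments.yaml, roughly as below (endpoint and mirror URLs are placeholders for wherever the simplestreams metadata was published):

    my-openstack:
      type: openstack
      auth-url: https://keystone.example.com:5000/v2.0/
      tools-metadata-url: https://mirror.example.com/juju/tools
      image-metadata-url: https://mirror.example.com/juju/images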
<wasbeer> hi
<wasbeer> I'm using Juju 1.18 in a MAAS env and, trying to deploy services to lxc "machines" on nodes, ran into bug #1340261. reading through some of the Juju code, it seems like provider/maas/environ.go:301 could be the source of the problem. is it possible to override that value in the Juju config?
<_mup_> Bug #1340261: juju add-machine lxc:0 fails to start due to incorrect network name in trusty config <lxc> <network> <placement> <juju-core:Triaged> <https://launchpad.net/bugs/1340261>
<marcoceppi> lazyPower: can you do a quick review for me?
<marcoceppi> https://code.launchpad.net/~marcoceppi/charms/trusty/mysql/lp1343812/+merge/227791
<lazyPower> sure
<lazyPower> marcoceppi: merged
<marcoceppi> ta
<Sh3l0ck> Is there a way to statically map IP addresses to machines in MAAS?
<Sh3l0ck> Tried to edit the DHCP lease file but did not work
<Sh3l0ck> Is there a way to statically map IP addresses to machines in MAAS?
<gQuigs> hi there.. wondering if this bug may have gotten lost in the move from LP -> GH - https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1329251
<_mup_> Bug #1329251: ssh-keyscan of hosts in MAAS environments is incomplete <nova-cloud-controller (Juju Charms Collection):In Progress> <nova-compute (Juju Charms Collection):In Progress> <https://launchpad.net/bugs/1329251>
<lazyPower> Sh3l0ck: not that i'm aware of since MAAS uses a DHCP Server to assign ip's
<lazyPower> Sh3l0ck: you may want to ask in #MAAS though
<lazyPower> gQuigs: not sure - we still use launchpad to track bugs. so i doubt its been lost. it may have been reprioritized
<Sh3l0ck> lazypower: Thanks
<gQuigs> lazyPower: the reason I'm asking is because it looks like it's in the middle of a BZR merge approved by some/not be others..
<lazyPower> gQuigs: well the 2 related branches are merged. But the bug itself is not triaged - so it may still be pending.
<lazyPower> niedbalski: can you take a look at https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1329251 ? Is this resolved or still WIP?
<_mup_> Bug #1329251: ssh-keyscan of hosts in MAAS environments is incomplete <nova-cloud-controller (Juju Charms Collection):In Progress> <nova-compute (Juju Charms Collection):In Progress> <https://launchpad.net/bugs/1329251>
<gQuigs> lazyPower: thanks
<lazyPower> gQuigs: np, if we don't hear back I suggest you post on the bug querying the status so it gets activity, and gives the devs a chance to triage as appropriate.
<gQuigs> lazyPower: will do
<Tug> I just tried to promote juju in an HN thread: https://news.ycombinator.com/item?id=8070550
<Tug> Not sure I did it right :)
<lazyPower> Tug: upvoted and commented.
<Tug> lazyPower, nice ;)
<ahasenack> hi, can someone help me debug a bootstrap problem?
<ahasenack> 2014-07-22 19:58:02 DEBUG juju.utils.ssh ssh_openssh.go:129 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -o "ServerAliveInterval 30" -i /home/andreas/.juju/ssh/juju_id_rsa -i /home/andreas/.ssh/id_rsa ubuntu@10.1.3.228 /bin/bash
<ahasenack> it's stuck in a loop doing that
<ahasenack> I can run that command just fine
<ahasenack> it connects, and runs /bin/bash
<ahasenack> but somehow juju is expecting something that isn't happening
#juju 2014-07-23
<dubcanada> Can you deploy more than one thing per server?
<sarnold> dubcanada: yes; you can either use subordinate charms if the things are logically related somehow or you can use lxc containers if they're not
<dubcanada> How do you use lxc containers?
<sarnold> dubcanada: that'd be something --to foo:1 or --to bar:2
<dubcanada> Does the GUI support that stuff?
<dubcanada> Or is that only cl?
<sarnold> dubcanada: https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-specific-machines-and-containers
<sarnold> dubcanada: ah, no idea there, sorry
<sarnold> dubcanada: ah, looks like I had the syntax all wrong, too :)
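For the record, the placement syntax from that doc page looks like this (service and machine numbers are illustrative):

    juju add-machine lxc:0             # new lxc container on existing machine 0
    juju deploy mysql --to lxc:0       # new container on machine 0 hosting the service
    juju deploy wordpress --to 0       # directly onto machine 0, no container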
<sante> Hi, I'm trying to bootstrap an openstack environment but I'm stuck at this error http://pastebin.com/ER9i8Qt4 . I did set up the new tools-metadata-url and image-metadata-url but the bootstrap process seems to ignore them. Can someone help me understand this issue?
<diegows> hi
<schegi> hey guys a little problem with my new maas setup. I configured my maas master with 4 different networks all managed dhcp and dns. but if i now bootstrap a juju environment, the maas master is no longer able to connect to the bootstrapped node because it chooses to use the wrong network..
<schegi> if i reduce my networks to only the one which is brought up during installation on nodes everything works fine but not if i have all networks on and managed
<schegi> bootstrapping starts with Attempting to connect to controller.wcloud.uni-koblenz.de:22 Attempting to connect to 192.168.25.14:22 Attempting to connect to 10.10.2.13:22 Attempting to connect to 10.10.2.12:22
<schegi> which is correct but later it fails with ERROR waited for 33m20s without being able to connect: ssh: connect to host 10.10.2.12 port 22: No route to host, because the host will only bring up the 192.168.25.0 network during installation. how can i force maas to use one particular network??
<schegi__> can someone help with virsh power settings for maas?? already asked in the maas channel, no one was able to help there
<roadmr> schegi__: I think #maas would be the right place, if they were unable to help maybe you can post in askubuntu.com?
<lukestuts> hello jujuers
<lukestuts> i have just started jujuing
<lukestuts> i am confused by the relationship between the ubuntu charm store and the charms on github
<lukestuts> how do the github charms differ from the charm store charms?
<ctlaugh> I'm trying to deploy nova-compute using juju to an ARM64 machine running trusty and am trying to figure out how the following line gets added to /etc/apt/sources.list.d/cloud-archive.list: deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main
<lukestuts> hello ctlaugh
<ctlaugh> lukestuts: hi
<lukestuts> what do you mean by "how it gets added"?
<ctlaugh> Somehow, in the process of adding the machine using "juju add-machine ssh:ip-address", or when I deployed nova-compute, that file got created.
<ctlaugh> I am 87% sure it was not there before that
<ctlaugh> I was curious if anyone knew if that was part of the juju bootstrapping process, or part of the charm's installation steps
<lukestuts> it's not on my openstack vm
<lukestuts> i'm playing with rails on trusty
<ctlaugh> hmm
<lukestuts> which charm were you using?
<lukestuts> gtg
<ctlaugh> nova-compute from trusty-icehouse
<cl3044> to shut down juju / juju-gui nodes and reboot it, i thought it was necessary just to hit "stop node" and then "start node", but I'm having the issue of MAAS wiping out the node, PXE booting, and reinstalling a blank node.
<cl3044> i must be doing something totally wrong.
<sarnold> cl3044: you might just want to juju ssh to issue shutdown commands instead
<jose> or do destroy-environment or destroy-service with terminate-machine
<cl3044> ok, that's definitely helpful
<cl3044> no powering down maas nodes then?
<cl3044> (at least directly)
<sarnold> cl3044: good point. there may be some interface in MAAS for rebooting instead...
<cl3044> if i deploy openstack with juju and tear it down with destroy-environment/terminate-, etc.... how do i start it back up, then?  seems like it would not be persistent.
<mwhudson> if i do juju add-machine lxc:0 is there any easy way to identify the lxc that i just added?
<mwhudson> i guess i can do before/after on juju status but...
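Short of a before/after diff, the freshly added container shows up nested under its host machine in status output, e.g. (a rough sketch):

    juju add-machine lxc:0
    juju status | grep '0/lxc/'        # containers on machine 0 are listed as 0/lxc/<n>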
#juju 2014-07-24
<jose> cl3044: there's no way to recover the data, you're destroying the machines
<cl3044> thanks...  yes, perfect - if i just ssh into the box and reboot, it works fine
<cl3044> and shutting it down/powering it up is fine too -- via virt-manager. I get it, I see what's going on here
<cl3044> thanks, jose and sarnold
<sarnold> cl3044: cool :) have fun!
<gnuoy> jamespage, if you have a spare sec  https://code.launchpad.net/~gnuoy/charm-helpers/add-loglevel-context/+merge/228049
<jamespage> gnuoy, one comment
<gnuoy> hit me
<gnuoy> ah, you have
<gnuoy> jamespage, updated
<jamespage> gnuoy, OK _ so what's a partial?
<gnuoy> jamespage, it's a partial object with some of the arguments pre-populated
<gnuoy> jamespage,  https://docs.python.org/2/library/functools.html
<jamespage> gnuoy, cool - I'll use that then
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/network-splits/+merge/222371 updated
<gnuoy> jamespage, lgtm, want me to merge ?
<jamespage> gnuoy, pls
<gnuoy> jamespage, argh, merged but I typed the review/contributor in the commit message
<gnuoy> * typoed
<jamespage> gnuoy, if you have time:
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/ceph-radosgw/fix-apt-race/+merge/228069
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/ceph/fix-apt-race/+merge/228067
<jamespage> and
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/ceph-osd/fix-apt-race/+merge/228068
<jamespage> all related to bug 1346489
<_mup_> Bug #1346489: apt cache operations have potential for filesystem races <cloud-installer> <landscape> <Charm Helpers:Fix Released by james-page> <ceph (Juju Charms Collection):In Progress by james-page> <ceph-osd (Juju Charms Collection):In Progress> <ceph-radosgw (Juju Charms Collection):In Progress by bjornt> <https://launchpad.net/bugs/1346489>
<Odd_Bloke> I currently only have access to Debian machines; is it possible to get Juju local running on a Debian machine?
<Odd_Bloke> I've added the PPA, but I'm hitting "trying to overwrite '/usr/share/lxc/templates/lxc-busybox', which is also in package lxc 1.0.4-3" when installing lxc-templates from the PPA.
<rbasak> sinzui: do you have time before the cross team call today to sync on Ubuntu landing status (separate from our QA discussion later)
<rbasak> ?
<jam> TheMue: so I want to make sure that you're getting credit for the work you've done. Do you need some help to get the docs landed? Or did those already get reviewed?
<TheMue> jam: william reviewed, but another look is always appreciated
<sante> Hi all, I'm trying to run juju on openstack. I'd like to use the trusty series but I'm stuck on precise because no trusty image gets deployed. Can someone help me get past this limitation?
<sante> I'm talkin about version of the images deployed as vm
<data> sante: Do you have any error messages?
<data> and you are just talking about deploying juju services, or are you talking about images for maas or what?
<sante> data: I'll produce some and I'll post it asap
<sante> data: I'm talking about deploying juju service on an openstack cloud, not talking about Maas or bare metal
<data> which environment is that?
<lazyPower> sante: which charm are you trying to deploy?
<data> so the machines are vms in openstack or are you using juju to deploy openstack?
<data> thanks, that was what I wanted to ask ;)
<sante> data: http://pastebin.com/jSvCzrN4 here are the errors I get when i try to bootstrap using the trusty images
<lazyPower> looks like its missing simplestreams data
<lazyPower> i'm not sure how to fix that
<lazyPower> try bootstrap with --upload-tools
<lazyPower> also, why are you doing this as root?
<sante> In the environments.yaml I've got a URL that I know works when I deploy the precise image. I'm deploying as root for no specific reason
<sante> The fact is that it seems it's not yet the moment to deploy trusty images with juju because there are defaults still pointing to precise. Is this correct?
<sante> In any case which is the best way to run the latest juju-gui? thanks
<lazyPower> sante: juju deploy juju-gui will always give you the latest juju-gui
<lazyPower> however, we do have trusty charms. the library isn't very big
<lazyPower> let me get the link for the list hang on
<lazyPower> http://manage.jujucharms.com/charms/trusty
<lazyPower> you'll see that most of the trusty based charms are openstack
<sante> lazyPower: thanks
<lazyPower> np
<sinzui> rbasak, I have about 20 free now, and I may have 15-20 minutes next hour because I expect a meeting to end early
<rbasak> sinzui: sorry I missed that. Probably not enough time now as I need to reboot to enter a Hangout. Ping me if your next meeting ends early?
<sinzui> rbasak, okay
<sinzui> rbasak, I am free for the next 35 minutes
<rbasak> sinzui: sorry, just realised I have a call :(
<sinzui> :(
<khuss> I want to create a juju node with an image which will not require ssh login. In other words, login with just username/password. How do I get a new image to MAAS? Where are the images located?
<lazyPower> avoine: ping
<avoine> lazyPower: pong
<lazyPower> avoine: hey, i'm looking at https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/pure-python/+merge/226742
<lazyPower> I've got concerns that not much is being gained by this conversion.
<lazyPower> you call out this conversion to pure python is a stop gap prior to moving back to ansible?
<avoine> lazyPower: yes and no
<lazyPower> can you expand a bit?
<avoine> I was experimenting with Ansible but it turns out to be more difficult than I was expecting
<avoine> and the current charm in the store is broken
<avoine> so I kind of revert back to an older version but I add the tests that I was using for testing the Ansible branch
<avoine> lazyPower: What is your concern about the charm?
<lazyPower> biggest concern was the flip-flopping of the tech powering the charm and what the implications of that would be.
<avoine> lazyPower: also, it looks like a big merge because I updated charm-helpers
<lazyPower> it fails proof so it was nacked on this review anyway; i just wanted to get a feel for where you think the future of the charm is headed
<lazyPower> i've got no problems with pulling ansible support and going pure python
<lazyPower> i just wanted to make sure we weren't going down a winding road of crazy patching and unpatching.
<lazyPower> carry on then sir
<avoine> lazyPower: ok
<lazyPower> avoine: feel free to ping me direct when you get the MP cleaned up
<lazyPower> i'll take another look and go through it
<avoine> lazyPower: ok
<data> hey, I used juju to deploy an openstack cluster, but 1) neutron configuration is severely wrong, from what I can see, 2) it deployed no l2 agent to the compute nodes. What did I do wrong?
<ctlaugh> I would like to use the python-jujuclient API, but am having trouble getting started... can someone please point me to an example of how to use?  Specifically what to do to initially connect, and what needs to be in place and running?  Thank you
<automatemecolema> So I'm trying to deploy a bundle file on juju canvas and I get an error saying no bundle name provided
<automatemecolema> thoughts?
<automatemecolema> where can I monitor logs for juju?
<nottrobin> does / will juju support deploying docker containers ?
#juju 2014-07-25
<ctlaugh> can anyone offer some assistance with python-jujuclient?
<kirkland> marcoceppi: howdy!  can you give some attention to https://bugs.launchpad.net/charms/+bug/1342847 and https://bugs.launchpad.net/charms/+bug/1342843 ?
<_mup_> Bug #1342847: please add transcode-cluster bundle to the charm store <Juju Charms Collection:New> <https://launchpad.net/bugs/1342847>
<_mup_> Bug #1342843: please include transcode charm in charm store <Juju Charms Collection:New> <https://launchpad.net/bugs/1342843>
<marcoceppi> kirkland: sure thing
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: mbruzek && whit || News and stuff: http://reddit.com/r/juju
<tvansteenburgh> anyone seen this and know how to work around? http://pastebin.ubuntu.com/7856024/
<marcoceppi> tvansteenburgh: is that local provider in lxc?
<kirkland> marcoceppi: thanks;  it would be fantastic if that could land today-ish :-)
<tvansteenburgh> marcoceppi: no, local provider in virtualbox
<jose> happy sysadmin day!
<mbruzek> to you to jose
<mbruzek> Happy Friday jose!
<jose> thanks!
<mbruzek> kirkland the transcode README could be markdown; looks like you duplicated a step (step 5) also
<mbruzek> kirkland, charm add readme will generate the markdown template that is recommended
<kirkland> mbruzek: okay, great -- is that all that's blocking it?
<mbruzek> kirkland, no it is just what I found so far
<kirkland> mbruzek: Error: add is not a valid subcommand
<jose> kirkland: try `juju charm add readme`?
<kirkland> http://paste.ubuntu.com/7857501/
<kirkland> mbruzek: can you just point me to your favorite README.md in the charm store, and I'll clone that?
<mbruzek> kirkland, in meeting just a second.  jose has a few good ones.
<mbruzek> jose can you give kirkland a good example
<jose> sure, owncloud will do, sec, grabbing the link
<kirkland> thx
<jose> kirkland: http://paste.ubuntu.com/7857553/ is the one given by 'charm add readme'
<jose> also, I quite like the Chamilo one
<jose> https://bazaar.launchpad.net/~charmers/charms/precise/chamilo/trunk/view/head:/README.md
<sebas5384> hey! o/
<sebas5384> :)
<sebas5384> there's any way that I can help to get the Drupal charm reviewed ?
<sebas5384> :)
<sebas5384> we are planning to publicize how to deploy and scale Drupal using the charm
<sebas5384> and also, being a recommended charm is really important, and the nice logo will appear on the charm too hehe
<kirkland> mbruzek: jose: okay, readme now markdown;  what's next?
<jose> kirkland: which branch should I take a look at?
<marcoceppi> sebas5384: it's definitely in the review queue, http://manage.jujucharms.com/tools/review-queue should be looked at shortly!
 * jose just joined for the readme stuff :)
<sebas5384> thanks for the update marcoceppi !
<kirkland> jose: mbruzek: looks like both of you are deploying now?
 * jose is
<mbruzek> kirkland, I deployed and tried the first config example in the new README
 * kirkland is deploying now, too
<mbruzek> kirkland, looking at the logs on transcode/0 I see a loop
<kirkland> mbruzek: yep, waiting until all worker nodes are done
<kirkland> mbruzek: how many nodes/
<kirkland> ?
<mbruzek> 4 total
<mbruzek> http://pastebin.ubuntu.com/7858501/
<mbruzek> It does not seem to exit from this repeating pattern
<mbruzek> looks like 0 is done.
<jose> kirkland: I'll ping you in 5-10 when this is done :)
<kirkland> mbruzek: can I see ls -alF /srv?
<mbruzek> http://pastebin.ubuntu.com/7858521/
<kirkland> mbruzek: uno momento, I'm trying to reproduce it now
<mbruzek> kirkland, Where is the output written to?  I didn't see that in the new README.  The old README told me about the web url to hit, but not what the output file would be named.
<kirkland> mbruzek: good point;  I'll update that.  it'll be in /srv/, which is actually served out by apache2 on every single node
<mbruzek> kirkland, I also copied a video from my phone to the system. that is what 20140408_*.mp4 is
<kirkland> mbruzek: so you can just point a browser to any node
<kirkland> mbruzek: cool, that should work
<mbruzek> kirkland, seems stuck on the first one.
<kirkland> mbruzek: which bzr rev of the charm are you on?
<kirkland> mbruzek: r8?
<mbruzek> kirkland, no let me destroy and boostrap again
<kirkland> mbruzek: okay, I'm bootstrapping now
<kirkland> mbruzek: deploying....
<kirkland> mbruzek: total of 8 nodes
<mbruzek> kirkland, I have a total of 5 now
<kirkland> mbarnett: k
<kirkland> mbruzek: k;  mbarnett: sorry :-)
<kirkland> mbruzek: nearly up...
<mbruzek> kirkland, ran this command $ juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg output_size=640x360
<kirkland> jose: 25 nodes, is that right?
<jose> kirkland: yep! 25 nodes
<kirkland> mbruzek: I just ran the same
<kirkland> mbruzek: downloading mpg...
<kirkland> mbruzek: mine split into 8 parts, and they're each transcoding now
<mbruzek> kirkland, I don't see the mount on transcode/0 .  If I run sudo mount | grep srv I get nothing.
<kirkland> mbruzek: did you add the relation?
<mbruzek> Yes I did.  http://paste.ubuntu.com/7858678/
<kirkland> mbruzek: is /srv/ mounted correctly in any of the other transcode/* ?
<kirkland> mbruzek: I'm running right now against MAAS, on an Orange Box, and everything is mounted correctly
<kirkland> mbruzek: perhaps a problem with NFS traffic in EC2?
<jose> EC2 works fine
<mbruzek> kirkland, I am running local lxc containers
<kirkland> mbruzek: oh, hmm, well, maybe nfs isn't the best shared-fs for lxc?
<kirkland> mbruzek: I've never tried nfs mounts inside of lxc
 * mbruzek is switching to hp cloud
<kirkland> interestingly, I just had 1 out of my 8 nodes not mount NFS correctly
<kirkland> 2014-07-25 19:52:48 INFO shared-fs-relation-changed + mount -t nfs -o rsize=8192,wsize=8192 10.14.100.6:/srv/data/transcode /srv
<kirkland> 2014-07-25 19:52:48 INFO shared-fs-relation-changed mount.nfs: access denied by server while mounting 10.14.100.6:/srv/data/transcode
<kirkland> 2014-07-25 19:52:48 INFO shared-fs-relation-changed + juju-log 'mount failed: /srv'
<kirkland> looks like the nfs service wasn't up yet, when I tried to make this relation
<mbruzek> kirkland, I saw a similar error in my log (which is now sadly deleted)
<mbruzek> kirkland, you have retry logic in the mount though right?
<kirkland> mbruzek: I'll need to look again
<mbruzek> for try in {1..3}; do
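Something along those lines, as a minimal retry sketch (the server and export path variables are placeholders):

    for try in {1..60}; do
        mount -t nfs -o rsize=8192,wsize=8192 "${nfs_server}:${export_path}" /srv && break
        juju-log "NFS mount attempt ${try} failed, sleeping before retry"
        sleep 10
    done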
<kirkland> mbruzek: okay;  I hand mounted nfs on the one bad node, and did a juju set, and my transcode job finished immediately
<kirkland>    juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg
<kirkland> jose: where are you stuck now?
<jose> kirkland: check the logs - it says something about invalid data
<kirkland> jose: ah, okay, let me try that url
<kirkland> jose: tbh, I haven't tried a .divx file yet;  I've successfully done avi, mpg, mp4, mkv, ogg, mov
<jose> so there's another thing there - if something fails on the other nodes it's gonna be stuck in a loop
<mbruzek> kirkland, does the basename get renamed or does the output have the same extension
<kirkland> jose: I suspect the install hook might need to add some additional codec packages
<mbruzek> ?
<jose> and there's no way for me to touch and cut that loop
<kirkland> mbruzek: the output will tack on the resolution, codecs, and an mp4
<mbruzek> kirkland, ack
<kirkland> mbruzek: so for mine: diditdoneit.mpg_640x360_x264_aac.mp4
<kirkland> mbarnett: I'll add that to the readme
<kirkland> mbruzek: ^
<kirkland> jose: downloading .divx
<jose> kirkland: what do you mean?
<kirkland> jose: I'm trying to reproduce your failure
<jose> oh, ok
<kirkland> jose: and my run is currently wgetting your .divx example
<kirkland> jose: I'm trying to figure out what additional packages might be necessary to support that
<jose> ok, cool!
<kirkland> jose: okay, same problem here
<kirkland> mbruzek: I upped the retries in the shared-fs relation from 3 to 60
<mbruzek> jose and kirkland if the divx codec is not free it would be appropriate to list that in the known limitations.
<kirkland> mbruzek: ack;
 * mbruzek suspects divx is not free, remembers something about divx in the past
<automatemecolema> so I have a unit of a service stuck in a dying state. Any way to get this guy to just die?
<mbruzek> automatemecolema, You could juju destroy-machine # with the one it is on
<mbruzek> automatemecolema, by chance were you using debug-hooks ?
<kirkland> jose: okay -- could you try a different input format?
<jose> kirkland: I could, but I'll have to re-deploy - the loop cannot be cut
<kirkland> jose: ?
<kirkland> jose: sure you can
<kirkland> just update the config
<kirkland> there's a couple of killalls in there
<jose> but the first instance of the config-changed hook would need to be terminated
<kirkland> jose: this just worked fine for me:    juju set transcode input_url=http://download.blender.org/demo/old_demos/diditdoneit.mpg
<kirkland> jose: hmm
<mbruzek> kirkland, jose brings up a good point.  When my local one was stuck I tried different config options and the old loop never exited.
<kirkland> jose: okay, I can add a kill switch
<jose> would be awesome
<kirkland> jose: mbruzek: for now, you can just touch the DONE files
<kirkland>                 while [ $(ls ${filename}.part*.${format}.DONE | wc -l) -lt $total_nodes ]; do
<automatemecolema> mbruzek, I think we'll just delete the machine, I was just trying to avoid that
<jose> kirkland: are you applying your change now? so I can branch and re-deploy
<automatemecolema> mbruzek, my consultant was logged into debug-hooks at some point, but not sure when
<kirkland> jose: yes, give me a minute
<jose> np
<mbruzek> automatemecolema, In some cases the debug hooks command will prevent a charm from completely dying
<mbruzek> automatemecolema, you can try juju resolved --retry charm/#
<kirkland> jose: mbruzek: how would you like the kill switch to work?  touch a file?  add a config option killall=true
<automatemecolema> mbruzek, yea I tried that several times
<jose> kirkland: probably not a kill switch
<jose> but 'if retried for 10m then quit'
<kirkland>   kill_all_jobs:
<kirkland>     type: boolean
<kirkland>     description: "kill switch for terminating all jobs"
<kirkland>     default: false
<jose> another thingy there
<kirkland> that way, you can set it to true, and then back to false, to re-run
<mbruzek> kirkland, the loop could check for a stop file of some kind. That would only be written if a new config-changed was trying to do something
<jose> mbruzek: would two config-changed's run at the same time?
<kirkland> actually, that would work
<mbruzek> jose, not sure... marcoceppi would know
<jose> kirkland: problem with that boolean option is that config-get values don't change until the new run
<jose> mbruzek: I believe not because juju is event based, and one event needs to finish for the other to run
<jose> but let's wait for an answer :)
<mbruzek> marcoceppi, will a new config-get run if the old one is still looping?
<mbruzek> kirkland, this is kind of a big change, but could you fork the process to a script so the config-changed hook exits immediately?
<mbruzek> kirkland, once the charm gets all the information it needs, just call another bash script to do the dirty work?  The loops in the new script could check for a stop flag
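A sketch of that split, with the long-running wait in a helper script that honours a stop flag (file and variable names are illustrative):

    # run_transcode_loop.sh -- called from config-changed, not a hook itself
    while [ "$(ls "${filename}".part*."${format}".DONE 2>/dev/null | wc -l)" -lt "${total_nodes}" ]; do
        if [ -f /srv/STOP ]; then
            logger "transcode: stop flag found, abandoning this job"
            exit 0
        fi
        sleep 5
    done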
<kirkland> mbruzek: is there an easy way for my charm to install a file?
<kirkland> mbruzek: and a here-doc is not what I'm looking for
<mbruzek> yes.
<mbruzek> copy it from the charm directory
<mbruzek> kirkland, I have created a files/ directory within the charm.
<kirkland> mbruzek: okay -- I'm game for that
<kirkland> mbruzek: can you give the the cp rune?
<kirkland> mbruzek: an example of where I'm copying it from?
<kirkland> mbruzek: or, rather, how to call it?  it doesn't need to be copied
<mbruzek> cp ${CHARM_HOME}/files/file.tar.gz /tmp/
<mbruzek> kirkland, you could just put another bash script in the hooks directory or charm root
<mbruzek> source run_transcode_loop.sh arg1 arg2 arg3
<mbruzek> not source
<mbruzek> my mistake
<mbruzek> ./run_transcode_loop.sh would be in the charm dir
<mbruzek> ./hooks/run_transcode_loop.sh
<mbruzek> kirkland, is that what you are asking?
<mbruzek> kirkland, ${CHARM_DIR} == cwd in a hook.
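So the hook side stays small; roughly like this (config option names are taken from the earlier juju set examples, the log path and stop-flag file are illustrative and match the sketch above):

    #!/bin/bash
    # hooks/config-changed (sketch) -- gather config, then hand off and return quickly
    set -eu
    input_url=$(config-get input_url)
    output_size=$(config-get output_size)
    rm -f /srv/STOP
    nohup "${CHARM_DIR}/run_transcode_loop.sh" "${input_url}" "${output_size}" >> /var/log/transcode.log 2>&1 &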
<kirkland> mbruzek: yeah, I have the change;  testing locally now
<mbruzek> kirkland, I have a hp bootstrapped and ready to test next revision
<kirkland> mbruzek: ack, let me put this through a quick local maas test
<mbruzek> kirkland, yep
<kirkland> mbruzek: http://paste.ubuntu.com/7858922/
<kirkland> mbruzek: maybe eyeball that for me?
<mbruzek> kirkland, I have lots of other stuff to work on in the mean time.
<mbruzek> kirkland, quick look over, I would suggest keeping the juju bits in the hook and just calling transcode with all the arguments that you compute in the hook.
<mbruzek> kirkland, but as the author feel free to override me on that.
<mbruzek> kirkland, I was thinking the transcode script could be juju free but perhaps that is too big of a change here.
<kirkland> mbruzek: more like this?  http://paste.ubuntu.com/7858971/
<mbruzek> yep
<mbruzek> I am sure both will work.
<kirkland> mbruzek: so I can just run ./transcode from within config-changed, if transcode is in the same dir as config-changed, right?
<mbruzek> kirkland, no I believe the cwd is charm root
<mbruzek> I believe the proper way is to type ./hooks/transcode
<mbruzek> iirc
<kirkland> doh
<mbruzek> Did I mislead you Homer?
<kirkland> mbruzek: I'm just confused about the cwd
<mbruzek> OK
<mbruzek> Hey tvansteenburgh do you have a minute?
<tvansteenburgh> yep
<jose> kirkland: he means that the folder from which the scripts are run is your charm root
<jose> bear in mind that your scripts are in the hooks folder
<kirkland> jose: right -- I'm curious about the cwd when a hook runs
<tvansteenburgh> mbruzek: yeah what's up?
<jose> it's /var/lib/juju/agents/unit-charmname-number/charm/
<jose> that is exported as a variable, called CHARM_DIR
<jose> so every time you call $CHARM_DIR, it's taking you to that path
<mbruzek> I am trying to deploy a bundle with a local reference to the charm do you remember what we decided worked?
<tvansteenburgh> mbruzek: set the 'branch' key for the service
<tvansteenburgh> (to the local path of the charm)
<mbruzek> http://pastebin.ubuntu.com/7859181/
<mbruzek> I am getting a lot of key errors
<mbruzek> but that is using   charm: /home/ubuntu/charms/trusty/postgresql-psql
<mbruzek> Will switch this to branch and try again
<mbruzek> tvansteenburgh, We had this problem solved, I thought it was charm: /path/to/charm but I just tested it with branch.
<mbruzek> http://pastebin.ubuntu.com/7859194/
<mbruzek> tvansteenburgh, Can you have a look and tell me where I am wrong?
<mbruzek> same KeyError with branch as with charm
<tvansteenburgh> mbruzek: do you still have your local amulet changes
<mbruzek> tvansteenburgh, yes I believe so, but that is a good question, how can I verify?
<tvansteenburgh> cd to amulet dir, run `git status`
<kirkland> fuck
<mbruzek> modified:   amulet/deployer.py
<kirkland> it's not working anymore
<kirkland> http://paste.ubuntu.com/7859278/
<mbruzek> kirkland, looking
<kirkland> seems the script just dies now inside of that while loop
<kirkland> after the sleep
<kirkland> disappears from ps
<tvansteenburgh> mbruzek: `git diff`
<mbruzek> tvansteenburgh, http://pastebin.ubuntu.com/7859281/
<tvansteenburgh> ok, those have already been added to upstream
<kirkland> transcode/0 is currently doing the download, and that's going, but it seems to exit right after the wget
<kirkland> transcode/1-N just all die inside of that while loop
<kirkland> I'm at a loss for idea
<kirkland> ideas
<kirkland> I'm feeling at this point the charm store is more trouble than it's worth
<mbruzek> kirkland, I am looking, trying to find what command is causing the hook to exit?
<mbruzek> kirkland, could it be another config-changed event is coming by and killing this first transcode?
<kirkland> mbruzek: that would be my best shot in the dark
<mbruzek> kirkland, what is dying the transcode or the hook?
<bloodearnest> is there a way to juju set a value to either a) an empty string or b) back to its default value?
<mbruzek> bloodearnest, does juju set charm key="" not work?
<bloodearnest> mbruzek: nope
<bloodearnest> nor = "''", = '""' or just =
<bloodearnest> this is on 1.20
<bloodearnest> I should say, juju set charm setting='""' does work, but it sets the value to ""
<mbruzek> bloodearnest, I was not aware that setting to empty string threw an exception
<bloodearnest> mbruzek: yeah, that's new with 1.20
<mbruzek> http://pastebin.ubuntu.com/7859627/
<mbruzek> bloodearnest, You see a stack trace like that?
<bloodearnest> mbruzek: I've been seeing quite a few panics, although not always fatal (as in, juju deploy x shows traceback, but the deploy has been kicked off)
<bloodearnest> mbruzek: exactly
<mbruzek> That should not be that way.  you should ping in #juju-dev
<bloodearnest> mbruzek: kk
<mbruzek> bloodearnest, and open a bug against juju-core
<mbruzek> It should not panic like that
<bloodearnest> mbruzek: https://bugs.launchpad.net/juju-core/+bug/1348829
<_mup_> Bug #1348829: juju-core client panics with juju set empty string <juju-core:New> <https://launchpad.net/bugs/1348829>
#juju 2014-07-26
<ayr-ton> I'm trying to add a unit to a manual environment, I got the error: "ERROR illegal base64 data at input byte 9". Someone know how to fix this?
<ayr-ton> I also tried, just now, updating to the devel ppa, but the base64 error is still happening. Could the fact that my machine is 64-bit and the server 32-bit be the problem?
<ayr-ton> Ok. I fixed the first problem; now, when I run bootstrap it says: "initialising SSH storage failed: failed to create storage dir: subprocess encountered error code 1. sudo: unable to fork: Cannot allocate memory". Do I need more than 512MB of memory in the bootstrap environment?
<ayr-ton> And now I have the bootstrap node with 1GB, but the error "illegal base64 data at input byte 9" is back.
#juju 2015-07-20
<lukasa_work> Hey folks. I'm having trouble with juju destroy-environment timing out and being unable to contact the juju server. Anyone got suggestions?
<lazyPower> lukasa_work: stale environment you just want to get rid of?
<lazyPower> lukasa_work: or do you still have machines active in the env that you need to have destroyed
<lukasa_work> Still got active machines
<lazyPower> hmm.. if its not able to reach the state machine, this gets a bit trickier. Is this an openstack env?
<lazyPower> lukasa_work: to take a sledgehammer to the situation, juju destroy-environment --force + manual deletion of any leftover machines is an option. However if it cannot reach the state server it lends itself to the idea that either the api tanked on the state server, or something has changed in networking
<lukasa_work> Yeah, I think the first
<lukasa_work> But I don't really know how to debug it. ;)
<lazyPower> juju ssh ubuntu@<ip> -i ~/.juju/ssh/juju_id_rsa
<lazyPower> should be able to sudo from there and go inspect logs in /var/log/juju
<lazyPower> sorry, no juju ssh, just ssh
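i.e. something along these lines, then poke around the machine agent's log (the address is a placeholder):

    ssh -i ~/.juju/ssh/juju_id_rsa ubuntu@<state-server-ip>
    sudo tail -n 100 /var/log/juju/machine-0.log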
<lazyPower> and that's my cue to go get my morning coffee
<lukasa_work> =P yeah, I checked that log, nothing exciting in there. =P
<lazyPower> hmm
<lukasa_work> juju-db isn't running, but it doesn't appear to be logging
<lazyPower> service jujud-machine-0 status is stopped as well?
<lukasa_work> Nope
<lazyPower> can you restart those services?
<lukasa_work> Did so, no change. =)
<lukasa_work> juju-db fails to run
<lazyPower> you should see output in syslog from trying to start that as to why its failing
<lazyPower> it's typically due to a db lock: mongo tanked and didn't clean up properly, so the service refuses to restart until it's manually massaged
<lukasa_work> Hmmmmm
<lukasa_work> Nothing in syslog from juju-db
<lazyPower> natefinch: do you have a moment to help debug juju-db not running?
<lazyPower> lukasa_work: pinged the big guns in #dev. Sorry i'm not much help :|
<natefinch> lazyPower: sure
<lukasa_work> That's fine lazyPower, I'll clean it up. =)
<natefinch> lukasa_work: if it's a matter of destroy environment not working... probably easiest thing is just to manually delete machines from the provider yourself.
<lazyPower> natefinch: i'm more concerned why the juju-db failed ;|
<lazyPower> i haven't seen that failure in a loooooong time.
<natefinch> lazyPower: well, yes.
<lazyPower> like, 1.18 days
<natefinch> lukasa_work: what version of juju?
<lukasa_work> 1.23.3-trusty-i386
<Odd_Bloke> Hello all, I have a partner who are seeing "2015-07-20 11:48:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)" when trying to bootstrap a manual environment.
<Odd_Bloke> Can anyone give me pointers as to how to start debugging this?
<natefinch> Odd_Bloke: that's almost always a networking issue.
<Odd_Bloke> natefinch: Is that local trying to connect to the remote node?
<lazyPower> Odd_Bloke: check firewall settings on the provider.
<lazyPower> Odd_Bloke: i ran into that when bootstrapping in a VPC and i didn't have the 27017 port opened in the firewall config
<natefinch> Odd_Bloke: that's juju trying to connect to mongo... ironically, it usually is  getting the info from mongo itself and then trying to connect to the master that is indicated in master.  Usually it's a matter of the computer name not being resolvable to its IP address
<natefinch> er, indicated in its database
<lazyPower> yeah, that or its configured to use the public ip, which i've run into as well
<lazyPower> however i'm not entirely certain that wasn't *my* fault
<Odd_Bloke> Ah, OK, non-resolvable local name is also a decent bet.
<coreycb> jamespage, gnuoy: I could use a review of this quantum-gateway EOL when one of you have a moment:  https://code.launchpad.net/~corey.bryant/charms/trusty/quantum-gateway/end-of-life/+merge/265035
<natefinch> lazyPower, lukasa_work: sorry, I have a meeting now and need to run some errands after.  Can't really do much with investigating the DB problem now.
<lazyPower> thanks for helping natefinch, cheers
<jamespage> coreycb, I have one request for the README - other than that looks OK
<beisner> fyi jamespage, looks like an undercloud hiccup false-failed a couple of tests on your neutron-gateway merge prop ("message": "Connection to neutron failed: Maximum attempts reached").   will re-kick those as soon as i confirm that the enviro is ok.
<jamespage> beisner, ah ok
<jamespage> nice
<beisner> fyi coreycb, since re-enabling the 052-basic-trusty-kilo-git amulet test on nova-compute next, it is consistently failing w/ "create_instance ERROR: instance creation timed out"
<coreycb> beisner, ok I'll take a look
<beisner> coreycb, ack thanks!
<jamespage> beisner, coreycb: hey  - does a charm have to be a bzr branch to run its amulet tests? I'm getting oddness with a git based one I'm working on
<beisner> jamespage, does deployer support git branches?  iiuc, amulet uses deployer under the hood.
<beisner> jamespage, ah there are indeed some lp:specific amulet helpers too.
<jamespage> hmm
<lazyPower> jamespage: i use git based charms all the time. they work
<lazyPower> as anecdotal evidence. I can point you to etcd/docker/flannel-docker/kubernetes and company
<rick_h_> jamespage: we had a git based issue as well I think. I'm trying to ask bac what was up if he recalls. We did a PR against the deployer for something
<jamespage> lazyPower, think it might be something todo with our helpers for openstack
<jamespage> lazyPower, I already have the charm locally - everything else deploys, and then deployer borks on the charm I'm testing
<beisner> jamespage, lazyPower - i'm fairly certain the  openstack/amulet/deployment.py helpers are going to need more smarts
<lazyPower> jamespage: define borked
<jamespage> just getting the error message now
<jamespage> lazyPower, beisner:
<jamespage> RuntimeError: ('No charm metadata @ %s', 'trusty/openvswitch-odl/metadata.yaml')
<jamespage> openvswitch-odl being my charm in which i ran juju test
<lazyPower> jamespage: are you using deployer from repo or pip?
<jamespage> repo I think
<jamespage> I can do a venv - lemme try that
<jamespage> ah-ha!
<jamespage> 2015-07-20 16:20:01 Could not branch /home/ubuntu/tools/ovs-odl/trusty/openvswitch-odl to trusty/openvswitch-odl
<coreycb> jamespage, got it figured out?
<lazyPower> interesting that its trying to branch a git repository a-la bzr
<jose> Odd_Bloke: ping
<jcastro> marcoceppi: maybe we should spit out the hardware info, like, tacked onto a result
<jcastro> with like, CPUs, drive info, etc.
<jcastro> sort of how landscape does
<marcoceppi> jcastro: no, there's no way to gather that data
<marcoceppi> jcastro: but the bundle will have constraints
<marcoceppi> which will be either the instance-type submitted, or the hardware output (cpu-cores, cpu-power, mem, etc)
<jcastro> oh ok
<jcastro> so in other words, just make the constraints very specific
<marcoceppi> jcastro: well, instance type will be enough to determine exactly what the profile of the machine is
<marcoceppi> and if no constraints are given, IE default instance size in Juju, you'll get the hardware data juju provides
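Concretely, that means either pinning an instance type or spelling out the resources at deploy time, e.g. (the service name and sizes are just examples):

    juju deploy mysql --constraints "instance-type=m3.xlarge"
    juju deploy mysql --constraints "cpu-cores=4 mem=8G"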
#juju 2015-07-21
<h0mer___> hey guys can anyone here help out with a maas/juju/openstack networking question?
<tasdomas> hi
<tasdomas> I'm working on introducing action support to amulet - should I propose the changes on launchpad or github?
<Odd_Bloke> jose: Pong.
<jamespage> lazyPower, beisner, coreycb: bug in deployer for git repos
<jamespage> https://code.launchpad.net/~james-page/juju-deployer/git-cwd/+merge/265365
<jamespage> tvansteenburgh, dpb1: any chance one of you could review https://code.launchpad.net/~james-page/juju-deployer/git-cwd/+merge/265365
<jamespage> that fixes from challenges I had with git support
<tvansteenburgh> jamespage: i'll take a look
<beisner> jamespage, ah nice ^
<aisrael> tasdomas: On github: https://github.com/juju/amulet
<beisner> hi jamespage, please review ... o-c-t: Q-G rename + pxc + wily/liberty prep   https://code.launchpad.net/~1chb1n/openstack-charm-testing/liberty-prep/+merge/265106   tia!
<beisner> hi gnuoy - not quite ready for final review, but wanted to alert you to some re-working in the openstack-mojo-specs @ https://code.launchpad.net/~1chb1n/openstack-mojo-specs/baremetal-bundle-constrainer/+merge/264558
<beisner> ^   this is what uosci is consuming atm, resolves several issues in automation, a few more yet to iron out
<gnuoy> ack, thanks
<beisner> gnuoy, the lint checker just got a lot pickier ;-)   needed to ensure that specs are using sane u:os combos.
<gnuoy> ah, tip top
<jamespage> beisner, +1 lgtm
<beisner> jamespage, ty
<beisner> jamespage, ok, that's landed
<jamespage> awesome
<arosales> marcoceppi, aisrael, tvansteenburgh: I am loving cloud-benchmarks.org
<tvansteenburgh> arosales: <3
<marcoceppi> arosales: \o/
<arosales> had an idea I wanted to bounce off you guys
<marcoceppi> arosales: go for it
<arosales> if someone notices a key performance-enhancing method is missing from the charm, how can we make it clear to them how to contribute to the charm?
<arosales> or they would like to add a benchmark.
<marcoceppi> arosales: yeah, we have a little bit of a call to action at the bottom, but we don't make it clear. We do link to the jujudocs on writing a benchmark charm/action though
<marcoceppi> arosales: we have a follow-up blog post which addresses the comments our latest post has been getting, I think in there we'll discuss how contributions work since we're patching the charm in a few ways to iterate on feedback
<marcoceppi> arosales: adding it to the about page as well, may not be a bad place at a high level
<arosales> About page may be nice for some additional pointers to the docs and how to contribute to charms
<arosales> marcoceppi, be nice if on say "http://cloud-benchmarks.org/submissions/6" or "http://cloud-benchmarks.org/services/cassandra" there was a call to action there
<marcoceppi> arosales: ack
 * arosales just thinking outloud atm
<arosales> wanted to see what folks thought
<marcoceppi> arosales: I think it makes sense on the services overview page
<marcoceppi> results page as well, but not 100% sure where it'd fit in there
<arosales> as we would like to make it super simple for folks to see the benchmarks (done) but also easily contribute code to improve the charms performance
<arosales> marcoceppi, may have to live in the submission page and parse out charm URLs
<arosales> marcoceppi, something to think about. Specifically, if we would like folks to contribute, make the UI for doing so amazingly easy
<arosales> in this case contribute performance tunings back to the charm. . .
<marcoceppi>  it's a good point
<tvansteenburgh> yeah it would be nice to parse charm urls out of the bundle and link to them at least
 * arosales really likes how you got the juju quickstart command right there
<arosales> makes it super simple to try
 * arosales was just thinking of try, "branch," tune, submit, share workflow and how that could be modeled for benchmarking
<arosales> food for thought
<tvansteenburgh> good ideas arosales, thanks
<g3naro> hi,, anyone used juju to deploy docker containers on aws ?
<lazyPower> g3naro: I sure have
<g3naro> lazyPower: have you got any ref. guides for this?
<lazyPower> g3naro: are you looking just to spinup the docker based infrastructure and deploy containers to that? or are you looking to deploy your products docker container with juju?
<g3naro> exactly what I'm trying to set up
<lazyPower> you have some options here
<g3naro> deploy the products (docker container) into aws
<g3naro> i guess try to see if it makes sense to use juju for that in first place, im not too experience with juju yet either :)
<lazyPower> well, we have a charm that can deploy the docker infrastructure - which takes care of setting up the latest stable docker, configuring the backend storage service (like AUFS enablement) - and to deploy a container into that there are 2 options. Charm it up as a subordinate and relate it to docker, or we can skip the docker charm altogether and write a quick bash charm (50 lines or less) to deploy that specific container.
<g3naro> ohhh interesting
<g3naro> so does infrastructure include things like elastic beanstalk (aws load-balancing app)
<g3naro> that sounds like a very similar use case as well
<lazyPower> Juju doesn't natively integrate with beanstalk, at least not today.
<g3naro> ahh ok
<g3naro> so that could be a separate task
<lazyPower> if you're looking to do load balancing, there's work and charms already written that leverages consul, consul-template, dns, and nginx to do this :)
<g3naro> ok, nginx probably sounds like the way to go
<g3naro> i had the understanding that the aws ebs with docker does the service registry in one system
<g3naro> but maybe i just dont understand it well enough
<lazyPower> ELB can work - but unfortunately we don't have a representation of ELB in juju as it stands today
<g3naro> ahh kk no worries, then
<lazyPower> so what you could do, is deploy with juju, then configure ELB in the AWS control panel to point at those nodes.
<g3naro> yeah that what i was thinking
<lazyPower> which sounds closer to what you want to achieve
<lazyPower> g3naro: https://github.com/chuckbutler/docker-charm is a good place to start. The readme gives you just enough to drill down and run a deployment
<g3naro> interesting
<g3naro> yeah i have deployed that into lxc container actually
<lazyPower> g3naro: the merge for AUFS backend storage is still pending for the charm store unfortunately :( But it's available in my namespace and offers an easy upgrade path from there.
<lazyPower> I'd be highly surprised if that works. Default AppArmor profiles prevent docker from working properly in lxc
<g3naro> i havent run the app on it
<g3naro> but it worked on my box just about an hour ago
<lazyPower> impressive, you've done what i've been brain bending around for a couple months now :)
<lazyPower> i can't even get the daemon to come online
<lazyPower> it gets immediately nuked when it requests cgroups resources
<g3naro> root@g3naro-local-machine-3:~# docker --version
<g3naro> Docker version 1.0.1, build 990021a
<g3naro> i used a different charm actually
<lazyPower> ah thats deploying the package from archive
<g3naro> and just did juju switch local, juju deploy docker
<lazyPower> 1.0.1 is quite old now
<g3naro> true
<lazyPower> but that's still the same charm.
<lazyPower> juju set docker latest=true version=1.0.7 will upgrade
<g3naro> i guess you have to add repo and updated it
<g3naro> ohh
<lazyPower> :) this is why managing docker as an infrastructure component makes sense. You get independent control of docker, where if you're shipping it with a charm you lose some of that, or you're duplicating effort
<g3naro> ahhh ok,, yeah now i see the point
<lazyPower> if you're following the mailing list, we're working through "juju compose" which will allow you to inherit from the docker charm, and only maintain the hooks that are important to your service - so delivering that container and making sure it's got the proper env + relations.
<g3naro> but then what about if application is packaged as docker image
<g3naro> im not on the list, but i think i would be interested to
<g3naro> well definately would be intersted to
<lazyPower> so, using the juju compose methodology, you inherit, write a config-changed hook that's specific to your container, add any relations you need if it's going to communicate with other services, and generate your charm. As we cycle and update the docker charm you simply regenerate your charm and publish.
<lazyPower> g3naro: i'm going to run for lunch however, if you need additional information feel free to ping. I should be back in about 45 minutes to an hour.
<g3naro> interesting. ok cool man, thanks a lot
<g3naro> ill be here
<lazyPower> also my colleague mbruzek has some experience with docker and charming :) so feel free to utilize him as a resource as well
 * mbruzek waves
<g3naro> wow cool :D
<g3naro> just fyi, I've only been using juju/lxc/docker for <1 month
<g3naro> but very keen to build this out
<g3naro> so i think step1 is creating a charm that will create docker inf. in aws
<g3naro> https://jujucharms.com/u/mattyw/docker/precise/3
<g3naro> this is what i had installed actually
<g3naro> "/alias hidelevels.set set activity_hide_level parts joins quits nicks modes"
<g3naro> seems a bit too quiet now
<g3naro> :/
<mbruzek> still here
<g3naro> heheh, sorry been rambling a bit
<mbruzek> although your alias came across as text so I don't know how well that would work
<g3naro> yeah i had to re-send it
<g3naro> so.. yeah im trying to get my head around this still
<g3naro> if the application is a docker image
<mbruzek> Juju deploys a charm in AWS or GCE, or where ever.  That charm installs docker and configures it for you.  Then you can run commands on that unit to start docker containers
<g3naro> so step 1- deploy the docker inf.
<g3naro> creating ec2 instances with docker installed
<g3naro> then deploying application into those boxes
<mbruzek> g3naro: Juju can create an ec2 instance with docker inside it.  Just deploy the docker charm.
<mbruzek> Then you can run commands on that unit to start docker containers inside the vm.
<g3naro> ok, so on the vm you ship the docker image and run it
<mbruzek> g3naro: Juju doesn't ship images, it configures the base image.  So a charm installs and configures Docker on that unit.
<g3naro> ohh
<mbruzek> You could write a charm that installs docker and starts a container with your application
<mbruzek> If you run it in a container.
<g3naro> yeah that is the step needed then
<g3naro> charm the application
<mbruzek> But you could start with the base image and just install the application on the VM.
<mbruzek> But it is up to you how you want to organize and architect that
<g3naro> hmm yeah,, thats what i have to find out,, and fast :)
<mbruzek> g3naro: Well just ask here if you need help.  I have to run for lunch too.  There are others that can help.  Also lazyPower and myself use #system-zoo to discuss container stuff, you are welcome to join us there as well.
<g3naro> ok cool man! thanks so much
<g3naro> I'm trying to get started, so would this charm lazyPower provided before be suitable to test?
<mbruzek> g3naro: #juju is still your best bet for general Juju questions.   #system-zoo is a smaller group of container related technology
<g3naro> yea.. juju is still new for me, and I'll have a few more by tomorrow :)
<g3naro> im in UTC time
<mbruzek> We will be here
<g3naro> nice :)
<tasdomas> hi, I've submitted a PR that adds action support to amulet
<tasdomas> https://github.com/juju/amulet/pull/80
<mbruzek> There are folks working on Juju from all over the world.  I happen to be in CST, but we have an office in London, lots of folks on that timezone
<g3naro> yeah man, would be great to be part of ubuntu one day ;)
<mbruzek> tasdomas: Thank you for your contribution!  tvansteenburgh, or marcoceppi will likely be the ones to review that
<tasdomas> mbruzek, don't thank me yet ;-]
<mbruzek> tasdomas: if you don't get a review let me know and I can take a look as a back up.
<mbruzek> happy to have interest in making our tools better
<mbruzek> Ok I have to get going to lunch
<g3naro> ez bro
<mbruzek> be back in a few
<tvansteenburgh> tasdomas: thanks for the PR, i'll have a look in a bit
<jose> Odd_Bloke: hey, I had some questions about the ubuntu-repository-cache charm. if you had a min, mind a pm?
<marcoceppi> g3naro: hey! Thanks for the contribution, we've left some feedback, though I usually just defer to tvansteenburgh
<tvansteenburgh1> s/g3naro/tasdomas/
<marcoceppi> blehhhhhhh
 * marcoceppi disappears for a while
<lazyPower> g3naro: o/
<tasdomas> marcoceppi, thanks for the feedback
<marcoceppi> tasdomas: thanks for taking a stab at this, overall the implementation looks great, I just had some small comments that are pretty inconsequential
<tasdomas> marcoceppi, I'll try to address the comments shortly - I need the implementation of actions to be able to test my own charm (that uses actions extensively)
<marcoceppi> tasdomas: well we're starting to see a lot of actions show up, so it's pretty perfectly timed
<Odd_Bloke> jose: I'm EOD now, but rcj can probably answer them.
<Odd_Bloke> (Sorry rcj ;)
<jose> lol
<rcj> Odd_Bloke, jose: no problem, I can answer questions on that
<tasdomas> tvansteenburgh, ping?
 * arosales got a sack full of k8 shirts
<arosales> I made sure to get a few for marcoceppi cause I know he loves himself some docker
<marcoceppi> arosales: the winters are long and harsh here ;)
<arosales> marcoceppi, :-) you need something to change the midget's oil in
<tvansteenburgh> tasdomas: pong
<tasdomas> tvansteenburgh, marcoceppi - thanks for the reviews, I'll speak to you tomorrow
<tvansteenburgh> tasdomas: sounds good, thanks!
<lamont> if I have a service that was deployed and working with juju 1.18.1-0ubuntu1, and now I have juju 1.20.11-0ubuntu0.14.04.1 and it says 'ERROR no such request "NewServiceSetForClientAPI" on Client', what is the best fix?
<marcoceppi> lamont: well, that's not supposed to happen
<lamont> marcoceppi: I have an existence case to the contrary.. :(
<marcoceppi> did you do an upgrade or fresh deploy?
<lamont> https://pastebin.canonical.com/135766/
<lamont> which isn't the answer to your question
<lamont> deployed with 1.18.1 (maas host, which has been up for 6 days) - dist-upgrade yesterday brought in 1.20.11 (backported to trusty), and today, juju set fails to perform
<lamont> but the 1.18.1 binary works just fine
<lamont> marcoceppi: interestingly, it looks like 1.18 was just ignoring the error and plowing on ahead, based on what the 1.18 binary produced in the machinelog
<lamont> in slightly different questions: what do I need to restart to have juju notice that I changed the IP address of the machine?  and what side effects should I expect?
<marcoceppi> lamont: typically you need to restart the juju-machine agent on the instance, IIRC
<lamont> marcoceppi: should have no side effects?
<marcoceppi> lamont: well, 1.20 is still pretty "old", so can't say it won't have any side effects, ideally it should recover just fine. the agent will go from up to down to up again in status and in that time events will be queued but not dispatched, if a hook is running it'll likely be marked as an error
<aisrael> is debug-hooks part of juju-core
<aisrael> or a plugin?
<lazyPower> i'm fairly certain its a plugin that lives in core.
<lazyPower> but, a core dev might be a better candidate to answer that
<lazyPower> aisrael: i'm wrong, it's core - https://github.com/juju/juju/blob/596210921f2291010d94682f3fcc3cea71534efe/cmd/juju/commands/debughooks.go
<aisrael> lazyPower: ack, thanks. To open a bug report or write a patch, decisions decisions.
 * aisrael goes the lazy route
<lazyPower> aisrael: why not both?
<lazyPower> also, i had no idea you hacked in golang
<aisrael> lazyPower: Maybe after I'm out of the weeds. :/
<lazyPower> marcoceppi: stop working aisrael so hard
<aisrael> Very minor hacking in it, but it's piqued my curiosity so I just need a strong enough itch to scratch.
<lazyPower> i just recently picked up the golang bootcamp myself, so i can start learning and helping
<aisrael> There's already a bug for what I need. Almost two years old. Time to fire up the go machine
<mad4comp> Hey guys.  I'm trying to deploy a Maas/Landscape/Openstack instance with 8 machines connected to two networks, but for some reason it seems that all my vm's get spun up, but can't reach out to the internet to configure the meta-data service.  Anyone know if this is a bug?
<blr> hi there, is there a workaround for https://bugs.launchpad.net/juju-core/+bug/1464335
<mup> Bug #1464335: debug-log does not work with local provider <debug-log> <local-provider> <regression> <vivid> <juju-core:Triaged> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1464335>
<tvansteenburgh> blr: i usually just tail the log i care about under /var/log/juju*
<blr> tvansteenburgh: yep I'll do that thanks
#juju 2015-07-22
<tasdomas> hi
<tasdomas> what is the purpose of amulet.raise_status(amulet.PASS) ?
<Odd_Bloke> How large a disk does the bootstrap node need?
<Odd_Bloke> Would 10GB be sufficient?
<marcoceppi> Odd_Bloke: that should be sufficient
<Odd_Bloke> marcoceppi: Great, thanks.
<marcoceppi> tasdomas: that more or less exits the test and prints a PASS line
<tasdomas> marcoceppi, I'm asking because if I did that, the test was marked as failed
<marcoceppi> rofl, I don't think that's the intention of that command
<Odd_Bloke> marcoceppi: If you have a few minutes: https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864
<marcoceppi> Odd_Bloke: ack, taking a look
<Odd_Bloke> marcoceppi: Thanks muchly!
<Odd_Bloke> That should let me sort out issues in the ubuntu-repository-cache merge proposals.
<Odd_Bloke> Then you can merge those. ;)
<marcoceppi> \o/
<tasdomas> marcoceppi, bundletester is a great tool!
<marcoceppi> tasdomas: it is very nice, we hope to have it be what `charm test` and `juju test` run in the near future
<tasdomas> marcoceppi, that would be nice - charm test fails for me (using local provider)
<coreycb> jamespage, hey, can you take a look at my response to your comment here?  A charm-helpers fix is needed for upgrade-charm:  https://code.launchpad.net/~corey.bryant/charms/trusty/quantum-gateway/end-of-life/+merge/265035
<beisner> marcoceppi, it appears that amulet.SKIP is unexpectedly invoking --set-e behavior.  end result is a failed test run instead of a single skipped test.  I don't generally use amulet.SKIP or amulet.PASS explicitly, but dosaboy has a percona-cluster amulet test in flight which does, and that's where we are seeing this.
<beisner> marcoceppi, according to juju test -h this ^ would appear to not be the expected behavior.
<beisner> marcoceppi, i've not dug into the code, but could it be that any amulet.raise use takes us down a fail path (given tasdomas is also seeing amulet.PASS raises cause a fail)?
<tasdomas> beisner, that's what I saw - I just assumed I was using it wrong
<g3naro> yo!
<g3naro> ok, is there a charm that can build and configure a docker-registry server ?
<g3naro> or should i somehow package a puppet class that does setup for this
<mbruzek> g3naro: I am not aware of a charm that sets up a docker-registry server.  That is a good candidate for a new charm *nudge* *nudge*
<mbruzek> g3naro: charms can be written in puppet or chef, or any thing that runs on Ubuntu
<mbruzek> also centos
<mbruzek> g3naro: If you want to search for charms use this url: https://jujucharms.com/
<mbruzek> It searches charms that are in a personal namespace and ones that are in the recommended section of the charm store.
<g3naro> ahh ok great!
<marcoceppi> beisner: it shouldn't PASS -> FAIL is determined by the test runner
<marcoceppi> PASS exits with code 100, juju test has a flag to determine how to handle SKIP (as OK, or problem)
<beisner> marcoceppi, raising a SKIP or PAAS causes juju test to exit non-zero.
<marcoceppi> not sure how bundletester handles this
<beisner> PASS even
<marcoceppi> beisner: well that's not the expected outcome
<marcoceppi> and is the first I've heard of this. Do you have an example of the test/charm causing this so I can repro?
<beisner> dosaboy, ^
<marcoceppi> stub: ping
<marcoceppi> stub: actually, going to PM
<dosaboy> marcoceppi: https://code.launchpad.net/~hopem/charms/trusty/percona-cluster/min-cluster-size/+merge/265502
<dosaboy> try running 'make test'
<dosaboy> you'll see that most tests skip since they require a vip
<dosaboy> yet they all get marked as fail
<dosaboy> all the ones that skip that is
<beisner> dosaboy, even though this may be unexpected behavior, i would recommend that we allow the test to fail if it's not fed everything it needs (and not even try to SKIP it).
<beisner> dosaboy, otherwise a binary pass/fail in ci for a merge proposal may not always mean the same thing.
<marcoceppi> dosaboy beisner do you have output? I'm getting skips as expected
<marcoceppi> what version of charm-tools are you using?
<beisner> marcoceppi, one diff:   use --set-e
<beisner> marcoceppi, uosci runner injects that into the makefile so that if something fails, we have an environment to pull logs from
<dosaboy> beisner, marcoceppi: i don't use --set-e and i still get "failed: 5" which is all those that skip
<marcoceppi> bleh, I hate how half baked this tool is, it should really just always get logs
<beisner> otherwise if a test fails, the juju environment is torn down before juju test exits, and we don't really have data to inspect.
<dosaboy> charm-tools 1.5.1-0ubuntu1~ubuntu14.04.1~ppa1
<marcoceppi> this is stupid logic doing stupid things
<marcoceppi> I'll open a bug for juju test and get a patch release out
 * beisner coffees..
<marcoceppi> so, I found the problem with skip
<apuimedo> mbruzek: ping
<mbruzek> pong
<apuimedo> mbruzek: how are you?
<mbruzek> I am well.  How are things with you?
<apuimedo> mbruzek: not too bad
<apuimedo> I did the change you suggested
<apuimedo> https://code.launchpad.net/~celebdor/charms/precise/cassandra/hostname_resolve/+merge/257120
<mbruzek> great I will review it today
<apuimedo> thanks
<apuimedo> mbruzek: should I change it to needs review?
<mbruzek> yes
<mbruzek> thank you for the followup
<apuimedo> ;-)
<marcoceppi> apuimedo: any reason why you're using the precise version of cassandra and not the new trusty version?
<apuimedo> marcoceppi: the trusty version was not there when I started with this
<apuimedo> I guess I should move to it
<apuimedo> and check if it can do what I need to
<marcoceppi> apuimedo: it's available now, you should just be able to deploy the trusty version instead
<marcoceppi> it's a lot smarter about how to handle clustering and versions of cassandra
<apuimedo> I'm very glad to hear that
<apuimedo> marcoceppi: I can't see about choosing a specific version
<marcoceppi> apuimedo: how specific do you need? this supports more generic options like datastax vs apache and has support for apache cassandra 2.0 and 2.1 atm
<apuimedo> I need to install 2.0.1
<apuimedo> from datastax
<marcoceppi> you mean datastax enterprise or just the cassandra 2.0.1?
<apuimedo>         #   - deb http://debian.datastax.com/community stable main
<marcoceppi> apuimedo: you can set that in the config.yaml
<marcoceppi> install_sources
<apuimedo> marcoceppi: yes, but I need an older package from that source
<apuimedo> I guess I would have to patch this new cassandra to allow you to specify a specific version
<marcoceppi> apuimedo: I see
<marcoceppi> apuimedo: well, you can continue to use the precise version, don't get me wrong
<marcoceppi> just wanted to make sure you were aware of the trusty version
<apuimedo> I would like to use the trusty
<apuimedo> so probably what we can do is
<apuimedo> we get this merged for precise
<apuimedo> and then for the next release I patch the new cassandra charm to accept version
<apuimedo> and move our bundle to use it
<marcoceppi> cool
<marcoceppi> thanks for the all the work so far!
<apuimedo> ;-)
<mbruzek> apuimedo: What package_version can I set the cassandra charm to?
<apuimedo> I'll give you the config
<apuimedo> mbruzek: http://paste.ubuntu.com/11920807/
<g3naro> we come in peeeeace
<coreycb> jamespage, gnuoy: can one of you review this?  it's needed by neutron-gateway in order to allow upgrading from quantum-gateway to neutron-gateway.  https://code.launchpad.net/~corey.bryant/charm-helpers/upgrade-charm/+merge/265450
<pmatulis> are log levels for debug-log and juju-log the same? TRACE, DEBUG, INFO, WARNING, ERROR
<marcoceppi> pmatulis: probably
<sebas5384> lazyPower: ping
#juju 2015-07-23
<tasdomas> marcoceppi, do you think https://github.com/juju/amulet/pull/80 needs any further changes?
<tasdomas> hi
<tasdomas> does bundletester keep a cache of charms anywhere?
<tasdomas> it seems to ignore an updated local charm
<Odd_Bloke> stub: Might you be able to have a look at https://code.launchpad.net/~daniel-thewatkins/charm-helpers/unison_patches/+merge/265633?
<Odd_Bloke> marcoceppi: If stub can't, might you be able to have a look at ^?
<Odd_Bloke> stub: marcoceppi: Oh, hold on, that looks like I've screwed something up pulling the changes across to my branch.
<Odd_Bloke> stub: marcoceppi: OK, fixed now.
<stub> Odd_Bloke: I've got a MP adding hookenv.peer_relationship_id(), which would allow you to drop the peer relation argument entirely if you want (not sure who is using this or how important backwards compatibility is)
<stub> Odd_Bloke: I'm no longer using this, but I'll review since I was probably the last person to touch it.
<Odd_Bloke> rcj: You might be interested in ^
<Odd_Bloke> stub: Thanks!
<stub> Odd_Bloke: Are you just using the ssh bit btw? That is all I was using, and at the time thought it should really be separate from unison.
<Odd_Bloke> stub: I can't tell you for sure, I haven't really touched that bit of the charm.
<Odd_Bloke> I'm trying to get a completely separate change merged, but need this to land in charm-helpers to remove our local modifications to charm-helpers to get that merged. :p
<stub> Odd_Bloke: I haven't seen someone assigning to mock.call_args_list before. I use mock.reset_mock()
<stub> I guess this way preserves return_value etc.
<stub> niedbalski: You broke trunk. 'make test' fails on core.file with Python3.
<stub> Fixed.
<stub> And the intermittent openstack failure, cause that's the sort of guy I am.
<Odd_Bloke> stub: Masochistic? :p
<stub> Odd_Bloke: Avoiding my real TODO list.
<Odd_Bloke> ^_^
<Odd_Bloke> stub: Thanks for the merge. :)
<stub> np
<stub> Should I have landed that? I just spotted tarmac_tests.sh
<Odd_Bloke> marcoceppi: If you could have a look at https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/update_charm-helpers/+merge/265645, it would be much appreciated.
<Odd_Bloke> That's just a charm-helpers update (and removal of the patches we were using which have now been merged).
<Odd_Bloke> marcoceppi: https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/add-rsync-timeout/+merge/261424 is also approved by a couple of people and ready to merge.
<tasdomas> does anyone know what could be the cause of bundletester deploying an older revision of a local charm?
<lazyPower> tasdomas: git or bzr based charm?
<tasdomas> lazyPower, bzr
<lazyPower> tasdomas: do you have uncommitted changes in the repository? iirc there's a bug open about it not deploying the uncommitted changes
<tasdomas> lazyPower, that was it - thanks!
<lazyPower> tasdomas: sorry you ran into that, but glad I could help :)
<lazyPower> tasdomas: https://github.com/juju/amulet/issues/55 might want to subscribe to this bug so you know when its resolved.
<tvansteenburgh> lazyPower: i just realized why that's happening
 * tvansteenburgh goes to fix bug
<jrwren> lazyPower: I really like that branch name
<lazyPower> @jrwren - btw saw you starred gogs. You should also star gitea
<lazyPower> it's the fork that's under active dev; the gogs maintainer isn't accepting patches.
<lazyPower> jrwren: https://github.com/go-gitea/gitea
<jrwren> lazyPower: thanks. I will.
<lazyPower> just FYI - i have a gogs server deployed that will be replaced by this when they land a stable.
<lazyPower> its not ready to replace it, but will be in short order.
<lazyPower> and re: branch name - it's self-explanatory.
<lazyPower> and i was having a rough day to boot
<tasdomas> jrwren, are you writing a gogs charm?
<jrwren> tasdomas: NO!
<tasdomas> jrwren, sheesh, no need to shout ;-]
<jrwren> tasdomas: I want to be very clear. :)
<jrwren> tasdomas: sorry, even the thought of this nearly gave me panic attacks.
<tasdomas> jrwren, yeah - I spent half a day trying to charm gogs once, started with the bright idea of "I know, I'll first package it to a ppa"
<jrwren> tasdomas: hahaha, and it's go code, so it doesn't package to deb without a fight :)
<tasdomas> jrwren, not without trying to drive you mad first
<jrwren> tasdomas: well, if you need help with that step, let me know. I've gotten good at it.
<tasdomas> jrwren, yeah - I decided I don't really need gogs, so there went the motivation to charm it
<tasdomas> jrwren, built an action-based git server charm instead
<jrwren> tasdomas: that sounds interesting. What is an action-based git server?
<tasdomas> jrwren, the charm uses actions to manage a git server
<jrwren> tasdomas: what kind of actions? and to what end?
<tasdomas> jrwren, juju actions - https://jujucharms.com/u/tasdomas/git/trusty/4
<tasdomas> jrwren, still ironing out a few kinks (and writing amulet tests for it)
<jrwren> tasdomas: that is really interesting
<tasdomas> jrwren, thanks
<marcoceppi> tvansteenburgh: any other thoughts on this? https://github.com/juju/amulet/pull/80 lgtm
<tvansteenburgh> marcoceppi: weird, never got an email about that, looking...
<tvansteenburgh> marcoceppi: lgtm
<marcoceppi> tvansteenburgh: cool
<marcoceppi> I'm going to merge and do a release
<tvansteenburgh> ack, although i have more fixes coming
<marcoceppi> tvansteenburgh: oh, I'll hold off then
<ddellav> jamespage, whenever you get a chance if you could take a look at my MP: https://code.launchpad.net/~ddellav/charms/trusty/glance/upgrade-action/+merge/265592
<niedbalski> stub, could you expand?
<niedbalski> stub, this is not my change's fault, is in fact an issue with the latest mock package (please see: https://code.launchpad.net/~freyes/charm-helpers/mock/+merge/264884)
<niedbalski> stub, if you are refering to the same issue.
#juju 2015-07-24
<stub> niedbalski: The NamedTemporaryFile was opened with the default mode ('wb'), but having a string written too it. I fixed the mode to 'w' so writing text to it actually works.
<stub> And lp:charm-helpers got push --overwritten, so ensueing merge headaches and trunk has two broken tests again :-(
<stub> So pretty please everyone, actually run 'make test'
<stub> oic, the openstack fix came in elsewhere and the file fix lost in the confusion
<tvansteenburgh> hazmat: ping
<niedbalski> stub, thanks.
<jamespage> ddellav, looking now
<jamespage> ddellav, looks good - a few comments to think about and resolve, but basically 99% done IMHO
<jamespage> ddellav, I think we need to try to avoid using action_fail in scenarios where the action does not do anything
<jamespage> git and configuration not set right
<jamespage> ddellav, some sort of action-set of the outcome seems appropriate
<jamespage> ddellav, skipped
<jamespage> ddellav, upgraded
<ddellav> jamespage, git and configuration not set right?
<jamespage> ddellav, sorry of git-install or config-changed managed upgrades are configured
<jamespage> of/if
<jamespage> I can't type today either
<jamespage> and have a tendency to repeat myself so bear with me
<ddellav> no worries :)
<ddellav> jamespage, so i'll go back and take out the action_fail lines when the code does nothing and maybe just use juju_log instead?
<jamespage> ddellav, I'd suggest some sort of 'outcome' output variable to say what action was taken
<jamespage> 'skipped' 'upgraded' 'upgrade failed'
<jamespage> and log message is nice as well I guess
<ddellav> ah ok, that makes sense.
<jamespage> ddellav, I think 'fail' sends the wrong message back to the end user
<jamespage> the action did not fail - it just did not do anything!
<whit> alai1: heyo
<alai1> whit, hi
<whit> cloud:trusty-updates/juno is not a string I suspect as being in the etcd charm
<whit> as etcd should not be openstack specific in any way
<alai1> whit, the etcd charm option source cannot be set
<alai1> it always set to cloud:trusty-updates/juno
<whit> alai1: ok let me look at the code
<whit> alai1: how are you testing what it is set to?
<whit> also, where does the charm you are deploying come from?
<whit> and what cloud are you deploying onto?
<alai1> whit, juju get etcd
<lukasa> I feel like I should mention that I'm here right now
<lukasa> As I suspect this conversation is tangentially about my work. ;)
<whit> and source=" cloud:trusty-updates/juno"
<lukasa>  /cc alai1 whit
<alai1> lukasa, hi ;)
<lukasa> Howdy alai1 =D
<whit> alai1: so that string doesn't exist in the master repo, so I don't see how the charm would be setting it
<whit> alai1: hmmm
<alai1> lukasa, any idea why source in etcd charm is set to cloud:trusty-updates/juno ?
<lukasa> uh...which etcd charm are you using?
<lukasa> If you were using my old one that might happen, but with whit and lazyPower's it's hard to see how
<whit> that string is totally not something that should be getting set there
<whit> aha
<alai1> lukasa, whit :  we are using lp:~kubernetes/charms/trusty/etcd/trunk
<alai1> lukasa, and got this error http://pastebin.com/dCd53r70
<whit> alai1:  is it possible your environment has another etcd charm cached somehow?
<lukasa> That's perplexing
<whit> alai1: that string does not exist in the code you think you are deploying
<alai1> whit, good point... I always clear the cache but I may have forgotten this time ;).  Trying it again...
<whit> so I'm guessing either the code you think you are deploying is not... or something external is setting it :(
<whit> alai1: cool
<lazyPower> whit: its probably set by a bundle
<lazyPower> alai1: check the bundle source
<alai1> lazyPower, bundle source doesn't set up
<alai1> lazyPower, in fact i added source: https://github.com/coreos/etcd/releases/download/v2.0.11/etcd-v2.0.11-linux-amd64.tar.gz in the bundle and did not help
<alai1> lukasa, whit: i still see the same error
<lazyPower> alai1: can you link me to your bundle, as well as steps to reproduce?
<alai1> i don't see that string in the charm so must be something external as whit said
<lazyPower> alai1: Thanks for the pastebin - looking this over i see the issue
<lazyPower> there is an overrides: directive at the top - and it's declaring that any source: option should be cloud:trusty-updates/juno
<lazyPower> the etcd config option is 'source' - so it's getting overridden by the bundle config
<alai1> lazyPower, woot
<alai1> good catch lazyPower
<lazyPower> :)
<lazyPower> I like to think this is why whit keeps me around.
<alai1> i would never thought of the bundle as it is auto generated and we never had any issues with it
<lazyPower> that's the one variable in the equation between upstream and the deployment you're running - the bundle.
<lazyPower> and i had suspected that since 'source' was at play it got overridden
<lazyPower> alai1: if you dont mind can you file a bug against the ETCD charm so i've got a reference when i propose a fix?
<lazyPower> we normally don't like to break backwards compat like this, and having a bug that references how this breaks with openstack deployments will help validate the necessity for the change
<alai1> lazyPower, would be glad to do it
<alai1> give me few minutes
<bhundven> trying to add mysql to an lxc, but the lxc fails to start: http://pastebin.com/jZmamds4
<bhundven> this is the machine-1.log from /var/log/juju
<marcoceppi> bhundven: do you have a proxy in place or restricted access to the outside world?
<bhundven> marcoceppi: there is a nat, but no proxy
<marcoceppi> bhundven: try `juju retry-provisioning 1/lxc/0`
<bhundven> $ juju retry-provisioning 1/lxc/0
<bhundven> error: invalid machine "1/lxc/0" retry-provisioning does not support containers
<bhundven> 1.24.2-trusty-amd64
<marcoceppi> bhundven: terminate-machine 1/lxc/0; then juju remove-unit mysql/0
<marcoceppi> then try to deploy to container again
<marcoceppi> it failed to fetch the base image that lxc uses for cloning
<marcoceppi> which is stored and created on the bootstrap node
<bhundven> marcoceppi: do you know the logfile this work will be in? I'm guessing on the node the lxc is being deployed to.
<bhundven> juju add-machine lxc:1
<bhundven>       1/lxc/1:
<bhundven>         agent-state-info: 'container failed to start and was destroyed: juju-machine-1-lxc-1'
<bhundven> yea, I made a new container after that, and still get the same error as the pastebin
<bhundven> I can wget/curl from sites outside my network from the physical machine through the nat
<bhundven> so not dns or connectivity.
<bhundven> afaict
<bhundven> also weird is that I have done a 'boot-resources import' and click the apply button on the images page of maas, but it still says: "Boot image import process not started. Nodes will not be able to provision without boot images. Visit the boot images page to start the import.", but I have 4 deployed nodes. I tried nuking the /var/lib/maas/boot-resources
<bhundven> directory, ran the maas command again and watch the images sync in the logs.
<bhundven> for S&G purposes, I tried to fire up a kvm container...
<bhundven>     containers:
<bhundven>       3/kvm/0:
<bhundven>         agent-state-info: 'kvm container creation failed: exit status 1'
<bhundven> am I missing a package or something?
<bhundven> are the templates coming from jujucharms.com?
#juju 2015-07-25
<bhundven> interesting to note, on the node I started the kvm container on, I see this over and over: 2015-07-24 23:36:48.316+0000: 3068: error : virNetDevGetMTU:397 : Cannot get interface MTU on 'juju-br0': No such device
<bhundven> even though, I have this set in my environments.yaml: disable-network-management: true
<bhundven> and juju status says: agent-state-info: 'kvm container creation failed: exit status 1'
<marcoceppi> bhundven: why are you disabling networking management?
<bhundven> maas
<bhundven> marcoceppi: https://jujucharms.com/docs/stable/config-maas
<bhundven> first item under: MAAS specific features
<marcoceppi> bhundven: right, but I've never had to do that; unless you have that all configured in maas it should probably be set to false
<bhundven> marcoceppi: well, the network the nodes are on are managed by maas.
<bhundven> dhcp and dns
<marcoceppi> bhundven: right, which is still fine, but I doubt that maas is doing the container network as well
<marcoceppi> bhundven: that option should seldom be used AFAIK
<marcoceppi> more a caveat and less a standard
<bhundven> marcoceppi: ok. I think I understand. I nuke the env and try again.
<bhundven> s/I\ nuke/I'll nuke/
<marcoceppi> bhundven: I have a standard maas dhcp + dns setup here as well, and have that option set to false and LXC + KVMs work, which is why I mention trying it the other way
<bhundven> marcoceppi: bootstrapping now
<bhundven> marcoceppi: thanks, it's probably been my problem all along.
<marcoceppi> bhundven: if it is, I'll open a bug/merge on the docs to make it clear that is not a typically enabled option
<bhundven> marcoceppi: well, that section of the config-maas document should note that maas networking doesn't manage the kvm/lxc networks, and it's best to leave it at false unless you know what you're doing (deploying openvswitch... etc...)
<bhundven> it's false by default.
<marcoceppi> right
<marcoceppi> bhundven: I'll be around for another hour or two, curious if that helps. lmk either way!
<marcoceppi> hopefully that was it and you're all set to go
<bhundven> I should have results in about a half hour
<bhundven> thanks!
<bhundven> up to this point, everything has worked well with some fairly old hardware (dell e1950/e2950) using plain ipmi v2.0. I'm excited to get something working! :)
<marcoceppi> bhundven: what's the plan for the hardware? Any interesting workloads?
<bhundven> for no it's a poc
<bhundven> *now
<marcoceppi> cool
<bhundven> I'm trying to build a CBA for getting some blade servers
<bhundven> production of course would live on a cloud provider, but the dev systems need to be local. (and similar to the cloud provider)
<marcoceppi> makes sense, sounds exciting
<bhundven> yes, and the cto is a centos bigot. I'm happy that 1.24 has at least initial support for deploying centos. :) that helps the CBA, and hopefully in the future bringing him over to the ubuntu side ;)
<marcoceppi> even if he doesn't, we're happy to support the workloads that make sense for you guys ;)
<bhundven> a lot of the workload is network and storage i/o
<bhundven> I can get more specific in pm.
<marcoceppi> bhundven: sure, definitely interested!
<bhundven> if anyone was following along, disable-network-management: true was causing my problems. Thanks again!
<bhundven> Now that I have something deploying, and to correct kvm/lxc instances (yeay!), I hit commit on the juju-gui to make it go, but all of my items are still blue (besides the juju-gui, which is green). juju status shows nothing interesting, not even 409 errors. Is there a meaning to the blue status?
<bhundven> 2015-07-25 04:06:51 WARNING juju.cmd.jujud machine.go:885 determining kvm support: INFO: /dev/kvm does not exist
<bhundven> HINT:   sudo modprobe kvm_intel
<bhundven> modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-35-generic/modules.dep.bin'
<bhundven> : exit status 1
<bhundven> no kvm containers possible
<jobot> Hi there, so a little question... I want to ssh to a machine on a local juju env with "juju ssh _____", but it is now giving this error: ERROR missing namespace, config not prepared
<jose> jobot: what are you putting on _____?
<jobot> jose: juju ssh mysql/0   ... or juju ssh IPADDRESSofMachine , both give the same result
<jose> the ip address shouldn't work. can you please do a pastebin of your juju status?
<jobot> here you go http://paste.ubuntu.com/11938378/
<jose> thanks
<jose> let's take a look
<jobot> thanks
<jose> jobot: have you tried destroying the environment and re-bootstrapping?
<jobot> jose: yes, I have been doing that quite a bit. Also noted that I could not do simply "juju bootstrap"; I had to create a custom "myenv".
<jose> jobot: have you switched to local before bootstrapping?
<jobot> hm, actually i just tried it now and it seems to bootstrap :P
<jobot> I will see if the ssh works in that environment. Thanks!
<jose> cool
<marcoceppi> jobot: next time run the command with the -v and --debug flags, they can be quite insightful
<jobot> marcoceppi: good idea. Thanks :)
<jobot> juju ssh is working on the new local boostrap, thanks guys.
<jose> woohoo! glad it's working now jobot :)
<jose> marcoceppi: mind a quick pm?
<h0mer> anyone here know how to tell which juju machine has the neutron/networking node installed on it?  I'm running the Ubuntu cannonical openstack distribution with 9 nodes, but "juju status" doesn't tell me which node has the networking component on it
<marcoceppi> jose: sure
<marcoceppi> h0mer: it should, what does juju status say?
<h0mer> lists everything (rabbit-mq, mysql etc..), but not the networking node.
<h0mer> but I know networking is setup because i can see the network topolgy in the openstack gui
<h0mer> btw I'm using the Ubutnu/landscape/Openstack beta setup.
<h0mer> hold on let me put it on pastebin
<jose> great :)
<marcoceppi> h0mer: pastebin would be best
<h0mer> my juju nodes crashed, it'll be up shortly
<h0mer> had to re-deploy
#juju 2015-07-26
<bhundven> am I doing this right? juju machine add --constraints "cpu-cores=2 mem=4G arch=amd64" kvm:0
<bhundven> on maas
<bhundven> hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
<bhundven> is what it deploys
<bhundven> seems to be bug#1399613
<jose> bhundven: then, aren't you exceeding what you have?
<bhundven> jose: no that is the vm hardware, the host machine is: hardware: arch=amd64 cpu-cores=8 mem=16384M
<bhundven> http://pastebin.com/s0aRcvqM
<bhundven> I noticed this, because I was trying to deploy things to kvm instances and they weren't deploying, because the constraints of the application were higher than the kvm instance's. Then I noticed that it was 1 core, 512m, 8192m disk.
#juju 2016-07-25
<kina_> Hi, I have a question about accessing into mongodb shell when using Juju2.0
<kina_> is it appropriate to ask it here?
<magicaltrout> sure kina_
<magicaltrout> its just a bit quiet this time of the day
<magicaltrout> whats up?
<kina_> I have been trying to access by following this https://github.com/juju/juju/wiki/Login-into-MongoDB
<kina_> but I get "auth failed"
<magicaltrout> you're trying to login to the controller mongo instance?
<kina_> magicaltrout: yes that's right
<magicaltrout> never tried it
<magicaltrout> let me find a running juju instance somewhere and test
<kina_> magicaltrout: thanks!
<kina_> BTW, with v1.25, I could successfully access into mongodb
<magicaltrout> hmm
<magicaltrout> juju 1.25 wasn't controlled by mongodb was it?
<kina_> it was. but different version i think
<kina_> it was like mongodb v2.xx, and juju 2.0 uses mongodb 3.2
<magicaltrout> fair enough
<magicaltrout> shows what i know
<magicaltrout> yeah i get the same kina_
<magicaltrout> that page is certainly stale
<magicaltrout> if babbageclunk or someone is around they might know
<kina_> thanks for checking it out!
<magicaltrout> or it was edited a few days ago
<magicaltrout> maybe i'm on an older beta
<magicaltrout> and thats trunk or something
<kina_> hmm, not sure about that.
<kina_> I'm now using the latest juju 2.0beta13
<kina_> just installing mongodb-clients from ubuntu will give you v2.6, but the mongodb server is v3.2.
<kina_> so I have also tried installing the mongodb shell v3.2, and the output is now a little different but still no access...
<babbageclunk> hi kina_ - can you put up a pastebin of what you tried? (Obscure the password first though!)
<babbageclunk> I'm just bootstrapping so I can try following the instructions there and see if I can get in to the db.
<kina_> babbageclunk: hi
<kina_> babbageclunk: thanks, but i just got afk. so will paste it later. maybe two hours later or so...
<babbageclunk> kina_: ok cool, I'll be around.
<jcastro> evilnickveitch: heya, I'm doing a bunch of AU fixes now for the questions
<jcastro> but I'm going to do /stable/ instead of /2.0/ for the URLs
<evilnickveitch> jcastro, yeah, i was thinking about that
<evilnickveitch> I guess it will be good enough until 3.0 comes out :)
<jcastro> evilnickveitch: if one is really old and doesn't have a lot of views, just downvote it to bury it, editing them will just cause them to get a bump in google
<evilnickveitch> jcastro, good plan
<jcastro> if it's lxc related I'
<jcastro> m just going to vote to close and delete
<jcastro> hmm, the current docs don't mention how to ask for a charm to get reviewed and promulgated
<evilnickveitch> jcastro that policy page got rewritten not so long ago didn't it?
<jcastro> I can't find it at all
<evilnickveitch> jcastro, no, I think you are right. this is the page it *used* to be part of: https://jujucharms.com/docs/2.0/authors-charm-store
<jcastro> yeah so basically, we don't have instructions or a review queue right now lol
<evilnickveitch> :)
<mpavone1> hi, I'm using juju2-beta7 (unchanged xenial repo) and a local lxd controller. It was working for some weeks, now suddenly a "juju status" returns "ERROR refreshing models: connecting with bootstrap config: unable to connect to API" - any hints how to debug are appreciated
<lazypower> magicaltrout - can you link me @ your maven build plugin?
<magicaltrout> my what?
<lazypower> your maven build that you gave a presentation on last week
<magicaltrout> ah
<magicaltrout> https://github.com/buggtb/snappy-maven-plugin
<magicaltrout> read the readme for the many holes ;)
<kina_> babbageclunk: hi, sorry for the late response, but here is the pastebin
<kina_> http://pastebin.ubuntu.com/20879705/
<babbageclunk> kina_: Hi - it looks like those instructions are old, so we need to pass the CA cert in as well. I'm just trying to work out how to do that.
<babbageclunk> kina_: The cert is in the /var/lib/juju/agents/machine-0/agent.conf beside the password.
<kina_> babbageclunk: i see. thanks! i will try too
<babbageclunk> kina_: Actually, you can just pass --sslAllowInvalidCertificates to avoid having to check the cert.
<kina_> ah, now i got in
<babbageclunk> kina_: That's obviously bad if you're using it in prod, but probably ok for debugging/exploration.
<kina_> babbageclunk: yeah, for api usage it is bad indeed. but i just wanted to know the method, just in case
<petevg> @cory_fu, @kwmonroe: Here is the upstream PR for the kafka changes: https://github.com/apache/bigtop/pull/134
<endomorphosis> anyone home
<endomorphosis> I've been trying for 2 weeks to get juju 2.0 and maas 2.0 working at intel labs
<endomorphosis> im about ready to throw up my hands
<bdx> endomorphosis: there are a few caveats you need to know about, given the pre-release beta status, to get everything working ... good job making it here!
<endomorphosis> we're in a proxied environment and its a pain in the butt
<endomorphosis> I'm not usually the type to ask for help, and I know that it's a pre-release beta
<bdx> endomorphosis: asking for help is a good thing!
<endomorphosis> in any case, I work at the big data solutions at intel, and I wanted to convert our racks to juju / maas / openstack
<bdx> endomorphosis: thats great!
<endomorphosis> Internally we're working on some packages called "snap" a telemetry package, and "tap" an analyitics platform written in go
<endomorphosis> https://github.com/intelsdi-x/snap
<bdx> endomorphosis: do you know exactly what is being blocked by your proxy?
<endomorphosis> https://github.com/trustedanalytics/platform-wiki-0.7/wiki/Overview%20of%20Trusted%20Analytics%20Platform
<endomorphosis> and I wanted to try to integrate snap into juju at some point in the future
<bdx> endomorphosis: that will be the easy part
<bdx> endomorphosis: how much experience do you have with openstack?
<endomorphosis> I have some experience in openstack, we've been using fuel.
<endomorphosis> dial tcp getsockopt: connection timed out
<endomorphosis> that and MAAS similarly has problems provisioning servers over a proxy (but thats another team)
<endomorphosis> I get this error when trying to deploy charms or bootstrap nodes
<bdx> endomorphosis: are you setting the proxy params when bootstrapping juju?
<endomorphosis> even when it appears like the proxy settings are correct on the host
<endomorphosis> set in /etc/environment and /etc/profile
<bdx> endomorphosis: have you looked into setting the juju bootstrap specific proxy config params?
<endomorphosis> in environments.yaml right?
<bdx> endomorphosis: what version of juju are you using?
<endomorphosis> 2.0 beta 13
<bdx> ok, thats the issue
<bdx> in juju-2.0, there is no more environments.yaml
<bdx> endomorphosis: what cloud provider are you bootstrapping to?
<endomorphosis> I've tried both LXD and MAAS
<stub> juju bootstrap local lxd --upload-tools --config apt-mirror=http://th.archive.ubuntu.com/ubuntu --config apt-http-proxy=http://10.0.4.1:8000/
<bdx> endomorphosis: ok, great
<bdx> endomorphosis: can you add `--debug`
<bdx> to that command
<bdx> and report back with the output
<bdx> endomorphosis: you may also need to set http-proxy depending on your setup
<endomorphosis> correct me if I'm wrong but the process is add cloud -> add controller -> add machines -> add model -> deploy service
<bdx> endomorphosis: for lxd you don't need to add it, its configuration are included by default
<bdx> so, you can just run the command ^^ from stub, and it should stand right up
<bdx> endomorphosis: assuming you have lxd configured
<bdx> endomorphosis: can you `lxc launch ubuntu:16.04` ?
<endomorphosis> right now its at installing cloud-utils
<bdx> endomorphosis: then the "bootstrap" process is almost complete
<endomorphosis> when I add machines using the proxy, do I also use the same flags as well I'm assuming
<bdx> endomorphosis: the flags you provided at bootstrap should be inherited by your models you create on that controller
<bdx> endomorphosis: so, once the bootstrap is complete, you should be able to `juju deploy ubuntu`
<petevg> cory_fu, kwmonroe: did either of you do anything to the kafka branch in our bigtop repo? It looks like cory_fu merged my bind address changes to it three days ago, but they don't show up in the branch.
<petevg> ... I also got some conflicts when doing a pull, which suggests that somebody may have edited the history :-/
<cory_fu> petevg: I squashed the kafka branch, as our upstream PRs are supposed to be.
<cory_fu> The PR you created this morning needs to be squashed as well
<petevg> cory_fu: Ah. You squashed it into the already squashed stuff.
<petevg> cory_fu: I know that we squash things, I was just expecting to see my changes, squashed, rather than my changes, squashed together with kjackal's.
<petevg> It makes sense now.
<endomorphosis> internally there are some people working on this SNAP system and TAP system, to assign VMs / containers to systems based on load and performance of those workloads.
<endomorphosis> it is the software defined infrastructure team at intel.
<endomorphosis> I was keen to try to glue juju and those together.
<bdx> endomorphosis: great idea, I think a few of us have taken stabs at integrating other auto scaling frameworks into juju as well
<bdx> endomorphosis: so this is a great place to ask if you have questions regarding that
<jose> quick question, is it possible to check when an specific review of a charm was pushed?
<jose> for instance, cs:trusty/nova-compute-26
<bdx> endomorphosis: I proposed "juju scale" to the team .... I'm not sure if it got much traction though ... possibly you could write the mailing list with your scaling integration project/idea ... this could really help with bringing that "auto scaling" functionality to Juju itself
<bdx> endomorphosis: plus, it entirely backs up my idea/proposal :-)
<endomorphosis> the project is also written in go, if that helps any
<endomorphosis> I'll write the mailing list
<bdx> endomorphosis: thats great, thanks!
<endomorphosis> I think the bootstrap hung
<bdx> endomorphosis: depending on the speed/capacity of your machine, the last few steps can sometimes take a moment
<cory_fu> kwmonroe: Was there anything else before promulgating the bigtop Kafka charm?  petevg: What series does it support?
<petevg> cory_fu: it doesn't specify a series ... which might be something that we want to fix.
<cory_fu> Wait, it's already promulgated for trusty: https://jujucharms.com/kafka/
<cory_fu> This would just be to update with the new config option
<cory_fu> Has this been tested on xenial?
<petevg> cory_fu: yes. It's just the update of the config.
<petevg> And no, I haven't tested on xenial.
<petevg> I'm kind of tempted to make "series" an environment variable that the amulet tests respect. So instead of doing amulet.Deployment(series='trusty'), we do something like amulet.Deployment(series=os.environ.get('charm_test_series', 'trusty'))
<cory_fu> petevg: Not unreasonable.  Some charms / bundles include a separate test for each series, just to confirm that, e.g., a change for xenial doesn't break something on trusty
<cory_fu> But that does double the test load
<petevg> Bleh. We don't have a zookeeper for xenial, so testing xenial isn't as simple as editing and re-running the tests.
<cory_fu> petevg: The Bigtop ZK should be xenial (and only xenial)
<petevg> cory_fu: Right. My checkout of bigtop zookeeper is undergoing surgery right now. Fixing it up and deploying and running tests now ...
<cory_fu> petevg: I don't think xenial is a blocker for the kafka update
<petevg> cory_fu: Cool. I will back off on worrying about it, then :-)
<petevg> In any case, kafka is as done as I have permissions to make done for now.
<cory_fu> petevg, kwmonroe: Ok, https://jujucharms.com/kafka/trusty/4 is pushed and published
<cory_fu> lazypower: Since both Kevin and I are going to be out this week, can you work with petevg to review and publish the Bigtop Zookeeper charm to resolve https://bugs.launchpad.net/charms/+source/zookeeper/+bug/1603007
<mup> Bug #1603007: split network environment <zookeeper (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1603007>
<lazypower> cory_fu - sure
<lazypower> petevg ping me when you're ready sir
<lazypower> cory_fu - just for clarity, this is like the extra endpoint bindings support that has landed in the openstack charms yes?
<petevg> lazyPower: will do. I won't be right away -- Zookeeper and I are currently arguing about writing things to a config file.
 * lazypower is stepping out to run a quick errand, will brb shortly
<cory_fu> petevg, lazypower: Keep in mind that the new ZK charm should be xenial only.  Also, promulgating it from bigdata-charmers will drop https://jujucharms.com/zookeeper/trusty/ and that will need to be re-pushed to cs:~bigdata-charmers/trusty/zookeeper
<lazypower> petevg - that sounds like typical devops fun :)
<lazypower> cory_fu - right, same issue we've had with other charms coming into the promoted namespace. ack
<cory_fu> Thanks
<lazypower> s/promoted/promulgated/
<petevg> lazypower: yep. That it is.
<lazypower> ok. sure i'll lend a hand. just lmk
 * lazypower poofs for a bit
<cory_fu> petevg: Did we end up making the java relation optional somehow for the Bigtop ZK?
<petevg> cory_fu: nope.
<cory_fu> OK
<petevg> I think that we discussed it, and you thought that it would set a bad precedent. It was a while ago, though.
<kwmonroe> petevg: what's the best zk charm to use for testing some spark-ha stuff?  ~bd-dev or ~bd-charmers zookeeper?
<kwmonroe> or build something more recent?
<petevg> @kwmonroe: probably just ~bd-charmers for now.
<kwmonroe> cool, thx petevg
<petevg> np
<endomorphosis> 2016-07-25 19:43:50 ERROR cmd supercommand.go:458 storing charm for URL "cs:juju-gui-130": cannot retrieve charm "cs:juju-gui-130": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/juju-gui-130/archive?channel=stable: dial tcp 162.213.33.122:443: getsockopt: connection timed out
<endomorphosis> does anyone know how to fix this
<endomorphosis_> does anyone know how to deal with this error message?
<endomorphosis_> cmd supercommand.go:458 storing charm for URL "cs:juju-gui-130": cannot retrieve charm "cs:juju-gui-130": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/juju-gui-130/archive?channel=stable: dial tcp 162.213.33.122:443: getsockopt: connection timed out
<magicaltrout> i don't, but you have some routing issue from controller to the www, endomorphosis_
<endomorphosis_> i have proxy setup in the controller, so I dont understand why
#juju 2016-07-26
<x58> How can I debug why a juju run is not completing?
<petevg> kwmonroe: here's the traceback I get when trying to deploy a xenial version of the zookeeper charm: http://paste.ubuntu.com/21019611/
<petevg> It looks like it is a failure in the openjdk layer; it's looking for /etc/default/bigtop-utils, but that doesn't exist ...
<kwmonroe> petevg: that ain't nary an openjdk layer failure.. openjdk doesn't need bigtop-utils.  it's gotta be a failure of zookeeper or the bigtop base layer.
<petevg> Hmmm ... Zookeeper doesn't attempt to edit anything in place, though.
<petevg> ... I think.
<kwmonroe> yeah, it's coming from lib/charms/layer/apache_bigtop_base.py, line 239, in trigger_puppet
<lazypower> x58 - you can juju debug-hooks application/#    which will put you in an interactive shell (just a plain ol root shell by default)
<lazypower> it should trap that run stanza and give you an opportunity to debug
<kwmonroe> petevg: i might have spoken too soon -- there is some java stuff that it's looking for during that re_edit_in_place.. i'll kick it around in just a few minutes.
<petevg> kwmonroe: here's my branch, for reference: https://github.com/juju-solutions/bigtop/tree/zookeeper-bind-address
<kwmonroe> cool, thx petevg
<petevg> Did I specify the series correctly in metadata.yaml?
<kwmonroe> yeah petevg -- that looks legit
<x58> lazypower: Thanks.
<petevg> kwmonroe:  you're right about where it's coming from. Here's the line from the base layer:
<petevg> http://paste.ubuntu.com/21020508/
<petevg> kwmonroe: do we have any big data charms that we've actually deployed/tested on xenial?
<kwmonroe> ah, right petevg.. so the question is now why isn't /etc/default/bigtop-utils present?  is zk trying to trigger_puppet before bigtop-base is available?
<petevg> kwmonroe: Hmmm. Bigtop looks like it installed -- there's a bigtop.release directory in the ubuntu user's home dir.
<petevg> /etc/default/bigtop-utils isn't there, though.
<kwmonroe> petevg: could it be the zookeeper deb in bigtop's repo doesn't correctly specify bigtop-utils as a prereq?  said differently, how does bigtop-utils get installed with a trusty deployment?  i don't see us specifically calling that out in the bigtop base layer.
<petevg> kwmonroe: That sounds plausible (I don't see it explicitly installing it, either.)
<petevg> kwmonroe: Yeah. If I apt install "bigtop-utils" and then run "juju resolved zookeeper/0", everything looks happier.
<petevg> I think that I'm going to go ahead and cheat and just explicitly call "apt install bigtop-utils" in the charm for now, to get this done. I'll file a ticket w/ bigtop to get the package fixed.
<kwmonroe> roger that petevg.. perhaps we can also work around with an explicit bigtop-utils entry in the base layer: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L7
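A minimal sketch of the stop-gap petevg mentions above (explicitly installing bigtop-utils from the charm), assuming the charm pulls in charmhelpers as the reactive bigtop charms do; where exactly this runs inside the charm is an assumption, not the actual change that landed:

# Hedged sketch: work around the missing bigtop-utils dependency by installing it explicitly.
from charmhelpers.fetch import apt_update, apt_install

apt_update(fatal=True)
apt_install(['bigtop-utils'], fatal=True)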
<suchvenu> Hi
<suchvenu> I am getting the following error when I deploy the charm http://pastebin.ubuntu.com/21028721/
<suchvenu> and the juju status shows as "message: agent is lost, sorry! See 'juju status-history websphere-liberty/0'"
<suchvenu> Can anyone help me with this ?
<suchvenu> juju status history shows this : http://pastebin.ubuntu.com/21028908/
<kwmonroe> suchvenu: it seems anastasia suggested trying juju-1.25.6 in bug 1465307, but i do not see that as released yet.
<mup> Bug #1465307: 1.24.0: Lots of "agent is lost, sorry!" messages <canonical-bootstack> <landscape> <regression> <juju-core:New> <https://launchpad.net/bugs/1465307>
<kwmonroe> anyone know when 1.25.6 is due?
<lazypower> kwmonroe - its in proposed
<lazypower> add-apt-repository ppa:juju/proposed
<kwmonroe> ah, ack lazypower.. suchvenu if you want to try the proposed 1.25.6, do:
<kwmonroe> sudo add-apt-repository ppa:juju/proposed && sudo apt update && sudo apt upgrade juju
<suchvenu> ok kwmonroe
<narindergupta> anyone knows how to connect juju to openstack (deployed through juju) i tried this URL https://blog.felipe-alfaro.com/2014/04/29/bootstraping-juju-on-top-of-an-openstack-private-cloud/
<narindergupta>  but facing issues
<lazypower> narindergupta - what issues are you encountering?
<lazypower> narindergupta - we use juju driving a juju deployed openstack all the time internally with OIL and other efforts
<narindergupta> lazypower, juju.cmd supercommand.go:429 failed to bootstrap environment: index file has no data for cloud {RegionOne http://10.5.1.83:35357/v2.0} not found
<lazypower> ahh i  have seen this before but i dont recall the fix... one sec while I see if i can get someone to lend a hand.
<narindergupta> lazypower, sure will be helpful. Also if you have the clear method explaining or script will be helpful?
<lazypower> narindergupta - i think you have to build simplestreams, but i'm verifying that
<narindergupta> lazypower, how to do that?
<lazypower> i very well could be wrong, and we have a lot of high profile folks out at the mid cycle sprint, so i may redirect you to the juju mailing list allowing them a chance to chime in
<narindergupta> lazypower, zahra already posted this to mailing list. everyone says juju can not connect to cloud but manually i can do everything to cloud
<narindergupta> lazypower, but any information will be helpful
<lazypower> you bet, i'll do what i can to help. i've got your contact details, i'll e-mail you any information i can surface today
<narindergupta> lazypower, sounds good to me thanks
<magicaltrout> make it work lazypower !
<lazypower> magicaltrout - working on it, its hard to do when there's no response from the SME's ;)
<magicaltrout> hehe
<petevg> kwmonroe: Hmmm ... looks like bigtop-utils being missing might be just the tip of the iceberg for Zookeeper on xenial. It looks like the Zookeeper xenial deb installs it in /usr/share/zookeeper rather than /usr/lib/zookeeper, and that screws up the puppet scripts :-/
<kwmonroe> petevg: how are you specifying the xenial deb in your zk layer?
<petevg> kwmonroe: I'm not. I'm just running puppet and letting it do its thing.
<kwmonroe> petevg: did you alter the bigtop_repo_x86 layer opt?  or manually apt-add-repo?
<petevg> Currently digging through puppet stuff to see if I can override the path ... I see a lot of instances of that zookeeper path being a default or hardcoded, though.
<petevg> kwmonroe: again. Not doing anything manually. Just telling bigtop to do its thing.
<kwmonroe> petevg: i bet i know what's happening.. i bet there's a zookeeper package in the xenial archives.  on the borked system, what does "dpkg -l | grep zoo" say?
<petevg> kwmonroe: http://paste.ubuntu.com/21038468/
<kwmonroe> petevg: how about "sudo apt-cache madison zookeeper"
<petevg> kwmonroe: http://paste.ubuntu.com/21038664/
<kwmonroe> thar she be petevg
<petevg> So there is a package in universe.
<petevg> Yeah. And it's newer, so that's why it's being grabbed, right?
<kwmonroe> i'm not sure how the precedence is set
<kwmonroe> but your thought seems reasonable.. bigger version wins
<kwmonroe> so now petevg.. we need to figure out how to force the bigtop repo to win.  maybe a priority option we can set?
<petevg> kwmonroe: Probably. Do you know where puppet calls apt/dpkg?
<petevg> We either need to do some magic before that runs, or override it.
<petevg> I think that I want to pin the package. I'm not sure that it's going to be easy to do so before bigtop installs zookeeper, but after it adds its repo. Digging into it now ...
<kwmonroe> ok petevg, let's compare notes.. http://paste.ubuntu.com/21045756/
<kwmonroe> petevg: one solution is to create an /etc/apt/preferences.d/file to force bigtop repos to take priority (as seen in above paste)
<petevg> kwmonroe: Yep. That's pretty much what I found. Need to test it and make sure that it works before the repo is actually setup.
<petevg> kwmonroe: thx for suggesting the wildcard -- covering the same case elsewhere makes sense.
<kwmonroe> petevg: the pro is that presumably all zk deps would get prioritized too (or rather, all bigtop repo bits would).  the con is that there may be some old crusty deb in the bigtop repo that we really didn't want to take precedence over the main archive.
<petevg> True. I think that the pro outweighs the con, though.
<kwmonroe> yeah, me too.  lest you're stuck managing deps in a preferences file till you simply die.
<petevg> I'm sticking the file in a resource, reading it into Python, then writing it out, so we can do some parsing if it ever becomes a big problem.
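For reference, the apt pin being discussed looks roughly like the sketch below when written out from Python as petevg describes (build the preferences content, write it under /etc/apt/preferences.d/); the file name and the 900 priority are illustrative assumptions, not values taken from the actual change:

# Hedged sketch: pin everything published by the bigtop repo host above the
# Ubuntu archive (whose default priority is 500) so bigtop's zookeeper deb wins.
PIN_TEMPLATE = (
    'Package: *\n'
    'Pin: origin {origin}\n'
    'Pin-Priority: 900\n'
)

def write_bigtop_pin(origin, path='/etc/apt/preferences.d/bigtop.pref'):
    # origin should be just the repo hostname, e.g. bigtop-repos.s3.amazonaws.com
    with open(path, 'w') as f:
        f.write(PIN_TEMPLATE.format(origin=origin))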
<kwmonroe> word petevg.  perhaps think about using the bigtop_repo layer option (from bigtop base layer) as the url to use in the origin matcher.  it may not always be "bigtop-repos.s3.amazonaws.com".
<petevg> kwmonroe: Good idea. Will do.
<kwmonroe> in fact, it probably should never be that hard coded string, but whatever the hostname for the repo url may be.  surely there's some pythony thing to strip 'http://' and '/blah/foo' from a url and just leave the fqdn.
<petevg> kwmonroe: bigtop_repo[7:].split("/")[0] should do it. Not pretty, but it works. :-)
<magicaltrout> its okay
<magicaltrout> kwmonroe hasn't written a pretty line of code in his life
<kwmonroe> petevg: it's gonna get uglier too.. the layer option is "bigtop_repo-$(arch)"
<petevg> Yep. I'm in the middle of writing that section of code right now :-)
<kwmonroe> and petevg, what if somebody uses https or ftp?  that'll dork the 7th char.
<kwmonroe> magicaltrout: all my code is handsome
<petevg> This is why a lib is always nice :-)
<magicaltrout> hehe
<kwmonroe> dont we use urlparse in these bt charms?  surely that would do a fine parsing job
<magicaltrout> urlparse
<petevg> Here's an ugly function for you: http://paste.ubuntu.com/21048613/ (Going to test to verify that the approach works, and worry about making things prettier later.)
<kwmonroe> magicaltrout: if you can't keep up, at least keep quiet
<magicaltrout> *sob*
 * magicaltrout finds more wine
<kwmonroe> ;)
<petevg> magicaltrout: thank you. urlparse is a good idea, to catch other stuff I haven't thought of.
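The urlparse version they settle on is roughly the following (a sketch; the real helper is in the paste above and the eventual PR):

# Hedged sketch: extract just the hostname from the bigtop_repo layer option so
# the apt pin's "Pin: origin" matcher also works for https:// or ftp:// URLs,
# unlike the bigtop_repo[7:].split("/")[0] one-liner.
from urllib.parse import urlparse  # a Python 2 charm would use: from urlparse import urlparse

def repo_hostname(repo_url):
    return urlparse(repo_url).netloc

# repo_hostname('http://bigtop-repos.s3.amazonaws.com/releases') -> 'bigtop-repos.s3.amazonaws.com'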
<kwmonroe> petevg: i like everything about that function except line 8
<kwmonroe> that's like 92% approval from me
<kwmonroe> which is pretty incredible
<petevg> Excellent. I'll fix up line 8 with urlparse before review :-p
<kwmonroe> :)  +1
<petevg> kwmonroe: yay. Tests passed! Cleaning up code. PRs coming your way soon ...
<x58> juju run --service myserver hangs forever and never seems to complete.
<petevg> kwmonroe: PRs for you: https://github.com/juju-solutions/bigtop/pull/30  lazyPower had volunteered to review them, so feel free to sling it over to him if you run short on time :-)
<petevg> (That PR links to the other two.)
<kwmonroe> roger petevg, taking a gander ahora
<zeestrat> Hey guys, what would you do if you'd need to deploy some non-juju services on blank Ubuntu machines if you have a MAAS-managed infra with some juju deployments already? Just commission machines in MAAS, use the ubuntu charm so you can manage it somewhat with juju and get some flexibility with lxd containers, or some other approach?
<zeestrat> I see that the ubuntu charm is intended for testing and development, but is there anything inherently missing that I should look out for?
<kwmonroe> petevg: "This breaks many things"  lol.  zk and bigtop-base PRs merged.  we'll let upstream marinate on their PR.
<petevg> kwmonroe: Cool. Thank you :-)
<kwmonroe> zeestrat: deploying 'ubuntu' seems a fine way to do what you want.  i don't know of anything missing that would make you say "oh, this is why it's test/dev only".  it's just a cloud-init ubuntu installation.
<kwmonroe> zeestrat: and you do get the benefit of connecting stuff up to it.. anything that uses the juju-info relation can be attached to your 'ubuntu' charm units... stuff like ganglia for monitoring, etc
<kwmonroe> petevg: fwiw, https://jujucharms.com/u/bigdata-dev/zookeeper/xenial/1 is your new hotness
<petevg> kwmonroe: sweet! Thank you :-)
<kwmonroe> not so fast petevg.. the bigtop_apt change in the zookeeper push now conflicts with your unit test PR :/  https://github.com/juju-solutions/layer-apache-bigtop-base/pull/28
<kwmonroe> not a big deal, just wanted to knock you down a peg this fine tuesday afternoon ;)
<petevg> kwmonroe: of course it does. Will fix it in the morning :-)
<kwmonroe> cool - thanks, and have a good night!
#juju 2016-07-27
<kjackal> Hello Juju world!
<kjackal> admcleod_: Hey! How is it going?
<kjackal> admcleod_: Got to play a bit with the bigtop kafka offering we have been packaging. Did you use Kafka for your ML demo?
<admcleod_> kjackal: good, we havent worked on the demo yet though
<skay> is there a charm for celery flower? I searched, but am not getting any results on celery https://jujucharms.com/q/celery
<skay> which seems really weird
<lazyPower> skay - not that i'm aware of
<skay> lazyPower: oh, okay. what do people normally use for monitoring celery stuff?
<skay> we've got a django app that uses the old django-celery package, just to use the admin
<lazyPower> I dont think we've got a charm that models a celery monitor. That would be a great contribution
<skay> it's not actively developed, and the celery site recommends Flower
<bdx> skay: I use flower
<skay> I think that would get out of the scope of what I'm currently working on
<skay> otherwise it would be a nice thing for me to do
<skay> bdx: do you have a charm for it?
<skay> lazyPower: I was hoping I could be lazy and
<skay> remove a dependency while I'm at it!
<skay> well, a django dependency
<bdx> skay: I actually started up a celery charm last week ... I think flower would be a great configurable option
<bdx> skay: I would just 'pip install flower' for the time being
<bdx> skay: look out in the coming days/weeks for celery
<skay> bdx: thanks
<bdx> np
<bdx> skay: now that I think of it, I remember what I ran into last week when looking at it
<bdx> skay: celery is interacted with by django; it's usually in your django codebase that you import and call celery
<bdx> skay: as such, I figured it was just best to let celery run where the django app runs, and get installed as one of the django app deps
<skay> bdx: aw. we are going to be deploying some non-django services
<kwmonroe> hey lazyPower, what's your charmbox docker run --net param?  i have --net=bridge, and i thought i did that for reasons, but i can't remember them.  curious what you use.
<skay> bdx: yeah, that is what we are doing with our django app
<skay> roadmr: o/
<lazyPower> kwmonroe - i've since removed that as i'm not using the 'local' provider.
<lazyPower> kwmonroe - however, what that does is it places the container directly on the hosts networking bridge, instead of running behind a service NAT
<roadmr> heya skay !
<bdx> skay: yea, its just not something that is easily decoupled from the django application ... I've been on a mission to do so for a while, and the more I look at it, the more it just doesn't make sense to do so
<bdx> skay: how are your other apps interacting with celery? from their native codebase?
<skay> bdx: unknown yet, we are just starting to talk about moving some things out of our big django app in to smaller apps, some of which will be go
<bdx> skay: usually calls to celery will happen from the applications code base
<bdx> this is difficult to decouple
<skay> bdx: before I JFDI making the actual service I want to spec out how we'll deploy in just enough detail to make sure I won't shoot us in the feet
<skay> bdx: hmm, so I guess no charm
<skay> or just a setting, like you said, and it can live in the same application container with whatever app
<bdx> skay: I've charmed up a bunch of django apps that use celery
<skay> bdx: we have one that does as well
<skay> as I mentioned, I'd like to stop using the django admin for looking at the task queue
<skay> I'm not sure that package is under active development (let me go find the github issue I remember)
<kwmonroe> ack lazyPower.  that's good to know.  it explains why my charmbox network dies whenever i switch networks.  methinks natting might keep them alive.
<Prabakaran> Hello Team, Is there any interface available @ interface.juju.solutions for apache vhost configuration?
<lazyPower> Yeah, the only reason we did the --net=bridge was for the old local provider integration. Since we are primarily on juju2 these days, we should be able to remove that stanza from the alias and have a good time.
<skay> https://github.com/celery/django-celery/issues/391
<skay> and some other issues. tests failing with django 1.9, etc.
<lazyPower> kwmonroe - at some point, in the distant future when we have -spare cycles- we should pair and hax on getting it functional with the lxd cloud, and see what that looks like
<kwmonroe> sounds like a summit break-out
<kwmonroe> see you in septiembre
<bdx> skay: yea, looks like you are going to be dealing with a few different issues moving forward if you are trying to be on latest and greatest django/celery/etc/etc ..... if you want these things to start moving in your direction .... you have to be the change you want to see
<bdx> skay: otherwise, just wait
<skay> bdx: ha, I've got a django 1.6 app and want to get to at least LTS
<skay> bdx: nod
<kwmonroe> Prabakaran: i don't see anything related to vhost data currently.  it looks like the http interface only passes hostname/port.  the apache2 charm handles vhosts during its config-changed hook (so for example, changing the apache2 port would update the defined vhost config with the new value)
<bdx> skay: yea, I just ransacked through a few django apps bringing them to latest and greatest, it was pretty rough to say the least
<bdx> skay: you're on the right track, map out the deps for everything, and where the different projects are at (check that the desired feature sets align)
<lazyPower> kwmonroe - iirc the reverse proxy relation sets up a vhost, but you dont get to specify the vhost information. its a templated one shot reverse proxy config
<lazyPower> so for example, if you're hosting static assets out of s3, and want to rewrite anything in the /asset url, it doesn't currently support that.
<bdx> skay: then just go through and do a timeline appraisal (expected dates when critical issues/deps will be fixed)
<lazyPower> so there's potential for a contribution there
<bdx> skay: this will give you an overhead view of what/when/where/why/how, then you can start putting the pieces together, knowing which ones will play nice, and when, why
<skay> bdx: nod
<kwmonroe> yup, +1 lazyPower
<Prabakaran> my requirement is to feed this vhost file http://pastebin.ubuntu.com/21160463/ to apache2 charm .. is it possible to do from my reactive code? Is there any interface/layer that helps with this scenario?
<kwmonroe> Prabakaran: there is not anything in the interface registry to help your scenario, but it should be possible to use your pastebin with apache2 like this:  juju set apache2 "vhost_http_template=$(base64 < my_vhost_template.txt)"
<kwmonroe> Prabakaran: to set the vhost template from a connected reactive charm, i *think* you might be able to use apache2's vhost-config relation.  according to the apache2 readme (https://jujucharms.com/apache2/, section "Using the vhost-config relation"), you could run "relation-set vhosts="foo" from your reactive charm when website.available.
<Prabakaran> Thanks kwmonroe, let me try this
<lazyPower> i'm pretty sure you'd need to base64 encode that, i recall the apache2 charm making heavy use of base64 encoding to ensure it preserves linebreaks and what not
<lazyPower> might be worth while to peek in the relationship hook and see what its doing
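A rough reactive-charm sketch of what kwmonroe and lazyPower describe here (base64-encode the template and hand it over on the relation). The endpoint name 'website', the template path, and the exact structure apache2 expects under 'vhosts' are all assumptions to verify against the apache2 README's "Using the vhost-config relation" section:

# Hedged sketch only -- not taken from the apache2 charm or any published interface layer.
import base64
import yaml

from charms.reactive import when
from charmhelpers.core.hookenv import relation_ids, relation_set


@when('website.available')
def send_vhost(website):
    # Read the local vhost template and base64-encode it so linebreaks survive the relation.
    with open('templates/my_vhost.tmpl', 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('utf-8')
    # Assumption: apache2 wants a yaml list of {port, template} dicts in 'vhosts'.
    payload = yaml.safe_dump([{'port': '80', 'template': encoded}])
    for rid in relation_ids('website'):
        relation_set(relation_id=rid, vhosts=payload)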
<lazyPower> kwmonroe  - if you have a spare cycle, i'd appreciate some eyeballs on https://github.com/juju-solutions/layer-certificate-authority/pull/1
<Guest_94843> Allah is doing
<Guest_94843> sun is not doing Allah is doing
<aisrael> Well, that was fun
<kwmonroe> devops are not doing Juju is doing
<phy1729> Spamming before I joined?
<lazyPower> phy1729 yep
<aisrael> Guest_94843
<aisrael> was the nick
<phy1729> I'll stick around for a while if you don't mind, but he doesn't seem to hit the same channel twice. Continue with your magic juju :)
<lazyPower> Thanks phy1729 for the quick response
<lazyPower> appreciate the backup in case it happens again
<aisrael> TIL: don't run `charm build` in your $HOME
<aisrael> It doesn't do any damage, but it sure does take a long time to fail.
<kwmonroe> lol
<lazyPower> yuuuup
<lazyPower> gotta love that shallow deep copy ;)
<jcastro> balloons: tell me about this juju snap in the store
<jcastro> can I whole hog replace the deb?
<lazyPower> jcastro - iirc mattyw created that snap
<jcastro> juju-nskaggs                            2.0-beta13                    nskaggs
<jcastro> it looks brand new to me
<mattyw> lazyPower, jcastro, I've put no juju snap in any store
<lazyPower> ooo, ok. that was just prior work on the pi then
<lazyPower> disregard my fallible memory
<jcastro> yeah it's like less than a day old afaict
<mattyw> lazyPower, I only ever built it for the pi
<mattyw> lazyPower, I never put it in the store because there wasn't a store in those days ;)
<kwmonroe> lazyPower: pr looks fine - want me to merge, or do you have other eyeballs on it?
<lazyPower> go for it
<kwmonroe> donezo
<lazyPower> ta kwmonroe
<kwmonroe> petevg: we're in a pickle.  the tox.ini in bigtop-base (https://github.com/juju-solutions/layer-apache-bigtop-base/pull/28) gets overridden by the base layer tox.ini.
<kwmonroe> (https://github.com/juju-solutions/layer-basic/blob/master/tox.ini)
<kwmonroe> this is bad petevg, because charms built with the bigtop base layer *will* get the ./tests/unit/* stuff from bigtop-base, but won't have the unit_test_reqs installed, so tox fails on all bigtop charms.
<petevg> kwmonroe: That is bad :-/
<petevg> kwmonroe: Do we have a way to exclude directories from being inherited in a layer?
<kwmonroe> petevg: i think there's an ignores option.. i saw beisner mention that recently.  yo beisner, how does one use ignores in a layer?
<kwmonroe> stand down beisner.  got it.
<kwmonroe> petevg: pretty simple, http://paste.ubuntu.com/21178635/
<petevg> kwmonroe: Cool. Will add it. Thx. :-)
<petevg> kwmonroe: Just one quick question. When two layers contain a file, do you know how we handle the conflicts? I would expect layers to be able to overwrite files from a layer that they inherit from. But that would mean that layer-apache-bigtop-base's tox.ini would clobber the one from the base layer. And it sounds like the opposite is happening ...
<kwmonroe> i agree with your expectation and observation petevg.  and i thought the layer ordering was fixed with https://github.com/juju/charm-tools/issues/218
<petevg> kwmonroe: Interesting. Just to be safe, I think that I'm going to rename "tox.ini" to "tox-unit.ini", and add it to the ignore list.
<petevg> We'll still be able to run tests with "tox -c tox-unit.ini"
<kwmonroe> ah - cool petevg.  smells like a readme update too ;)
<petevg> Just working on that now :-)
<kwmonroe> petevg:  may want to add unit_test_reqs.txt to the ignore list while you're in there..
<petevg> Done :-)
<petevg> kwmonroe: Changes pushed. Going to spend some time doing some additional building/testing to make sure that things are okay, though.
<petevg> kwmonroe: Everything looks good from my end. I can build charms, and the right stuff gets ignored from the bigtop base layer. lmk if you see any issues on your end.
<kwmonroe> roger that petevg - thx!
<mbruzek> ping bdx
#juju 2016-07-28
<kjackal> Hello Juju World!
<rick_h_> kjackal: hello back at ya
<magicaltrout> late night rick_h_ ? :)
<magicaltrout> oooh
<magicaltrout> sprint in holland
<magicaltrout> hows the cheese and clogs?
<magicaltrout> and windmills
<magicaltrout> and tulips
<rick_h_> magicaltrout: it's a party
<rick_h_> wheee
<magicaltrout> hehe
<ejat> 2016-07-28 08:04:34 INFO config-changed subprocess.CalledProcessError: Command '['/opt/opendaylight-karaf/bin/client', '-r', '20', '-h', 'localhost', '-a', '8101', '-u', 'karaf', 'feature:list']' returned non-zero exit status 1
<ejat> odl-controller charm
<ejat> anyone can help ?
<magicaltrout> ejat: i can't but can you ssh into the odl unit?
<magicaltrout> you could just run that command manually and see what happens
<ejat> $ /opt/opendaylight-karaf/bin/client -r 20 -h localhost -a 8101 -u karaf feature:list
<ejat> client: JAVA_HOME not set; results may vary
<ejat> Logging in as karaf
<ejat> 175 [pool-2-thread-2] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at localhost/127.0.0.1:8101 presented unverified key:
<ejat> Error executing command: java.lang.NullPointerException
<magicaltrout> I'd check with the chaps over on #karaf ejat
<magicaltrout> I know the guys who write ODL hangout over there and can probably offer some insight
<ejat> magicaltrout: thanks
<magicaltrout> no problem
 * ejat trying to link the odl-controller with the conjure-up openstack based
<magicaltrout> well the unit is trying to ssh into itself
<magicaltrout> so I'm not sure why that would fail
<magicaltrout> the karaf client gets run all over the place in standard karaf installs
<ejat> have u try the odl-controller charm ?
<magicaltrout> I haven't, but I'm a karaf user so I know my way around karaf a bit
<ejat> magicaltrout: owh .. thanks for da assistance
<magicaltrout> that odl charm is at revision 1
<magicaltrout> so I guess its probably rough around the edges ;)
<ejat> magicaltrout: is it similar to this : https://issues.apache.org/jira/browse/KARAF-3492
<magicaltrout> yeah looks it ejat
<ejat> 10 revisions
<ejat> by James Page 2016-05-18
<ejat> need to ping jamespage
<magicaltrout> all that hook is trying to do is list the features
<magicaltrout> then install the missing stuff
<magicaltrout> but clearly there is a key verification issue even though it seems to be told to accept them all
<magicaltrout> that said
<magicaltrout> it gets ODL from here:
<magicaltrout> https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.2.2-Helium-SR2/distribution-karaf-0.2.2-Helium-SR2.tar.gz
<magicaltrout> so it could also be a funky upstream problem
<magicaltrout> because I saw similar discussed over on #karaf a few weeks ago
<magicaltrout> you could get that package and verify that the client works
<ejat> 2016-07-28 09:11:55 INFO install Cloning into '/opt/distribution-karaf-0.2.2-Helium-SR2.tar.gz'...
<ejat> 2016-07-28 09:11:56 INFO install fatal: repository 'https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.2.2-Helium-SR2/distribution-karaf-0.2.2-Helium-SR2.tar.gz/' not found
<ejat> 2016-07-28 09:11:56 INFO worker.uniter.jujuc server.go:173 running hook tool "juju-log" ["-l" "WARNING" "Install source attempt unsuccessful: Command '['git', 'clone', u'https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.2.2-Helium-SR2/distribution-karaf-0.2.2-Helium-SR2.tar.gz', u'/opt/distribution-karaf-0.2.2-Helium-SR2.tar.gz', '--branch', 'master']' returned non-zero exit status 128"]
<ejat> 2016-07-28 09:11:56 WARNING juju-log Install source attempt unsuccessful: Command '['git', 'clone', u'https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.2.2-Helium-SR2/distribution-karaf-0.2.2-Helium-SR2.tar.gz', u'/opt/distribution-karaf-0.2.2-Helium-SR2.tar.gz', '--branch', 'master']' returned non-zero exit status 128
<ejat> 2016-07-28 09:11:56 INFO install Created symlink from /etc/systemd/system/multi-user.target.wants/odl-controller.service to /lib/systemd/system/odl-controller.service.
<ejat> not found ? but i can download it through browser
<magicaltrout> well
<magicaltrout> that link isn't a git repository
<magicaltrout> so i don't know what its doing, but whatever it is, its not right :)
<magicaltrout> last time i checked you can't git clone a tarball
<ejat> https://wiki.opendaylight.org/view/OpenDaylight_DLUX:DLUX_Karaf_Feature#Helium
<ejat> magicaltrout: http://paste.ubuntu.com/21259206/
<ejat> the result differ when i manually download n extract the file
<magicaltrout> did you run ./karaf first?
<ejat> after ./start
<ejat> the same error occur
<magicaltrout> dunno then
<magicaltrout> that is an upstream problem, the charm seems to have problems as well. I suggest you crack out your best hacking skills ;)
<ejat> magicaltrout: gtg 1st .. will try to continue tonight n ping jamespage :)
<magicaltrout> good luck chief!
<jcastro> balloons: tell me about this juju snap
<balloons> jcastro, sssh! The secret is not supposed to be out yet :-)
<balloons> lol, it's an open secret by now I guess
<jcastro> it's in the store though, I see it
<balloons> jcastro, right. Install it with --devmode and enjoy
<jcastro> oh, unless you made it canonical only and I just basically ruined your surprise
<balloons> it's beta13, and should just work
<jcastro> ah, I couldn't bootstrap, but I didn't install with --devmode
<balloons> jcastro, right. I need to rebuild it, but also, there's a rule Jamie is working on that I need in order to work under strict confinement
<balloons> it's really very close
<balloons> jcastro, also note, you should juju add-credential if you want to use a cloud
<jcastro> even for lxd?
<jcastro> balloons: I had an idea if this works
<jcastro> balloons: do you have a Fedora machine? I am wondering if this would be a nice way to get juju onto non-ubuntu distros
<jcastro> magicaltrout: if you want to come to the summit I need you to register on summit.juju.solutions at your convenience!
<magicaltrout> talk submissions aren't enough ? :P
<balloons> jcastro, absolutely it would be. The constraints about the server still exist, but suddenly the client can run anywhere
<magicaltrout> balloons: that is very cool
<lazyPower> magicaltrout - i think we need the RSVP for headcount reasons
<jcastro> yeah I need an accurate number for food
<jcastro> and booze. :)
<jcastro> balloons: that's fine, that's a huge improvement for us though
<magicaltrout> done
<balloons> jcastro, lxd doesn't need credentials no
<balloons> it 'just works'
<jcastro> so lets say this snap works awesome
<magicaltrout> juju on arch, opensuse & fedora will be very good
<jcastro> what reason would anyone need to use the deb for the client in xenial? For normal developers though, I understand if like, enterprise people will want to use the archive, etc.
<balloons> jcastro, the demo is in a couple hours, but yes.. I have opinions :-)
<jcastro> hah sorry man, I didn't mean to ruin your surprise
<magicaltrout> lol
<balloons> jcastro, I thought someone spoke up.. You just found it in the store eh?
<jcastro> but since I ruined it, will you be doing edge and beta channels as well? I am dreaming of a non-PPA Juju experience for myself
<balloons> I'd been holding back on publishing it
<jcastro> juju-nskaggs                            2.0-beta13                    nskaggs              -        juju client
<balloons> jcastro, I would like to do per commit pushes to edge, etc, yes
<jcastro> man, if we could use the channels in the snap store to do like RCs and whatnot, dude, that would seriously improve my life
<balloons> jcastro, do let me know if it works for you. I've had only one other person trial it.. and myself on 2 machines
<jcastro> balloons: right after this call
<jcastro> balloons: bootstrapped worked, doing a deploy now
<jcastro> balloons: realtime-syslog-analytics has deployed
<balloons> jcastro, pretty slick eh?
<jcastro> balloons: aha! tab completion doesn't work in your version
<jcastro> and I just got it back in vanilla juju so I'm not giving it up
<ejat> jamespage: r u here?
<lazyPower> ejat - the openstack charmers are attending  a sprint in scandinavia this week. It may be prudent to reach out over the mailing list with questions, as they are much more likely to be seen by the interested parties.
<ejat> lazyPower: ok thanks for the info
<lazyPower> np, if there's anything non-specific ot openstack i may be able to lend a hand
<ejat> lazyPower: its odl-controller
<lazyPower> ah, I'm not familiar with that particular charm. Sorry I'm not much more help here
<ejat> no worries
<lazyPower> hey mbruzek - question for you wrt this post i'm hacking on. One thing i'm noticing is it would be great to have access to the reactive states to control templating workflow. If i'm going to hack that in, should we perhaps extrapolate that into a helper for charm-helpers? eg: build a dictionary that consists of:  config data, relation data, state data, all contained under their sub-keys of the dict?
<lazyPower> i guess thats not as straight forward as i was initially thinking.
<lazyPower> disregard
<geetha> Hi, http://paste.ubuntu.com/21260547/ this is my metadata.yaml. when I run `charm proof` against the charm, we are getting the 'E' message: "E: Unknown root metadata field (terms)" and "E: Unknown root metadata field (resources)"
<geetha> Can any one please suggest the solution for this?
<magicaltrout> its just charm tools out of date geetha
<magicaltrout> don't worry about it
<balloons> jcastro, lol! Indeed it actually won't since I have NO debian packaging. I've no idea how I would enable that. Let me ask
<geetha> which is the latest version of charm tools?
<geetha> will upgrading charm tools resolve the issue?
<magicaltrout> don't think so
<magicaltrout> its only a proof, you are free to ignore it
<magicaltrout> i'm sure by 2.0 GA it will be resolved
<geetha> oh ok.. thank you:)
<aisrael> I believe there's an open bug to fix that proof error
<aisrael> https://github.com/juju/charm-tools/issues/190
<aisrael> So, fixed but maybe not released
<magicaltrout> ah see! what do i know?! ;)
<magicaltrout> its been a few weeks since i ran charm proof
<magicaltrout> it may have been released by now
<lazyPower> mbruzek - bitesized review for you when you've got a chance https://github.com/juju-solutions/layer-docker/pull/60
<zeestrat> Hey, anyone know how to add regular users to the controller administrator group in Juju 2.0-beta13? I can't see any way to elevate/grant users to the admin group so they can create models.
<jcastro> evilnickveitch: are you changing the title stubs for articles in the docs?
<jcastro> ie. config-manual -> clouds-manual
<evilnickveitch> jcastro, we actually removed all the old config pages.  All the info was in the credentials page
<evilnickveitch> but then we had to make new pages for some of it anyway
<evilnickveitch> so, they aren't just renamed, they are completely different pages
<jcastro> I just don't get why we wouldn't have the existing pages reditect to the new ones
<jcastro> or at least just all the old config pages to the new creds page
<evilnickveitch> jcastro, I never said we shouldn't
<jcastro> evilnickveitch: can you take an item to file an RT to get those fixed?
<evilnickveitch> jcastro, yeah. I don't think it pays to be too specific as the docs aren't really finished, but I can get all the config-* pages pointed at clouds
<jcastro> yeah the config pages in particular is all I care about
<evilnickveitch> ok
<jcastro> if we had a fancy "we can't find foo so here's where we think you should go" would be one thing
<jcastro> but right now if you 404 we kind of leave people with no place to go
<jcastro> evilnickveitch: from digging in GA the config pages had about 3k views in the past 30 days
<evilnickveitch> jcastro sent. The last one got actioned pretty quickly.
<jcastro> excellent, thanks man!
<bdx> icey: whats up? Where did you leave off with charm-vault? - Is there a consul charm that interfaces with it, or did it not get that far?
<bdx> cholcombe:^
#juju 2016-07-29
<cholcombe> bdx, as far as i'm aware it works
<cholcombe> bdx, are you having an issue with it?
<kjackal> Hello juju world
<lazyPower> mornin #juju o/
<icey> bdx: it doesn't 100% work as it doesn't get the info from the existing consul charm; I was working on a slight fork of the existing consul charm to be an agent charm, ie: local subordinate consul that doesn't run the gui but didn't get around to finishing / publishing it
<lazyPower> icey - yeah, we really do need to upgrade that charm. thing is we dont have any consumers in ~containers anymore now that we've dropped the swarm work
<icey> lazyPower: I have a vault charm that is a consumer :)
<lazyPower> and with swarms new self-contained bidniss shipping with docker.
<icey> well, would be if it related correctly :-P
<lazyPower> heh heh
<lazyPower> should be simple, its just ip:port
<icey> but lazyPower the consul docs suggest running a consul agent on EVERY machine and querying your local agent, rather than a remote agent...
<icey> basically no consul client has HA support where you can give it several IPs
<icey> :-/
<lazyPower> i mean, thats fine, it works either way
<lazyPower> you can query remotely or query via the local proxy
<lazyPower> it all goes to the same place
<icey> lazyPower: but querying remotely means I'm down if the node I'm talking to goes down since no HA love
<lazyPower> thats a possibility, yes
<icey> at least querying locally means if my machine is down nothing is different ;-)
<lazyPower> i guess? it'll buffer your writes
<lazyPower> but not your reads
<icey> lazyPower: yeah; I decided to work towards the doc suggested version of local agent but never got to finish that work; making the vault charm I've got use a remote consul shouldn't be too bad, besides the fact that it (vault) will not let you configure multiple consuls to talk to
<lazyPower> making a local agent should be mostly done. fork the charm, convert it into a sub and drop the UI bits, add the relation and the template logic and you're done right?
<icey> lazyPower: yeah
<lazyPower> also i'm not 100% on it not buffering reads, it may give you a cache
<icey> OH
<icey> and add a relation
<icey> for the consul-agent
<lazyPower> yep
<icey> otherwise, yeah it's all there
<lazyPower> seems like that would be short sighted for them to not give you a minimal cache of the kv data
<lazyPower> so i bet they do
<lazyPower> but i dont recall having read that anywhere
<jcastro> lazyPower: the conjure up instructions for observable kubernetes doesn't seem to work for me
<lazyPower> jcastro - ok, can you describe whats happening?
<jcastro> it seems like it's trying to install a package?
<jcastro> jorge@ivory:~$ conjure-up cs:~containers/observable-kubernetes
<jcastro> Reading package lists...
<jcastro> Building dependency tree...
<jcastro> Reading state information...
<jcastro> E: Unable to locate package cs:~containers
<lazyPower> ah you missed bundle/ in the url
<lazyPower> cs:~containers/bundle/observable-kubernetes
 * lazyPower checks the readme
<jcastro> nope
<jcastro> same error
<jcastro> and I am following the readme
<lazyPower> hmm i wonder if we missed some metadata on our last publish or something
<lazyPower> stokachu - ping
<lazyPower> jcastro - one thing that i know is a grey area is we dont have that path under CI, so we've lost visibility into if we break that route. it should 'just work' but we're also manually having to set metadata points so conjure knows where to look for the bundle
<jcastro> hmm, how do you guys feel like just all of us concentrating on a working bundle, updated revs across the board today?
<lazyPower> and i think thats the core of the issue
<lazyPower> jcastro - i'm eyeballs deep in this blog post, but i'm getting close to finishing. I think ~ 11/12 i'll be done.
<jcastro> ok
<jcastro> I guess I'll start my post but with the native bundle
<stokachu> jcastro: no cs:
<stokachu> just ~containers/observable-kubernetes
<stokachu> jcastro: oh i see a bug in the code, we check for cs: and not cs:~
<lazyPower> stokachu thanks for taking a look
<stokachu> np
<jcastro> lazyPower: ok so check this out, adding a # passes charm proof in the bundle
<jcastro> so I will document the bundle itself for constraints
<jcastro> and then by default not define any, so the primary use case should be pull, modify, deploy, not deploy out of the store as-is. I'll update the readme and whatnot
<jcastro> and then do a PR
<lazyPower> jcastro - ok, sounds good to me. I know that the target was intended to give machine constraints, but given the context of what we've found i think thats a better option
<jcastro> lazyPower: does `juju run --application kubernetes is-leader` usually take a long time?
<lazyPower> a couple seconds at most
<jcastro> readme is out of date on that too, fixing
<jcastro> oh nm, cluster is not up yet
<jcastro> lazyPower: upgrade worked: Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1", GitCommit:"fe4aa01af2e1ce3d464e11bc465237e38dbcff27", GitTreeState:"clean"}
<lazyPower> jcastro - awesome. v1.3.3 is latest however. you did 1.3.1?
<lazyPower> but i know we should be compliant. The only downside is what we've already discussed, the missing federation bits.
<jcastro> yeah I wanted to see how that worked
<lazyPower> Pretty spiffy how it tears down and reconstructs in place right?
#juju 2016-07-31
<jobot> hi, my charms are not deploying with the message "waiting for agent initialization to finish", juju-debug log shows this over and over http://pastebin.ubuntu.com/21591371/
<slim> hello guys, I have a question: Can Juju 2.0 be used for deployment of a security critical app? I know that it is discouraged in the docs but they seem a bit outdated
<slim> Anyways, where can I find some documents regarding Juju's security decisions and design
<slim> anyone? :)
<slim> some experiences or opinions maybe?
<slim194> hello guys, I have a question: Can Juju 2.0 be used for deployment of a security critical app? I know that it is discouraged in the docs but they seem a bit outdated... Anyways, where can I find some documents regarding Juju's security decisions and design
<bdx> slim194: in what respect?
<cafaroo> Im having some problems bootstrapping juju on an Proliant SL170z. I get failed deployment and the following error "An error occured handling 'sda': OSError - [Errno 6] No such device or address: '/dev/sda2'". Anyone that has had the same error?
<cafaroo> I think this may be related to https://bugs.launchpad.net/curtin/+bug/1562249 (Had this problem with cciss before with other hardware). Any clues?
<mup> Bug #1562249: Failed to deploy machine with HP Smart Array Raid 6i <curtin-clear-holders> <curtin-sru> <landscape> <curtin:Fix Committed> <Landscape Server:Invalid> <MAAS:Invalid> <https://launchpad.net/bugs/1562249>
#juju 2017-07-24
<veebers> Drat, I didn't realise that my debugging of the juju github merge check job was ran across a bunch of PRs. If you see a comment/check from the bot 'github-check-merge-juju-pl' please ignore it, sorry for the noise
<jinhoooman> LMAO WTF IS JUJU
<pranav> tinwood: PM?
<tinwood> hi pranav
<rick_h> ak_dev: will give it a go this morning.
<stormmore> o/ juju world
<rick_h> welcome to the team stormmore :)
<stormmore> thanks  rick_h :) hope I can live up to the team's reputation ;-)
<jeiooioi> wtf is juju i dont get it its something you run on servers or sth? wtf
<ak_dev> jeiooioi, jinhoooman : it's an orchestration tool to manage deployments / VMs on the cloud or on bare metal
<ak_dev> Basically, it turns the task of deploying complex applications from hours of setting stuff up into running a single command
#juju 2017-07-25
<ak_dev> thedahac, tvansteenburgh, rick_h, kjackal : if you have not already deployed, please do try to deploy this new bundle
<ak_dev> https://usercontent.irccloud-cdn.com/file/NXjEsqoq/bundle.yaml
<ak_dev> you won't have to run any extra commands, just deploy :)
<ak_dev> thanks again
<kjackal> hi ak_dev testing it on aws now. thank you
<ak_dev> kjackal: hey, thanks for testing it :-)
<ak_dev> some pods dont run on these though, maybe you can debug better
<kjackal> ak_dev: you are right, containers do not start, although the bundle deploys...
<kjackal> let me open an issue and put there some logs with what I see
<ak_dev> kjackal: some of them start right?
<ak_dev> yeah sure that would be great!
<kjackal> ak_dev: yes some pods seem to start, others crashloop
<ak_dev> kjackal: yeah cool, so thats exactly what happened with me too
<ak_dev> great then, so it works for you too (kind of)
<kjackal> ak_dev: any preference where I should open this issue?
<kjackal> ovn charm layer?
<ak_dev> kjackal: no idea, I dont exactly know where the issue is, or what exactly it is
<ak_dev> do you think it could be something with those pods?
<ak_dev> or with OVN?
<kjackal> ak_dev: I do not know right now. I'll open the issue under ovn layer and we can expand to whatever component has the real problem
<ak_dev> kubectl describe pod <pod_name> gave me something like "connection refused to the pod ip : port" for health check
<ak_dev> kjackal: yeah, sure
<ak_dev> kjackal: I also forgot to tell you, the open port thing works, obviously you might have noticed that now, thanks for the help you provided before :-)
<kjackal> ak_dev: I saw that, I think the open ports you are doing is a bit aggressive. :)
<ak_dev> kjackal: haha yeah
<ak_dev> but i noticed once, when I deployed without all those ports open, services werent starting
<ak_dev> maybe poor judgement on my part, will sure fix it after the more important problems get solved, i think
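On the "aggressive open ports" point, the narrower approach would look something like the sketch below; which ports the OVN charm actually needs is still for ak_dev to pin down, so the list here is only illustrative (6641/6642 are the usual OVN northbound/southbound DB ports):

# Hedged sketch: open only the ports the service really needs instead of a broad range.
from charmhelpers.core.hookenv import open_port

for port in (6641, 6642):
    open_port(port, protocol='TCP')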
<armaan> jamespage: Hello, I am running Mitaka (deployed with juju) on Trusty and now I want to upgrade to Newton. IIRC, for that I will have to first upgrade Trusty to Xenial. Could you please let me know whether apt-get dist-upgrade is enough for upgrading trusty to xenial and I don't need to upgrade charms?
<jamespage> armaan: the trusty->xenial hop is not currently possible - there are a few nuances
<jamespage> specifically lxc vs lxd
<jamespage> and the fact that juju still thinks the application series is trusty
<jamespage> that said, the charms do support both series
<armaan> jamespage: Ok, so how could I upgrade a production Mitaka environment to Newton?
<armaan> which is running on trusty
<jamespage> you can't at the moment
<armaan> jamespage: Ok, so in your opinion what should be upgrade strategy for me here?
<jamespage> armaan: limited choice atm
<jamespage> armaan: build a second xenial based cloud, migrate workloads
<jamespage> or I think the juju team are doing some work on a documented process (i.e. not fully automatic) for doing in-place - but thats WIP
<armaan> jamespage: ok and this is also what canonical does for their customers who want to upgrade from Mitaka?
<jamespage> I think there have been a few who have taken that first route, and some who are waiting for the second option
<jamespage> armaan: was this a juju 2.x based deployment?
<jamespage> or is it the older 1.25.x series?
<armaan> jamespage: older 1.25.x series?
<jamespage> armaan: which version of juju are you using?
<armaan> jamespage: juju 1.25
<jamespage> armaan: again there is not currently an upgrade process from 1.25.x to 2.x.x
<jamespage> that's part of the work the juju team have on
<armaan> jamespage: ok, thanks for letting me know.
<kjackal> ak_dev, the kubernetes team has set aside some time in this two-week sprint to support the work you are doing on ovn. Ideally we would like to have your bundle fully functional at the end of this two-week sprint. After that we can distil the tasks needed to have your bundle and cdk converge.
<ak_dev> kjackal: hey! sorry was out for a while, that great! who do I contact for help from the team?
<ak_dev> Yeah, we will try to complete this by the end of the next two weeks
<ak_dev> thanks for helping out :-)
<kjackal> ak_dev: EU time its me and US time you can ping Cynerva or ryebot, but we show a preference towards opening issues
<ak_dev> kjackal: oh okay, cool, I will open an issue whenever I feel the need to
<kjackal> thanks ak_dev
<ak_dev> kjackal: no, thank you :-)
#juju 2017-07-26
<bdx> does juju big-data have its own irc channel?
<bdx> do the big data charms offer a way to decouple hdfs and hadoop?
<bdx> http://docs.ceph.com/docs/kraken/cephfs/hadoop/ < anyone looked into this
<kjackal> hi bdx, this is the channel for big data charms as well. kwmonroe is leading the big data effort.
<kjackal> bdx: the abstraction hadoop processing offers over the storage solution is included in the slave+namenode node of the hadoop processing bundle
<kjackal> bdx: would you be interested in cephfs as an alternative?
<kjackal> bdx: you might want to make this an official request by opening a feature request issue against the hadoop processing bundle or just send an email to the juju list
<kjackal> bdx: just saw you already brought that to the list. Thank you
<armaan> jamespage: Hello, I am hitting this bug: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1684040
<mup> Bug #1684040: "Missing relations: network-service" is not covered in metadata nor documentation <OpenStack neutron-gateway charm:New> <https://launchpad.net/bugs/1684040>
<armaan> Mitaka + Trusty environment
<jamespage> armaan: do you have the relation between nova-cc and neutron-gateway?
<armaan> jamespage: After adding the nova-cc and neutron-gateway relation, i get a 'Incomplete relations: network-service' message.
<jamespage> armaan: can you make sure you have a relation between nova-cc and neutron-api as well
<jamespage> armaan: your message is 'incomplete' rather than 'missing' which has a different meaning - so not quite the same bug
<jamespage> I think we should probably add the neutron-api - nova-cc relation to REQUIRED_INTERFACES - its been optional in the past to support nova-network based deployments
<jamespage> but that feels like a thing of the past
<armaan> jamespage: Yes, a relation exists between nova-cc and neutron-api as well.
<armaan> jamespage: ahhh, wait
<armaan> jamespage: I guess that was the issue, this fixed the problem "juju add-relation nova-cloud-controller neutron-api"
<armaan> thanks! :)
<jamespage> yw
<armaan> jamespage: After adding the relation earlier, neutron agents are failing to report their state. http://paste.openstack.org/show/616542/
<armaan> jamespage: All the services look fine to me http://paste.openstack.org/show/616551/
<noops> current method of deployment which i m using : juju deploy /charm/mycharm --resource install = "tar file" -to <any-unit> . How can I do that using yaml file if i don't want to provide resource through cli
<anrah> if you mean bundle? I don't know whether that is possible.. At least it was not some time ago
<noops> yes i mean that
<noops> So providing through cli is the only way ?
<anrah> https://bugs.launchpad.net/juju/+bug/1623217 at least according to that Bug it is not yet resolved
<mup> Bug #1623217: juju bundles should be able to reference local resources <sts> <talisman> <juju:Triaged by ecjones> <https://launchpad.net/bugs/1623217>
<jhobbs> how can I specify in the 'machines' section of a bundle, which zone I want a machine in?
<jhobbs> if I want machine 0 in zone 'zone2' how do I specify that in the bundle?
<jhobbs> it seems to ignore 'zone' field and says 'zone' is an invalid constraint
<thedragon> how easy is it to set up juju?
<rick_h> jhobbs: which provider?
<jhobbs> rick_h: maas
<thedragon> open stack?
<rick_h> jhobbs: hmm yea not sure. It looks like zone is supported in the code to get a node, but not sure where that gets mapped to a valid constraint to listen to. I thought it was availability-zone or something on AWS, but normally Juju round-robins zones for unit resilience
<jhobbs> rick_h: i can do juju deploy cs:ubuntu --to zone=zone3
<jhobbs> that works fine, i just don't know how to from a bundle
<jhobbs> i will file a bug
<rick_h> jhobbs: ah! so it's not a constraint
<rick_h> jhobbs: it's a placement directive ugh
<rick_h> jhobbs: and bundles try to parse that as an application name in the bundle vs understanding that zone thing
<jhobbs> it's a constraint to maas but a placement directive to juju
<rick_h> jhobbs: so sounds like an issue in how we model that and a bug on the /juju/bundlelib probably
<rick_h> jhobbs: right, but not something juju does with --constraints "x=y"
<jhobbs> yeah
<jhobbs> ok, thanks rick_h, i'll file a bug
<jhobbs> https://bugs.launchpad.net/juju/+bug/1706704
<mup> Bug #1706704: bundles: no way to specify which zone a machine should be in <cdo-qa> <juju:New> <https://launchpad.net/bugs/1706704>
<Mmike> hi, lads. Is there a way to do a staging, manual upgrade of juju? I'm running juju 2.0.2 and I'd like to go to latest 2.2, but if I allow all of my agents to connect to the stateserver, bad-things-happen
<Mmike> so I was thinking, firewalling deployed machines, upgrading stateserver via 'juju upgrade-juju', and then 'manually' (somehow) upgrading the agents
<Mmike> is that even an option?
<ak_dev> Hi all,  I would like to say that the prototype of the OVN charm is ready, you can deploy pods successfully using that
<ak_dev> This the charm : cs:~aakashkt/ovn-13
<ak_dev> and the repo : https://github.com/AakashKT/ovn-kubernetes-charm
<ak_dev> I  would also be bringing up issues on the repo soon, as kjackal suggested
<ak_dev> Finally, would like to have you guys review the charm and give your suggestions on what can be improved  / added
<ak_dev> Thank you
<ak_dev> Mmike : Sorry i posted the message just after yours, dont have an answer to your query, but I am sure somebody else will help out :-)
<Mmike> ak_dev: no worries :) It's IRC, that's how it works :)
<ak_dev> Mmike: Yeah, still kind of new here :-)
<natefinch> marcoceppi or arosales (or anyone else who knows):  are there stories for rolling restarts, rolling upgrades, or a/b deployments in Juju yet?
<arosales> natefinch: I believe rolling upgrades aren't a juju-core base function but are modeled through actions and config given the charm author implements such a method.
<arosales> natefinch: for a/b deployments though model migration may be worth taking a look, https://jujucharms.com/docs/2.1/models-migrate
<arosales> hth natefinch
<natefinch> Thanks :)
<natefinch> oh yeah, model migration, forgot about that.  That would be nice.
<arosales> natefinch: model migration does work under the same cloud.
<arosales> If others have thoughts on rolling upgrades please chime in
<natefinch> what's the state of the art in docker support these days?
<arosales> natefinch: modeled by the author at the charm level
<natefinch> ok
<arosales> natefinch: k8s charms are pretty solid
<arosales> http://jujucharms.com/canonical-kubernetes
<natefinch> yeah, we've kind of decided against k8s because it's so complicated
<arosales> ya, depending on the scale, k8s may not be worth the overhead. But once it's running it provides nice features to keep things scaled and running
<arosales> to keep docker containers scaled and running, that is
<natefinch> right
<natefinch> docker is also a problem for us, since we use a lot of storage-based systems, postgres, cassandra, kafka, etc... and those don't play well with docker by default
<tvansteenburgh> natefinch: you can leave your storage external at first
<tvansteenburgh> e.g. as mentioned in https://insights.ubuntu.com/2017/07/20/run-django-applications-on-the-canonical-distribution-of-kubernetes/
<natefinch> does networking work on AWS yet?
<thumper> natefinch: which bit exactly?
<natefinch> like creating subnets and assigning services to a specific subnet
<thumper> you generally assign applications to spaces, not subnets
<thumper> if you create a space and put the right subnets in it, then I think it works
<natefinch> sure.... but the docs say spaces are only supported on MAAS
<natefinch> "Note: Advanced networking features, such as 'spaces`, are currently only supported by MAAS."
<thumper> spaces are kinda supported on AWS, but FSVO working
<thumper> that is a focus this cycle
<natefinch> *nod*
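For reference, a sketch of how spaces are consumed where they are supported (space name, CIDR and charm names are placeholders):

    juju add-space internal 10.0.1.0/24          # group subnets into a named space
    juju deploy mysql --constraints spaces=internal
    juju deploy wordpress --bind "db=internal"   # bind a single endpoint to the space
    juju spaces                                  # list known spaces and their subnets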
<natefinch> what about networking among containerized services?
<natefinch> thumper, arosales: ^ .   (i.e. last I heard that was still a problem)
<thumper> otp
<natefinch> kk
<arosales> natefinch: we regularly put applications in lxc containers for OpenStack
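Placing applications into LXD containers on existing machines, as arosales describes, is just a placement directive; a minimal sketch:

    juju deploy mysql --to lxd:0      # new LXD container on machine 0
    juju add-unit mysql --to lxd:1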
#juju 2017-07-27
<stub> natefinch: The coordinator layer allows a charm to do rolling upgrades and rolling restarts easily
<stub> (implementation wise it is awful, so would be good to get a hand from Juju here, or perhaps some sort of 3rd party lock service to avoid implementing it over the peer relation and leader settings)
<natefinch> stub: ahh, nice.
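For charm authors, pulling in the coordinator layer stub mentions is a layer.yaml include; a minimal sketch (the actual serialisation logic lives in the charm's reactive code on top of it):

    # add layer:coordinator to the charm's layer.yaml, e.g.
    #   includes: ['layer:basic', 'layer:coordinator']
    # then rebuild the charm
    charm build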
<armaan> jamespage: Hello, I am getting this error on my mitaka deployment , "UnsupportedRpcVersion: Specified RPC version, 1.3, not supported by this endpoint.". Should i file a bug for this?
<leoge> Hi,How can I run Juju  and Juju-gui localy?
<pranav_> Hello #juju. Facing issue fetching charm from store.
<pranav_> cannot get discharge from "https://api.jujucharms.com/terms": third party refused discharge: verification failed: no macaroon
<pranav_> full error at http://paste.ubuntu.com/25182180/
<pranav_> kjackal: Any pointers?
<kjackal> pranav_: looking at the charm now
<kjackal> pranav_: usualy you can do a charm agree
<pranav_> Thanks. We are able to pull charms without terms fine
<pranav_> We have tried with/without agreeing terms. Same issue for us
<pranav_> charm pull fails too
<kjackal> pranav_: can you try a "juju agree vtas-hso/eula/1"
<kjackal> and then the deploy again?
<kjackal> charm pull works for me
<kjackal> can you also try to do a charm logout and charm login ?
<pranav_> tried
<pranav_> same issue :(
<pranav_> kjackal: i will try with on a fresh node with a new launchpad user and see if that works
<kjackal> pranav_: here is what I try http://pastebin.ubuntu.com/25182282/
<kjackal> could you enable some debugging?
<kjackal> pranav_: eg with the --debug flag during juju deploy
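Putting kjackal's suggestions together, the usual sequence for a terms-gated charm looks roughly like this (the charm URL is a placeholder; the term name is the one discussed above):

    charm logout && charm login
    juju agree vtas-hso/eula/1
    juju deploy <charm-url> --debug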
<pranav_> kjackal: same issue still even with new launchpad user. http://paste.ubuntu.com/25182306/
<pranav_> kjackal: enabled debug in the output
<kjackal> pranav_: Ahh I see you cannot agree to the terms
<kjackal> pranav_: do you happen to know the maintainer of the charm you want to deploy?
<pranav_> its ours :)
<pranav_> kjackal: if you can point me to steps i can do them
<kjackal> pranav_: most probably the author of the terms need to grant permissions to the terme
<kjackal> *terms
<pranav_> kjackal: surprisingly was working fine till yesterday
<kjackal> I have seen this in the past once...
<pranav_> has broken since yesterday evening
<kjackal> If I am not mistaken the solution was to do a juju grant <terms> everyone
<pranav_> I found this old reference https://lists.ubuntu.com/archives/juju/2016-September/007818.html
<pranav_> ok. Let me quickly check
<kjackal> pranav_: you are saying it might be something wrong with your juju deployment?
<pranav_> I don't think so. Usually charm pull works regardless of whether you have juju model deployed
<pranav_> and this issue is occurring on multiple juju controllers
<kjackal> any news pranav_ ?
<pranav_> kjackal: juju terms doesnt have anything for terms
<pranav_> did release terms with charm. same issue still
<pranav_> juju grant* doesnt have anything for terms
<kjackal> pranav_: is there a publish?
<kjackal> or is it only release?
<pranav_> kjackal: did charm release
<pranav_> kjackal: since you were able to pull the charm, do you still feel issue is with the terms?
<kjackal> pranav_: good point. I do not know
<kjackal> let me try something. brb
<kjackal> no luck charm pull just works over here
<kjackal> pranav_: you might have more luck asking at the list or at #juju-gui (in case they have more insight on the store)
<pranav_> kjackal: Sure will try there. Thanks.
#juju 2017-07-28
<fuzzywaffler> hello - I have a local juju machine that will only expose charms to 10.x.x.x.  How can I have JuJu take advantage of the dhcp on my main NIC?
<fuzzywaffler> trying to have multiple users gain access to the services running from the juju.
<fuzzywaffler> I have an ubuntu server 16.04LTS that is running JuJu properly. The system is able to launch charms without issue. The server is headless so I am setup with SSH. SSH works fine. When I go to expose the charms that juju has created it only creates them as an internal 10.x.x.x ip using lxdbr0. I have a windows system that is running JuJu.exe 2.2.1 and I am trying to bootstrap the local machine 146.x.x.x to the windows juju. It sit
<fuzzywaffler> o have the server call a dhcp request for the charms or am I supposed to have users run juju to connect to the local juju server? It seems like others had similar issues. I tried to setup the server as MAAS but BIND9 causes an issue with lxd so maas was removed.
<noops> Hi when I am trying to deploy a charm, I am getting this error .ERROR storing charm for URL "cs:~vtas-hyperscale-ci/hyperscale-controller-2": cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": cannot unmarshal wait error response: invalid character '<' looking for beginning of value
<noops> What could be the possible reason for that?
<armaan> hello folks, Does the `neutron-openvswitch` charm needs an `openstack-origin` parameter too?
<kjackal> hi noops, how do you deploy the charm? Also you might find the --debug flag helpful
<BlackDex_> hello there i'm trying to upgrade juju (1.25.3 to 1.25.12) but i get the message "connection is shut down"
<BlackDex_> nick BlackDex
<BlackDex> i tried using --series trusty and --upload-tools
<BlackDex> that seems to try to upload the correct version, but it still gives me the same error
<BlackDex> also trying another version doesn't work either
#juju 2017-07-29
<noops> Hi, I need to create bundle using local charm
<noops> Not getting how to go for it
<noops> Hi, I need to create bundle using local charm
<noops> how can we do that
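For reference, a bundle can point at a local charm by path; a minimal sketch with hypothetical paths:

    cat > bundle.yaml <<'EOF'
    series: xenial
    applications:
      mycharm:
        charm: ./xenial/mycharm   # relative path to the local charm directory
        num_units: 1
    EOF
    juju deploy ./bundle.yaml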
<ak_dev> hi, I have a doubt regarding peer relations, if anybody is online, please do ping
#juju 2018-07-23
<stickupkid> manadart: if you get time, can you have a look at this one as well? https://github.com/juju/juju/pull/8946
<manadart> stickupkid: Yeah, it's on my list. Deep in the zone stuff ATM.
<stickupkid> manadart: sure take your time :)
<stickupkid> manadart: i'm working out how you can add-cloud with a password and it all just work
<stickupkid> I'm sure the CI server hates me :|
<rmcd> Hey guys, I've just written an interface for a couple of charms I'm writing for Spicule, and I'm coming to build a new revision of the new charms using the new interface and it's not really building. "build: Conflict: File in destination directory was modified after charm build: layer.yaml", and when I build with --force, it says it's unable to locate the interface. I'm pretty new to this, so do
<rmcd> I need to publish the interface somewhere before building the charms? Can't really see anything on the docs about it...
<stub> rmcd: Use the $INTERFACE_PATH environment variable to point to your local interfaces; interface:foo will be looked for in $INTERFACE_PATH/foo , and if it isn't found there a lookup is made.
<rmcd> So I need to copy the interface repo to that interface path before building?
<stub> rmcd: The error message (File in destination directory was modified) likely means exactly what it says, and --force is how you can force it.
<stub> rmcd: Yes, or do it the other way and set the environment variable so that your repo is what is found.
<stub> I have $INTERFACE_PATH set to $HOME/charms/interfaces, and keep my interface repo in $HOME/charms/interfaces/foo
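A minimal sketch of the setup stub describes, assuming the charm's layer.yaml references interface:foo:

    export INTERFACE_PATH=$HOME/charms/interfaces
    mkdir -p "$INTERFACE_PATH"
    # put (or clone) the interface repo where charm build will look for it
    cp -r ~/src/foo-interface "$INTERFACE_PATH/foo"   # hypothetical source path
    charm build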
<rmcd> Awesome, thank you so much for your help! :)
<hml> externalreality: pr 8956 is reviewed.
#juju 2018-07-24
<veebers> externalreality: the check merge job runs the tests in a xenial container, we should be able to tell which version of mongodb it's running easily
 * veebers checks
<veebers> externalreality: I can see it's installing juju-mongodb3.2 from xenial-updates (this matches with what I see in the make file install-deps)
<babbageclunk> I think it probably depends which series the controller is running.
<veebers> aye, we have mongodb-server-core, but that's bionic/cosmic only
<veebers> babbageclunk: if I've deployed an app under an alias (i.e. juju deploy cs:ubuntu blah) is it possible, in code, to map blah -> cs:ubuntu? (i.e. app name to charm name)
<babbageclunk> veebers: sorry, didn't see this - yeah, an application has a charm, which has a url.
<veebers> Hmm, I can see a way to get name -> charmurl
<babbageclunk> veebers: What context are you in?
<veebers> hah sweet
<veebers> babbageclunk: Deciding what extra meta data needs stored and what exists already for doing a resource-get on behalf of a charm
<babbageclunk> veebers: ah, ok - so in the unit agent?
<veebers> babbageclunk: I'm not 100% certain where it will occur, but the unit agent make a lot of sense. I'm just storing the data atm (at deploy-ish time)
<babbageclunk> oh gotcha
<manadart> Need a review for LXD cluster nodes and AZs: https://github.com/juju/juju/pull/8961
<stickupkid> manadart: i'm on it
<stickupkid> manadart: i have a question about the "default" profile, should we be using "default", or should we be using some way to get the profile name?
<manadart> stickupkid: We'll be discussing and evolving that in the course of designing profile/device pass-through.
<stickupkid> manadart: perfect, just reading your PR and that cropped up
<stickupkid> mandart: slightly OT, i was looking into cleaning up some lxd tests yesterday and noticed this https://github.com/juju/juju/blob/develop/provider/lxd/environ.go#L46
<stickupkid> manadart: turns out we never send the provider to the environ - so we should clean that up and remove it
<manadart> stickupkid: Yes; I also added some TODOs in my PR for moving logging/checking. Created a card for it yesterday.
<stickupkid> manadart: perfect :)
<stickupkid> manadart: done - LGTM
<manadart> stickupkid: Thanks.
<stickupkid> manadart: "github.com/juju/juju/cmd/juju/machine.TestPackage" that package is failing constantly now
<manadart> stickupkid: Will look.
<stickupkid> manadart: I've made a bug for it https://bugs.launchpad.net/juju/+bug/1783284
<mup> Bug #1783284: Intermittent unit test failure: machine.TestPackage <juju:Incomplete> <https://launchpad.net/bugs/1783284>
<stickupkid> I had a look last week, but I wasn't able to work out why we don't get any real good stack trace of the error
<stickupkid> In fact i tried to run the go test with -count=64 to try and force it into failing, but it didn't fail locally at all
<stickupkid> I've got a theory (probably wrong), that it's failing on a tear down
<manadart> stickupkid: Ack.
<stickupkid> manadart: how much do we want to refactor the lxd tests?
<stickupkid> the goal being, removing stubs?
<manadart> stickupkid: That is my position, but maybe too much to bite the whole lot off as a single exercise.
<stickupkid> yeah, that's what i'm thinking, maybe incremental steps...
<manadart> stickupkid: Another one: https://github.com/juju/juju/pull/8964
<stickupkid> manadart: approved, much better - less branching
<manadart> stickupkid: We should probably start back-porting LXD provider commits to 2.4 as well...
<manadart> stickupkid: Ta.
<stickupkid> manadart: yeah make sense to me
<rick_h_> morning party folks
<manadart> rick_h_: Howdy.
<rick_h_> bdx: ping when you're about
<rick_h_> kwmonroe: same to you please
<magicaltrout> kwmonroe has the holiday blues
<magicaltrout> he's gone on strike
<rick_h_> manadart: doh
<rick_h_> oops magicaltrout
<rick_h_> magicaltrout: do you have any use for lxd profile edits that we can/should be aware of for big data charms so we can spec/make it nice and awesome?
<kwmonroe> rick_h_: strike is over, i have returned from holiday.  we don't customize lxd profiles in big data charms today, but k8s does.  see this for the rigamaroll: https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD.  if/when big data charms need gpu accel in containers, or has a process that needs /proc or /sys, they might need something similar to the cdk profile.
<kwmonroe> rick_h_: any config or command that makes editing lxd profiles easier would be nice and awesome.
<kwmonroe> (i mean, juju config/command)
<rick_h_> kwmonroe: k, just wanted to check in.
<fallenour> o/
<fallenour> Im looking to build a high availability juju controller set (3) as a individual model, whats the best way to do this? I want to use a model so I can take advantage of the permissions controls
<rick_h_> fallenour: I'm confused. So basically you bootstrap, juju switch controller, juju enable-ha and you get three api servers setup and running
<rick_h_> fallenour: then any models you create/use are using those HA api servers in the controller model there
<rick_h_> fallenour: is there something else you're looking for?
<fallenour> rick_h_: Correct, I want to enable ha, but I also want to control how that's accessed. I don't remember how I did it last time, but I set up a model, and inside that model I built all of my controllers. I'm not sure how I did that
<rick_h_> fallenour: so the controller is the api server and then you can juju add-model and use juju add-user to create users and then juju grant... those users access to different models
<rick_h_> fallenour: that's how the layering functions
<fallenour> I guess a better way to put it, unless the models are divided, everyone that has access to the model can make any changes to any system inside that model. I want the controllers to be apart from model A, inside model B, and only give access to Model B to admins
<rick_h_> fallenour: so each model can be granted access indepdently
<fallenour> rick_h_: Correct. My issue is, if  I enable-HA inside the normal model, the controllers will also be inside that same model, model A, and not inside of model B,
<rick_h_> fallenour: so enable-ha only does one thing. It brings up more machines into the controller model. (juju switch controller) if you juju status on that model you can see the new machines listed
<fallenour> rick_h_: So you are saying all Ill need to do is simply create a new model, and then activate HA inside of that new model, model B, and that will do it?
<rick_h_> fallenour: enable-ha can never touch any other model
<fallenour> yes
<fallenour> YES!
<rick_h_> fallenour: I'm saying that enable'ing HA in any other model than the controller model doesn't make sense/work
<fallenour> Ok, so we are on the same page then, so the Model B (controller model) already exists upon deployment?
<rick_h_> fallenour: yes, bootstrap comes out of the box with a controller model for the controller bits (e.g. HA api servers) and a default model which you can use to deploy workloads/etc
<fallenour> so switch over to Model B(Controller model) and THEN do juju enable-ha, correct?
<stickupkid> manadart: https://github.com/juju/juju/pull/8965 these are just some clean up, that we should land, we can always work on more in the future
<rick_h_> fallenour: the controller model can never be removed
<rick_h_> fallenour: it only goes away with destroy-controller command
<rick_h_> fallenour: but the default model can be removed, new ones added, etc
<fallenour> ok so that explains a lot
<manadart> stickupkid: Ack.
<rick_h_> fallenour: so it sounds like I'd suggest you bootstrap, juju switch controller, juju enable-ha, juju remove-model default, and then follow https://docs.jujucharms.com/2.4/en/tut-users from there
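The sequence rick_h_ outlines, sketched with placeholder cloud, model and user names:

    juju bootstrap mymaas mycontroller
    juju switch controller
    juju enable-ha -n 3            # adds two more machines to the controller model
    juju remove-model default
    juju add-model workloads
    juju add-user alice
    juju grant alice write workloads   # alice can deploy here but cannot touch the controller model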
<fallenour> rick_h_: Ok, so just rebuild from scratch, all the way back from bootstrap?
<rick_h_> fallenour:  not sure what you've got so I can't say
<rick_h_> fallenour: I mean all of that is possible at any time
<bdx> those docs are looking real nice
<bdx> thank you to everyone who put effort into migrating/making the new docs.jujucharms.com
<bdx> its beautiful
<stickupkid> manadart: i get the comments, i think my PR is a better first step, before tackling the arch feature...
<stickupkid> rick_h_: help update for LXD - https://github.com/juju/juju/pull/8966
<plars> Hi, I have a strange error from a unit. We tried to remove the application, and it shows up as terminated, but it won't go away. In debug-log I see this:
<plars> machine-0: 23:06:58 ERROR juju.worker.dependency "unit-agent-deployer" manifold worker returned unexpected error: failed to query services from dbus for application "jujud-unit-sru20170725637-1": Failed to activate service 'org.freedesktop.systemd1': timed out
<manadart> stickupkid: You can lick the arch feature in less code than it takes to shim it out here.
<plars> anyone seen something like that? The current version of juju in that model is 2.2.9 but it won't let me upgrade it
<stickupkid> manadart: ok, i'll drop that commit
<manadart> stickupkid: We also don't need to shim out IsSupportedArch, because we can always just return a mock arch that gets the return that we want from that method.
<manadart> stickupkid: I am thinking to ice my logging PR too for now. What we discussed with rick_h_ means putting back some mess that we took away :(
<stickupkid> manadart: dropped that commit, so it's just a simple update to the provider
<rick_h_> manadart: do we need to change the plan?
<rick_h_> plars: no, that's a new one to me.
<plars> rick_h_: any suggestions on debugging or repairing it?
<rick_h_> plars: looking for existings bugs atm to see if there's something more to help
<plars> thanks!
<rick_h_> plars: and coming up empty...
<rick_h_> plars: can you file a new bug with details on version/cloud/what was running/etc please?
<rick_h_> plars: I mean it might be some cleanup step race condition but I'm not sure. The fact that it's 2.2.9 makes me :( but the issue is that if you can't upgrade then double :/
<plars> rick_h_: sure, tbh I'm not sure how it got in this state. We've been running very stable for a long while and got a similar error one a unit when trying to deploy. Then it started having this on an existing unit that we tried to get rid of
<plars> rick_h_: I'd be happy to upgrade, but it gives me an error that it can't because of that unit
<plars> ERROR some agents have not upgraded to the current model version 2.2.9: unit-sru20170725637-1
<rick_h_> plars: ugh, yea it's so tough to debug this stuff. Maybe we can see if we can upgrade around it or force it in some way
<rick_h_> plars: is the machine that the application on still there?
<rick_h_> plars: e.g. can we juju remove-machine --force to help put some pressure on things?
<plars> rick_h_: yes - it's maas, but I can't remove the machine because it hosts a lot of other applications/units
<rick_h_> plars: yea, that's what I was worried about
<rick_h_> plars: can they be migrated off?
<rick_h_> plars: I guess no, but figure I'll ask
<plars> rick_h_: on another model, I also have a machine that I can't remove, even with --force
<rick_h_> stickupkid: one thought sorry, can you verify that the current help text there conforms to that template we got a while ago?
<rick_h_> stickupkid: just to make sure that while we're in there we bring it up to standard across the whole thing
<plars> rick_h_: on that one, the maas machine that it was once using is gone. It just seems to silently fail
<rick_h_> plars: ? that seems odd. --force with remove-machine is a pretty big hammer that usually doesn't fail unless something is really odd
<plars> and the machine never disappears. On that one, the whole model can go if there's an easier way to force that
<plars> 2        down   10.101.49.149  nx38gq   xenial  default  Deployed
<plars> is how it shows up
<rick_h_> plars: no, we're looking to add some add-model force bits to 2.5 this cycle but I don't have it yet
<rick_h_> sorry, remove-model --force bits
<plars> that one is stuck on 2.0.2 - and can't update for the same reason. No units are even deployed
<rick_h_> plars: is it in a happy state? e.g. does the agent report ok?
<rick_h_> plars: I'm curious of model-migrations can be used to help garden up to the later versions with fixed bugs
<rick_h_> stickupkid: https://docs.google.com/document/d/1ySjCNqd0x6veLfcBetxLI9NH7qfw3xayLWJNqXMvyW8/edit specifically
<stickupkid> rick_h_: sure let me look
<plars> rick_h_: the agent for the one where I can't remove the machine?
<rick_h_> plars: right, but it shows it still there?
<plars> rick_h_: yes, if I do juju status on that model, it shows the machine is there. In reality, that's the only machine left in the model, and it's gone
<rick_h_> plars: oh I see
<plars> rick_h_: https://bugs.launchpad.net/juju/+bug/1783357 - please tell me if you need any other information, or have suggestions for debugging
<mup> Bug #1783357: Failed to activate service 'org.freedesktop.systemd1': timed out <juju:New> <https://launchpad.net/bugs/1783357>
<fallenour> anyone in here have a good contact with the maas team? Its really becoming frustrating that the system im working with keeps trying to change all of its configuration instructions mid-build. It completely defeats the purpose of asking me what domain, IP, storage config, etc if you are just going to randomly generate all of that, and change it all, including hardware zone.
<pmatulis> fallenour, consider documenting your situation and sending to the juju mailing list
<fallenour> whats the flag option for acquiring devices in a specific zone with juju?
<fallenour> is it --zone=<zonelocation>
<rick_h_> fallenour: yea check out https://docs.jujucharms.com/2.4/en/charms-deploying-advanced under "deploy --to"
<pmatulis> fallenour, 'zone' is a placement directive. it can be used whenever juju spawns a machine (commands: deploy, add-unit, add-machine, bootstrap)
<fallenour> is it --zone=<zonelocation>
<fallenour> sorry, meant to send that earlier
<pmatulis> fallenour, it is dependant on your chosen cloud provider
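Concretely, for a MAAS cloud the zone placement looks like this (zone names are placeholders for zones defined in MAAS):

    juju deploy ubuntu --to zone=zone1
    juju add-unit ubuntu --to zone=zone2
    juju add-machine zone=zone1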
<fallenour> wow, im losing my mind. Ok, next question. So ive got ha enabled, thank you very much fro that btw rick_h_ rebuilding was definitely the smarter decision, id like to integrate docker. I did just notice there was a remove-k8s, which makes me think theres a segment for kubernetes and docker. Can you provide some enlightment for me on that?
<fallenour> pmatulis: cloud provider is maas.
<rick_h_> fallenour: heh, so now you're on the bleeding edge of stuff. There's work going on to enable Juju models on k8s but it's kind of "soft launch" as things like storage and features are in progress.
<rick_h_> fallenour: definitely not ready for production infrastructure yet unfortunately
<fallenour> Oh! Also, for those deploying charms, if they keep pushing because "its not working" with maas, tell them to check to see if they imported bionic (18.04LTS). It hung up my openstack deployment because of that.
<rick_h_> fallenour: however, if you have the time/resource definitely play with it and see if it fits your needs
<rick_h_> fallenour: ah, definitely. Having the right MAAS images is vital
<fallenour> rick_h_: You know, as much as I might hate myself in the morning, you guys have helped me a ton. Ive got 3 extra servers. Ill see if I cant contribute a few pints of blood and a few more bpm to help with testing it.
<fallenour> also, the new bionic boot, it looks amazing. im not even gonna lie, its beautiful
<rick_h_> fallenour: all up to you, I just want you to know where stuff sits and as we build stuff for users to solve problems it's always <3 to get feedback that we're on the right track
<fallenour> its like 4-6 fonts smaller.
<rick_h_> lol
<fallenour> So rick_h_ pmatulis I was thinking. Id like to build a web app cluster with a separate docker cluster, both backed by ceph storage clusters, so the applications and containers can store across the ceph storage drive array across the 3 servers each, what are your thoughts?
<fallenour> from my perspective, it should give all the apps and containers access to a total of about 1.8 TB storage space, with the ability to easily swap out the drives to increase size. Are there any risks I should be aware of, and am I overlooking anything?
<rick_h_> fallenour: you're stepping into kwmonroe and tvansteenburgh's expertise there. I'm not sure
<rick_h_> basically can you setup that ceph cluster as a storage provider for kubernetes and deploy in that way?
<fallenour> kwmonroe: tvansteenburgh Can you two provide some insight? Itll be the first time ill be combining ceph storage with docker.
<fallenour> rick_h_: yea its a pretty interesting idea, especially if it pans out. completely flexible application deployment with completely flexible storage.
<tvansteenburgh> fallenour: if you're using kubernetes for your "docker cluster" then integrating with ceph will be pretty straightforward
<tvansteenburgh> if you're not, then i have no idea
<fallenour> tvansteenburgh: Um...can you clarify? Wait, easier question, is there a kubernetes for juju?
<tvansteenburgh> fallenour: yah, `juju deploy canonical-kubernetes`
<tvansteenburgh> or, for a minimal version `juju deploy kubernetes-core`
<fallenour> tvansteenburgh: how many machines does it take? I only currently have 3 physical set aside for it, is that enough?
<tvansteenburgh> fallenour, then you want kubernetes-core
<kwmonroe> dang it tvansteenburgh, i knew that one.  fallenour, here's some details on both of those: https://jujucharms.com/canonical-kubernetes/ (takes 9 machines) and https://jujucharms.com/kubernetes-core (takes 2 machines)
<fallenour> tvansteenburgh: I was just reading up on it as well, it seems a lot of thought into building it. Yea I just found that one
<fallenour> OOH MY GAWD 9 MACHINES!?
<kwmonroe> 9 times the fun
<fallenour> kwmonroe: Slow your roll there google, we poor people over here.
<fallenour> LOL
<kwmonroe> :)
<kwmonroe> fwiw fallenour, the big bundle is meant to represent a production cluster, so you have 3 etcd units, 3 workers, 2 masters.. thars 8 right thar.
<tvansteenburgh> fallenour: for the poor people we have microk8s: https://github.com/ubuntu/microk8s
<fallenour> kwmonroe: Any reason why I cant cram those onto 3 boxes?
<fallenour> kwmonroe: tvansteenburgh  I mean, 3 3 and 2.
<tvansteenburgh> fallenour: sure you can
<tvansteenburgh> fallenour: you could use the lxd provider and put it all on one machine
<fallenour> tvansteenburgh: Im sensing a downside coming.. o.o
<tvansteenburgh> there's no downside
<kwmonroe> fallenour: you could certainly adjust the bundle.yaml for canonical-kubernetes (cdk) to change num_units from 3 3 2 to 1 1 1.  but if you're gonna hack it up like that, you may as well use kubernetes-core (which we've hacked/condensed for you).
<knobby> fallenour: if that one machine falls over you're out of luck
<fallenour> kwmonroe: knobby tvansteenburgh No no no, I mean put 3 3 and 2 on 3 physical machines, one on each for each machine. so 1 1 and 1 2 2 and 2 3 and 3
<tvansteenburgh> a game of sudoku spontaneously broke out
<fallenour> so spreading all 8 over 3 machiens instead of 8 machines. I know 8 is a lot more beefy, but the demand on 8 machines for someone like myself wont reach justifying 8 for a while
<fallenour> Ooh, yea, so A, B, C Machines, etcd1 worker1 and master1 on machine A, etcd2, worker2, and master2 on machine B, and etcd3 and worker3 on machine C
<fallenour> sorry
<knobby> fallenour: the only downside to reducing the number of machines is that you lose the highly available part of it to a degree. I'm running it and not using 9 machines. I smashed etcd onto the (single)master I have and 2 of the workers
<fallenour> knobby: well you would still keep the HA, just spread it across less hardware.
<fallenour> the odds of that many boxes dying at the same time without a serious issue occuring is really low.
<knobby> fallenour: but if you lose 2 machines you're lost. In the 9 machine setup, it would be ok
<knobby> I completely agree, fallenour and I am making the same gamble locally
<fallenour> knobby: yea but again, the odds of even losing 2 machines at the same time at a moderate load is still really low. like im buying a lotto ticket low, and ill see you all on my yacht. I got that much better odds.
<fallenour> make that two yachts then XD
<fallenour> knobby: but yea, I mean whats the best way to build that into a yaml?
<kwmonroe> fallenour: i would start with the kubernetes-core yaml (https://api.jujucharms.com/charmstore/v5/kubernetes-core/archive/bundle.yaml), adjust it so there are 3 machines in the machines section, keep easyrsa as is (in a lxc container on machine 0), bump up num_units for etcd and k8s-worker to 2 (or 3, or whatever), and adjust those "to:" directives to be like "to: - '0'  - '1' - '2'" as you want.
<knobby> fallenour: just use the lxd stuff like kubernetes-core has. I'm not a master of the --to stuff unfortunately.
<kwmonroe> fallenour: you'd effectively be making a bundle somewhere between core and cdk.  the only thing i'd be careful of is to ensure the k8s-master and workers are on different machines.. so like easyrsa+master on machine 0, etc+worker on machines 1 and 2.
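A rough sketch of the condensed bundle kwmonroe describes; charm revisions, config options and the relations block are omitted here and should be copied from the upstream kubernetes-core bundle.yaml:

    cat > k8s-3-machines.yaml <<'EOF'
    machines:
      '0': {}
      '1': {}
      '2': {}
    applications:
      easyrsa:
        charm: cs:~containers/easyrsa
        num_units: 1
        to: ['lxd:0']
      etcd:
        charm: cs:~containers/etcd
        num_units: 3
        to: ['0', '1', '2']
      kubernetes-master:
        charm: cs:~containers/kubernetes-master
        num_units: 1
        to: ['0']
      kubernetes-worker:
        charm: cs:~containers/kubernetes-worker
        num_units: 2
        to: ['1', '2']
      flannel:
        charm: cs:~containers/flannel   # subordinate, no placement needed
    EOF
    # append the relations block from the upstream bundle, then:
    juju deploy ./k8s-3-machines.yaml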
<fallenour> knobby: kwmonroe yea Im working on it now, ill let you guys take a look once i get it done. Feel free to let me know your thoughts once I get it done.
<fallenour> if you guys like it, ill publish it.
<knobby> eh, in a low usage scenario I'm not worried about mixing masters and slaves on the same hardware. But then again, I live dangerously...
<fallenour> alright, so I got it done
<fallenour> knobby: kwmonroe tvansteenburgh Im curious though, do you think I shoudl go ahead and build in ceph into it?
<knobby> fallenour: if that was the end goal, I would
<fallenour> knobby: whats the best way to integrate it? shoudl I just toss it in anywhere, or do I need it to establish a specific relationship?
<fallenour> kwmonroe: tvansteenburgh rick_h_ Do I just kinda "toss" ceph osd/mon onto the pile, and its "good"
<kwmonroe> fallenour: you're out of my league there -- i haven't used ceph myself.
<fallenour> kwmonroe: saaaadness. YOU WERE THE CHOOOSEN ONE!
<kwmonroe> you guy buy 6 more machines, and i'll google how to use ceph ;)
<fallenour> kwmonroe: LOOOL
<fallenour> kwmonroe: Yes...."buy"....*pulls out lightsaber and laser pistols*
<knobby> fallenour: you need some monitors, they are like kubernetes master, and then machines with disks, which are the osd part
<knobby> you probably want to run osd/mon on each machine is my guess
<knobby> fallenour: I have a PR up to allow relating ceph to kubernetes and getting a default storage class for free so you can just make persistent volume claims and get them backed by ceph.
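A minimal sketch of the ceph side on three existing machines (device path and placement are placeholders; the kubernetes-to-ceph relation knobby mentions was still a PR at this point):

    juju deploy -n 3 ceph-mon --to 0,1,2
    juju deploy -n 3 ceph-osd --to 0,1,2 --config osd-devices='/dev/sdb'
    juju add-relation ceph-osd ceph-mon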
<magicaltrout> kwmonroe: we've been charming up https://livy.incubator.apache.org/ in our efforts to get Hue up and running. Whilst not part of the Big Top stack, would you like us to eventually stick it in bigdata-charmers or just hold on to it?
<kwmonroe> magicaltrout: i'm cool with you holding it.  the only benefit for putting it into bd-charmers would be that we would auto build/release it when things like layer-basic changes.  if you own it, i'll just open an issue reminding you guys to push it yourselves (which is what i do for giraph)
<magicaltrout> fair enough
<magicaltrout> due to a myriad of wifi issues we didn't have a call in the end but i did shoot uros a bunch of questions which he half answered with promises of grandeur and so on
<magicaltrout> i'll forward it on
<magicaltrout> he also said he wished he could grow a beard like rick_h_ but sadly his child like features prevent it....
<rick_h_> LoL
<rick_h_> Naw, he's got that wise man sans beard thing going
<rick_h_> Eternal scholar
<magicaltrout> oh, i thought he just couldn't be bothered going to the hairdressers
<veebers> babbageclunk, anyone: have you seen that odd github tls issue in any PR over the last day?
<babbageclunk> veebers: no, I didn't see it yesterday
<babbageclunk> veebers: although it looks like check-merge jobs are failing at the point of launching the container
<babbageclunk> and merge jobs too.
<babbageclunk> veebers: eg http://ci.jujucharms.com/job/github-check-merge-juju/2518/console
<babbageclunk> I'm having a look on grumpig now
<veebers> babbageclunk: oh :-\ ok thanks, let me know what you find. Thanks re: tls issue
<babbageclunk> veebers: I can't launch a new lxd container on it, getting this: https://paste.ubuntu.com/p/CJS4rKtJbb/
<veebers> babbageclunk: I would just reboot grumpig for a start, easiest and laziest way to debug :-)
<babbageclunk> ok
<veebers> babbageclunk: there are a couple of things you need to do firest
<babbageclunk> cool
<cory_fu> wallyworld: That update to Juju edge did fix the issue I was having, thanks
<wallyworld> cory_fu: great! pr is lgtm also, looks awesome
<cory_fu> wallyworld: Great.  I'll get a quick PR together for your charms repo to work with that before I EOD
<wallyworld> no rush!
<veebers> hey wallyworld o/ welcome back to the sensible timezone :-)
<wallyworld> indeed
<babbageclunk> also sensible hemisphere
<veebers> anastasiamac: when you have a moment could you review: https://github.com/juju/juju/pull/8346 the part I was specifically interested in is 391-392
<veebers> anastasiamac: heh, hold off for now, want to make a slight change to it
<anastasiamac> veebers: oh awesome! good thing i did not look yet then :)
<veebers> ^_^
#juju 2018-07-25
<veebers> anastasiamac: when you have a moment would love a review on that pr :-)
<anastasiamac> veebers: k... but i might collect at some stage :)
<veebers> wallyworld, kelvin: What's the best way to have a k8s cluster to test with. I've had issues with microk8s, unable to tear things down, uninstall needs a reboot as there is resource issues etc. For now I would prefer something I can deploy, leave and blowaway the namespaces as needed.
<wallyworld> i use aws
<wallyworld> or you can use lxd
<wallyworld> deploy kubernetes-core
<veebers> wallyworld: Ah, using the bundle that kelvin has put locally in our testing repo? (I think he had to touch something to make it work with lxd or our tests)
<wallyworld> have you tried microk8s.reset
<kelvin> yeah, lxd
<veebers> wallyworld: aye, I have
<wallyworld> i have a version of kubernetes-core in my repo, i just edited it to remove the lxd nesting
<veebers> wallyworld: ack, how many machines does it use out of interest. I might need to reboot at any rate.
<wallyworld> 2
<wallyworld> 3
<wallyworld> 0,1,2
<veebers> lol, ascii art matrix :-) Sweet, I'll use that I think
<wallyworld> kelvin: i've update the go k8s sdk dependencies https://github.com/juju/juju/pull/8969
<kelvin> wallyworld, that's cool. I will pull dev after this landed to test my changes.
<wallyworld> kelvin: can i get you to +1 it?
<kelvin> wallyworld, sure.
<veebers> anastasiamac: If you could eyeball the test I added when you have a moment that would be grand
<anastasiamac> veebers: of course
<anastasiamac> veebers: lgtm'ed
<veebers> awesome, thanks anastasiamac o/
<anastasiamac> :)
<veebers> wallyworld: circling around, this is the bundle yeah? https://github.com/wallyworld/caas/tree/master/bundles/kubernetes-core
<wallyworld> yeah, it's probably out of date compared to upstream
<wallyworld> all i did was take upstream and unnest the lxd containers, justed use a distince new machine
<wallyworld> *distinct
<veebers> ok seet
<veebers> sweet even
<veebers> kelvin: hah, deploying that k8s bundle with the lxc profile changes is doing odd things to my machine, seems it's rebooting the usb subsystem or something over and over
<kelvin> veebers, it kills my x-org and log me out. I have to login then re-open all the windows.
<veebers> 0_0 oh man, hopefully that's a pain /me hopes that doesn't happen :-|
<kelvin> veebers, let me know when u get it up and running or get any issues.
<veebers> kelvin: will do just waiting on charm sw install, and cluster dns now
<veebers> kelvin: if you're having an issue building a caas charm, you can either comment out the resource, build and add it back in and push, or you need to build charm command from source (there is both py and go parts)
<veebers> Although I've never wired up the py part (I just modified after the build L:-p)
<kelvin> ok, ic. I did that for device as well
<veebers> kelvin: looks like I'm cooking with gas: https://paste.ubuntu.com/p/TZcBmZHW6C/
<kelvin> awesome!
<veebers> I'm seeing this error (tip of develop) is it expected? (seems harmless as it doesn't seem to stop anything) ERROR unable to detect LXC credentials: open /home/leecj2/.config/lxc/config.yml: no such file or directory
<anastasiamac> veebers: looks like there are some failures including https://bugs.launchpad.net/bugs/1783400
<mup> Bug #1783400: misleading error message "unable to detect LXC credentials" when no credentials are necessary <lxd-provider> <juju:Triaged by simonrichardson> <https://launchpad.net/bugs/1783400>
<veebers> wallyworld: is there a way to get upgrade-juju to use my built jujud in path, and not have it attempt to build it itself?
<veebers> anastasiamac: ah ack, thanks seems like it's in hand then
<kelvin> veebers, i got this error https://paste.ubuntu.com/p/SgxzxszGN2/
 * veebers looks
<veebers> kelvin: you have a charm in the staging charmstore? can you link me please?
<kelvin> veebers, im using mariadb at ./caas/charms/
<veebers> kelvin: so a local build of a charm?
<kelvin> veebers, yes.
<veebers> kelvin: where can I see the charm code, have you modified it? (Is this in Ians repo)
<kelvin> veebers, what I did was just comment out the resources:
<kelvin>   mysql_image: -> build -> add back to the build version of metadata.yaml
<veebers> kelvin: ok, makes sense. The metadata.yaml defines mysql_image as a resource, the make_pod_spec tries to resource-get it, but it's not anywhere so it fails.
<veebers> kelvin: normally you would --resources mysql_image=<the path> but that's broken, I'm working on it :-)
<veebers> kelvin: so you can log into the staging charm store, publish your changes there, attach a resource and deploy from there
<veebers> (as a work around for now)
<kelvin> ah, ic
<veebers> ah I forgot to use a custom operator image. That's why no logs are happening duh
<wallyworld> veebers: if it finds a jujud it will use that
<jamespage> not sure exactly when it landed but thankyou for the improvements in machine provisioning observe-ability with the MAAS provider - nice to know what's going on!
<hml> stickupkid: if you have a few minutes, quick pr review please?  https://github.com/go-goose/goose/pull/64
<stickupkid> hml: looking
<hml> stickupkid: ty
<stickupkid_> hml: why do you use a httptest.Server?
<stickupkid_> ah never mind, just read the package path
<stickupkid_> "testservices" <---
<hml> stickupkid_: :-)  it's part of the test double for openstack
<stickupkid_> hml: done
<hml> stickupkid_: ty
<fallenour> I've got an issue where my install for openstack, https://jujucharms.com/openstack-telemetry/ , has been stuck in a loop "installing packages" since yesterday, which is a bit absurd, can anyone provide any troubleshooting ideas I can do? I've already checked all the hardware, as well as the systems for connectivity
<pmatulis> fallenour, ssh to the machine and check logs?
<fallenour> pmatulis: any log in particular you think? The only thing I can think of that might be impacting it might be MAAS proxy for APT repo, but I would assume MAAS team would update maas repos to also pull for bionic as well, especially since they added the bionic image into it. Plus I checked its repo list, it's got archive.ubuntu.com listed
<pmatulis> fallenour, i meant ssh'ing to the actual juju machine that is stuck
<stickupkid_> manadart: for some reason I'm getting a weird error with my host-arch branch, that i need to resolve
<manadart> stickupkid_: What's the problem?
<stickupkid_> https://pastebin.canonical.com/p/wdrDNt3sy5/
<stickupkid_> manadart: check the error out
<stickupkid_> manadart: it's a total lie, it really does support that, but the constraints validator is broken, maybe?
<manadart> stickupkid_: This the thing I alluded to in the review comment. Let me look
<stickupkid_> manadart: https://github.com/juju/juju/blob/develop/environs/bootstrap/tools.go#L31
<stickupkid_> is this the issue, i.e. we're checking the local machine vs the server machine?
<manadart> They should both be AMD64 though.
<stickupkid_> manadart: `(server: x86_64, local: amd64)`
<manadart> Ah, so we will have to normalise the server's string.
<manadart> This will happen when we are running edge (for whatever branch), because it checks if we can build tools locally for uploading to the target.
<stickupkid_> manadart: fixed it
<manadart> Nice.
<hml> stickupkid_: do you have a few minutes to play teddy bear on some ca cert stuff?
<stickupkid_> hml: sure
<hml> stickupkid_: standup ho?
<stickupkid_> yup
<manadart> hml: Approved that PR.
<hml> manadart:  thank you
<plars> rick_h_: so I got that bad unit to clear finally by upgrading systemd. I see nothing in https://launchpad.net/ubuntu/+source/systemd/229-4ubuntu21.2 that explains it, but it worked
<plars> rick_h_: on the model with the machine that won't delete with --force though, it's still stuck. One thing I noticed is that jenkins charm had previously been deployed there, so it has some storage associated with it in juju. It could be getting stuck trying to deal with that?
<rick_h_> plars: hmm, normally if you want to remove something with storage you have to provide the --destroy-all-storage flag or something to it
<plars> rick_h_: I don't see that in juju remove-machine. I used --force though, and I also tried to remove it with juju remove-storage.  juju destroy-model seems to be waiting on the machine and storage to disappear
<rick_h_> plars: bah yea not sure
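For reference, the storage-related knobs in play here (exact flags vary by Juju version; IDs and model name are placeholders):

    juju storage                        # list storage instances and their IDs
    juju remove-storage <storage-id>
    juju remove-machine 2 --force
    juju destroy-model mymodel --destroy-storage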
<plars> rick_h_: on a side note - my juju client is at 2.3.7-xenial-amd64 - is it ill-advised to allow upgrade-juju to bump the version in the models to a higher version than that?
<plars> or is it best to try to be on the latest in every model
<rick_h_> plars: yea, the big thing is the version on the controller
<rick_h_> plars: as that does most of the work for things
<rick_h_> plars: but ideally you'd update the client to the latest, the controller, and then each of the models
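The order rick_h_ describes, sketched with a placeholder model name:

    # 1. upgrade the client (package or snap), then:
    juju upgrade-juju -m controller     # 2. the controller model
    juju upgrade-juju -m mymodel        # 3. each workload model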
<plars> rick_h_: that's the latest version of the client in xenial, without moving to the ppa I guess. we rely on juju-deployer right now too if that makes a difference. Not sure if compatibility is affected in future versions
<rick_h_> plars: snaps ftw :)
<plars> rick_h_: I've thought about moving to the snap version - coming from the packaged version though, is there migration to consider since this is already in production?
<plars> rick_h_: and does juju-deployer continue to function with the snaps?
<rick_h_> plars: no, it's only the juju client that's involved
<rick_h_> plars: oh hmmm, no deployer is hopefully never used any more
<rick_h_> plars: It'd be good to hear what you're using for it vs raw juju commands
<plars> rick_h_: it's just always been a convenient way to describe our environment in a big yaml file. Most of our new instances are copy/pasteable from a similar one with only changing a couple of values
<plars> rick_h_: you would suggest what instead? bundles?
<rick_h_> plars: yea, bundles are what does all that and is baked in
<rick_h_> plars: so the question is what do you use in the deployer vs the raw bundles code
<plars> rick_h_: good question... is there a good way to convert or export it, or do we just need to go through the entire environment by hand?
<rick_h_> plars: just try your bundle file you're using with the deployer in a new model juju deploy and check out the latest bundle docs to see if there's slight differences in how things are specified
<rick_h_> plars: we attempted to take the external tool and bake it into Juju well
<rick_h_> so it's not exact
<plars> rick_h_: did that just now actually, and tried it with --dry-run, I got: ERROR invalid charm or bundle provided at "./test-bundle.yaml"
<plars> so I'll have to pick through them manually and see what I can find, ok
<rick_h_> plars: k
<stickupkid_> manadart: it seems like the maas we're using is returning weird addresses again... you have any suggestions?
<plars> rick_h_: I think one of the main things we used was the ability to specify something like "branch: git://git.launchpad.net/..." I suppose the workaround for that is a script that first pulls in the necessary branches and do charm: ./xenial/foo?
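One way to approximate the deployer branch: behaviour with native bundles, as plars suggests; the repository URL and paths are hypothetical:

    # fetch the charm branch first, then point the bundle at the local copy
    git clone https://git.launchpad.net/some-charm xenial/foo
    # in the bundle:  charm: ./xenial/foo
    juju deploy ./test-bundle.yaml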
<thumper> externalreality: where are we on the 2.4.1 release?
<thumper> I see we got the go ahead
<externalreality> multiJob project CI job should be finished in a few minutes. Then moving on to manual steps.
<externalreality> I'd say that is about half way
<externalreality> thumper, ^^^
<thumper> externalreality: thanks
<thumper> externalreality, wallyworld: it seems somehow we got an invalid version number landed in 2.4 branch
<thumper> how did this happen?
<thumper> "2.4.2 " <- note the space
<thumper> see http://ci.jujucharms.com/job/github-merge-juju/815/console
<thumper> veebers: does the script autocommit to the branch?
<wallyworld> I guess we don't have landing test coverage
<veebers> thumper: the release process does yes
<veebers> wallyworld, thumper: this is a straight push to the repo
<thumper> we need to validate the text before we commit it
<veebers> The release process should validate input
<veebers> yeah, what thumper said
<thumper> I'm going to edit the branch..
<veebers> awesome, thanks thumper
<externalreality> thumper, thx, there is indeed a space in the release notes, which I copy pasta'ed into the build params. It's like I can't finish kicking myself in the teeth
<externalreality> thumper, thx for fixing
<veebers> externalreality: The release jobs should have done a better job at handling this. On the bright side, your pain today means the next run will be more streamlined ^_^
<thumper> https://github.com/juju/juju/pull/8977/files landing now
<thumper> babbageclunk: https://bugs.launchpad.net/juju/+bug/1782745 might be interesting, and relates to leadership
<mup> Bug #1782745: Agents are in 'failed' state after restoring from a backup <juju:New> <https://launchpad.net/bugs/1782745>
<thumper> I know we had some changes in 2.4.0 around leadership...
<babbageclunk> thumper: taking a look
<fallenour> crap. when you rm a file, were does it go? please tell me its not "gone"
<fallenour> bionic changed my maas deployment, and now the web MAAS interface wont load
<babbageclunk> thumper: I asked him for more logs - the ones in the bug are from the unit, but the error looks like it's coming from the controller (the lease manager has died for some reason).
<thumper> fallenour: normally gone is the answer
<thumper> veebers: do you have a few minutes to chat?
<veebers> thumper: I can try, my machine is under a bit of load so might not work in HO/meetup
<thumper> veebers: are you running tests?
<veebers> thumper: deploying k8s cluster locally
<thumper> i'll be fine :)
<thumper> 1:1?
<veebers> sure thing omw
<wallyworld> kelvin__: i left some comments - the device handling stuff on k8s.go isn't quite right, see if what i say makes sense
<kelvin__> wallyworld, looking now, thanks
#juju 2018-07-26
<externalreality> Gotta step away for the minutes... build looks to be going fine
<veebers> externalreality: aye, just waiting for ppa publish so should be finished shortly :-)
<externalreality> veebers, ack
<fallenour_> is anyone else seeing an issue with dns not working with juju instances, containers or otherwise?
<fallenour_> scratch that, just containers inside of machines
<fallenour_> this is so weird
<fallenour_> how can it both work, work with ips, but not work?
<fallenour_> elaboration, machine can ping dns and ip, containers can ping ip, but not dns
<externalreality> veebers, done
<externalreality> moving on to manual
<thumper> fallenour: which provider?
<thumper> fallenour: seems like the resolver in the container isn't set up right
<fallenour_> thumper: maas
<anastasiamac> thumper: since u were in juju/errors, did u see there were some new PRs?
<thumper> anastasiamac: that is why I was there
<anastasiamac> \o/
<fallenour_> thumper: wait, what do you mean the resolver isn't set up right? it should inherit it through dhcp just like its host machine does via maas
<fallenour_> thumper: do juju containers inherit dns settings differently?
<thumper> fallenour: I'm not entirely sure, but there was a bug around containers on maas missing some search domains
<thumper> I don't recall exactly
<thumper> which versions of maas and juju?
<fallenour_> thumper: 2.4.0-beta2 for maas, and 2.3.7
<fallenour_> system is an upgrade from xenial to bionic
<thumper> when did the upgrade happen?
<fallenour_> thumper: today o.o
<fallenour_> technically yesterday
<thumper> was the machine in a juju model before?
<fallenour_> issues started yesterday as well
<fallenour_> yea, it was juju before, its with the same maas system as before
<fallenour_> the only thing I can think of is maybe 8.8.8.8 isnt a dns server? which would be insane
<thumper> seems like a weird interaction between the upgrade and the containers
<thumper> are the containers upgraded or just the host?
<fallenour_> thumper: the whole thing is a rebuild
<fallenour_> thumper: machine instances are bionic
<thumper> the machine was rebuilt by maas rather than upgraded in place?
<fallenour_> thumper: charm was initially the telemetry openstack deployment, then I switched to openstack-base after it kept failing
<fallenour_> thumper: no, I wiped the entire controller environment, and started from a new controller with maas cloud
<thumper> fallenour: unfortunately you have now exceeded my knowledge in this area
<fallenour_> thumper: saadness
<fallenour_> thumper: do you think it would be smarter if I added dns settings to the configurations manually
<fallenour_> ok so now its getting even weirder, the containers have an ip configured in the resolv.conf file, and it resolves. now im just lost x...x
<fallenour_> ok so I think I might have found it, but I need to understand how resolv.conf works better. can anyone explain to me what it means by search <domain name> <systemname>
<fallenour_> ok, so I checked into that, figured out that I understand even less now x...x
<fallenour_> thumper: one thing I did just notice is the /etc/apt/sources.list file is different between machine and container, not that that would have any bearing on host resolution.
<thumper> fallenour: no, I don't think that would make any difference
<fallenour_> thumper: ok so I did notice something odd. All the machines can resolve to just about any host I can think of via dns, with the exception of the maas system
<thumper> fallenour: I think perhaps asking on #maas (assuming that channel exists)
<veebers> wallyworld: I need your brain, I've introduced a json/yaml mismatch which is a pain https://github.com/juju/juju/blob/develop/core/resources/resources.go#L13 I think the easiest way forward would be to nicely ask charmstore team to alter the json presented in the api response? (or we can change the yaml side to  be the camelcase bits)
<veebers> kelvin__: query: I've juju deployed the k8s cluster, how would I use kubectrl to describe or log any of the pods/container details?
<kelvin__> veebers, first, u need scp the kubeconfig file from master node to ur local at ~/.kube/config, then kubectl -n modelname logs podName [-f]  or kubectl -n modelname describe podName
<veebers> kelvin__: awesome, it was the ~/.kube/config part I was missing, thanks!
<kelvin__> veebers, np
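The steps kelvin__ describes, spelled out (model and pod names are placeholders):

    juju scp kubernetes-master/0:config ~/.kube/config
    kubectl -n <model-name> get pods
    kubectl -n <model-name> logs <pod-name> -f
    kubectl -n <model-name> describe pod <pod-name>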
<wallyworld> veebers: what's the charm store producing? "registrypath"?
<veebers> wallyworld: no the json on, ImageName
<wallyworld> it ultimately needs to produce  a split repo/path
<wallyworld> we might need to do a patch for that
<wallyworld> vino: canonical-kubernetes bundle is significantly different to the kubernetes-core bundle
<veebers> wallyworld: ack, we could converge the json/yaml tags then
<veebers> wallyworld: we should also consider upgrade testing with caas charms deployed
<wallyworld> at some point yeah
<wallyworld> 2.5.x upgrades work fine
<vino> wallyworld: i used canonical-kubernetes bundle
<vino> both gui and exportbundle
<veebers> wallyworld: I know jam was looking at some 2.3 resource update transaction stuff which may prove interesting
<wallyworld> ok, that's good because it adds more complexity compared to the normal bundle
<wallyworld> veebers: interesting for what?
<vino> wallyworld: i do see issues while deploying the bundle created by export-bundle command.
<wallyworld> k8s was only a soft launch in 2.4
<vino> but the one exported gui gets deployed properly.
<veebers> wallyworld:  if we're doing resource update transactions wrong now, may mess with upgrades later. Although that being said I'm not sure if the bug jam was looking into was upgrade or just general missing transactions etc.
<wallyworld> vino:  that sounds like a bug to fix - the yaml should be identical, what's the difference?
<vino> There are additional bindings for each app, which is not actually an issue.
<wallyworld> veebers: it was just a general crappy bug to do with writing resource changes
<wallyworld> and not relevant here
<veebers> wallyworld: ack, afaict a fix/something only exists in 2.3
<vino> wallyworld: the machine constraints.
<vino> let me pastebin
<wallyworld> ok, i'll need to see the yaml for each
<wallyworld> if the gui export is different to the CLI one, the CLI one needs to be fixed
<fallenour_> thumper: ok so I found this: https://bugs.launchpad.net/juju/+bug/1764317 , and I pulled my package info for juju with dpkg -l, and now im totally confused. According to that, there was a fix release from the 10th, which says 2.4, but according to my package release for bionic, mine is up to date, but according to my dpkg output, im still on 2.3.7-16.04, which is a whole version behind. Why is that?
<mup> Bug #1764317: bionic LXD containers on bionic hosts get incorrect /etc/resolve.conf files <bionic> <cdo-qa> <cdo-qa-blocker> <foundations-engine> <kvm> <lxd> <network> <uosci> <juju:Fix Released by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1764317>
<vino> wallyworld: the gui yaml is - https://pastebin.ubuntu.com/p/BdHVy3jdWX/
<vino> and the cmdline generated : https://pastebin.ubuntu.com/p/Y2PTHBh7K4/
<vino> machine constraints difference - the commandline is missing the additional machine details.
<vino> the way i get is different in my code.
<wallyworld> vino: looks like the CLI one is missing the constraints
<wallyworld> those will need to be added
<vino> wallyworld: yes. this is not the case with other simple bundles.
<vino> i do see that what the CLI puts in the machines category is set for each application in the gui yaml file
<wallyworld> depends if the bundle includes constraints or not. you need to look at the bundle data struct and ensure all necessary content is queried from state
<vino> ok.
<vino> its good we verified this way.
<wallyworld> yes :-)
<wallyworld> vino: the relations block looks different as well
<vino> thats what the error i do see.
<vino> when i am trying to deploy
<vino> there are 24 relations
<vino> all the entries are there which are found in gui
<wallyworld> vino: the yaml looks different though, notice the extra "-"
<vino> yes thats the difference i notice.
<vino> wallyworld: i am not sure what it means here. :( the error i am getting here is the bundle has additional endpoints.
<wallyworld> vino: it does indeed have extra endpoints in it compared to gui. i'd need to see exact message
<vino> https://pastebin.ubuntu.com/p/qmpFk3FvFB/
<vino> both has 24
<vino> the ordering is different.
<vino> And also, the machine constraints are there in the CLI output but reside in every app in the gui yaml. And of course there are additional machine constraints - this needs fixing as well. But I need to understand the difference between these 2 sets of machine constraints.
<thumper> fallenour: 2.3.7 is the latest xenial release, but snaps are now the preferred way to get juju client updates
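The snap route thumper mentions, for reference:

    sudo snap install juju --classic
    juju version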
<veebers> wallyworld: are we wanting to make the --resource <docker image> fix and the use of a file for secrets one PR, or just fix for now, cloud/creds split then circle back to secrets inclusion?
<vino> wallyworld: this issue with relation is not just with kubernetes bundle. this issue i do see with mediawiki.
<wallyworld> vino: sorry, was in meeting. that error in the pastebin is exactly because the relations are messed up in the CLI export
<wallyworld> they should be a list of 2 item tuples
<wallyworld> also, the machine hardware constraints from the provisioning info need to be exported to match the gui output
<wallyworld> veebers: i am happy to have the --resource fixes in the one pr
<vino> wallyworld: there are 2 sets of constraints. One in Application and other in machine itself.
<veebers> wallyworld: ack, just looking at extending a test for the fix
<vino> CLI export is setting the Application constraints in machine struct.
<wallyworld> vino: i think the gui javascript reads and exports the machine hardware provisioning info and adds that to each machine record (but you'd need to check the javascript)
<vino> but yes. Relations require fix.
<wallyworld> also, for now we can leave off endpoint bindings
<vino> ok.
<wallyworld> we just want gui output = cli output
<wallyworld> to start with
<wallyworld> there will be some small differences in annotations
<wallyworld> but we should be able to replicate what the javascript produces
<wallyworld> then we can start fixing the gaps, eg recording placement intent etc
<vino> wallyworld: i misunderstood between the constraints in app and machine and implemented the logic for one as the other.
<vino> i will fix both.
<wallyworld> sounds good, thank you
<wallyworld> not a lot to fix, just a few little tweaks
<vino> sure. I am able to understand better only when i am testing these bundles. The ones i verified were easy and straightforward. Although i missed mediawiki. I did verify mediawiki single, which worked perfectly to take and deploy.
<vino> I am sure that the ubuntu bundle issue i mentioned to you this morning could be because of my code issue only.
<vino> not the bundle :(
<wallyworld> could be. the main thing at the moment is to ensure the relations are exported correctly
<wallyworld> as a list of (2 element) lists; each item is a list of 2 endpoints
<vino> yup got it.
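(A minimal sketch of the shape wallyworld describes, using gopkg.in/yaml.v2; the endpoint names below are made up for illustration, not taken from the pastebins.)

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v2"
    )

    func main() {
        // relations as a list of 2-element lists, each pairing two endpoints
        bundle := map[string]interface{}{
            "relations": [][]string{
                {"wordpress:db", "mysql:db"},
                {"wordpress:cache", "memcached:cache"},
            },
        }
        out, err := yaml.Marshal(bundle)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
        // relations:
        // - - wordpress:db
        //   - mysql:db
        // - - wordpress:cache
        //   - memcached:cache
    }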
<veebers> wallyworld: FYI https://github.com/juju/juju/pull/8979 (when you have a moment)
<veebers> wallyworld: oh, I'll fix up https://github.com/juju/juju/pull/8968 as per your comments then hit the cloud/creds bits
<babbageclunk> thumper: Should I land https://github.com/juju/juju/pull/8954, or do you want me to hold off?
<wallyworld> veebers: done
 * thumper looks at babbageclunk's pr
<babbageclunk> thumper: thanks!
<veebers> wallyworld: I may have misunderstood you earlier, you're keen for the --resource fix + the addition to take in a file path in this PR or hold off on the file path and just have the imagepath string support that currently exists?
<thumper> babbageclunk: land away
<thumper> babbageclunk: I only had one comment about error types
<thumper> but they could be changed later
<babbageclunk> thumper: ok, checking now
<thumper> I just landed a change to juju/errors from an external contributor that added a timeout type error
<thumper> and we already have invalid types
<wallyworld> veebers: since it's your EOD we can land as is and follow up tomorrow, will only be a small addition
<veebers> wallyworld: ok, I'll tweak as per PR comments
<babbageclunk> thumper: ah - ok, so it's a historical quirk that core/lease has its own ErrInvalid? Yeah, I'll harmonise those at some point in the future.
 * thumper nods
<thumper> babbageclunk: also...
<thumper> I'd like to start moving loggers into the worker objects themselves
<thumper> rather than package level loggers
<thumper> to facilitate getting the logs into the right model's logs
<babbageclunk> thumper: oh, yes - that's something I've been doing with my own new workers
<babbageclunk> thumper: but I didn't do that with the lease manager.
<thumper> that's fine, it was like that before
<babbageclunk> cool, I'll make a note of that too.
<thumper> no doubt we will do a pass through at some stage once we have things configured properly
<babbageclunk> yeah, I'd say so
<babbageclunk> oh, come on! my merge failed with `fatal: unable to access 'https://github.com/alecthomas/gometalinter/': Could not resolve host: github.com`
<veebers> babbageclunk: hit up IS, it's not the first time we've had odd access issues like that :-|
<babbageclunk> I mean, it resolved it plenty of times to get all the source code, right.
<veebers> babbageclunk: right, seems like a really odd (and annoying) issue.
<babbageclunk> veebers: I'm just retrying the merge before bothering IS
<veebers> I wonder if it's something as obscure as a bad port on a machine or switch, although I imagine something like that might be self-diagnosable?
 * babbageclunk shrugs
<veebers> wallyworld: if a docker resource excludes a registry, should we just assume it's docker.io or is that assuming too much? (we currently assume it is that and add it). It means the
 * veebers actually thinks when that might actually happen.
<wallyworld> veebers: the behaviour should be the same as docker pull etc; docker.io is the default IIANM
<veebers> It'll only happen when someone uses --resource I think, if I attach a publicly available image to a charm it'll be a registry.charmstore. . .
<wallyworld> it will happen if the user overrides the resource yes
<wallyworld> we can leave it empty
<wallyworld> and let k8s/docker do its thing
<wallyworld> that would be best
<veebers> wallyworld: I think blank would be best
<veebers> wallyworld: hah aye, what you said :-)
<wallyworld> yep
 * veebers makes the change (and fixes a crap test he wrote)
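(A rough sketch of the "leave the registry blank" approach discussed above, not juju's actual code: split an image reference into registry and path, and keep the registry empty when none is given so docker/k8s can apply its own default.)

    package main

    import (
        "fmt"
        "strings"
    )

    func splitImage(ref string) (registry, path string) {
        parts := strings.SplitN(ref, "/", 2)
        // treat the first segment as a registry only if it looks like a host (has "." or ":")
        if len(parts) == 2 && (strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost") {
            return parts[0], parts[1]
        }
        return "", ref // no registry supplied; leave it blank rather than assuming docker.io
    }

    func main() {
        fmt.Println(splitImage("registry.example.com/user/app"))
        fmt.Println(splitImage("ubuntu:18.04"))
    }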
<vino> wallyworld: are u still ard ?
<wallyworld> a bit
<vino> :)
<vino> ok.
<vino> i wud like to discuss abt the comment u have provided. "provisioned machine hardware".
<vino> my understanding is it is done for now. parity with gui
<wallyworld> there's a method on machine to get it i think. my understanding is that's what the gui javascript does
<wallyworld> HardwareCharacteristics() or something
<vino> Aint i doing the same.
<vino> ?
<vino> i am addressing that in bundle.go and constraints.go
<vino> arch/cpucore/cpupower/mem
<vino> I always wanted to ask abt another aspect : availabilityZone which currently i am not addressing.
<vino> Plus, could you please suggest some more complex bundles for manual verification.
<vino> currently the kube is in parity with gui.
<vino> I have provided the pastebin link for both in the PR
<wallyworld> vino: sorry, something came up, will look tomorrow
<fallenour_> is anyone aware of what the latest up-to-date version of juju is, the snap version?
<rick_h_> fallenour_: should be 2.4.1 hopefully. It was in the process of being released yesterday. /me checks with snap info juju
<rick_h_> woot!   stable:        2.4.1
<fallenour_> rick_h_: ok, so that's a concern for me then. According to both snap refresh, as well as dist-upgrade, mine is the latest, and that's at 2.3.7, which has issues.
<fallenour_> rick_h_: I need to move to 2.4 or greater because of the current maas issue with bionic
<rick_h_> fallenour_: ? snap refresh juju gets you 2.3.7? what does the output of snap info juju show you?
<rick_h_> fallenour_: maybe you need to make sure to track the stable channel?
<fallenour_> rick_h_: snap "juju" has no updates available
<fallenour_> rick_h_: if Im not mistaken, I am tracking stable channel.
<rick_h_> fallenour_: right, so I want to make sure snap info juju shows 2.4.1. In fact I bet you have both installed?
<rick_h_> fallenour_: what does "which juju" show you?
<rick_h_> and what does "/snap/bin/juju --version" do
<fallenour_> rick_h_: ok, so this is gonna be fun
<fallenour_> it says I have 2.4.1 installed. dpkg says I have 2.3.7 installed
<fallenour_> snap bin says I have 2.4.1 installed
<fallenour_> now, my brain is the sad
<fallenour_> rick_h_: ok, so assuming juju is working just fine. That means maas isn't working. I assume their team will fix it eventually, but I don't want to wait for them to fix the problem. DNS names aren't resolving; is /etc/resolv.conf the best way to do this? And if so, would simply explicitly naming the name server fix the issue with resolving to the package repos, which is currently hindering all of my charm installs
<rick_h_> fallenour_: so the key thing is you need to bootstrap using the /snap/bin/juju so that the controller is 2.4 which has bionic fixes
<fallenour_> rick_h_: is that sound logic? and if so, sed -i to deliver the changes via juju run machines --all sed -i "nameserver <nameserverip>" /etc/resolv.conf
<fallenour_> rick_h_: to rebootstrap? Im confused
<fallenour_> re-bootstrap*
<rick_h_> bdx: kwmonroe tvansteenburgh jamespage cory_fu and anyone else. First spec on discourse for your lxd profile enjoyment. Please feel free to read/comment/etc. https://discourse.jujucharms.com/t/wip-specification-for-lxd-profile-updates-permitted-by-charms/78
<veebers> Morning o/
<veebers> So with lxc clustering, can I setup a machine in my basement as a 'cluster' and bootstrap to it from my laptop (from the comfort of my office)?
<hml> veebers: i think so, i've been told that there isn't a difference between remote and cluster lxc
<veebers> nice, that would free up my laptop a wee bit
<hml> veebers: i was going to try with one of the nucs on a maas later
<hml> veebers: or tomorrow
<hml> veebers: i've been told that the video linked to the discourse on lxc cluster is amazing at explaining
<veebers> hml oh I missed that, I'll take a look thanks!
<veebers> hah, I should have read the post that I opened but hadn't read yet ^_^
<hml> :-D
<rick_h_> veebers: yes you can. You can have a one node cluster and do that for sure
<rick_h_> veebers: let us know how it goes success/fail in that discourse post to generate conversation
<veebers> rick_h_: very nice, I might try set something up this weekend if I can find a moment
<veebers> rick_h_: will do, it won't be with maas though, just a machine with ubuntu installed on it
<rick_h_> veebers: yea, maas is just a handy way to have some extra machine around
<rick_h_> veebers: so the only issue you'll have to make sure is to manually run the lxc bridge setup command in that doc
<rick_h_> veebers: after that it's an add-cloud, add-credential, bootstrap
<veebers> rick_h_: sweet, can't wait to deploy the k8s bundle to it and free up my laptop :-)
<pmatulis> why do you call it a one-node cluster? juju is just connecting to a remote LXD host
<veebers> huh, true :-)
<veebers> if I setup clustering from the get-go then I can start adding hardware as I retire it from general use (laptops etc.)
<veebers> wallyworld: are there some good examples of k8s configs that I could take a look at? want to confirm my understandings
<wallyworld> what do you mean by configs?
<veebers> wallyworld: i.e. it's possible that a config will define a couple of clusters, and a user and that user has access to all those clusters?
<veebers> wallyworld: i.e. what add-k8s parses
<wallyworld> i don't have any apart from what we get by deploying cdk
<thumper> well... there we go
<thumper> mailing lists are closed
<anastasiamac> thumper: end of an era!
<veebers> wallyworld: ack, does what I say make sense? (re: users -> clusters)?
<wallyworld> I think so. i think a config can define more than one cluster. not sure about user access
<thumper> anastasiamac: that it is
<thumper> all hail our new discourse overlord
<veebers> wallyworld: we might have a caas issue where you can't teardown if a deploy fails. I'll dig in and come up with some actually useful data, just FYI at this stage :-)
<wallyworld> veebers: that's an issue for iaas as well
<veebers> wallyworld: ah ok
<veebers> wallyworld: unless I've got the wrong end of the stick I don't think that we can just go through and add a cluster endpoint + server to each user cred; the config field context defines what cluster a user has access to, so it looks like we'll need to parse the cluster details and conditionally create a user cred for that cluster based on the context
#juju 2018-07-27
<wallyworld> veebers: correct, i'm not saying we just copy the cert for every user unconditionally; we need to change the cred schema to include the cert and endpoint for the clusters to which they have access, as we want to model those bits as part of the cred, not the cloud
<veebers> wallyworld: ack, ok I'm up to speed now ^_^
<veebers> wallyworld: might be a dumb q: With a credential, it's just an authtype + map[string]string right? so the cred schema addition is adding to the attrs? or have I missed the boat
<wallyworld> no, that's correct. the attr values depend on the auth type
<veebers> wallyworld: you mentioned a new authtype for the k8s bits, won't the authtype stay the same as is there for the creds? (OAuth2AuthType, UserPassAuthType etc.). I'm not sure how the AuthType works, perhaps this is at a level above the credential itself?
<wallyworld> the auth type defines the allowed/supported cred attrs
<wallyworld> here we'll be using ca cert and endpoint as attrs
<wallyworld> they are specific to k8s clusters so we'll need a new auth type and schema
<wallyworld> each environ provider has a method that returns the supported cred schemas for that provider
<wallyworld> CredentialSchemas() or something
<veebers> wallyworld: ack, thanks that makes sense
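(A hypothetical sketch of the idea described above; the types and attribute names are made up and are not juju's real cloud package API: each auth type defines which credential attributes a credential of that type may carry.)

    package main

    import "fmt"

    // made-up provider schema: auth type -> allowed attribute names
    var schemas = map[string][]string{
        "userpass":        {"username", "password"},
        "oauth2":          {"Token"},
        "certificate-k8s": {"ClientCertificateData", "ClientKeyData", "CACertificateData", "endpoint"},
    }

    func validate(authType string, attrs map[string]string) error {
        allowed, ok := schemas[authType]
        if !ok {
            return fmt.Errorf("unsupported auth type %q", authType)
        }
        allowedSet := make(map[string]bool, len(allowed))
        for _, a := range allowed {
            allowedSet[a] = true
        }
        for name := range attrs {
            if !allowedSet[name] {
                return fmt.Errorf("attribute %q not valid for auth type %q", name, authType)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(validate("certificate-k8s", map[string]string{"endpoint": "https://1.2.3.4:6443"})) // <nil>
    }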
<kelvin_> wallyworld, got a min to talk about the k8s custom resources?
<wallyworld> kelvin_: sure, give me 5
<kelvin_> wallyworld, yup.
<wallyworld> kelvin_: jump in HO?
<kelvin_> yup
<manadart> Just threw up a guide for anyone wishing to kick the tires on the new provider for Oracle's OCI: https://discourse.jujucharms.com/t/oracle-cloud-infrastructure-oci-support-maturing-in-juju-edge
<rick_h_> manadart: awesome!
<rick_h_> manadart: and <3 replying to zeestrat on there and hopefully getting him going.
 * manadart nods.
<rick_h_> pmatulis: I've updated the original 2.4.0 contact us https://discourse.jujucharms.com/t/juju-2-4-0-has-been-released/53
<rick_h_> pmatulis: I want to try to push discourse as first and foremost please as we update the instances
<pmatulis> rick_h_, ack
#juju 2019-07-22
<anastasiamac> a review plz https://github.com/juju/juju/pull/10450
<kelvinliu_> anastasiamac: looking now
<anastasiamac> kelvinliu_: thnx \o/
<kelvinliu_> np ^ ^
<anastasiamac> another trivial - honestly :D - https://github.com/juju/juju/pull/10451
<anastasiamac> babbageclunk: tyvm!
<babbageclunk> :)
<hpidcock> kelvinliu_: I think it makes sense that juju run-action for k8s only supports running on the workload? or do we also want it to support running in the operator pod?
<hpidcock> `juju run` makes sense to allow both, but I'm not entirely certain running an action in the operator makes sense.
<anastasiamac> hpidcock: any chance u could have a look at https://github.com/juju/juju/pull/10452 - it's mostly a wording PR
<hpidcock> sure thing
<hpidcock> anastasiamac: looks great
<anastasiamac> \o/
<hpidcock> correct australianism would be sick bickies
<anastasiamac> or just 'sick' (depending on ur generation) ;D
<hpidcock> I feel sick bickies resonates with me more :P
<anastasiamac> kelvinliu_: m looking at https://bugs.launchpad.net/juju/+bug/1830949... can u confirm? for add-k8s command, region option what is the valid value?
<mup> Bug #1830949: [k8s] add-k8s command accepts incorrect cloud type <usability> <juju:Triaged> <https://launchpad.net/bugs/1830949>
<anastasiamac> kelvinliu_: any remote cloud that a user has? any local cloud a user has?
<anastasiamac> kelvinliu_: only regions that are available for this cloud?
<anastasiamac> kelvinliu_: a comment on the bug would be awesome!
<anastasiamac> :S)
<kelvinliu_> yes, looking
<kelvinliu_> added a comment, anastasiamac i think we could have a small enhancement from the cmd side.
<anastasiamac> kelvinliu_: thnx
<kelvinliu_> np
<anastasiamac> kelvinliu_: right, so really we should only accept a region for this option not <cloud>/<region>
<anastasiamac> kelvinliu_: and the region needs to be valid for the current cloud..
<anastasiamac> right?
<anastasiamac> kelvinliu_: so is it a bootstrap cloud that is used here?
<kelvinliu_> we do support --cloud=<cloudNameOrCloudType>  --region=<cloudNameOrCloudType>
<kelvinliu_> these two options will be split into cloud + region then combined together
<anastasiamac> kelvinliu_: isn't it confusing that both cloud and region accept the same value?
<anastasiamac> shouldnt one accept cloud, the other region?
<anastasiamac> we only accept <cloud>/<region> in places where we do not have both options separately
<kelvinliu_> because juju --cloud is <cloud>/<region>
<kelvinliu_> to make it flexible and consistent with other cmd, so we made this decision
<kelvinliu_> i think the add-k8s cmd is smart enough to handle any edge case of the two options' combination. if not, then it's a bug
<anastasiamac> kelvinliu_: well, it's obviously a bug already :)
<kelvinliu_> i think it's more like message enhancement
<anastasiamac> kelvinliu_: but also it's just confusing... if i as user will supply -cloud a/1 and region b/1 how is the decision made?
<kelvinliu_> they will get an error, i think
<anastasiamac> kelvinliu_: the fact that right now, -region accepts xxxx/us-east-1 is not a message enhancement but cloud name validation
<anastasiamac> too flexible UX with a lot of guesswork around 'what does the user actually mean' feels messy
<kelvinliu_> the problem here is we don't always require both cloud and region.
<kelvinliu_> we might just require one of them
<kelvinliu_> sometimes, we do need both of them
<anastasiamac> kelvinliu_: that's why in most places we accept one [[cloud][/region]]
<kelvinliu_> so the current process is Juju will try to figure out as much as it can; if any of them are missing, it asks the user to provide the one that was missing
<anastasiamac> yes, again this feels wrong and clumsy as UX
<anastasiamac> it should have one argument, usually we call it '-cloud'
<anastasiamac> and it will accept either <cloud name> or /<region name> or <cloud name>/<region name>
<anastasiamac> and we have infrastructure to process that
<anastasiamac> as well as providing a consistent UX and experience for the user throughout Juju
<kelvinliu_> yeah, it's good to have a discussion with Ian at the end of this week; this wasn't the idea introduced by me. I might not understand the full story behind this design.
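(A minimal illustrative sketch, not the actual add-k8s implementation, of accepting a single "[cloud][/region]" argument as proposed above.)

    package main

    import (
        "fmt"
        "strings"
    )

    func parseCloudRegion(arg string) (cloud, region string) {
        if i := strings.Index(arg, "/"); i >= 0 {
            return arg[:i], arg[i+1:]
        }
        return arg, ""
    }

    func main() {
        fmt.Println(parseCloudRegion("aws/us-east-1")) // aws us-east-1
        fmt.Println(parseCloudRegion("aws"))           // aws
        fmt.Println(parseCloudRegion("/us-east-1"))    //  us-east-1
    }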
<stickupkid> CR anybody? https://github.com/juju/testing/pull/146
<hml> manadart: have you seen these goland tricks?  https://medium.com/@keperry/5-tips-to-speed-up-golang-development-with-intellij-or-goland-6646110e9c5e
<manadart> hml: Knew extract and implement. Not forr or err; that's nice.
<hml> manadart: those last two look very useful.  type them over and over
<hml> manadart: looks like you can create your own live templates too.
<timClicks> I've written up some advice for working with the vSphere provider code https://discourse.jujucharms.com/t/some-notes-for-working-with-the-vmware-vsphere-provider-code/1867
#juju 2019-07-23
<achilleasa> jam: I have pushed some commits to the charm.v6 PR to make the tests easier to read. If it looks good I will push a cleaned up commit log and merge
<stickupkid> jam, you got 5 minutes to discuss the following https://github.com/juju/python-libjuju/pull/321#discussion_r305900673
<jam> stickupkid: sure, I need about 10min then we can chat
<jam> achilleasa: lgtm
<jam> stickupkid: heading to HO
<stickupkid> jam, here is the pinned version of the facades https://github.com/juju/python-libjuju/pull/321/commits/3ecc14fcc65bebfcebe402892b7074b14c2fa4f7
<stickupkid> jam, the client now very roughly (v1) states which version it knows how to talk about - if you want the latest and greatest, you have to bump this and understand the changes
<achilleasa> Can I get a quick CR on https://github.com/juju/charm/pull/285? It's just a spelling typo fix
<manadart> hml: All caught up with rebasing/merging. Latest one is for review here: https://github.com/juju/juju/pull/10455
<hml> manadart:  good news. looking
<hml> manadart: did my pr get closed when the merge completed on your pr?
<manadart> hml: Yeah: https://github.com/manadart/juju/pull/1
<hml> manadart: darn it, it won't let me change the backing.  oh well
<hml> manadart:  i can review for you.  it'd be easier if someone else could qa though
<achilleasa> hmmm bringing my charmrepo.v5 changes into juju breaks stuff as some of the code needs macaroon-bakery.v2 and other parts need macaroon-bakery.v2-unstable... does anyone have a bit of time to pair with me to help?
<jam> achilleasa: presence lgtm
<jam> stickupkid: minor tweak in that we need a way to specify that we could support >1 version. so the version pointer needs to be a list, and we should pick the highest version that is in both lists.
<jam> stickupkid: otherwise, I like it a lot
<stickupkid> jam, right yeah, ok makes sense
<jam> stickupkid: I think it is "max(set(known).intersect(set(discovered)))" with caveats for things like "no set overlap"
<jam> unlikely to happen in practice so we can be a bit jankier there as long as seeing a facade we don't know about doesn't kill us from using all the ones we *do* know about
<jam> stickupkid: speaking of, should we set "facades[-1]" for ones we don't know, or should we just pretend they don't exist ?
<stickupkid> jam, nice, ok
<stickupkid> jam, so i think they shouldn't exist, but people use the clients directly so i'm unsure
<jam> stickupkid: k, so to the Shim, I would say they don't exist. to the list of what is available for someone using a client directly I'm less concerned. It would be good to have something like "available facades"
<stickupkid> agreed
<jam> all the versions the libjuju could give you intersected with the versions the server supports
<stickupkid> yeah
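(An illustrative sketch in Go (pylibjuju itself is Python) of jam's rule: pick the highest facade version that appears in both the client's known list and the server's advertised list.)

    package main

    import "fmt"

    // bestVersion returns the highest version present in both lists, or
    // false if client and server share no version at all.
    func bestVersion(known, discovered []int) (int, bool) {
        seen := make(map[int]bool, len(known))
        for _, v := range known {
            seen[v] = true
        }
        best, found := 0, false
        for _, v := range discovered {
            if seen[v] && v > best {
                best, found = v, true
            }
        }
        return best, found
    }

    func main() {
        fmt.Println(bestVersion([]int{1, 2, 3}, []int{2, 3, 5})) // 3 true
        fmt.Println(bestVersion([]int{1}, []int{4}))             // 0 false
    }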
<jam> stickupkid: 10453 reviewed
<hml> stickupkid: there were some FAILs in the test with the weird make end.  trying again
<stickupkid> jam, replied https://github.com/juju/juju/pull/10453
<stickupkid> jam, https://github.com/juju/python-libjuju/pull/321/commits/e4d7279f91e1792a3b1afd5bab1b7fe3735c4a96 pylibjuju now picks the best between what it knows and what the api server knows \o/
<hml> manadart:  slice of az is now rebased and ready for review:  https://github.com/juju/juju/pull/10456
<manadart> hml: OK.
<manadart> hml: Trade you for forward-merge: https://github.com/juju/juju/pull/10458
<hml> manadart:  i have one more pr for 2.6 if you don't mind waiting an hour.  had to fix a merge conflict,  pushing up now to merge the pr.  :-D
<hml> manadart: and yes i'll review the merge into develop
<stickupkid> ah damn it, not this again go install github.com/juju/juju/acceptancetests/repository/trusty/fill-logs/actions: build output "/home/ubuntu/go/bin/actions" already exists and is not an object file
<stickupkid> hml, you're right https://github.com/juju/juju/pull/10424
<hml> stickupkid: the bug says it's reproducible in 2.6 which makes me wonder about the clouds.yaml and credentials.yaml.
<hml> stickupkid: i never added credentials for local anywhere
<stickupkid> hml, yeah... i did fix this when i re-wrote the LXD credentials stuff a long time ago, so was surprised it broke
<stickupkid> hml, i think something broke 2.5 and then anastasia fixed it with her credentials work for 2.6
<stickupkid> hml, maybe not worth fixing tbh
<hml> manadart:  sequence for spaceid is on a per model basis yes?
<manadart> hml: Yes, same collection/mechanism as machine IDs etc.
<hml> manadart: does the sequence for a model migrate too?
<manadart> hml: Yes. The sequences collection is imported early, so everything works - later migration logic can add entities with the next available IDs and such.
<hml> manadart:  if i migrate a model from develop to pr base controller, the spaces ids go from 1-4 to 5-8.  if i migrate again from a pr based controller to other pr based controller, the ids stay 5-8.
<manadart> hml: Yes. Because develop got the IDs before juju/description was updated, so they were not transported in the serialised model.
<manadart> So new ones get created at the other end.
<manadart> Migrations between stable releases should be OK though.
<hml> manadart:  right… between stable releases they stay the same.
 * manadart nods.
<manadart> hml: Approved the AZ patch too.
<hml> manadart: approved the merge
<hml> ty
<manadart> hml: Ta.
<stickupkid> hml, fixed 2.5 issue :) https://github.com/juju/juju/pull/10459
<hml> stickupkid: ack - will look at it this afternoon if that's okay
<stickupkid> hml, fine by me, if it works can you $$merge$$ it for me
<hml> stickupkid: :-)
<stickupkid> hml, ta
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/10460? This touches the deploy code so please take your time and try to break it ;-)
#juju 2019-07-24
<hpidcock> kelvinliu: babbageclunk: https://github.com/juju/juju/pull/10446 I think it's finally ready for review.
<hpidcock> ended up trimming out a lot of what I was doing that was not needed.
<timClicks_> 90% sure that I've re-enabled resource constraints (root-disk, num cpus) on vsphere
<babbageclunk> hpidcock: I've got a call at 4 but I'll look after that
<hpidcock> no rush :)
<kelvinliu> hpidcock: yep looking
<timClicks> babbageclunk: hey we have dozens of models on vsphere/braixen - should I clean them up?
<babbageclunk> timClicks: can you see who created them in the UI?
<babbageclunk> definitely clean them up unless they were created by k8s-test
<timClicks> no I don't think it's possible to see the creator
<timClicks> perhaps I'll try to find the age
<timClicks> I think that we have a CI job that's not cleaning up after itself
<babbageclunk> that's really weird - there's no concept of an owner?
<babbageclunk> no, think it's just accretion of testing
<babbageclunk> (rather than anything automated)
<timClicks> there probably is, but I can't find it
<timClicks> I'll delete anything that's older than 3 months
<babbageclunk> I couldn't find it either - I think that's very likely fine
<stickupkid> jam, https://github.com/juju/juju/pull/10453#discussion_r306690098 responded
<jam> stickupkid: works for me
<jam> manadart: fwiw, I talked with tim a bit today about the issues with the cache, I don't think he realized the extent of coherency issues. We'll likely talk the most next week
<manadart> jam: Yep, just had 1:1 with him.
<jam> gotcha
<achilleasa> Can I get a CR on https://github.com/juju/juju/pull/10460? It is changing stuff in the deploy code so please take your time and try to break it ;-)
<achilleasa> also, I seem to be getting lots of intermittent test failures (unrelated tests)...
<stickupkid> turns out pylibjuju doesn't remove the sodding LXD profile when it's done with its tests and I've ended up with 1000 profiles :|
<achilleasa> stickupkid: try this: lxc profile list | grep juju | cut -d' ' -f2 | xargs -L 1 lxc profile delete
<stickupkid> achilleasa, yeah did that :D
<stickupkid> achilleasa, i need to fix pylibjuju tbh :D
<stickupkid> add another task to my never ending list
<stickupkid> why pylibjuju goes mental and creates lots of models
<stickupkid> "juju models --format=json | jq '.models | .[] | .name' | xargs -I % echo % | grep -v -E "controller|default" | xargs -I % juju destroy-model -y --force %"
<stickupkid> embedded structs don't work in python :|
<stickupkid> gr8 the schema gen has a bug where it doesn't get all params/methods from structs that have embedded values in them
<stickupkid> sigh
<manadart> stickupkid: You can't just nest dicts?
<manadart> Need a review - just dead code removal: https://github.com/juju/juju/pull/10462
<stickupkid> manadart, turns out it got dumped into kwargs, even though it should be mapped to the field in the object in python
<stickupkid> manadart, that was rather painful to work out
<stickupkid> manadart, in other words the typed objects just don't work with nested objects
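(A small sketch of the Go shape being discussed; the type names are made up. Fields of an embedded struct are promoted onto the outer type, so a schema generator that only walks declared fields can miss them.)

    package main

    import "fmt"

    type Entity struct {
        Tag string
    }

    type DestroyParams struct {
        Entity      // embedded: DestroyParams.Tag is available via field promotion
        Force  bool
    }

    func main() {
        p := DestroyParams{Entity: Entity{Tag: "machine-0"}, Force: true}
        fmt.Println(p.Tag, p.Force) // machine-0 true
    }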
<hml> manadart: approved
<manadart> hml: Ta.
<achilleasa> can someone please verify for me if running the following command in 'juju/apiserver' on the 2.6 branch fails? go test -check.v -check.f TestToolsUploadedSecurely
<manadart> achilleasa: Pass here.
<achilleasa> manadart: this is what I get: https://pastebin.canonical.com/p/DmJWcFzjP2/ (with go 1.12.6)
<achilleasa> actually there are lots of tests in that package that fail with the same error...
<achilleasa> HEAD is ebae7c6df409ff82ccdcf385ac91ca4334df1106
<manadart> On the current HEAD (58d60964ab) I got 1 failure for model not found in cache (intermittent). 1.12.5.
<hml> manadart:  with rebased to develop, it became too easy to resolve the todos in the pr,  https://github.com/juju/juju/pull/10443  .  could you pls take a quick look at the last commit for sanity check.
<achilleasa> I got the error on CI for my PR but it seems unrelated so I also tried against 2.6 and still get it... odd
<achilleasa> manadart: did you make dep before running the test?
<manadart> achilleasa: Yes.
<hml> achilleasa: that bug was a pita yesterday trying to merge
<manadart> hml: Stand by.
<hml> achilleasa: interesting, it's passing for me with 1.12.7
<achilleasa> running on my laptop now to double-check
<achilleasa> ok, so it passes on my laptop... weird...
<manadart> hml: Left one comment.
<hml> manadart: rgr
<hml> manadart:  ping
<manadart> hml: pong
<hml> manadart:  have time for a quick ho?  on spacename vs spaceid in subnets?
<manadart> hml: Yep.
<stickupkid> i'm going to spend tomorrow breaking my CMR PR into smaller ones, so they're easier to land
#juju 2019-07-25
<wallyworld> kelvinliu: hpidcock: did you guys have time to talk now? or when would be convenient?
<hpidcock> free now
<wallyworld> ok, let's join standup
<kelvinliu> yep, just finished lunch
<hpidcock> wallyworld: https://github.com/juju/cmd/pull/63 the PR I was mentioning in our meeting, so whenever you feel like looking at a PR
<achilleasa> jam: I discovered a small bug in my bundle-split implementation. Can you please take a look at https://github.com/juju/charm/pull/286?
<jam> achilleasa: in your example, 'wordpress' is nested under mysql, what does that actually mean?
<jam> achilleasa: is it just a typo in the example?
<achilleasa> jam: ah crap. sorry, bad copy paste; should be a sibling of mysql
<jam> achilleasa: right so the issue is that a sibling that didn't otherwise need anything would cause us to retain the key but it shows up as a 'delete'
<achilleasa> jam: exactly
<achilleasa> Looks like omitempty does not properly translate to individual map values
<jam> achilleasa: why use '== true' isn't that implied by just '&& value' ?
<achilleasa> jam: oops, you are right! Let me fix that
<achilleasa> jam: btw, since you are checking this PR would you mind also taking a quick look at one more charm-related? https://github.com/juju/charm/pull/287
<stickupkid> CR any body : https://github.com/juju/python-libjuju/pull/326
<stickupkid> This pylibjuju change doesn't require the writing of the definitions, but I suspect consuming does :|
<achilleasa> jam: I have addressed your question for 286; can you take a look?
<jam> achilleasa: so what I don't understand is why "dropping" the object is accomplished by setting it
<jam> achilleasa: I would have thought we would call Delete or *not* called Set to copy the value
<achilleasa> jam: ah. The reflect package does not provide an explicit method to delete keys from a map. The way to delete things off a map (I have added a comment to the overlay code about this peculiarity) is to set the key to an 'empty' reflect.Value
<achilleasa> s/empty/zero/
<jam> lgtm
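(A minimal sketch of the reflect behaviour described above: passing the zero reflect.Value to SetMapIndex deletes the key from the map.)

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        overlay := map[string]interface{}{"mysql": 1, "wordpress": 2}
        v := reflect.ValueOf(overlay)
        v.SetMapIndex(reflect.ValueOf("wordpress"), reflect.Value{}) // deletes "wordpress"
        fmt.Println(overlay) // map[mysql:1]
    }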
<wallyworld> babbageclunk: did you have time to talk about app relation data?
<babbageclunk> sure sure
<babbageclunk> wallyworld: in standup?
<thumper> I need some review on this https://github.com/juju/juju/pull/10463
#juju 2019-07-26
<wallyworld> hpidcock: kelvinliu_: i wouldn't mind a chat about the jujud operator tag stuff if you have a moment
<kelvinliu_> wallyworld: yep, im free
<wallyworld> ok, we can jump in standup if harry is free also
<hpidcock> yep
<hpidcock> wallyworld: we are ready
<thumper> hpidcock: what is in juju/cmd.v2 that differs from the v1 package?
<thumper> can we not just make github.com/juju/cmd work for us?
<hpidcock> thumper: https://github.com/juju/cmd/pull/63
<thumper> why couldn't we do the change without an API break?
<thumper> ... we don't need to talk in depth, I'm just curious
<thumper> I don't want to waste your time
<thumper> I guess the PR comment doesn't illustrate the benefits enough to me to grok the work
<hpidcock> ahh right yeah, sorry about that, I'll try to add more detail in future. Essentially this is the first formatter that can take its own arguments.
<hpidcock> It was just a quality-of-life feature that wallyworld liked
<hpidcock> but it's pretty useful for scripts using juju commands
<thumper> I still don't get the use case
<achilleasa> can I get a quick CR on https://github.com/juju/charm/pull/288?
<stickupkid> rick_h, updated the offer pr https://github.com/juju/python-libjuju/pull/326
<rick_h> stickupkid:  awesome
#juju 2019-07-27
<atdprhs> hi everyone, i deployed kubernetes using juju, however when i run juju status, i see the public ip address of the kubernetes worker. is it possible to expose another nic ip address?
<atdprhs> there are 2 network interfaces on kubernetes worker
<atdprhs> I want to expose both of them
<atdprhs> i tried unexpose then expose but it's not working as expected, and i can't find anywhere in any documentation on how to achieve this
<atdprhs> here is what I am trying to do, anyone who can help would be much appreciated >> https://discourse.jujucharms.com/t/expose-multiple-ip-address-for-kubernetes-worker/1882
<atdprhs> https://stackoverflow.com/questions/57229743/expose-multiple-ip-address-for-kubernetes-worker
<cmsander1> Anyone around, I'm curious if this is a juju bug or somehow a microk8s thing.
<cmsander1> Running through the juju+microk8s guide. After bootstrapping this is what happens trying to deploy the example K8s charm.
<cmsander1> https://pastebin.ubuntu.com/p/FxbQCVhyjk/
<cmsander1> You'll see the URL that is retrieved is: https://api.jujucharms.com/charmstore/v5/~juju/mariadb-k8s-1/archive?channel=stable:
<cmsander1> Which if you try to access will give you https://pastebin.ubuntu.com/p/B26fKKZScj/
<cmsander1> So it *appears* juju is adding a ":" to the channel, removing that indeed retrieves the charm.
<cmsander1> This happens both w/o specifying the channel and when specifying the channel on the deploy.
<cmsander1> I'll idle here a bit before opening a bug to see if maybe I'm grossly misinterpreting something.
<cmsander1> (This is juju 2.6.5-bionic from snap)
<cmsander1> and ... this looks like a microk8s networking issue. Looks like it's hard coding a lot of settings that may not be valid.
#juju 2020-07-20
<bradm> is there any way to configure lxd when you're using juju on a maas cloud?  particularly I'm thinking of targeting storage for lxd containers, rather than using a pre-defined directory.  I could just mount the space on /var/lib/lxd, but seems a bit of a cop-out.
<bradm> hmm, perhaps https://jaas.ai/lxd/27 could work, there's nothing specific to nova-compute lxd there from a brief look
<icey> bradm: charm-lxd is absolutely intended for use with openstack; there has been some discussion about migrating it to a generic LXD charm but that's never gotten to a high priority
<bradm> icey: jamespage said it was deprecated tho? would it be a reasonable charm to use for openstack to configure lxd? looks like it's called charm-nova-lxd now
<icey> bradm: yes, it got moved to charm-nova-lxd to ease access to the lxd name and to clarify that the charm _is_ for nova-lxd, not just to configure lxd; that said, nova-lxd is also deprecated at this point
<jamespage> tl;dr this is not the charm you are looking for
<icey> :)
<jamespage> bradm: I'm guessing that you want something to deploy just LXD using the MAAS provider
<jamespage> or are you looking for a way to manage the LXD containers that Juju creates in that scenario?
<stickupkid> manadart_, achilleasa CR --> fixes the smoke tests https://github.com/juju/juju/pull/11844
<bradm> jamespage: the latter, I want to manage the storage pools for lxd, I want to put the lxd containers for openstack in a different filesystem
<stickupkid> achilleasa, OUT=$(exit 1 || true); echo $?
<achilleasa> stickupkid: OUT=$(fgrep /dev/zero blah || true); echo $? ;-)
<achilleasa> exit 1 is short-circuiting the or bit; it's cheating ;-)
<stickupkid> grrr
<mirek186> Hi, has anyone managed to deploy OpenStack placement with hacluster?
<stickupkid> achilleasa, updated it, we just let it fail
<stickupkid> @achilleasa, might need to grab your thoughts about something later!
<achilleasa> stickupkid: sure thing. Btw, I am a bit confused about the verbosity case statement
<jamespage> mirek186: yep
<jamespage> mirek186: -> #openstack-charms might be a better place to ask
<mirek186> ok, thanks
<stickupkid> achilleasa, updated my PR https://github.com/juju/juju/pull/11844
#juju 2020-07-21
<kelvinliu> wallyworld: got this pr for adding validation for controller-external-name, could u take a look? thanks https://github.com/juju/juju/pull/11845
<stickupkid> anybody know what the yaml should look like for model-defaults, zero tests
<stickupkid> as in `juju model-defaults something.yaml`
<stickupkid> it's not the same as `juju model-config` which is handy /s
<stickupkid> juju model-defaults -m doesn't work right?
<stickupkid> hml, so you're right we don't need to adjust model-defaults for charmhub-url, but I did bring the command up to parity with model-config https://github.com/juju/juju/pull/11848
<hml> stickupkid: does it not allow setting the value to empty string?
<stickupkid> the only thing I didn't check... damn it
<stickupkid> hml: it works when you try to add-model
#juju 2020-07-22
<manadart_> Anyone around for a simple review? https://github.com/juju/juju/pull/11851
<manadart_> Merge 2.8 into develop: https://github.com/juju/juju/pull/11852
<mirek186> Does anyone know how to deploy ntp at the bare machine level? I know you can have a relation with other charms, but what if all charms are deployed in LXD containers and you want just a single ntp server on the machine hosting LXD?
<blahdeblah_> mirek186: Just use a basic charm like ubuntu, deploy it to the bare metal, and associate ntp with that.
<mirek186> but what if it's part of a bigger deployment, e.g. openstack-bundle, where machine provisioning is done by maas for you? it's easy for ceph-osd and nova-compute as those components go on bare ubuntu, but I have a few infrastructure machines where every charm is deployed into an lxd container.
<blahdeblah_> You can still push a basic charm onto the machine by adding it manually, or you can just configure MAAS to deploy an appropriate NTP config for you.
<blahdeblah_> 9 times out of 10, the only thing you really need to change about the default ntpd or chronyd config is to ensure that a list of at least 4 nearby NTP servers is used rather than random defaults which are half an Internet away from your network.
<mirek186> blahdeblah_ with MAAS is there an out of the box option or you have to add an extra curtin script for it?
<blahdeblah_> It has been a while since I touched it, but you could at least specify your own list of NTP servers, I think.
<mirek186> In MAAS settings there is an option for NTP, but is this one which then goes to each deployed machine? If yes, is there a point to the extra ntp charm then?
<blahdeblah_> Like I said, it's been a while since I touched it.  But when I still worked for Canonical, the MAAS team were trying to make it so that MAAS deployed out of the box with a decent NTP config so that you didn't need the charm.
<mirek186> Ok, thanks for letting me know.
<mirek186> Could someone help me trace down what is probably my misconfiguration on ovn-chassis. I have external network 10.10.60.0/24 and config: physnet1:br-ex, br-ex:br60 . When trying to create an instance I've got an error nova.exception.PortBindingFailed: Binding failed for port. My ovn knowledge is zero, so I would appreciate if someone could point me in the right direction to verify first that my ovn charm config is correct and then why it's failing to create. The nova
<mirek186> host has access to br60 so it's not that I think.
#juju 2020-07-23
<stickupkid> manadart_, achilleasa can one of you review this bad boy? https://github.com/juju/juju/pull/11848
<stickupkid> hml, I've updated the description to include the other fields now https://github.com/juju/description/pull/83
<stickupkid> hml, so it's up for re-review :)
<hml> stickupkid: do we have an answer to my question in the PR?  sorry, don't remember
<hml> i'll review after lunch
<stickupkid> hml, yes, i added the fields to model the charm. I was just waiting to understand what we wanted to model and get approval for it
<hml> stickupkid: rgr
<achilleasa> stickupkid: got some minor comments on your PR
<stickupkid> achilleasa, sorry missed this message
<stickupkid> sure will check
<stickupkid> achilleasa, done
<thumper> morning
<hml> hi thumper
#juju 2020-07-24
<wallyworld> hpidcock: jenkins is busy and is timing out a bit, but there's some final test runs in progress (snap, release, ci) for this PR https://github.com/CanonicalLtd/juju-qa-jenkins/pull/482
<hpidcock> wallyworld: looking
<wallyworld> hpidcock: ty, i just pushed a small change to add extra validation to the git commit sha params. i added a validating string plugin to jenkins
<wallyworld> ah balls, just broke it, fixing
<hpidcock> wallyworld: loving the validating-string for job params
<wallyworld> yeah
<wallyworld> i just removed staging root as a param for the release job and injected it, but not everything picks it up, sigh
<wallyworld> there's a few jobs like that
<wallyworld> where param injection seems to be flakey
<hpidcock> wallyworld: welcome to my world
<wallyworld> i'll just add it back as a param for now
<hpidcock> wallyworld: added some comments
<wallyworld> ty
<wallyworld> hpidcock: i already pushed validating strings in those other places :-)
<wallyworld> hpidcock: and yeah, deliberately killed 2.7 cause no more releases and if we do, we will make it work the new way
<stickupkid> achilleasa, whoops https://github.com/juju/description/pull/84
<achilleasa> stickupkid: done
<stickupkid> ta
<manadart_> achilleasa: Can you look at this one? https://github.com/juju/juju/pull/11856
<achilleasa> manadart_: sure; give me a few min and I can look
<manadart_> achilleasa: Found an issue with it.
<achilleasa> manadart_: did you see my comments in the PR?
<manadart_> achilleasa: Yep, thanks. I think I've addressed it all and fixed the prior issue. QA is working now.
<manadart_> Anyone able to tick a forward merge? https://github.com/juju/juju/pull/11858
<SpecialK|Canon> .win 25
<SpecialK|Canon> sorry; new keyboard
<stickupkid> SpecialK|Canon, mechanical keyboard with flashy LEDs?
<SpecialK|Canon> stickupkid: laptop keyboard with no ortho :(
<achilleasa> stickupkid: these validators make me sad https://github.com/juju/description/blob/v2/model.go#L1042... guess why :D
<stickupkid> why does a dto have validation :|
<achilleasa> see the sets being passed in by value?
<stickupkid> MAP
<stickupkid> set.Strings
<stickupkid> not yours, don't modify it
<stickupkid> NYDMI
<achilleasa> there are more such validators there
<achilleasa> I will do a drive-by
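(A sketch of the gripe above, assuming set.Strings is map-backed as in juju/collections: a map-typed value passed "by value" still shares its underlying storage, so a validator that mutates it also mutates the caller's set.)

    package main

    import "fmt"

    type strSet map[string]bool // stand-in for set.Strings

    func validate(s strSet) {
        delete(s, "admin") // looks local, but the caller's set loses the entry too
    }

    func main() {
        s := strSet{"admin": true, "user": true}
        validate(s)
        fmt.Println(s) // map[user:true]
    }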
<stickupkid> hml, the ground works for refresh API https://github.com/juju/juju/pull/11815
<hml> stickupkid: rgr
<achilleasa> stickupkid: or hml can one of you take a look at https://github.com/juju/description/pull/85?
<achilleasa> let me know if the PR description seems confusing :D
<stickupkid> EOW... I'll pick it up next week if hml doesn't get a chance
<achilleasa> no rush; I changed juju's go.mod to work with my local copy so I can keep working until this bit is reviewed
<achilleasa> I 'm EOW in 30min as well :D
