[08:10] <blr> hmm, I appear to be missing /var/lib/lxc/juju-trusty-lxc-template/config - how do I restore that?
[13:44] <marcoceppi> blr: you can have juju recreate the template
[13:44] <marcoceppi> is this for local provider?
[14:41] <jcastro> rbasak: so, marco has releases of charm tools he'd like to get into wily then on whatever train that goes to trusty-backports
[14:41] <jcastro> he has the packaging and everything
[14:41] <jcastro> we just need to know how to move from a to b
[14:42] <marcoceppi> rbasak: this also requires several new packages to be added to archive
[14:44] <rbasak> jcastro, marcoceppi: trusty-backports or trusty-updates? trusty-backports has a well-defined path and so is easier if you're happy with that
[14:44] <marcoceppi> rbasak: well we need to get into wily first I think
[14:44] <rbasak> Juju itself is going into trusty-updates, not trusty-backports (so gets recommended to new users automatically, rather than being opt-in)
[14:44] <rbasak> Sure, wily first.
[14:45] <rbasak> marcoceppi: can you show me what you have please?
[14:45] <marcoceppi> trusty-updates would make sense for charm-tools, but again, there are several new deps not in archive yet (but packaged in ppa) that need to be added
[14:45] <marcoceppi> rbasak: sure, what do you need?
[14:45] <jcastro> yeah let's go wily first and worry about trusty later
[14:46] <rbasak> marcoceppi: ideally source packages ready for upload! But whatever you have will do as a starting point.
[14:46] <marcoceppi> rbasak: I've got a source package
[14:47] <rbasak> marcoceppi: link please? And what about the other new packages you mentioned?
[14:47] <marcoceppi> rbasak: which one of these do you need? http://paste.ubuntu.com/11953935/
[14:47] <marcoceppi> rbasak: they're also all uploaded to the ppa:juju/stable
[14:47] <rbasak> marcoceppi: I can pull them from the PPA, no problem.
[14:48] <marcoceppi> rbasak: cool
[14:50] <marcoceppi> rbasak: lmk if there's anything I need to fix in the packaging, it's been a... fun time getting to this point
[14:50] <rbasak> Will do
[14:52] <rbasak> marcoceppi: any plans to move to Python3 ?
[14:52] <marcoceppi> rbasak: this should support py2 and py3 - maybe that was amulet though
[14:52] <marcoceppi> rbasak: I can make a release that supports both
[16:05] <bhundven> marcoceppi: hello!
[16:07] <bhundven> marcoceppi: I was wondering if you got my pm about kvm constraints?
[16:21] <jingizu_> Hi everyone. I have been wrestling with trying to do the auto install of OpenStack using the openstack-installer (and manual juju deployment to my MAAS) and keep getting stumped when juju tries to bootstrap its first node. It fails when trying to download the tools from https://streams.canonical.com. I think it is because it is doing this as root and is not
[16:21] <jingizu_> honoring the maas-proxy. Anyone know of any workarounds?
[16:31] <bhundven> who should I talk with about FTBFS?
[16:40] <bhundven> http://paste.ubuntu.com/11954461/
[17:33] <jobot> Hello, me again. Rebuilt suitecrm charm based exactly on the sugarcrm charm in the store. Had it install once successfully, but now getting stuck on something in the database-relation-changed hook. http://paste.ubuntu.com/11954685/ . It seems like the second mysql command is failing or executing too soon... thanks for any advice
[18:28] <jose> Odd_Bloke, rcj: ping. having some troubles with the install hook of ubuntu-repository-cache: http://paste.ubuntu.com/11955045/
[18:31] <Odd_Bloke> jose: Yeah, we have several MPs waiting to fix this and other problems.
[18:31] <jose> Odd_Bloke: oh good. I'll take a look then!
[18:31] <Odd_Bloke> jose: I'm EOD, but rcj might be able to point you at them. :)
[18:31] <jose> if they're in the queue then we should be g2g
[18:31] <jose> oh, yes, I see two in there
[18:31] <jose> I'll finish with this blog post and I'll go ahead and take a look
[18:31] <jose> thanks!
[18:36] <lazyPower> jobot: o/ looking over your paste, the mysql command looks like it's malformed and it's giving you help output
[18:39] <jobot> lazyPower: Thanks. That was my initial thought. So, I did change some of it, and it worked (that's why the hook and log differ with the user/password stuff). Then it stopped working again, so I changed it back but still no go. The hook is the exact same code as the SugarCRM hook in the juju store, so it should be working ... ?
[18:39] <lazyPower> are you sure it's not taking a different path than you're expecting?
[18:40] <lazyPower> i would place some debug messaging around the code path and make some assertions about what's happening, and then attempt running the assembled command by hand during a debug-hooks session
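The quoting in the debug output matters: a minimal sketch (the variable here is a stand-in for what `relation-get` would return in the hook) showing how a quoted debug line makes stray whitespace visible:

```shell
#!/bin/bash
# Stand-in for: database=$(relation-get database)
database=" mydb"
# Quoting the variable in the debug line exposes the leading space.
echo "db='${database}'"   # -> db=' mydb'
```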
[18:41] <jobot> good idea, I will try that. Thanks.
[19:06] <wolsen> if anyone finds the time, I've posted an MP for mongodb unit tests failing -- https://code.launchpad.net/~billy-olsen/charms/trusty/mongodb/fix-unit-tests
[19:59] <lazyPower> jobot: just checking in to see how you're getting along with that rather tall order of scrabble i threw out earlier
[20:01] <lazyPower> beisner: ping
[20:01] <beisner> lazyPong ;-)
[20:01] <lazyPower> beisner: i've taken a look at https://code.launchpad.net/~billy-olsen/charms/trusty/mongodb/fix-unit-tests/+merge/266140 - and this LGTM.
[20:01] <lazyPower> mind if i merge it or do you want to kick the tires first? it got some love from osci
[20:02] <beisner> lazyPower, yep, kicked and re-kicked, good to go from my perspective.  fyi, that is re: bug 1479069
[20:02] <mup> Bug #1479069: unit tests failing for trusty/mongodb <openstack> <uosci> <mongodb (Juju Charms Collection):In Progress by billy-olsen> <https://launchpad.net/bugs/1479069>
[20:02] <lazyPower> yep, i'll close it out upon merge. TA for confirmation.
[20:03] <beisner> lazyPower, splendid.  thanks a lot!
[20:05] <lazyPower> rev75 is up. Thanks wolsen
[20:20] <jobot> lazyPower: thanks for checking in. I did discover something: http://paste.ubuntu.com/11955611/ . Something in the database-relation-changed hook at the point noted is causing a blank space to be put in front of the database variables, which then causes the mysql command to fail
[20:21] <lazyPower> paydirt
[20:21]  * lazyPower looks
[20:22] <lazyPower> jobot: interesting, lots of newlines in the output. i see the echo pass=, is that related to a debug statement you have inline?
[20:22] <jobot> yes, i put that in to see at what point the space appears... could be right after that #Update the database values
[20:24] <lazyPower> you can try piping it through tr
[20:24] <lazyPower> FOO="$(echo -e "${FOO}" | tr -d '[:space:]')"
[20:25] <lazyPower> that will remove all spaces from the string though
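As a sketch of the suggestion above (the relation value is hypothetical), `tr -d '[:space:]'` strips every whitespace character, including a stray leading space and trailing newline:

```shell
#!/bin/bash
# Hypothetical relation value with a leading space and a trailing newline.
raw=" wordpressdb
"
db="$(echo -e "${raw}" | tr -d '[:space:]')"
echo "database=${db}"   # -> database=wordpressdb
```

Note this also deletes interior whitespace, so it is only safe for values that should never contain spaces.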
[20:25] <jobot> cool thanks
[20:25] <jobot> still curious to know where the space is coming from
[20:25] <lazyPower> worth a shot anyway :)
[20:25] <lazyPower> wheres it reading the value from?
[20:25] <lazyPower> is it coming from config-get?
[20:26] <jobot> originally from relation-get
[20:35] <blr_> marcoceppi: yes, using the local provider, sorry how do I rebuild the template?
[20:36] <lazyPower> blr_: you should be able to remove the template via the LXC command line and it will rebuild it the next time you request a lxc deployment with juju
[20:36] <lazyPower> blr_: sudo lxc-ls --fancy
[20:36] <lazyPower> find the container series you wish to destroy, it will look like: juju-<series>-lxc-template
[20:37] <lazyPower> to remove it: sudo lxc-destroy -n juju-<series>-lxc-template
[20:37] <lazyPower> then rebootstrap && juju deploy ubuntu and it will recreate it for you. This incurs an ~200 MB download, so be mindful if you're on a limited connection.
[20:39] <blr_> lazyPower: thanks, I have a local package cache thankfully.
[20:39] <lazyPower> no worries, i know that some people are on metered connections and pulling down 200 MB of debootstrapped lxc templates can be tricky when troubleshooting.
[20:58] <wolsen> thanks lazyPower
[21:02] <jobot> lazyPower: trimming the spaces off the variables seems to be working nicely :)
[21:21] <jingizu_> I was able to get beyond the issue of not being able to bootstrap because the MAAS server had a different timezone than the bootstrap node in a VM on the MAAS host. However, once bootstrapped and Landscape deployed, I got the Landscape web UI, logged in and configured all -- it sees all of my machines, checklists are all green. When I click install it tried
[21:21] <jingizu_> to bootstrap juju again, this time on a different (physical) host. It fails yet again because it can't find tools! Any ideas why Landscape is not using the already-bootstrapped juju environment?
[21:22] <jingizu_> If I 'juju ssh landscape/0' from the MAAS/juju host, I can see that /var/lib/landscape/juju-homes/ exists, and each time I try and fail to install via Landscape, it creates another juju environment called '1', '2', '3', etc.
[21:23] <jingizu_> Confused as to why it's not using the actual juju environment that bootstrapped and installed Landscape in the first place
[21:30] <jobot> lazyPower: actually tbd :S
[21:30] <lazyPower> jobot: i'm about to EOD, best of luck. If you run into further roadblocks i'll be back tomorrow or you can hit the list up and i'll circle back in the morning.
[21:31] <jobot> ok thanks, have a good one
[21:31] <lazyPower> cheers :)
[22:11] <jose> aisrael: ping
[22:13] <veebers> Hi all, I imagine it's safe enough to purge/rm everything from '/var/cache/lxc/' as per comments on bug 1393932
[22:13] <mup> Bug #1393932: 'container failed to start' with local provider <deploy> <doc> <local-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1393932>
[22:53] <veebers> rats, nope didn't help, still getting this error when attempting to start the local machine http://pastebin.ubuntu.com/11956432/
[22:55] <jose> veebers: have you tried deleting the machine and re-creating?
[22:58] <veebers> jose: yes, I do `juju destroy-environment local` and `sudo lxc-destroy --name leecj2-local-machine-1` then `sudo lxc-destroy --name juju-trusty-lxc-template` then rm the cache and attempt to redeploy
[22:58] <jose> ok, so destroy-environment local should destroy the machines as well... they are not being destroyed?
[22:58] <jose> and you have to destroy them manually?
[22:59] <veebers> jose correct, just tried juju destroy-environment and a sudo lxc-ls --fancy shows both machines still there
[22:59] <jose> wat
[23:00] <veebers> where would I find the logs for what destroy is doing?
[23:00] <jose> lemme double check something
[23:00] <veebers> ~/.juju/.local/ is gone though
[23:00] <veebers> cool, thanks :-)
[23:00] <jose> I believe /var/log/juju/
[23:00] <jose> or /var/log/juju-username-local
[23:04] <veebers> ah yeah /var/log/juju-leecj2-local/, doesn't have anything useful though :-\
[23:20] <veebers> jose: I can't find any log for what's happening during the destroy-env, any ideas where else I could look?
[23:23] <jose> veebers: what about all-machines.log?
[23:24] <jose> I'm not sure I can give you much more help beyond trying to figure out what's in those logs, I don't use local that much
[23:24] <jose> however there's definitely people around who do
[23:24] <veebers> jose: ack, fair enough. Thanks for the help so far :-)
[23:25] <veebers> odd, the log access time 'Jul 29 10:49 /var/log/juju-leecj2-local/all-machines.log' is way different from the last message logged there: 2015-07-28 22:49:43 ERROR . . .
[23:25] <jose> UTC vs local time?
[23:26] <jose> veebers: what time is it for you?
[23:27] <veebers> I'm in NZ, current time is Wed 11:26am 29/7
[23:41] <jose> so yeah, utc vs local
[23:45] <veebers> waigani: so the machines show up in the lxc-ls (root) juju-trusty-lxc-template and leecj2-local-machine-1
[23:46] <veebers> trying to start the template machine works (it boots) trying to start the local machine doesn't: sudo lxc-start -n leecj2-local-machine-1 -F
[23:46] <veebers> error log here: http://pastebin.ubuntu.com/11956432/
[23:46] <jose> oh, have you deleted the image? maybe redownloading a new image would help
[23:48] <veebers> jose: is that different to deleting what is in /var/cache/lxc ?
[23:48] <jose> there was an on-air session explaining all of that, let me find and re-verify
[23:49] <veebers> this was all (largely) working for me yesterday but the template image I was using was ancient and had modifications made to it which broke other things, so I deleted the template image to try again
[23:52] <jose> eh, can't seem to find the video
[23:57] <waigani> veebers: so you've deployed ubuntu, lxc container has showed up in sudo lxc-ls? What does `juju status` say?
[23:59] <veebers> waigani: states: "agent-state-info: container failed to start", so I try and manually start the containers
[23:59] <veebers> waigani: 11:45 <veebers> waigani: so the machines show up in the lxc-ls (root) juju-trusty-lxc-template and leecj2-local-machine-1
[23:59] <veebers> 11:45 <veebers> trying to start the template machine works (it boots) trying to start the local machine doesn't: sudo lxc-start -n leecj2-local-machine-1 -F
[23:59] <veebers> y