[00:52] <davecheney> bigjools: sup
[00:52] <davecheney> thumper: do you have any contacts on u1 ?
[00:53] <davecheney> sidnei: ping
[01:10] <sidnei> davecheney: pong
[01:11] <davecheney> sidnei: nate is looking for someone that can help him with windows installer tools
[01:11] <wallyworld_> thumper: did you write the DiskManager stuff for tools?
[01:11] <davecheney> does u1 use anything for producing its windows installer that would be of use ?
[01:12] <sidnei> davecheney: uhm, brian curtin was our windows guy, i think mike mccracken is in charge now. let me see if i can figure out what they are using.
[01:12] <davecheney> sidnei: what timezone are they in
[01:13] <davecheney> i wanna try to cross connect nate and those two guys
[01:13] <sidnei> mike is in cali
[01:13] <davecheney> kk
[01:13] <davecheney> that'll work
[01:13] <sidnei> i think brian left recently
[01:13] <davecheney> oh snap
[01:13] <davecheney> ok, i'll write to mike
[01:17] <sidnei> davecheney: seems to be bitrock: http://bitrock.com/
[01:17] <sidnei> http://bazaar.launchpad.net/~ubuntuone-control-tower/ubuntuone-windows-installer/trunk/view/head:/scripts/build_installer.py fwiw
[01:17] <davecheney> sidnei: thanks
[01:18]  * sidnei disappears again
[01:34] <thumper> wallyworld_: no
[01:34] <wallyworld_> yeah, i saw john did it
[01:34] <thumper> davecheney: yes
[01:34] <wallyworld_> but nothing seems to use it apart from tests
[01:34] <davecheney> thumper: yes what
[01:34]  * thumper shrugs
[01:35] <davecheney> i think I forgot the question
[01:35] <thumper> davecheney: yes I have contacts on u1
[01:35] <thumper> davecheney: beuno
[01:35] <davecheney> thumper: ahh, this is a question from Nate
[01:35] <davecheney> who wants to know what we use for win32 installers
[01:35] <thumper> ah
[01:35] <davecheney> sidnei said talk to Mike McCracken
[01:35] <thumper> they are in very similar time zones
[01:36] <davecheney> thumper: anyone else who delivers on win32 in the company I can point him to ?
[01:36] <thumper> sorry, don't really know
[01:36] <davecheney> thumper: s'ok
[01:36] <davecheney> it's an unusual request
[01:36] <thumper> although jamesh used to work on u1
[01:36] <thumper> he may know
[01:37]  * thumper pings him
[01:37] <davecheney> axw: do you get much chance to talk with Nate ?
[01:37] <davecheney> he's in a pretty shitty timezone for you
[01:37] <davecheney> WA <> Central time is not awesome
[01:37] <axw> davecheney: haven't spoken to him since I've started
[01:37] <axw> yeah it's like, the opposite
[01:37] <axw> :)
[01:38] <thumper> they must be ~12 hours apart
[01:38] <axw> yep
[01:38] <davecheney> axw: it is the fall that enslaves us all
[01:39] <axw> "most isolated city in the world", even when you're working remotely
[01:40] <davecheney> axw: you have your own currency over there, right ?
[01:41] <axw> yeah, the WA bux
[01:41] <axw> actually we just trade in iron ore
[01:41] <davecheney> with a picture of the little creatures brewery on the $20 bill
[01:41] <thumper> davecheney: <jamesh> The win32 code is at https://launchpad.net/ubuntuone-windows-installer
[01:41] <davecheney> ha! raw coke iron
[01:41] <davecheney> thumper: ta
[01:42] <thumper> heh, might be using py2exe
[01:42] <davecheney> axw: do the ATMs dispense iron ore and uranium ?
[01:42] <thumper> but should still use a .msi somewhere
[01:42] <axw> looks the same as bzr's
[01:43] <axw> davecheney: what is an ATM? I keep my money under my bed
[01:43] <axw> anyway... ;)
[01:43] <thumper> money... or opals and ore?
[01:44] <axw> yep, not a very comfortable bed
[01:53] <axw> I am shattered
[01:53] <axw> my daughter woke up twice last night, my son three times
[01:53] <axw> :(
[01:53] <thumper> :(
[01:53] <thumper> I remember those days
[01:54] <thumper> axw: don't stress, and sleep if you need to
[01:54] <thumper> the world won't fall apart if you miss a day :)
[01:54] <axw> thumper: thanks, I'm kinda used to it, just a bit worse than usual
[01:55]  * thumper nods
[01:56] <axw> thumper: have you ever tried upgrading a local provider? :)
[01:56] <axw> thumper: the bootstrap machine agent restarts repeatedly
[01:59] <axw> I'll make sure I can reproduce it again, and log a bug..
[02:06] <thumper> axw: we decided that we didn't support upgrading with the local provider
[02:06] <thumper> axw: however, if it is easy to fix
[02:06] <thumper> then we could do it I suppose
[02:06] <thumper> I just wasn't going to waste cycles on it
[02:06] <axw> thumper: ah ok
[02:07] <thumper> we should either document the limitation or fix it I guess
[02:07] <axw> do we want a bug in LP anyway? at least then we can decide to tell "upgrade-juju" to not attempt it
[02:10] <axw> Bug #1214676  -- logged anyway, at least then there's a record for the next unsuspecting user
[02:10] <_mup_> Bug #1214676: upgrade-juju in local environment causes bootstrap machine agent to restart continuously <juju-core:New> <https://launchpad.net/bugs/1214676>
[02:39] <davecheney> jamespage: thanks for keeping https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-juju-2-delivery up to date
[02:39] <davecheney> i'll try to find out when we're going to deliver 1.14 stable as soon as I can
[03:07] <bigjools> o/
[03:56] <davecheney> axw: thanks for 12742045/
[03:56] <davecheney> how come none of the tests have to change ?
[03:57] <davecheney> does that mean we don't have a test that checks that the listener file exists on disk ?
[03:57] <axw> davecheney: because you don't need to manually delete unix sockets in Go
[03:57] <axw> Close() unlinks them
[03:58] <davecheney> axw: you can still leave socket turds around
[03:58] <axw> if the process terminates, yes
[03:58] <davecheney> service jujud-unit-agent stop
[03:58] <davecheney> will leave a turd around
[03:58] <davecheney> that was how I ran across this
[04:00] <axw> the existing tests all specify the path as non-abstract
[04:00] <axw> i.e. they don't go through that codepath that specifies them as abstract
[04:00] <davecheney> hmm, ok
[04:00] <davecheney> i'm not going to complain
[04:00] <davecheney> but maybe someone will
[04:00] <axw> ideally we'd have a test for it, but it seemed like an awful lot of work for very little gain
[04:01] <davecheney> axw: yup
[04:02] <bigjools> davecheney: so, you are prepared for an Azure onslaught?
[04:02] <bigjools> and axw :)
[04:02] <davecheney> bigjools: yes, bring it on
[04:02] <bigjools> because bugs are getting filed already (not sure if they are real bugs yet...)
[04:03] <davecheney> bigjools: yeah, we'll manage
[04:03] <bigjools> I'll help as much as I can of course
[04:03] <bigjools> will take you through it next week
[04:03] <davecheney> ok
[04:03] <bigjools> will prob make Monday after all
[04:03] <davecheney> \o/
[04:04] <davecheney> your weed delivery came through early ?
[04:04] <bigjools> nah sending the wife to get it
[04:04] <axw> bigjools: I know nothing about Azure, so ... probably not prepared. But I'll look forward to learning
[04:04] <bigjools> axw: there is indeed much to learn
[04:05] <bigjools> on the bright side, the weather here is marvellous
[04:05] <axw> here too - spring is slightly early
[06:03] <jam> has anyone been trying canonistack with 'use-floating-ip: true' ? I saw the machine come up, and I can ssh to the private address, but the public address tells me invalid public key
[06:05] <axw> I haven't, but I can give it a try
[06:05] <jam> axw: if you would, to give me a point of comparison
[06:15] <axw> jam: it works for me if I specify the identity file (~/.canonistack/axw_lcy02.key) and user as ubuntu
[06:15] <axw> i.e. what's in my .ssh/config for canonistack, for the private IP range
[06:16] <jam> axw: ah, user ubuntu, probably my fault
[06:16] <jam> different machine, I don't have my normal setup
[06:17] <jam> axw: thanks for the reminder
[06:17] <axw> jam: no worries
[06:17] <jam> default key was fine, didn't have the config for ubuntu@ for 172.* addresses.
[06:18] <jam> axw: interestingly, username isn't part of the 'ssh -v' output.
[06:18] <axw> so it would seem - helpful!
[06:25] <bigjools> arosales: hopeful ping
[06:26] <arosales> bigjools, hello
[06:26] <bigjools> arosales: wow you're up!
[06:26] <bigjools> arosales: I updated your config doc with my version of things, let me know how it looks
[06:26] <arosales> for a little bit longer
[06:27] <arosales> bigjools, thanks
[06:27]  * arosales is waiting for agent-state to go to started in Azure
[06:27] <arosales> using --upload-tools
[06:28] <arosales> bigjools, I saw your and gavin's reply so I was mistaken there. I was unaware the --upload-tools built juju-tools locally to match the version being used
[06:28] <bigjools> arosales: no worries
[06:28] <bigjools> I'll see if I can re-create your deployment bug
[06:30] <arosales> https://bugs.launchpad.net/juju-core/+bug/1214636 may be due to mismatch juju-tools
[06:30] <bigjools> I highly suspect so yes
[06:30] <arosales> I am trying with --upload-tools now
[06:30] <arosales> https://bugs.launchpad.net/juju-core/+bug/1214178 I think we can make better by having the user specify the settings file
[06:30] <bigjools> I am just bootstrapping then I'll try to deploy
[06:31] <bigjools> arosales: this is the first time I ever saw a settings file :)
[06:31] <arosales> https://bugs.launchpad.net/juju-core/+bug/1214181 ~should~ be resolved once we get the correct tools in Azure
[06:31] <_mup_> Bug #1214181: Azure Provider always uploading 1.12 tools <juju-core:Incomplete> <https://launchpad.net/bugs/1214181>
[06:31] <arosales> bigjools, I take it you also saw Ben's tool in the google set up doc
[06:31] <bigjools> I did
[06:31] <bigjools> not looked at the code
[06:31] <arosales> ok
[06:32] <bigjools> we just saw the stuff on the management UI about creating and uploading a certificate
[06:32] <arosales> Azure complained my pem didn't have the right structure when I used the openssl command
[06:32] <arosales> I too went that route initially
[06:33] <bigjools> I love the way their ui insists your file must be a .cer, like that means anything
[06:33] <arosales> but  I think the settings file makes for a better user experience.
[06:33] <arosales> bigjools, I feel your guys' pain
[06:33] <bigjools> arosales: all of Red used the openssl tool to generate a certificate fine
[06:33] <arosales> to a _very_ small degree
[06:33] <bigjools> ha :)
[06:33] <bigjools> arosales: the settings file can only be downloaded once you upload a certificate, right?
[06:33] <davecheney> arosales:   openssl req -config /usr/share/ssl-cert/ssleay.cnf -x509 -nodes \
[06:33] <davecheney>     -days 3650 -newkey rsa:2048 -keyout azure.pem -out azure.pem
[06:34] <davecheney>    openssl x509 -inform pem -in azure.pem -outform der -out azure.cer
[06:34] <arosales> bigjools, that may be my pilot error on the pem gen .  .
[06:34] <bigjools> davecheney: exactly
[06:34] <arosales> I thought I was following Azure instructions, but the settings file helps simplify that.
[06:34] <bigjools> is the settings file documented anywhere?
[06:35] <arosales> hm . . . I'll have to check with utlemming on that
[06:35] <bigjools> I can't see anything on manage.windowsazure.com about it
[06:35] <davecheney> what is the settings file ?
[06:35] <arosales> Azure hands out the pems for a subscription in an xml based file
[06:36] <bigjools> AFAICT it's some xml that contains a subscription ID and the certificate
[06:36] <bigjools> but given that you have to generate and upload a certificate I question its usefulness for juju
[06:37] <arosales> bigjools, aiui this settings file generates the needed certificate.
[06:37] <arosales> so I didn't have to upload a cert to have juju work.
[06:38] <arosales> just needed to parse the settings file and put the pem in the correct path
[06:38] <bigjools> arosales: ok, would love to see any docs on that
[06:38] <bigjools> how did you find out about it?
[06:38] <arosales> bigjools, ok I'll follow up with utlemming on it.
[06:38] <bigjools> thanks
[06:38] <arosales> utlemming also says this helps solve the China Azure endpoint problem too
[06:39] <bigjools> arosales: what problem?
[06:39] <arosales> utlemming told me about it when I was having some initial bootstrap issues
[06:39] <arosales> managing multiple certificates for different Azure end points
[06:40] <arosales> bigjools, I'll start a thread with utlemming so when he gets up the morning he can shed some more light on it.
[06:40] <bigjools> arosales: I honestly can't work out what the problem is with that, if Azure needs a separate cert then we just config it in a separate juju env
[06:40] <bigjools> ok cool
[06:41] <arosales> agent-state still in pending with --upload-tools
[06:42] <arosales> bigjools, http://pastebin.ubuntu.com/6009219/
[06:42] <arosales> I seem to not be able to get out of pending
[06:42] <arosales> what version of juju should I be using?
[06:43] <arosales> and is there any other special juju set up I need?
[06:44]  * arosales using 1.13.2 (compiled yesterday)
[06:44] <bigjools> arosales: I am using tip of trunk
[06:45] <bigjools> I just did a deployment, so let's see how it goes
[06:45] <arosales> bigjools, ok thanks
[06:45] <bigjools> arosales: you can try ssh-ing into machine 1
[06:45] <bigjools> check the agent log
[06:46]  * arosales just destroyed
[06:46] <arosales> will redeploy again
[06:47] <bigjools> I just deployed cs:precise/juju-gui on saucy
[06:47] <arosales> bigjools, juju didn't complain about the series mismatch?
[06:48] <bigjools> it does unless you force the series
[06:48] <arosales> I was just deploying wordpress out of a local saucy repo
[06:49] <bigjools> ok
[06:49] <arosales> bigjools, I guess you just set your "default-series" to "precise" in your env.yaml, correct?
[06:49] <bigjools> arosales: no, that's saucy still.  I literally just did "juju deploy cs:precise/juju-gui"
[06:50] <bigjools> and off it goes
[06:50] <arosales> huh ok
[06:51] <arosales> bigjools, have you had success with a local deploy?
[06:51] <bigjools> I haven't tried
[06:51] <arosales> I just downloaded the latest wordpress charm and did
[06:52] <arosales> --repository=/home/arosales/devel/local-charms/   local:saucy/wordpress
[06:52]  * arosales stating the obvious
[06:53] <bigjools> I'm getting public key error trying to ssh into machine 1
[06:53] <bigjools> this means cloud-init is probably hosed
[06:54] <bigjools> either that or juju's user data went wrong - and given the bootstrap worked I suspect the latter
[06:54] <arosales> ugh
[06:54] <bigjools> I've definitely deployed before so something has recently broken
[06:55] <arosales> there was a recent fix for cloud-init ssh access on azure . .  .
[06:55] <bigjools> sadly no way of finding out since I can't ssh in ...
[06:55] <bigjools> oh do you have a reference?
[06:56] <bigjools> the fix might not be in the image
[06:56] <arosales> https://bugs.launchpad.net/cloud-init/+bug/1212723
[06:56] <_mup_> Bug #1212723: cloud-init fails to set user password on Windows Azure <amd64> <apport-bug> <cloud-images> <saucy> <cloud-init:Fix Committed by smoser> <cloud-init (Ubuntu):Fix Released by smoser> <https://launchpad.net/bugs/1212723>
[06:56] <arosales> I had thought that had gone into the Monday's daily . . .
[06:56] <bigjools> arosales: ah that's not relevant here
[06:56] <bigjools> we don't use passwords
[06:57] <bigjools> so I can ssh into machine 0 but not 1
[06:57] <arosales> ah ok
[06:57] <bigjools> something catastrophic has gone wrong in cloud-init if it hasn't picked up the ssh key
[06:57] <bigjools> I'll write this up on the bug arosales
[06:57] <arosales> ya 0 has been able to go to a started state for me, it's just subsequent services that get stuck in pending
[06:58] <arosales> bigjools, if you can point to something in cloud-init I can pick this back up with utlemming and smoser in the morning (us time)
[06:58] <bigjools> it could be cloud-init or juju's fault
[06:59] <bigjools> so I'll write up as much as I can and then you can get smoser to take a look I guess
[06:59] <bigjools> we could do with a debug setting to put a password on the account
[07:00] <arosales> bigjools, which bug are you documenting in?
[07:00] <bigjools> arosales: https://bugs.launchpad.net/juju-core/+bug/1214636
[07:00] <_mup_> Bug #1214636: Azure Provider: Deployed service never goes to started <juju-core:New> <https://launchpad.net/bugs/1214636>
[07:00] <arosales> ok
[07:01] <arosales> bigjools, did that fail with the charm store gui deploy and the local charm or just the local?
[07:02] <bigjools> arosales: with the cs one
[07:02] <arosales> ok
[07:02] <bigjools> not sure it matters
[07:03] <bigjools> it's not getting that far
[07:05] <bigjools> arosales: I'm going to try with a different (older) image that I used successfully before and see if that helps
[07:05] <arosales> bigjools, ack, thanks for looking into and the help
[07:07] <bigjools> arosales: not a problem
[07:08] <arosales> geesh deploys are taking about 9 minutes
[07:08] <arosales> and that just waiting for the service to connect to the state server
[07:09] <bigjools> arosales: azure is slow :/
[07:10] <arosales> bigjools, is there a particular API call that takes long
[07:10] <arosales> I would like to bring this up with msft on why deploys take so long
[07:10] <bigjools> arosales: most of them return quickly, it's just the provisioning process
[07:11] <bigjools> waiting for a machine to come up and then boot and then provision.
[07:11] <bigjools> a certain amount of time is wasted if the image is not new enough as the apt-get update/upgrade takes a while
[07:12] <arosales> we should be hitting a local mirror for apt-get updates on the order of less than a minute, especially for a daily image.
[07:13] <arosales> It seems the console provisions vms faster than 5 minutes, but I haven't done one recently
[07:13] <arosales> would be good comparison
[07:14] <bigjools> arosales: the api calls that are slow are mostly deletion of stuff
[07:14] <bigjools> but there's plenty of bugs filed about that already
[07:14] <arosales> ok, but that shouldn't affect the deploy times
[07:14] <arosales> and they are fixing delete
[07:15] <arosales> I think you saw the latest API for that
[07:15] <bigjools> yep
[07:23] <axw> jam: I can't make the standup. should I just send comments to the list?
[07:23] <axw> (for shared review)
[07:26] <noodles775> allenap: Hi! I saw your branch with improvements to the Makefile... Another thought I had the other day is adding -y to the apt-add-repository and apt-get installs (or at least allowing -y via env or similar?). What do you think?
[07:28] <arosales> bigjools, on a quick test from the console gallery it takes about 4 minutes to create a virtual machine
[07:28] <bigjools> arosales: sounds about right
[07:32] <bigjools> arosales: huh, my known-good image no longer works
[07:33] <bigjools> davecheney: why would mongo be coming up on port 27017 and juju wanting to connect on 37017?
[07:33] <arosales> :-(
[07:38] <bigjools> arosales: you should probably call it a night! If you can grab Scott tomorrow maybe he can help debug based on my bug comments
[07:41] <arosales> bigjools, will do
[07:42] <arosales> ok good night fellas
[07:42] <bigjools> nn arosales
[07:42] <arosales> bigjools, I'll touch base with smoser tomorrow morning
[07:42] <bigjools> ok
[07:43] <bigjools> thanks
[07:43]  * bigjools eats
[07:55] <jam> axw: sounds good. I think we're going to try and have Tim run a shared review conversation sometime in the AU-friendly timezones. But I haven't seen him to coordinate it.
[07:57] <axw> jam: okey dokey, thanks
[08:03] <rogpeppe> mornin' all
[08:04] <fwereade> rogpeppe, heyhey
[08:04] <rogpeppe> fwereade: yo!
[08:06] <TheMue> rogpeppe: heya
[08:06] <rogpeppe> TheMue: hiya
[08:13] <fwereade> TheMue, ping
[08:13] <allenap> noodles775: I think that's a fair point. I'll add the -ys.
[08:14] <fwereade> TheMue, actually I'll just mark https://codereview.appspot.com/12347043/ WIP, please reject it yourself if you're doing a fresh branch for unset
[08:15] <noodles775> allenap: thanks (it wasn't really a point about your branch, just something I needed/wanted yesterday, and since you were there... :) ).
[08:27] <TheMue> Gna, update just made me reboot.
[08:27] <TheMue> fwereade: Seen that you pinged me?
[08:59] <bigjools> hey, anyone know why would mongo be coming up on port 27017 and juju wanting to connect on 37017?
[09:01] <jam> bigjools: "apt-get install mongodb" brings it up on 27017
[09:01] <jam> bigjools: but jujud coming up sets an upstart config that puts a different one on 37017
[09:01] <jam> with TLS enabled, etc.
[09:01] <bigjools> jam: this was after I did a bootstrap
[09:01] <bigjools> bootstrap node didn't finish coming up
[09:01] <jam> bigjools: that sounds like cloud-init successfully installed mongodb-server, but jujud did not successfully finish bringing itself and mongodb up
[09:01] <bigjools> and ssh-ing in showed this
[09:01] <bigjools> yeah
[09:02] <bigjools> in cloud-init-output.log it just shows lots of failed connections
[09:02] <bigjools> no other errors obvious
[09:02] <jam> can you tar up and post the cloud-init-output ?
[09:02] <jam> I can take a look at it
[09:02] <bigjools> I can do better and put your public key on the machine if you want?
[09:02] <jam> bigjools: ssh-import-id jameinel should work
[09:03] <bigjools> one sec
[09:03] <bigjools> ssh ubuntu@juju-azure-9nz3pmvw7e.cloudapp.net
[09:03] <jam> success
[09:04] <bigjools> jam: I'll come clean on something - this is using an old-ish saucy image but I needed to work out why I can't deploy on the latest daily.  This old image used to work.
[09:04] <bigjools> the latest daily shows a publickey error, so can't ssh in and see
[09:06] <jam> bigjools: root@default:~# mongod --version
[09:06] <jam> db version v2.0.4, pdfile version 4.5
[09:06] <jam> Wed Aug 21 09:05:32 git version: nogitversion
[09:06] <jam> that looks like a really old db version
[09:06] <jam> 2.0.4
[09:06] <jam> vs 2.4+ should be in new saucy
[09:06] <bigjools> weird - I have deployed with this image before
[09:06] <bigjools> why would this stop working?
[09:06] <jam> ||/ Name                     Version                  Description
[09:06] <jam> +++-========================-========================-==================================================
[09:06] <jam> ii  mongodb-server           1:2.0.4-1ubuntu2.1       object/document-oriented database (server package)
[09:07] <jam> bigjools: /etc/lsb-release says this is a Precise image, not saucy
[09:07] <bigjools> yeah was about to say
[09:07] <bigjools> but should still work nonetheless
[09:07] <bigjools> oh wait ... haha
[09:07] <jam> bigjools: if we know it is precise, we add-apt-repository ppa:juju/stable  to get newer mongodb
[09:08] <bigjools> I know what's up, I think my default series is wrong :)
[09:08] <bigjools> right :)
[09:08] <bigjools> thanks for helping me see the end of my nose!
[09:08] <jam> bigjools: so your machine thinks it is deploying saucy, but it is actually precise
[09:26] <davechen1y> jam: bigjools you need to install ppa:juju/stable
[09:27] <davechen1y> to pick up the mongodb dep
[09:27] <bigjools> davechen1y: jam figured it out, I had a dodgy config
[09:29] <davechen1y> bigjools: also
[09:29] <davechen1y> i wasn't thinking
[09:29] <davechen1y> we always insert that ppa via cloud-init on the remote bootstrap machine
[09:29] <davechen1y> if required
[09:29] <bigjools> understandable
[09:30]  * bigjools heads towards TV to watch Aussies lose at cricket again
[09:30] <davechen1y> bigjools: we're going to get on like a house on fire next week
[09:30] <davechen1y> i'm super confident of that fact
[09:30] <davechen1y> jam: yolanda has a question about bootstrapping on canonistack
[09:30] <bigjools> davechen1y: :D
[09:31] <yolanda> hi, i just upgraded to 1.12 version of juju, trying to deploy on canonistack, but deployment is stuck on INFO juju open.go:69 state: opening state; mongo addresses: ["10.55.60.49:37017"]; entity ""
[09:31] <davechen1y> she's seeing it trying to contact the bootstrap node on the private 10
[09:31] <davechen1y> ip
[09:31] <yolanda> gives timeout
[09:31] <davechen1y> jam: mgz any ideas ?
[09:31] <davechen1y> how do you setup the port forwarding for canonistack ?
[09:34] <rogpeppe1> fwereade, jam, mgz, TheMue: https://codereview.appspot.com/13089045
[09:34] <rogpeppe1> fwereade: i hope this addresses your concerns about the lax stuff.
[09:35] <jam> davechen1y: you can either use 'sshuttle' or you can just set "use-floating-ips: true'
[09:35] <jam> davechen1y: canonistack got 2 new /24 for public IPs, so we should have enough now.
[09:36] <TheMue> rogpeppe1: *click*
[09:36] <jam> davechen1y: but the port forwarding I use is: apt-get install sshuttle; sshuttle -r ubuntu@$BOOTSTRAP 10.55.0.0/16
[09:37] <yolanda> jam, so i'll try it
[09:38] <jam> yolanda: k, if you have any more questions, feel free to ask
[09:40] <yolanda> jam, that worked!
[09:40] <davechen1y> woot
[09:43] <davechen1y> yolanda: i think the default image type on openstack machines is 1core 1gb ram
[09:44] <davechen1y> jam: and mgz wrote this
[09:44] <davechen1y> they would be able to tell you
[09:48] <rogpeppe1> i just realised that i reviewed (comprehensively) entirely the wrong file
[09:48] <jam> rogpeppe1: we'll still appreciate that review :)
[09:48] <yolanda> tried a juju debug-log, having that error: http://paste.ubuntu.com/6009664/
[09:48] <rogpeppe1> jam: i reviewed constraints.go
[09:49] <rogpeppe1> jam: (a small file, but with primitives used in quite a few places)
[09:49] <jam> rogpeppe1: I can make that the assignment for next week :) I was going to do relations.go but constraints is worthwhile for the group, too.
[09:49]  * davechen1y looks
[09:49] <yolanda> that url that it complains about, works nice in a browser
[09:49] <davechen1y> yolanda: could be sporadic
[09:49] <davechen1y> can you try again
[09:49] <jam> yolanda: you can try a) swift list and b) just try juju debug-log again
[09:54] <yolanda> jam, davechen1y, different answer now
[09:54] <davechen1y> juju deploy --constraints="cores=4 mem=4G" $SERVICE
[09:54] <davechen1y> something like this should ask for a different instance
[09:55] <davechen1y> i forget if constraints are best fit or absolute match
[09:55] <TheMue> rogpeppe1: you've got a review
[09:55] <rogpeppe1> TheMue: thanks!
[09:55] <davechen1y> yolanda: if you need more disk i *believe* you get it by matching an instance type that has more cores and more ram
[09:56] <jam> davechen1y: they should be a "minimum" so if there is only a 4-core 8G service we'll give it to you
[09:56] <davechen1y> jam: /me can't remember the instance types available on canonistack
[09:56] <rogpeppe1> TheMue: as I said in the description, the plan is to rename Sync to StartSync - i didn't want to confuse matters by doing it in this CL
[09:56] <jam> yolanda, davechen1y: sidnei was putting together a patch to add a "root-disk" constraint to specify how much disk space the OS gets. But that isn't in a released version. (It might have landed and will be in 1.13.2, though)
[09:57] <rogpeppe1> jam, fwereade: i would very much appreciate your feedback on this CL https://codereview.appspot.com/13089045
[09:58] <davechen1y> jam: yolanda do all canonistack instances get the same root disk partition ?
[09:58] <davechen1y> i thought it was larger if you asked for more cores/ram
[09:59] <yolanda> davechen1y, i normally got more disk by setting a larger instance-type, for example m1.small instead of m1.tiny
[09:59] <jam> davechen1y: can you try running your mongodb with --no-unix-socket? We have a really hard time in the test suite if we try to start mongodb with a flag it doesn't recognize. So if it is ~roughly sane, we can go with it, but if there is a chance it will be hard to debug, I don't think it is worth it.
[09:59] <jam> axw: ^^
[10:00] <jam> davechen1y, yolanda: 'nova flavor-list' shows about 1300 options, so it isn't like you can remember them all :)
[10:00] <jam> sorry 300
[10:00] <jam> they all start with a 1
[10:00] <axw> jam: --nounixsocket works (I tested it before across all the code), --no-unix-socket isn't a thing
[10:01] <jam> axw: I realize it works on your machine, I'm concerned about it running on all developers and platforms we want to run the test suite on.
[10:01] <axw> right sorry, misunderstood
[10:01] <jam> We ran into trouble in the --no-ssl switch (where having an old mongo just hangs the test suite for 600s before the test times out, with poor information about why it is failing)
[10:01] <axw> fwiw I verified it's on the PPA version
[10:01] <axw> so it'll be no worse than lacking SSL
[10:01] <TheMue> rogpeppe1: rename Sync to StartSync? that's what IMHO would be wrong.
[10:02] <rogpeppe1> TheMue: no, rename StartSync to Sync.
[10:02] <yolanda> jam, i normally use an instance-type=m1.small and that's all, or flavor 2, that works for canonistack
[10:02] <rogpeppe1> TheMue: there should be no occurrences of Sync left (other than calls to presence Sync)
[10:02] <jam> axw: I can confirm that with the 2.2.4 that we produced in the tarball on S3, --nounixsocket works
[10:02] <axw> jam: thanks
[10:03] <axw> gtg to dinner, adios
[10:03] <TheMue> rogpeppe1: oh, you wrote it the other direction above. so the first step is the new StartSync() and rename all Sync()s to StartSync() and then later to Sync()?
[10:03] <jam> axw: have a good evening
[10:03] <axw> cheers, you too jam
[10:03] <rogpeppe1> TheMue: yes
[10:03] <rogpeppe1> axw: g'night
[10:03] <TheMue> rogpeppe1: ah, then absolute +1, my fault
[10:04] <rogpeppe1> TheMue: np
[10:04] <jam> rogpeppe1: "the watcher loop will not do anything else until it finishes sync" is that appropriate? (It sounds like we are blocking when we should be doing something in a background thread)
[10:04] <davechen1y> axw: ship it
[10:05] <rogpeppe1> jam: the watcher would never do anything else while syncing
[10:06] <davechen1y> lucky(~/src/launchpad.net/juju-core/provider) % mongod --whogivesafuck
[10:06] <davechen1y> error command line: unknown option whogivesafuck
[10:06] <davechen1y> use --help for help
[10:06] <davechen1y> lucky(~/src/launchpad.net/juju-core/provider) % mongod --nounixsocket
[10:06] <davechen1y> Wed Aug 21 20:05:53 [initandlisten] MongoDB starting : pid=23438 port=27017 dbpath=/data/db/ 64-bit host=lucky
[10:06] <davechen1y> Wed Aug 21 20:05:53 [initandlisten] db version v2.2.4, pdfile version 4.5
[10:06] <rogpeppe1> jam: the only change now is that there's no time interval between asking for a sync and it actually starting one
[10:06] <davechen1y> works for me
[10:06] <jam> davechen1y: thanks for confirming, thats what I see here as well.
[10:06] <davechen1y> jam: jolly good
[10:06] <davechen1y> was just a bit gun shy after last time
[10:07] <jam> davechen1y: mongodb's start flags are surprisingly hard to expose and respond to.
[10:08]  * davechen1y sobs
[10:08] <yolanda> davechen1y, i also tried a juju destroy, debug showed unit is dying, but service doesn't die, and it has been a long time, could there be some problem?
[10:08] <davechen1y> fffuuuu web 2.0
[10:09] <davechen1y> yolanda: weird, pastebinit ?
[10:11] <yolanda> davechen1y http://paste.ubuntu.com/6009736/
[10:11] <yolanda> just got that "unit is dying" and no more notice
[10:11] <yolanda> but service is still there
[10:13] <davechen1y> yolanda: did you do remove-unit or destroy-service ?
[10:13] <yolanda> destroy-service
[10:14] <yolanda> in status it just shows "dying"
[10:15] <davechen1y> yolanda: juju status $YOURSERVICE should show
[10:15] <davechen1y> one service
[10:15] <davechen1y> with no units
[10:16] <davechen1y> rogpeppe1: +1 for nuking Sync
[10:16] <rogpeppe1> davechen1y: cool
[10:16] <davechen1y> for no other reason than we always use StartSync
[10:16] <davechen1y> so it's unneeded
[10:16] <rogpeppe1> davechen1y: well, we *did* use Sync in quite a few places
[10:17] <rogpeppe1> davechen1y: but it's always unnecessary
[10:17] <davechen1y> exactly
[10:17] <davechen1y> and one way to do things is better than two
[10:17] <rogpeppe1> davechen1y: and i think my changes give StartSync the same amount of useful guarantee that we had from Sync previously
[10:18] <rogpeppe1> davechen1y: (also, i plan to rename StartSync to Sync once the dust has settled)
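The StartSync pattern under discussion can be sketched roughly as below; the names and structure are illustrative only, not juju's actual state/watcher code. The key point is that a sync request is queued on the loop's request channel and handled on the next pass, so there is no polling interval between asking for a sync and starting one, and StartSync itself does not block until the sync completes.

```go
package main

import "fmt"

// reqSync is the request type queued by StartSync (illustrative name).
type reqSync struct{}

type watcher struct {
	request  chan interface{}
	needSync bool
}

// StartSync asks the event loop to sync soon; it does not wait for the
// sync to finish.
func (w *watcher) StartSync() {
	w.request <- reqSync{}
}

// loopOnce models one iteration of the event loop: handle a request,
// flush pending events, then sync if one was requested. Handling a
// reqSync adds nothing to be flushed, which is the guarantee rogpeppe1
// relies on above.
func (w *watcher) loopOnce() {
	switch (<-w.request).(type) {
	case reqSync:
		w.needSync = true
	}
	// flush() would run here.
	if w.needSync {
		w.needSync = false
		// sync() would run here.
	}
}

func main() {
	w := &watcher{request: make(chan interface{}, 1)}
	w.StartSync()
	w.loopOnce()
	fmt.Println("synced")
}
```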
[10:18] <yolanda> davechen1y, i destroyed environment and redeployed again, as charm was in a failed state, but no error in the debug-log
[10:26] <davechen1y> yolanda: im sorry it's not working on canonistack
[10:26] <davechen1y> i do not test on canonistack
[10:26] <davechen1y> this is my failing
[10:28] <yolanda> davechen1y, you mean the debug-log? or which issue?
[10:29] <jam> mgz: natefinch: given you are adding Addresses to lots of providers, are we missing a LiveTest across all implementations that it is available and gives a "sane" result?
[10:29] <jam> or is the idea that you have to build it up first and then implement the conformance test?
[10:30] <jam> (I'd rather see a patch that adds a conformance test and stubs out the ones that don't implement it until we fix them, since that clearly records the current state)
[10:46] <natefinch> greetings all
[10:46] <TheMue> rogpeppe1, fwereade, natefinch: https://codereview.appspot.com/12752044
[10:46] <TheMue> natefinch: good morning
[10:46] <rogpeppe1> natefinch: hiya
[10:47] <natefinch> jam: Yes, I believe we're missing a live test to make sure they all work... though I think they all work, since we're explicitly implementing them to get at information we know exists
[10:48] <fwereade> rogpeppe1, (btw I cast a quick eye over it, and I think I like, but I've got a lot of reviews to churn through today)
[10:48] <rogpeppe1> fwereade: should i hold on for your review?
[10:48] <jam> fwereade: fwiw I've done quite a few of them for you already :)
[10:48] <davechen1y> yolanda: not working well on canonistack in general
[10:48] <fwereade> jam, <3
[10:49] <yolanda> davechen1y, noticed some problems, yes
[10:49]  * TheMue => lunchtime
[10:50] <jam> fwereade: of course, the only one that I didn't dig into is yours :)
[10:50] <fwereade> rogpeppe1, go ahead and merge it, I'll keep a tab open and throw a fit after the fact if I spot something awful, but I don;t expect to -- I did sneak a look and it seems solid to me
[10:50] <rogpeppe1> fwereade: cool, thanks
[10:50] <fwereade> jam, haha
[10:50] <fwereade> jam, I'm a little conflicted about it anyway, I don't think it's our top priority
[10:51] <jam> fwereade: unfortunately when you comment via Reitveld, LP doesn't move the branch from "Requested reviews" to "Reviews I am doing".
[10:51] <jam> :(
[10:51] <rogpeppe1> jam: you ok with the StartSync changes going in? (from your question earlier, i presume you've at least had a glance)
[10:51] <jam> fwereade: can you land or reject: https://code.launchpad.net/~fwereade/juju-core/errors-cleanup/+merge/168928
[10:52] <jam> rogpeppe1: I haven't looked at the patch yet.
[10:52] <rogpeppe1> jam: np
[10:52] <jam> I've been reading IRC to follow along with the discussion, though.
[10:52] <fwereade> jam, hell, sorry, that was rotted a month ago :( rejecting
[10:52] <rogpeppe1> jam: i can hold on for you if you'd like
[10:54] <jam> rogpeppe1: first thing I saw was the "string => interface" change. Is the collect logic actually correct for things that aren't strings?
[10:54] <jam> (
[10:54] <rogpeppe1> jam: i believe so, as long as they can be used as map keys
[10:56] <fwereade> allenap, ping
[10:56] <jam> rogpeppe1: I'm slightly uncomfortable about the change to Lifecycle watcher being bundled with changing the semantics of Sync.
[10:56] <rogpeppe1> jam: in the case of Cleanup, the ids are of type ObjectIdHex
[10:57] <allenap> fwereade: pong
[10:57] <fwereade> allenap, do you recall, how does maas count cores? does hyperthreading count?
[10:57] <rogpeppe1> jam: it seemed like a fairly trivial change, but i could split the collect change out into another CL if you like
[10:58] <rogpeppe1> jam: (it's only 4 lines)
[10:58] <allenap> fwereade: Let me see....
[10:58] <jam> rogpeppe1: just conceptually it is something I need to think about, and seems very orthogonal to what is going on
[10:59] <rogpeppe1> jam: i needed to use collect to fix the Cleanup watcher, and it had the wrong type
[10:59]  * fwereade hopes it's counting logical cores, not physical, because that seems to fit best with the ec2/openstack situation
[11:00] <rogpeppe1> jam: so the lifecycleWatcher needed to change for that, so it's not entirely orthogonal, but i can propose that change independently as a prereq if that's your preference
[11:01] <fwereade> and actually... jam, mgz: do you know offhand how openstack counts a flavor's cores?
[11:01] <allenap> fwereade: I think it counts cores. It evaluates the xpath count(//node[@id='core']/node[@class='processor'][not(@disabled)]) against an lshw XML dump.
[11:01] <jam> fwereade: well they are all virtual there, right?
[11:01] <jam> I don't know how it maps hyperthreading into available cores.
[11:01] <fwereade> jam, indeed, and overcommit is a whole new can of worms
[11:01] <bigjools> whatever the kernel reports
[11:02] <rogpeppe1> jam: essentially it's just moving a dynamic type cast out of collect and into the watcher-specific logic
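A coalescing helper along the lines of the collect change being discussed might look like this. Everything here is an illustrative guess at the shape, not juju's actual state/watcher.go; the point is that keying the map by interface{} admits non-string ids such as mgo's ObjectIds, as long as the values are valid map keys.

```go
package main

import "fmt"

// change is a pending watcher event, keyed by some document id.
type change struct {
	id    interface{}
	revno int64
}

// collect drains any further pending changes from the channel,
// coalescing them by id; a later revno for the same id overwrites an
// earlier one.
func collect(one change, more <-chan change) map[interface{}]int64 {
	seen := map[interface{}]int64{one.id: one.revno}
	for {
		select {
		case ch := <-more:
			seen[ch.id] = ch.revno
		default:
			return seen
		}
	}
}

func main() {
	more := make(chan change, 2)
	more <- change{id: "unit-0", revno: 2}
	more <- change{id: 42, revno: 1} // a non-string key works too
	fmt.Println(len(collect(change{id: "unit-0", revno: 1}, more))) // prints 2
}
```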
[11:03] <jam> rogpeppe1: so given that "StartSync" actually just puts the "reqSync" into the queue, and when req := <-w.request sees it, it calls handle() and *then* flush() before we loop around to 'if w.needSync', doesn't that mean your statement about "it syncs and waits for it to complete" isn't actually true?
[11:05] <natefinch> fwereade: nproc on my hyperthreaded 4 core processor returns 8, btw.
[11:05] <rogpeppe1> jam: i don't *think* so
[11:06] <rogpeppe1> jam: because handle of a reqSync will not actually add anything to be flushed
[11:06] <natefinch> fwereade: I would hope we somehow discount hyperthreaded cores, since they don't even come close to actually doubling processing power
[11:06] <fwereade> natefinch, indeed, I'm really just fretting about whether consistency is even possible given some of our providers
[11:07] <jam> rogpeppe1: in the time since we start handling the reqSync, more might come into the channel, which will be flushed before we finish handling the reqSync.
[11:07] <jam> that time window is very small
[11:07] <jam> since we only really need to set the bool
[11:07] <jam> but it does exist, doesn't it?
[11:08] <rogpeppe1> jam: how could that happen? if there are no events to be flushed, then flush will never read on the channel
[11:08] <jam> natefinch: when I was testing it 7 years ago, 2+2 hyperthreaded cores were easily equivalent to 3 cores (as in, enabling hyperthreading allowed my threaded code to run 50% faster on 2 physical cores)
[11:09] <rogpeppe1> jam: and so i'm fairly sure there are no places where new events can arrive between handling a request and calling sync()
[11:09] <natefinch> jam: last I remember it was like a +25% and then only for jobs that have a lot of thread switching.... but regardless, we shouldn't treat them as fully powered cores
[11:11] <jam> natefinch: I think the precision of this stuff is such that people basically need to test it and figure out what works for them. :) I don't think MaaS or Juju need to be all that aware of how actual benchmark results for a given workload map into "cores=X".
[11:11] <jam> We just want to provide a way for people to give their input.
[11:12] <jam> Ignoring hyperthreading completely doesn't seem quite correct, though neither is considering them 100%. But I think doing "something" is reasonable, as long as it is consistent.
[11:12] <jam> natefinch: in the case of MaaS most likely people who *really* cared would use hardware-based tags to essentially define their own flavors, and deploy based on that.
[11:14] <natefinch> jam: yep, totally makes sense.
[11:31] <TheMue> standup
[11:32] <jam> rogpeppe1: fwereade: https://plus.google.com/hangouts/_/f497381ca4d154890227b3b35a85a985b894b471
[11:33] <jam> mgz: ^^
[12:42] <smoser> bigjools, or arosales i just tested a saucy daily from today and it seems functional. you could have been bit yesterday by bug 1214541
[12:42] <_mup_> Bug #1214541: hostname setting is erroring out <amd64> <apport-bug> <cloud-images> <saucy> <cloud-init (Ubuntu):Fix Released> <https://launchpad.net/bugs/1214541>
[12:42] <smoser> tested == tested from cloud-init's perspective (it ran user-data, provisioned user ... )
[13:04] <rogpeppe1> jam: "
[13:04] <rogpeppe1> LifecycleWatcher coalescing its changes is a good thing, and a pretty notable
[13:04] <rogpeppe1> change to have it silently added.
[13:04] <rogpeppe1> "
[13:05] <rogpeppe1> jam: Lifecycle watcher *was* previously coalescing its changes
[13:05] <rogpeppe1> jam: my CL doesn't change that
[13:07] <rogpeppe1> jam: it's quite possible the testing for that wasn't great though
[13:07] <jam> rogpeppe1: so is your change just making it use a common helper, or ? Either way it is a modest change that should probably make it into the "and I changed LifecycleWatcher to XXXX". It helps frame an understanding of what is actually changing. Given that I clearly got it wrong 2 times now :)
[13:07] <jam> "make it into the *summary*"
[13:07] <rogpeppe1> jam: lifecycleWatcher is still calling the same helper it always called
[13:08] <rogpeppe1> jam: it's just that i needed to change the signature of that helper so i could use it with the CleanupWatcher
[13:09] <rogpeppe1> jam: i could put "I changed the collect helper function to use interface{} keys" into the summary if you think that's worth it
[13:09] <jam> rogpeppe1: so I guess some of it is diff context: looking here: https://codereview.appspot.com/13089045/patch/5001/6009
[13:09] <jam> it certainly looks like lifecycle watcher is changing
[13:09] <jam> but I realize that last diff block could be CleanupWatcher
[13:10] <jam> which I'm *pretty* sure means CleanupWatcher is now collecting when it didn't before
[13:10] <rogpeppe1> jam: hmm, i don't see that in my diff
[13:10] <jam> rogpeppe1: https://codereview.appspot.com/13089045/diff/5001/state/watcher.go is a different way to look at it.
[13:11] <jam> but collect() is now getting called where it wasn't before (unless I'm *completely* misreading this)
[13:11] <rogpeppe1> jam: i think you're misreading the unified diff
[13:11] <rogpeppe1> jam: the collect code is being added to cleanupWatcher
[13:11] <rogpeppe1> jam: occupational hazard with unified diffs, i fear
[13:11] <jam> rogpeppe1: so I misread that it was lifecycle, but the code is added to cleanupWatcher which is the same thing I'm mentioning (adding collection to something that wasn't collecting before)
[13:12] <jam> rogpeppe1: so s/Lifecycle/Cleanup/ and my comment still applies, I think.
[13:12] <rogpeppe1> jam: you're right. sorry, i was thrown off by the name
[13:12] <jam> rogpeppe1: we seem to be missing a test for CleanupWatcher now, given that if you revert that change no tests will break, right?
[13:12] <jam> so we don't *know* that cleanup is collecting
[13:13] <rogpeppe1> jam: no, i made the change because tests broke
[13:13] <rogpeppe1> jam: in particular, TestWatchCleanup failed
[13:14] <rogpeppe1> jam: i'll retry to make sure of that
[13:18] <rogpeppe1> jam: hmm, it doesn't seem to fail any more!
[13:18] <jam> fwereade: just a gentle nudge to remind you to submit a couple bugs about container.go
[13:19]  * jam is off for the evening, though I'll probably respond at some point later
[13:19] <fwereade> jam, still talking to ian, have a doc open ready to convert :)
[13:21] <rogpeppe1> jam: i'll add a specific test for cleanup event coalescence
[13:51] <smoser> rvba, are you able to reproduce https://bugs.launchpad.net/juju-core/+bug/1214636 ?
[13:52] <_mup_> Bug #1214636: Azure Provider: Deployed service never goes to started <cloud-init:New> <juju-core:Confirmed> <https://launchpad.net/bugs/1214636>
[14:00] <rogpeppe1> i'm going to be offline for an hour or so, then i should have sporadic network access for 5 hours after that
[15:08] <rogpeppe1> fwereade: ping
[15:46] <natefinch> anyone here familiar with lxc?  I used juju to deploy locally, and I'd like to open up the service to computers on my network, but I don't really know how to do it. Right now the services just have local IP addresses like 10.0.3.187... .how do I expose that to my local network?
[15:51] <mgz> there's not a trivial way of doing that
[15:52] <natefinch> huh ok
[15:53] <mgz> you can use iptables or similar to manually route traffic on a port in, for instance
[15:53] <mgz> and we've briefly discussed making juju expose do something like that
[15:55] <natefinch> mgz: yeah, manually routing the traffic is pretty much what I was thinking
[15:55] <mgz> I'd give you the iptables command you need, but you can probably google it as easily as me :0
[15:56] <natefinch> mgz:  haha yeah, that's what I was just doing, no worries
[16:33] <arosales> fwereade, or rogpeppe1 any core folks interested/have time to join our weekly charm sync.
[16:33] <arosales> fwereade, rogpeppe1 wed at 16:00
[16:33] <arosales> utc, that is.
[16:34] <fwereade> arosales, yes please, sign me up
[16:34] <arosales> fwereade, thanks I'll add you to the invite. Please feel free to delegate and/or let me know if I should add any other folks.
[16:35] <fwereade> arosales, I might end up doing so occasionally, but I'm very interested personally
[16:36] <arosales> fwereade, be great to have some core folks there, thanks! :-)
[16:38] <arosales> fwereade, invite sent.
[17:07] <fwereade> hey, it's after 6; I'm tired, see you all later :)
[17:44] <arosales> got a juju ssh question if any folks are around
[17:47] <natefinch> arosales: I'm around, but you probably know more than I do.  However, might as well ask :)
[17:47] <arosales> on juju core I am sure you have me beat :-)
[17:48] <natefinch> arosales: We'll see :)  I can at least search the codebase pretty easily for answers that can be answered that way
[17:49]  * arosales getting
[17:49] <arosales> 2013-08-21 17:43:25 ERROR juju supercommand.go:282 command failed: required environment variable not set for credentials attribute: User
[17:49] <arosales> error: required environment variable not set for credentials attribute: User
[17:49] <arosales> but I am using keys values not user
[17:50] <arosales> also odd it is opening up my hp environment when I state an aws one . . .
[17:50] <arosales> juju --debug ssh 1 -e aws-go
[17:50] <arosales> 2013-08-21 17:43:25 INFO juju provider.go:121 environs/openstack: opening environment "hp-go"
[17:50] <arosales> hmm, that may be the problem . . .
[17:50] <natefinch> heh
[17:50] <arosales> thats odd
[17:50] <natefinch> the code looks like it'll say "user" even with keys
[17:51] <natefinch> which looks like a copy and paste error
[17:51] <natefinch> (or at the very least, the error message should be made clearer)
[17:52] <natefinch> nah, it looks like copy and paste.... looks like the section on username/password got copied for keys and then the error message never got changed
[17:52] <natefinch> But, it sounds like you may have figured it out anyway?
[17:52] <arosales> juju seems to be picking up my bash env setting when trying to ssh instead of my command line option, and just for ssh
[17:53] <arosales> juju --debug stat -e aws-go
[17:53] <arosales> 2013-08-21 17:52:08 INFO juju ec2.go:137 environs/ec2: opening environment "aws-go"
[17:53] <arosales> but ssh tries my "hp-go" environment even though I state "-e aws-go"
[17:54] <natefinch> arosales: sounds like a bug, though it's weird that it would happen only in one subcommand, I'd expect that to be shared logic... but I can see if I can find where that's set
[17:55] <arosales> i'll open a bug and see if I can work around this by taking out my env  setting in my bashrc file
[17:56] <natefinch> certainly if you say -e and it doesn't use that environment, that's a bug :)
[18:02] <arosales> https://bugs.launchpad.net/juju-core/+bug/1215052
[18:02] <_mup_> Bug #1215052: juju ssh ignores the command line "-e"  and instead uses JUJU_ENV in my .bashrc <juju-core:New> <https://launchpad.net/bugs/1215052>
[18:02] <arosales> bug filed
[18:02] <arosales> odd issue
[18:04] <natefinch> yeah, pretty weird
[18:07] <smoser> arosales, are you sure it wouldn't work with
[18:07] <smoser> juju -e aws-go --debug ssh 1
[18:07] <arosales> smoser, yup
[18:07] <arosales> that's the exact command I ran
[18:08] <smoser> well, its not what you typed above
[18:08] <smoser> its not completely unreasonable if juju stopped looking for flags to it after it saw 'ssh', and instead passed those forward to something else.
 juju --debug ssh 1 -e aws-go
[18:08] <smoser> ie
 juju -e aws-go --debug ssh 1
[18:09] <arosales> smoser, so you are saying order of arguments matter
[18:09] <smoser> order may be important
[18:09] <smoser> yes. its not terribly uncommon.
[18:09] <arosales> good point
[18:09] <smoser> especially if juju was going to pass other options on to ssh
[18:09] <smoser> ie:
[18:09] <smoser> juju --debug ssh 1 run this command
[18:10] <arosales> smoser, I'll try your order here
[18:10] <smoser> arosales, also, icame here wondering if you tested aws and saucy
[18:10] <smoser> (which i suspect you were trying :)
[18:10] <arosales> smoser, btw I was working on trying to confirm precise ssh is still working
[18:11] <arosales> per the Azure bug <arosales> juju --debug ssh 1 -e aws-go
[18:11] <natefinch> smoser: good point, ssh may be different because it expects to be passed arbitrary arguments to be run
[18:11] <arosales> https://bugs.launchpad.net/juju-core/+bug/1214636
[18:11] <_mup_> Bug #1214636: Azure Provider: Deployed service never goes to started <cloud-init:New> <juju-core:Confirmed> <https://launchpad.net/bugs/1214636>
[18:13] <arosales> lol
[18:13] <arosales> juju -e aws-go --debug ssh 1
[18:13] <arosales> error: flag provided but not defined: -e
[18:13] <smoser> ?
[18:13] <arosales> but, the following works
[18:13] <smoser> odd.
[18:13] <arosales> juju --debug stat -e aws-go
[18:13] <arosales> 2013-08-21 18:13:05 INFO juju ec2.go:137 environs/ec2: opening environment "aws-go"
[18:13] <arosales> smoser, if I put -e towards the front of the command, juju doesn't recognize it
[18:14] <smoser> put it after ssh and before '1'
[18:14] <smoser> maybe. just try that.
[18:14] <arosales> smoser, so I can't run your command example, 'juju -e aws-go --debug ssh 1"
[18:14] <smoser> arosales, try
[18:14] <smoser> juju --debug ssh -e aws-go 1
[18:15] <smoser> that would not be terribly unreasonable if the '-e' was not a juju global flag, but was a flag to the 'ssh' sub command.
[18:15] <smoser> (and just happened to be a flag to many subcommands)
[18:15] <arosales> smoser, that does work
[18:15] <smoser> alright.
[18:16] <smoser> well thats at least moderately sane
[18:16] <smoser> (i'd even argue "perfectly fine")
[18:16] <arosales> but to a mere mortal like me, totally unreasonable
[18:16] <smoser> so did you verify that juju works with saucy ?
[18:16] <arosales> smoser, so precise works on aws
[18:16] <arosales> I can ssh
[18:16] <natefinch> yeah... seems like juju -e should work, even if the command you're running doesn't care about the environment
[18:16] <smoser> arosales, bigjools reported precise (with his custom image) worked.
[18:16] <arosales> smoser, I went back to precise on aws as I was getting the above error
[18:17] <smoser> i wanted to see if saucy worked on aws. to rule out general saucy error.
[18:17] <smoser> as it really does seem to me that cloud-init is functioning correctly.
[18:17] <arosales> smoser, trying that now
[18:18] <smoser> arosales, can you ssh to your precise aws ?
[18:18] <arosales> smoser, yes
[18:18] <smoser> i was going to ask you to give me output of 'ec2metadata --user-data'
[18:18] <arosales> bootstrapping with saucy now on aws
[18:18] <smoser> in a secure channel
[18:19] <arosales> ah, just destroyed
[18:19] <arosales> let me see if saucy works and I can get you user data off precise
[18:41] <arosales> smoser, saucy ssh with juju works on aws
[18:41] <arosales> smoser, so sounds like the issue is Azure specific . .  . ?
[18:43] <smoser> arosales, can you point me at the doc you have so far ?
[18:43] <smoser> and i'll try to reproduce it?
[19:33] <smoser> marcoceppi, http://marcoceppi.com/2013/07/compiling-juju-and-the-local-provider/
[19:33] <smoser> export GOPATH="~/.juju/"
[19:33] <smoser> is wrong
[19:33] <smoser> quoting the '~' explicitly creates a directory called '~'
[19:34] <marcoceppi> smoser: that's...odd.
[19:34] <smoser> its expected :)
[19:34] <marcoceppi> It's odd that I put that quoted in the post
[19:34] <marcoceppi> I'll update it!
[19:35] <smoser> 2 other things
[19:35] <smoser> hm..
[19:35] <smoser> you need bzr for 'go get'
[19:36] <marcoceppi> smoser: ack, that was already installed on my system
[19:36] <marcoceppi> I didn't run these against a "clean" machine
[19:36] <marcoceppi> Will update the post
[19:39] <smoser> marcoceppi, because you're the expert (i'm following your blog)
[19:39] <smoser> do you know
[19:39] <smoser> http://paste.ubuntu.com/6011504/
[19:39] <smoser> anyone else maybe ?
[19:39] <marcoceppi> I've not received that error before. What version of ubuntu? I can try to replicate
[19:40] <natefinch> smoser: almost certainly because you're running go 1.0.2 and juju uses 1.1 now
[19:40] <natefinch> unfortunately apt-get still installs 1.0.2
[19:40] <marcoceppi> natefinch: ah, was that a recent switch? I've not tried to compile since 1.11.4
[19:41] <natefinch> marcoceppi: yeah in the last month or so, I think
[19:41] <marcoceppi> natefinch: ah, sorry smoser, I'll update my blog to reflect the golang ppa too
[19:43] <marcoceppi> natefinch: actually, I thought there was a ~gophers ppa, but I don't see 1.1 in there
[19:44] <natefinch> marcoceppi: https://groups.google.com/forum/#!topic/golang-nuts/iJFhI8K5a2Y
[19:45] <marcoceppi> natefinch: bummer :\
[19:45] <smoser> yeah, there is no raring either in golang ppa
[19:46] <smoser> saucy!
[19:51] <natefinch> marcoceppi: when I started I think I had to use the tarball from golang.org
[20:03] <sidnei> smoser, marcoceppi: https://launchpad.net/~james-page/+archive/golang-backports
[20:03] <marcoceppi> sidnei: awesome, thanks!
[20:04] <marcoceppi> sidnei smoser updated the blog post
[20:06] <smoser> how do you normally tell juju about ssh public keys?
[20:06] <smoser> for maas it seems to have explicitly (in config)
[20:06] <smoser>     authorized-keys-path: ~/.ssh/authorized_keys # or any file you want.
[20:06] <smoser>     # Or:
[20:06] <smoser>     # authorized-keys: ssh-rsa keymaterialhere
[20:06] <smoser> but that is provider specific ?
[20:07] <marcoceppi> smoser: it normally just uses whatever is in ~/.ssh/id_rsa.pub
[20:07] <marcoceppi> You can add additional ones using either the two keys above
[20:22] <natefinch> marcoceppi: I deployed discourse using juju and it looks great except that in the registration confirmation emails, it's using the internal hostname of the EC2 instance, instead of the external hostname.... any thoughts on how to fix that?
[20:35] <marcoceppi> natefinch: Yeah, you'll need to edit the config/database.yml file
[20:35] <marcoceppi> I need to add "external hostname" configuration option to the charm
[20:35] <marcoceppi> to automatically add that information in
[20:36] <natefinch> marcoceppi: I'd be happy to contribute to the charm :)
[20:36] <marcoceppi> natefinch: please do!
[20:36]  * marcoceppi syncs his local version to charm branch
[20:36] <natefinch> marcoceppi: I didn't think you'd say no ;)
[20:37] <natefinch> marcoceppi: I am completely new to the charms, so... it'll take me some ramp up time. But especially for this particular charm, I want to use it for personal reasons. So I have skin in the game that it works well :)
[20:37] <marcoceppi> natefinch: I've got several fixes to the auto-thin configuration that I need to land, so I'm not sure how far behind my cs:~marcoceppi/discourse version actually is
[20:37] <marcoceppi> but the github branch is always a little ahead
[20:37] <marcoceppi> and then I sync when the charm is "stable" again to the charmstore branch
[20:38]  * marcoceppi enjoys convoluted processes
[20:38] <natefinch> marcoceppi: lol fair enough
[20:39] <marcoceppi> but yeah, that part could use a bit of work (ie, update database.yml file and the nginx config with the hostname)
[20:40] <marcoceppi> I tried to keep the charm pretty simple, as I wanted to use it as an example charm, but it's kind of grown a bit beyond that, so if you have any questions as to why I've baked bits of crack in to the charm, don't hesitate to ask
[20:40] <natefinch> haha
[20:40] <natefinch> ok
[20:56] <smoser> any ideas?
[20:56] <smoser> http://paste.ubuntu.com/6011776/
[20:58] <sidnei> whoa
[20:58] <sidnei> looks like it choked on the cert?
[21:00] <smoser> http://paste.ubuntu.com/6011792/
[21:01] <smoser> it sure does look like that. i agree
[21:01] <smoser> that above is what i've done so far to get here.
[21:51] <bigjools> morning
[21:53] <bigjools> hi arosales
[22:03] <arosales> bigjools, morning
[22:03] <arosales> bigjools, smoser was unable to reproduce what you and I found on saucy
[22:04] <bigjools> arosales: because he wasn't using juju
[22:04] <arosales> bigjools, actually he was
[22:04] <bigjools> I only saw the azure command line tool being used
[22:04] <arosales> compiled the latest even
[22:04]  * arosales not sure if he updated with his latest
[22:05] <bigjools> arosales: the last comment on https://bugs.launchpad.net/bugs/1214636 doesn't show juju
[22:05] <_mup_> Bug #1214636: Azure Provider: Deployed service never goes to started <cloud-init:New> <juju-core:Confirmed> <https://launchpad.net/bugs/1214636>
[22:05] <arosales> bigjools, smoser had to run so he may not have updated his bug with the latest
[22:08] <bigjools> righto
[22:44] <thumper> hi folks
[22:44] <thumper> this branch is giving me the shits
[22:44] <thumper> I thought it would be a small, simple branch
[22:44] <thumper> but OH, NO, no it isn't at all
[22:46] <thumper> just replacing agent.Conf with an interface, and unexporting the structure
[22:46] <thumper> FFS it is tedious
[22:46] <thumper> I seem to have all tests passing except for agent/agent_test
[22:46] <thumper> which is because there are shed loads of explicit struct tests
[22:46] <thumper> so I left it for last
[22:47] <thumper> trying to do the bare minimum to get this landed
[22:47] <thumper> aargghh!
[22:47] <thumper> that and no ubuntu edge to make me feel better
[22:57] <fwereade> thumper, ha, I only just got round to thinking "bah, I'd better get one in case a mystery benefactor kicks in 20M at the last moment", but paypal can go fuck itself, so meh
[22:57] <thumper> :)
[22:57] <thumper> so no comment on the rest of the rant then :)
[22:58] <thumper> fwereade: I may throw it at you for review
[22:58] <fwereade> thumper, just reading backwards
[22:58]  * thumper looks at the size
[22:58] <fwereade> thumper, I won't say it'd be a *pleasure*, but I won't complain too much ;p
[22:59]  * thumper sucks wind
[22:59] <thumper> 1600 lines ATM
[22:59] <fwereade> ouch
[22:59] <thumper>  17 files changed, 441 insertions(+), 446 deletions(-)
[22:59] <thumper> and this is the simplest thing
[22:59] <thumper> that made sense
[22:59] <fwereade> fucking structs
[22:59] <thumper> long live interfaces
[23:01]  * fwereade girds his loins in preparation for the morrow then
[23:25] <sidnei> thumper: im trying to figure out why local provider is not setting the hostname of the containers it creates, and if this is an lxc bug or not
[23:25] <thumper> sidnei: it normally does
[23:26] <thumper> sidnei: using clone?
[23:26] <sidnei> thumper: nope, trunk without my changes
[23:26] <sidnei> thumper: but im using the daily lxc ppa
[23:26] <sidnei> so might be a change there
[23:26] <thumper> sidnei: it should, and is a template param
[23:27] <thumper> --hostid
[23:27] <thumper> line 154 of container/lxc/lxc.go
[23:32] <wallyworld> thumper: i'm going to move the Broker interface from worker/provisioner to (somewhere, probs instance), and make Environ be composed from Broker and remove the duplicated Environ methods
[23:32] <wallyworld> as part of some refactoring
[23:32] <thumper> ok
[23:32] <wallyworld> since part of it is extracting common start instance code
[23:32] <wallyworld> i am sharing some of your refactoring pain :-)
[23:33] <wallyworld> i still don't fully understand why the fcuk we used structs and not interfaces
[23:33] <wallyworld> maybe whoever did it did not read software engineering 101
[23:37] <thumper> I believe it was because "we don't need it yet" was the rationale
[23:37] <thumper> however, the retrofitting of interfaces is a royal PITA
[23:39] <wallyworld> there is no such thing as "we don't need it yet" with interfaces
[23:40] <wallyworld> they are needed for all sorts of things from day one, not the least of which is for tests
[23:40] <wallyworld> and extensible, refactorable code
[23:41] <wallyworld> and Go's design almost mandates their use unless you want gobs of cut and paste boilerplate everywhere
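The struct-to-interface refactor being complained about follows a standard shape: unexport the concrete struct, expose an interface, and tests can then substitute a trivial fake. A minimal sketch, with illustrative names that have nothing to do with agent.Conf's real fields:

```go
package main

import "fmt"

// Config is the exported interface; callers no longer see the struct.
type Config interface {
	DataDir() string
}

// conf is the now-unexported concrete implementation.
type conf struct{ dataDir string }

func (c *conf) DataDir() string { return c.dataDir }

// NewConfig is the only way outside packages obtain a Config.
func NewConfig(dir string) Config { return &conf{dataDir: dir} }

// fakeConfig is trivial to write once callers depend on the interface,
// which is the testing win wallyworld is pointing at.
type fakeConfig struct{}

func (fakeConfig) DataDir() string { return "/tmp/test" }

func main() {
	for _, c := range []Config{NewConfig("/var/lib/juju"), fakeConfig{}} {
		fmt.Println(c.DataDir())
	}
}
```

The cost, as thumper found, is that retrofitting this onto a widely-used exported struct touches every call site at once.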
[23:43] <thumper> wallyworld: I agree
[23:43]  * wallyworld sighs
[23:43] <thumper> wallyworld: but then again, you and I often do
[23:43] <wallyworld> often but not always :-)
[23:43] <wallyworld> like with rugby
[23:44] <thumper> no, that would be boring
[23:44] <thumper> so, think you have a shot this weekend?
[23:44] <wallyworld> nope :-(
[23:44] <thumper> heh
[23:44]  * thumper heads to the gym
[23:44] <wallyworld> we're fooked