[01:38] <hatch> can I deploy a precise charm to a trusty host somehow in the CLI? a --force option or the like?
[02:25] <rick_h_> hatch: no, I don't think it's allowed. I know we added logic to block that in the GUI.
[03:52] <marcoceppi> hatch: you can branch it locally.
[04:12] <hatch> marcoceppi: yeah that's what I ended up doing
[04:13] <lazyPower> mkdir ~/charms/trusty && cd ~/charms/trusty && charm get cs:precise/charm
[04:13] <lazyPower> deploy and cross fingers
[04:13] <lazyPower> 4/5 of the time, it 'just works' - unless it's got apache2 embedded in the charm. With the files being moved around as of the trusty release, some charms just plain go supernova on deploy
[04:14] <hatch> haha nah I was playing around with different providers seeing how much I could run on a single machine
[04:14] <hatch> digital ocean is leagues ahead of ec2 in terms of performance
[04:14] <lazyPower> Those SSD-backed SANs they use make all the difference
[04:14] <lazyPower> a lot of the latency you see on AWS is the EBS IO latency
[04:15] <hatch> identical setups and the digital ocean was anywhere from 50-100ms faster per request at 20 concurrent requests
[04:15] <lazyPower> less infrastructure to manage at present too :)  But I agree, I <3 DigitalOcean
[04:15] <hatch> heh yeah true
[04:15] <lazyPower> My entire personal infrastructure is up on there, the remnants of my business, and pet projects all go on DO
[04:15] <hatch> I recently found out about a Canadian host https://www.clouda.ca/
[04:15] <lazyPower> Oh I've looked at them for a client project. They were super nice.
[04:15] <hatch> more expensive but 100% Canadian and running openstack :)
[04:16] <hatch> I'm too tired, tomorrow I'll do the same tests as I did on ec2 and DO today
[04:16] <hatch> see how they stack up for that extra $ :)
[04:17] <lazyPower> Yeah, i'm wrapping up today's todo list that got a bit hairy. I spent an obscene amount of time cycling over something silly
[04:17] <hatch> yeah, that happens hah
[04:18] <hatch> I was loadtesting the built in sqlite setup as well
[04:18] <lazyPower> I'm one of those crazy peeps that runs his ghost setup on the sqlite db
[04:18] <hatch> honestly it didn't seem like it was having any trouble at 20 concurrent requests
[04:18] <lazyPower> hatch: you using loader.io to test?
[04:18] <hatch> atm I was just doing ab on a fast connection
[04:19] <hatch> so it wasn't 'true' load testing
[04:19] <hatch> but gave a reasonable idea watching the cpu on the vm and the reporting from ab
[04:19] <hatch> on DO the requests are highly CPU bound
[04:20] <hatch> 20 concurrent puts CPU usage in the high 80s
[04:21] <hatch> so using something like mod_cache would be recommended to get good hardware utilization
[04:22] <hatch> anyways I think I'll get the apache2 charm set up for my url redirects then fire apache2 and ghost up on something small and see how it pans out without mysql
[04:22] <hatch> will have to set something up to do backups
[04:22] <hatch> though
[04:23] <hatch> anyways I'm outa-here, have a good night lazyPower
[10:57] <JoshStrobl> Morning everyone
[10:58] <JoshStrobl> jcastro, I got the t-shirt man, it is awesome (you were right) and it is now the softest one I own. Gonna take some pics of it (and me wearing it, of course).
[14:33] <mbruzek> Hey guys I just created a pull request to link the new charm-review-process document that JoshStrobl created in our docs. https://github.com/juju/docs/pull/158
[14:34] <rcj> [Question] Having a problem with a relation_set(...) which is failing silently http://paste.ubuntu.com/8260194/
[14:34] <mbruzek> lazyPower, marcoceppi, evilnickveitch, arosales ^
[14:35] <rcj> I'm running a hook via juju-run from a cron job and in there is a relation_set to change a variable in a peer relationship.  That isn't happening but I'm not getting errors.
[14:35] <rcj> Larger context is @ https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/hooks/hooks.py#L167
[14:45] <marcoceppi> rcj: you don't specify a remote unit
[14:46] <marcoceppi> you have to loop through all the units in that relation_id
[14:46] <marcoceppi> using relation_list (I believe)
[14:48] <evilnickveitch> mbruzek, done
[14:48] <mbruzek> Thanks Nick.
[14:49] <rcj> marcoceppi, thanks.  I'll try that.
[14:52] <rcj> marcoceppi, on that same line of questioning. Can I set a configuration value where the key isn't in the config.yaml?  http://paste.ubuntu.com/8260290 This changes the 'rsync-minutes' on each invocation.
[14:56] <rcj> marcoceppi, for the relation_set I don't see how you'd specify unit in charmhelpers.core.hookenv.relation_set().  What am I missing?
[15:03] <arosales> mbruzek: thanks for working on getting pull 158 linked into the docs.  I think it would be valuable to have that in the side bar under "Charm Store Policy" with the name of "Charm Review Process"
[15:04] <tvansteenburgh> anyone know what might cause this, considering that i can manually bootstrap this environment just fine: http://pastebin.ubuntu.com/8260380/
[15:04] <mbruzek> arosales, that is what I did.
[15:04] <arosales> mbruzek:
[15:04] <mbruzek> arosales, are you seeing something different ?
[15:05] <arosales> mbruzek: no, just behind :-)
[15:05] <arosales> mbruzek: thanks for working on that
[15:06] <marcoceppi> tvansteenburgh: it's trying to connect to the internal IP
[15:06] <marcoceppi> tvansteenburgh: do you have sshuttle running?
[15:07] <tvansteenburgh> marcoceppi: it tries the public dns name first though
[15:07] <tvansteenburgh> marcoceppi: no sshuttle running
[15:07] <marcoceppi> tvansteenburgh: I wonder if this is new in 1.20.6
[15:14] <rcj> marcoceppi, looking at relation_set() I don't know what you mean by looping through all the units.  I don't see a way to set a unit and I thought that I would set for the local unit and that would fire the X-relation-changed hook for the relation?
[15:17] <marcoceppi> rcj: I'm trying to figure out now, it was in here
[15:17] <marcoceppi> it exists for relation_get
[15:18] <rcj> marcoceppi, that would make sense for relation_get because I would want to get the dictionary for that unit, but you can't set things on remote units, only the local unit's relation, right?
[15:19] <marcoceppi> rcj: you can set relation data for remote units
[15:19] <rcj> Now this is running via juju-run, so it's not in a relation hook with any of the environment that would bring.
[15:19] <marcoceppi> at least
[15:19] <marcoceppi> it should be
[15:20] <marcoceppi> if you can get info for a unit but can't set info for a unit
[15:20] <marcoceppi> it's like an unmatched side, or so it seems
[15:20]  * marcoceppi experiments
[15:22] <marcoceppi> okay, maybe not
[15:22] <marcoceppi> what makes you think it's silently failing?
[15:23] <rcj> I can drop into debug-hooks and run the same thing and not see the relation change.  Then the other peers in the relation aren't seeing cluster-relation-changed hooks firing.
[15:23] <rcj> and the return code from 'relation-set' on the cmdline is 0 while it doesn't show up in 'relation-get' output
[15:23] <marcoceppi> relation settings aren't sent until that hook context closes
[15:24] <rcj> I understand, so after that experiment I exit hook and none of the peers see the change hooks fired
[15:24] <rcj> But relation-get after relation-set in the same hook should see the new value, right?
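For reference, the semantics rcj is asking about can be sketched in plain Python. This is a toy stand-in model of juju's behaviour, not real hook-tool code: relation-set stages the local unit's settings for a relation id, relation-get against your own unit sees the staged value immediately, and peers are only notified (via the -changed hook) once the hook context closes.

```python
relation_data = {}   # committed settings: what peers' relation-get sees
pending = {}         # writes staged inside the current hook context

def relation_set(relation_id, **settings):
    """Stand-in for the `relation-set` hook tool (local unit's bucket)."""
    pending.setdefault(relation_id, {}).update(settings)

def relation_get(relation_id):
    """Stand-in for `relation-get` against the local unit."""
    merged = dict(relation_data.get(relation_id, {}))
    merged.update(pending.get(relation_id, {}))  # own staged writes are visible
    return merged

def close_hook_context():
    """Juju commits staged settings when the hook exits;
    only then do peers see cluster-relation-changed fire."""
    for relid, settings in pending.items():
        relation_data.setdefault(relid, {}).update(settings)
    pending.clear()

relation_set('cluster:0', state='ready')
assert relation_get('cluster:0')['state'] == 'ready'  # visible in-hook
close_hook_context()                                  # peers notified here
assert relation_data['cluster:0']['state'] == 'ready'
```

This also explains the debug-hooks confusion above: a value written with relation-set inside a debug-hooks session is not propagated until that session's hook context exits.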
[15:24] <marcoceppi> let me spin up a peer, my juju run example worked
[15:25] <marcoceppi> http://paste.ubuntu.com/8260557/
[15:26] <rcj> is "juju run --unit <unitID> <cmd>" the same as "juju-run <unitID> <cmd>"?
[15:26] <marcoceppi> rcj: one is from the unit the other is from the client
[15:26] <marcoceppi> juju run --unit is from the client side
[15:27] <rcj> And does it need to run as root on the client or can it be another user I created?
[15:27] <marcoceppi> so, client being your desktop
[15:28] <marcoceppi> from the unit you can run it unprivileged
[15:28] <rcj> great
[15:43] <rcj> marcoceppi, I can't recreate
[15:44] <marcoceppi> rcj: can't recreate which?
[15:47] <rcj> marcoceppi, the relation-set issue at present.  It's not that it's working, I just am having other issues that prevent me from getting that far with the charm.
[15:47] <marcoceppi> rcj: ah, well let me know what happens either way
[15:48] <rcj> marcoceppi, how about the config setting question though.... http://paste.ubuntu.com/8260290/
[15:48] <rcj> 'rsync-minutes' isn't part of config.yaml, can I set something in the config that isn't in that file?
[15:48] <rcj> When that hook is run it sets a new value each time.
[15:49] <marcoceppi> rcj: I guess not, I'm not very familiar with the config stuff, I think cory_fu wrote it actually
[15:50] <marcoceppi> rcj: it seems you can, > Store arbitrary data for use in a later hook.
[15:50] <rcj> I didn't know if there was an obvious bug in that code
[15:50] <cory_fu> marcoceppi, rcj: Actually, tvansteenburgh wrote it, but yes, you can set arbitrary values into the config dict and they will persist as long as you call config.save()
[15:52] <cory_fu> rcj: That seems like it should work and only generate a value once.
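The behaviour cory_fu describes can be sketched with a toy version of the Config object. The class below is a simplified stand-in for `charmhelpers.core.hookenv.Config`, not the real implementation (which also tracks previous values and, at the time of this log, had the lookup bug tvansteenburgh fixes later in the channel); it only shows that arbitrary keys not present in config.yaml survive across hook invocations once save() runs.

```python
import json
import os
import tempfile

class Config(dict):
    """Toy sketch of a dict-backed config that persists across hooks."""
    CONFIG_FILE = os.path.join(tempfile.gettempdir(), '.juju-persistent-config')

    def __init__(self):
        super().__init__()
        if os.path.exists(self.CONFIG_FILE):  # reload what a prior hook saved
            with open(self.CONFIG_FILE) as f:
                self.update(json.load(f))

    def save(self):
        with open(self.CONFIG_FILE, 'w') as f:
            json.dump(dict(self), f)

# First "hook" invocation: generate a value only once, then persist it.
config = Config()
if not config.get('rsync-minutes'):
    config['rsync-minutes'] = 17  # the charm would use random.randint(0, 59)
config.save()

# Second "hook" invocation: the stored value is loaded again,
# so no new value is generated.
config2 = Config()
assert config2['rsync-minutes'] == 17
```

rcj's symptom — a new value on every run — is consistent with the lookup reading only the current config.yaml-derived values instead of the persisted file, which is exactly what the merge proposal below changes.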
[15:52] <marcoceppi> rcj: is this being run via juju-run as well, or hook context?
[15:52] <tvansteenburgh> cory_fu, rcj: you don't even have to call save if you're using @hook or the Services Framework
[15:52] <rcj> marcoceppi, that's being run in hook context
[15:52] <cory_fu> tvansteenburgh: Did you add config.save() to the services framework when I wasn't looking?  :)
[15:52] <tvansteenburgh> cory_fu yes
[15:52] <rcj> tvansteenburgh, https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/lib/ubuntu_repository_cache/util.py#L165
[15:52] <cory_fu> tvansteenburgh: Awesome!
[15:53] <rcj> it's always called from hook context
[15:53] <rcj> tvansteenburgh, calling function is https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/hooks/hooks.py#L118
[15:53] <cory_fu> rcj: I think he meant from a @hook decorated function.  But, you're manually saving it anyway, so it shouldn't matter
[15:54] <tvansteenburgh> rcj: yeah, so you don't actually need the save(), it'll happen for you
[15:54] <tvansteenburgh> rcj: as long as you have the latest charmhelpers
[15:54] <cory_fu> tvansteenburgh: Any idea why that config.get() would end up being false, then, if it's already been generated?
[15:54] <rcj> tvansteenburgh, the problem I'm having is that each time that code is called it sets a new value
[15:55] <rcj> and charmhelpers in my charm is very fresh
[15:56] <marcoceppi> rcj: do you see the .juju-persistent file in the charm?
[15:57] <rcj> marcoceppi, yes, .juju-persistent-config is present and has the latest value
[15:58] <tvansteenburgh> rcj: yeah, i think i need to override __getitem__ to make it work the way you expect
[15:58] <tvansteenburgh> rcj: but for now you could do `if not config.previous(var)`
[16:00] <cory_fu> tvansteenburgh: So, if you persist an arbitrary value, it will not show up in the current config but only in the previous?
[16:00] <cory_fu> That does seem like a bug, since storing arbitrary values is explicitly listed as a feature
[16:02] <tvansteenburgh> i'm gonna post a MP with a fix momentarily
[16:02] <rcj> tvansteenburgh, so I need to get the value as config['rsync-minutes'] on the first invocation and then as config.previous('rsync-minutes') for all future runs?
[16:02] <rcj> tvansteenburgh, ah, I'll wait for the MP and merge it into my tree instead
[16:02] <tvansteenburgh> k
[16:02] <rcj> tvansteenburgh, can you ping me with a patch when it's ready?
[16:03] <tvansteenburgh> rcj: certainly
[16:03] <rcj> tvansteenburgh, thanks
[16:36] <tvansteenburgh> rcj: https://code.launchpad.net/~tvansteenburgh/charm-helpers/fix-config-lookups/+merge/233558
[16:36] <tvansteenburgh> marcoceppi: got time for a quick review/merge of that? ^
[16:37] <marcoceppi> tvansteenburgh: yeah
[16:37]  * marcoceppi looks
[16:39] <tvansteenburgh> marcoceppi:  hang on
[16:39] <tvansteenburgh> marcoceppi: i want to change something
[16:39] <marcoceppi> go on
[16:44] <tvansteenburgh> marcoceppi: ok i'm done for real now
[16:44] <marcoceppi> tvansteenburgh: you're lucky I'm lazy ;)
[16:44] <tvansteenburgh> haha
[16:51] <natefinch> arg, how can open-port not be idempotent by default? :/
[16:53] <natefinch> marcoceppi: ^^   I reran a failed install and got "cannot open ports 80-80/tcp on machine 5 due to conflict"  how am I supposed to handle that?
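One common workaround (an assumption on my part, not something natefinch confirms here) is to wrap the hook tool and treat the "already open" conflict as success, so rerunning a failed install hook can get past it. The `run` parameter exists only so the sketch is runnable outside a unit's hook environment, where the real `open-port` tool lives:

```python
import subprocess

def open_port(port, protocol='tcp', run=subprocess.check_call):
    """Open a port, tolerating an 'already open' conflict so reruns succeed."""
    try:
        run(['open-port', '{}/{}'.format(port, protocol)])
        return True
    except subprocess.CalledProcessError:
        return False  # conflict: port already open from an earlier run

# Demo with a stub standing in for the real hook tool:
calls = []
def fake_run(cmd):
    calls.append(cmd)
    if len(calls) > 1:  # the second call simulates the conflict natefinch hit
        raise subprocess.CalledProcessError(1, cmd)

first = open_port(80, run=fake_run)   # first run: opens the port
second = open_port(80, run=fake_run)  # rerun: conflict swallowed, hook proceeds
```

The downside is that a genuine failure (e.g. a malformed port range) is also swallowed, so a stricter version would inspect the error message before ignoring it.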
[17:29] <rcj> marcoceppi, I can't recreate the issue with the relation_set() not propagating.  The problem, if there still is one, must be on my side.  Thanks for your help.
[17:29] <rcj> tvansteenburgh, thanks
[18:08] <rick_h_> marcoceppi: got a sec to chat an idea out loud?
[18:09] <marcoceppi> rcj: I will in about 20 mins, otp
[18:09] <marcoceppi> rick_h_: * ^
[18:09] <rick_h_> marcoceppi: rgr ty
[18:33] <marcoceppi> rick_h_: back
[18:34] <rick_h_> marcoceppi: cool
[18:34] <rick_h_> marcoceppi: invite otw
[18:54] <allomov> hey, I have a problem with running juju on Openstack
[18:56] <allomov> it seems that floating IPs are not assigned to instances
[18:56] <allomov> I have set use-floating-ip to true in env yml
[18:58] <allomov> the question for me now is: which network uuid do I need to pass to the network option in env yml (private or public)?
[19:10] <marcoceppi> allomov: that's a good question, you'll want to pass the public one IIRC
[19:33] <allomov> marcoceppi: thank you for answer.
[19:35] <allomov> marcoceppi: unfortunately I still get this error https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/4bdabe95471797c0d6aad4829557a23d43d4e30d/fail-juju.log
[19:35] <allomov> here is my env file https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/f4068e094ce52db43008e825b111645f28854381/juju.yml
[19:35] <marcoceppi> allomov: 1.20.6 ?
[19:36] <allomov> marcoceppi: how did you know )
[19:36] <marcoceppi> lucky guess :)
[19:36] <allomov> marcoceppi: yes, it is
[19:36] <marcoceppi> (and my goldfish memory from yesterday hasn't worn off, glad you got the imagestream stuff sorted)
[19:37] <allomov> hah
[19:37] <allomov> yes, it's me
[19:37] <allomov> I generated image metadata to ~/.juju/metadata with `juju generate-image` command.
[19:37] <marcoceppi> whew, that would have been really embarrassing if it wasn't you
[19:38] <marcoceppi> allomov: well the bootstrap error I don't think is related to networking
[19:39] <allomov> I even got it running, at least it started to bootstrap the environment and created a node
[19:40] <allomov> but since I pointed to the private network it couldn't ssh to bootstrap node
[19:40] <marcoceppi> allomov: well that log shows a failure to bootstrap / find the tools/image data
[19:40] <allomov> after that I removed the jenv file (to apply the new configuration)
[19:41] <allomov> marcoceppi: so I need to point a tools url I guess ?
[19:41] <marcoceppi> allomov: yeah, so after you generate the metadata you need to host the image metadata and the tools somewhere (usually in the object store with a publicly accessible bucket)
[19:42] <marcoceppi> then set image-metadata-url and tools-metadata-url in your environments.yaml to that public location
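Putting marcoceppi's last two lines together, the environments.yaml stanza would look roughly like this. The key names match the juju 1.20 openstack provider; every value below is a placeholder, and the auth settings are elided:

```yaml
environments:
  my-openstack:
    type: openstack
    use-floating-ip: true
    # network uuid juju should attach instances to
    network: <public-network-uuid>
    # publicly reachable location where the generated metadata was uploaded,
    # e.g. a world-readable container in the cloud's object store
    image-metadata-url: http://<object-store>/juju-metadata/images
    tools-metadata-url: http://<object-store>/juju-metadata/tools
```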
[19:42] <allomov> could it be a local file system ?
[19:42] <marcoceppi> it needs to be accessible by the bootstrap node
[19:42] <allomov> ok, will try to go this way
[19:42] <allomov> thank you
[19:42] <allomov> marcoceppi: thank you
[19:43] <marcoceppi> allomov: sure, sorry you're having such a rough go at it. If you get it running let me know where we can improve our documentation / workflow (or email the mailing list juju@lists.ubuntu.com)
[19:43] <marcoceppi> you're certainly not the only one running a private openstack cloud
[19:43] <marcoceppi> I'll also be around most of tomorrow/Sunday too if you need more help
[21:50] <allomov> marcoceppi: bad news, still can't make it run.
[21:51] <allomov> marcoceppi: bootstrap node is created, but I can't get access to it.
[21:52] <marcoceppi> allomov: can you re-bootstrap with --debug flag?
[21:52] <allomov> marcoceppi: sure
[21:52] <marcoceppi> and paste the output (careful, some credentials might leak at the top of the log)
[21:52] <allomov> marcoceppi: the point is I can't get access to the created instance
[21:53] <marcoceppi> sure, running --debug will give a very verbose output client side
[21:58] <allomov> marcoceppi: here it is https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/24a64a47a6cde78e66db830c57a2dfd57534289e/gistfile1.txt
[21:59] <allomov> marcoceppi: deployment is stuck on ssh
[22:01] <allomov> I can create a node in the same network and with the same image (the image I pointed at when creating the metadata) and ssh to it
[22:04] <allomov> marcoceppi: machine logs from the openstack web console are different for these two cases, by the way
[22:04] <marcoceppi> allomov: I'm guessing 172.16.0.84 is not public ip
[22:05] <allomov> it's a public network for me
[22:05] <marcoceppi> so it looks like the floating ip is getting added?
[22:05] <marcoceppi> that SSH prompt might go on for a while
[22:05] <marcoceppi> allomov: it's basically a loop until the instance registers
[22:06] <marcoceppi> > Attempting to connect to 172.16.0.84:22
[22:06] <allomov> marcoceppi: it can't ssh for 10 minutes
[22:06] <marcoceppi> shows that it's assigned that IP
[22:06] <marcoceppi> or at least that's what it thinks the IP is
[22:06] <marcoceppi> can you reach that IP now? is the instance up in horizon?
[22:06] <allomov> marcoceppi: no, I can't
[22:07] <allomov> marcoceppi: I tried to create new node with openstack web console with the same network and image, and I can ssh to it
[22:07] <marcoceppi> allomov: bleh, that's lame
[22:07] <marcoceppi> is the IP actually allocated and added to that instance in horizon?
[22:07] <allomov> marcoceppi: I have different outputs for juju machine and for machine created manually
[22:08] <marcoceppi> allomov: different how?
[22:09] <allomov> marcoceppi: I mean logs in openstack web console
[22:09] <allomov> marcoceppi: here is log for machine created with juju https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/34140f9c803ff71a466c73e36b536bdc45fa95b9/machine-log-1
[22:09] <allomov> marcoceppi: here is a log for machine created manually with the same image https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/1efaaa681fd2a31fef4e12d23b8e40fcf1d758f5/machine-log-2
[22:10] <marcoceppi> so, cloud init isn't running on the juju machine
[22:10] <marcoceppi> allomov: are you using Ubuntu Server or Ubuntu Cloud images? I'm guessing the latter?
[22:11] <allomov> marcoceppi: I used ubuntu image that was already in openstack
[22:11] <allomov> marcoceppi: could you tell how can I check it ?
[22:12] <marcoceppi> Not entirely sure
[22:13] <allomov> marcoceppi: do I need to download and install another image?
[22:14] <marcoceppi> allomov: I'm not sure, and I don't want to lead you in the wrong direction
[22:14] <marcoceppi> I'm reaching the end of my knowledge with troubleshooting openstack clouds
[22:15] <allomov> marcoceppi: ok, will stay on this stage for now. thank you for your help. see you.
[22:15] <marcoceppi> You could try to add a new image, then you'd have to update the image-metadata, etc, but http://cloud-images.ubuntu.com/trusty/current/ is where we house all our cloud images, which are the basis for all our cloud-provided images (ec2, hp, azure, etc)
[22:16] <marcoceppi> You should be able to add the amd64.root.tar.gz to openstack and try that if you exhaust other options
[22:19] <marcoceppi> I really wish I had more to offer or insights
[22:40] <dpb1> Trying to install the juju-gui charm: http://paste.ubuntu.com/8264500/
[22:40] <dpb1> :(
[22:42] <marcoceppi> dpb1: wat. What version Ubuntu is this?
[22:42] <dpb1> trusty, up to date
[22:42] <dpb1> I was just going to try it here
[23:01] <dpb1> worked fine on an lxc here.. who knows