[01:38] can I deploy a precise charm to a trusty host somehow in the CLI? a --force option or the like?
[02:25] hatch: no, I don't think it's allowed. I know we added logic to block that in the GUI.
[03:52] hatch: you can branch it locally.
[04:12] marcoceppi: yeah that's what I ended up doing
[04:13] mkdir ~/charms/trusty && cd ~/charms/trusty && charm get cs:precise/charm
[04:13] deploy and cross fingers
[04:13] 4/5 of the time, it 'just works' - unless it's got apache2 embedded in the charm. With the files being moved around as of the trusty release, some charms just plain go supernova on deploy
[04:14] haha nah I was playing around with different providers seeing how much I could run on a single machine
[04:14] digital ocean is leagues ahead of ec2 in terms of performance
[04:14] Those SSD-backed SANs they use make all the difference
[04:14] a lot of the latency you see on AWS is the EBS IO latency
[04:15] identical setups and the digital ocean was anywhere from 50-100ms faster per request at 20 concurrent requests
[04:15] less infrastructure to manage at present too :) But I agree, I <3 DigitalOcean
[04:15] heh yeah true
[04:15] My entire personal infrastructure is up on there, the remnants of my business, and pet projects all go on DO
[04:15] I recently found out about a Canadian host https://www.clouda.ca/
[04:15] Oh I've looked at them for a client project. They were super nice.
[04:15] more expensive but 100% Canadian and running openstack :)
[04:16] I'm too tired, tomorrow I'll do the same tests as I did on ec2 and DO today
[04:16] see how they stack up for that extra $ :)
[04:17] Yeah, I'm wrapping up today's todo list that got a bit hairy. I spent an obscene amount of time cycling over something silly
[04:17] yeah, that happens hah
[04:18] I was load testing the built-in sqlite setup as well
[04:18] I'm one of those crazy peeps that runs his ghost setup on the sqlite db
[04:18] honestly it didn't seem like it was having any trouble at 20 concurrent requests
[04:18] hatch: you using loader.io to test?
[04:18] atm I was just doing ab on a fast connection
[04:19] so it wasn't 'true' load testing
[04:19] but gave a reasonable idea watching the cpu on the vm and the reporting from ab
[04:19] on DO the requests are highly CPU bound
[04:20] 20 concurrent puts it at high 80's
[04:21] so using something like mod_cache would be recommended to get good hardware utilization
[04:22] anyways I think I'll get the apache2 charm set up for my url redirects then fire apache2 and ghost up on something small and see how it pans out without mysql
[04:22] will have to set something up to do backups
[04:22] though
[04:23] anyways I'm outta here, have a good night lazyPower
=== urulama_afk is now known as urulama
=== liam_ is now known as Guest22638
=== CyberJacob|Away is now known as CyberJacob
=== Tribaal_ is now known as Tribaal
=== jaywink_ is now known as jaywink
=== CyberJacob is now known as CyberJacob|Away
[10:57] Morning everyone
[10:58] jcastro, I got the t-shirt man, it is awesome (you were right) and it is now the softest one I own. Gonna take some pics of it (and me wearing it, of course).
[14:33] Hey guys I just created a pull request to link the new charm-review-process document that JoshStrobl created in our docs. https://github.com/juju/docs/pull/158
[14:34] [Question] Having a problem with a relation_set(...) which is failing silently http://paste.ubuntu.com/8260194/
[14:34] lazyPower, marcoceppi, evilnickveitch, arosales ^
[14:35] I'm running a hook via juju-run from a cron job and in there is a relation_set to change a variable in a peer relationship. That isn't happening but I'm not getting errors.
[14:35] Larger context is @ https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/hooks/hooks.py#L167
[14:45] rcj: you don't specify a remote unit
[14:46] you have to loop through all the units in that relation id
[14:46] using relation_list (I believe)
[14:48] mbruzek, done
[14:48] Thanks Nick.
[14:49] marcoceppi, thanks. I'll try that.
[14:52] marcoceppi, on that same line of questioning. Can I set a configuration value where the key isn't in the config.yaml? http://paste.ubuntu.com/8260290 This changes the 'rsync-minutes' on each invocation.
[14:56] marcoceppi, for the relation_set I don't see how you'd specify a unit in charmhelpers.core.hookenv.relation_set(). What am I missing?
[15:03] mbruzek: thanks for working on getting pull 158 linked into the docs. I think it would be valuable to have that in the sidebar under "Charm Store Policy" with the name of "Charm Review Process"
[15:04] anyone know what might cause this, considering that I can manually bootstrap this environment just fine: http://pastebin.ubuntu.com/8260380/
[15:04] arosales, that is what I did.
[15:04] mbruzek:
[15:04] arosales, are you seeing something different?
[15:05] mbruzek: no, just behind :-)
[15:05] mbruzek: thanks for working on that
[15:06] tvansteenburgh: it's trying to connect to the internal IP
[15:06] tvansteenburgh: do you have sshuttle running?
[15:07] marcoceppi: it tries the public dns name first though
[15:07] marcoceppi: no sshuttle running
[15:07] tvansteenburgh: I wonder if this is new in 1.20.6
[15:14] marcoceppi, looking at relation_set() I don't know what you mean by looping through all the units. I don't see a way to set a unit and I thought that I would set for the local unit and that would fire the X-relation-changed hook for the relation?
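A minimal sketch of the pattern marcoceppi is describing: set data on the local unit's side of each peer relation, which is what makes `<relname>-relation-changed` fire on the peers. The `relation_ids` and `relation_set` arguments stand in for the charmhelpers `hookenv` functions of the same names; the in-memory lambdas at the bottom are hypothetical stubs so the sketch runs outside a hook context.

```python
# Sketch only (assumption: charmhelpers-style hook tools are injected
# as callables so this is runnable outside a real Juju hook).

def broadcast_peer_setting(relation_ids, relation_set, key, value,
                           relname="cluster"):
    """Set key=value on the local unit's side of every `relname`
    relation; peers then see <relname>-relation-changed fire."""
    for rid in relation_ids(relname):
        relation_set(relation_id=rid, relation_settings={key: value})

# Hypothetical in-memory stand-ins for the real hook tools:
written = {}
broadcast_peer_setting(
    relation_ids=lambda name: ["cluster:0"],
    relation_set=lambda relation_id, relation_settings: written.update(
        {relation_id: relation_settings}),
    key="rsync-minutes",
    value="17",
)
```

In a real charm the two arguments would be `hookenv.relation_ids` and `hookenv.relation_set` rather than stubs.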
[15:17] rcj: I'm trying to figure out now, it was in here
[15:17] it exists for relation_get
[15:18] marcoceppi, that would make sense for relation_get because I would want to get the dictionary for that unit, but you can't set things on remote units, only the local unit's relation, right?
[15:19] rcj: you can set relation data for remote units
[15:19] Now this is running via juju-run, so it's not in a relation hook with any of the environment that would bring.
[15:19] at least
[15:19] it should be
[15:20] if you can get info for a unit but can't set info for a unit
[15:20] it's like an unmatched side, or so it seems
[15:20] * marcoceppi experiments
[15:22] okay, maybe not
[15:22] what makes you think it's silently failing?
[15:23] I can drop into debug-hooks and run the same thing and not see the relation change. Then the other peers in the relation aren't seeing cluster-relation-changed hooks firing.
[15:23] and the return code from 'relation-set' on the cmdline is 0 while it doesn't show up in 'relation-get' output
[15:23] relation settings aren't sent until that hook context closes
[15:24] I understand, so after that experiment I exit the hook and none of the peers see the change hooks fired
[15:24] But relation-get after relation-set in the same hook should see the new value, right?
[15:24] let me spin up a peer, my juju run example worked
[15:25] http://paste.ubuntu.com/8260557/
[15:26] is "juju run --unit " the same as "juju-run "?
[15:26] rcj: one is from the unit the other is from the client
[15:26] juju run --unit is from the client side
[15:27] And does it need to run as root on the client or can it be another user I created?
[15:27] so, client being your desktop
[15:28] from the unit you can run it unprivileged
[15:28] great
[15:43] marcoceppi, I can't recreate
[15:44] rcj: can't recreate which?
[15:47] marcoceppi, the relation-set issue at present. It's not that it's working, I just am having other issues that prevent me from getting that far with the charm.
[15:47] rcj: ah, well let me know what happens either way
[15:48] marcoceppi, how about the config setting question though.... http://paste.ubuntu.com/8260290/
[15:48] 'rsync-minutes' isn't part of config.yaml, can I set something in the config that isn't in that file?
[15:48] When that hook is run it sets a new value each time.
[15:49] rcj: I guess not, I'm not very familiar with the config stuff, I think cory_fu wrote it actually
[15:50] rcj: it seems you can, > Store arbitrary data for use in a later hook.
[15:50] I didn't know if there was an obvious bug in that code
[15:50] marcoceppi, rcj: Actually, tvansteenburgh wrote it, but yes, you can set arbitrary values into the config dict and they will persist as long as you call config.save()
[15:52] rcj: That seems like it should work and only generate a value once.
[15:52] rcj: is this being run via juju-run as well, or hook context?
[15:52] cory_fu, rcj: you don't even have to call save if you're using @hook or the Services Framework
[15:52] marcoceppi, that's being run in hook context
[15:52] tvansteenburgh: Did you add config.save() to the services framework when I wasn't looking? :)
[15:52] cory_fu yes
[15:52] tvansteenburgh, https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/lib/ubuntu_repository_cache/util.py#L165
[15:52] tvansteenburgh: Awesome!
[15:53] it's always called from hook context
[15:53] tvansteenburgh, calling function is https://bazaar.launchpad.net/~rcj/charms/trusty/ubuntu-repository-cache/trunk/view/head:/hooks/hooks.py#L118
[15:53] rcj: I think he meant from a @hook decorated function. But, you're manually saving it anyway, so it shouldn't matter
[15:54] rcj: yeah, so you don't actually need the save(), it'll happen for you
[15:54] rcj: as long as you have the latest charmhelpers
[15:54] tvansteenburgh: Any idea why that config.get() would end up being false, then, if it's already been generated?
[15:54] tvansteenburgh, the problem I'm having is that each time that code is called it sets a new value
[15:55] and charmhelpers in my charm is very fresh
[15:56] rcj: do you see the .juju-persistent-config file in the charm?
[15:57] marcoceppi, yes, .juju-persistent-config is present and has the latest value
[15:58] rcj: yeah, i think i need to override __getitem__ to make it work the way you expect
[15:58] rcj: but for now you could do `if not config.previous(var)`
[16:00] tvansteenburgh: So, if you persist an arbitrary value, it will not show up in the current config but only in the previous?
[16:00] That does seem like a bug, since storing arbitrary values is explicitly listed as a feature
[16:02] i'm gonna post an MP with a fix momentarily
[16:02] tvansteenburgh, so I need to get the value as config['rsync-minutes'] on the first invocation and then as config.previous('rsync-minutes') for all future runs?
[16:02] tvansteenburgh, ah, I'll wait for the MP and merge it into my tree instead
[16:02] k
[16:02] tvansteenburgh, can you ping me with a patch when it's ready?
[16:03] rcj: certainly
[16:03] tvansteenburgh, thanks
[16:36] rcj: https://code.launchpad.net/~tvansteenburgh/charm-helpers/fix-config-lookups/+merge/233558
[16:36] marcoceppi: got time for a quick review/merge of that? ^
[16:37] tvansteenburgh: yeah
[16:37] * marcoceppi looks
[16:39] marcoceppi: hang on
[16:39] marcoceppi: i want to change something
[16:39] go on
[16:44] marcoceppi: ok i'm done for real now
[16:44] tvansteenburgh: you're lucky I'm lazy ;)
[16:44] haha
[16:51] arg, how can open port not be idempotent by default? :/
[16:53] marcoceppi: ^^ I reran a failed install and got "cannot open ports 80-80/tcp on machine 5 due to conflict" how am I supposed to handle that?
[17:29] marcoceppi, I can't recreate the issue with the relation_set() not propagating. The problem, if there still is one, must be on my side. Thanks for your help.
[17:29] tvansteenburgh, thanks
=== sebas538_ is now known as sebas5384
=== roadmr is now known as roadmr_afk
[18:08] marcoceppi: got a sec to chat an idea out loud?
[18:09] rcj: I will in about 20 mins, otp
[18:09] rick_h_: * ^
[18:09] marcoceppi: rgr ty
=== CyberJacob|Away is now known as CyberJacob
[18:33] rick_h_: back
[18:34] marcoceppi: cool
[18:34] marcoceppi: invite otw
[18:54] hey, I have a problem with running juju on Openstack
=== roadmr_afk is now known as roadmr
[18:56] it seems that floating IPs are not being assigned to instances
[18:56] I have set use-floating-ip to true in env yml
[18:58] the question for me now: what network uuid I need to pass to the network option in env yml (private or public)
[19:10] allomov: that's a good question, you'll want to pass the public one IIRC
[19:33] marcoceppi: thank you for the answer.
[19:35] marcoceppi: unfortunately I still get this error https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/4bdabe95471797c0d6aad4829557a23d43d4e30d/fail-juju.log
[19:35] here is my env file https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/f4068e094ce52db43008e825b111645f28854381/juju.yml
[19:35] allomov: 1.20.6 ?
[19:36] marcoceppi: how did you know )
[19:36] lucky guess :)
[19:36] marcoceppi: yes, it is
[19:36] (and my goldfish memory from yesterday hasn't worn off, glad you got the imagestream stuff sorted)
[19:37] hah
[19:37] yes, it's me
[19:37] I generated image metadata to ~/.juju/metadata with `juju generate-image` command.
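Backing up to the config-persistence thread from earlier: the generate-once behaviour rcj is after can be sketched like this. `FakeConfig` is a hypothetical stand-in for the dict-like object returned by charmhelpers' `hookenv.config()` (the real one writes `.juju-persistent-config` on `save()`); only the pattern is shown, not the real class.

```python
import random


def get_or_create(config, key, factory):
    """Return config[key], generating and persisting it on first use
    so later hook invocations see the same value instead of a new one."""
    if config.get(key) is None:
        config[key] = factory()
        config.save()  # persist for subsequent hooks
    return config[key]


# Hypothetical stand-in for the charmhelpers config object:
class FakeConfig(dict):
    def save(self):
        pass  # the real object serializes itself to disk here


cfg = FakeConfig()
first = get_or_create(cfg, "rsync-minutes", lambda: random.randrange(60))
second = get_or_create(cfg, "rsync-minutes", lambda: random.randrange(60))
# first == second: the random value is only generated once
```

Note that in the charmhelpers version current at the time of this conversation, a value persisted this way surfaced via `config.previous(key)` rather than `config[key]`, which is the bug tvansteenburgh's merge proposal addresses.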
[19:37] whew, that would have been really embarrassing if it wasn't you
[19:38] allomov: well the bootstrap error I don't think is related to networking
[19:39] I even got it running, at least it started to bootstrap the environment and created a node
[19:40] but since I pointed to the private network it couldn't ssh to the bootstrap node
[19:40] allomov: well that log shows a failure to bootstrap / find the tools/image data
[19:40] after that I removed the jenv file (to apply the new configuration)
=== hatch__ is now known as hatch
[19:41] marcoceppi: so I need to point to a tools url I guess?
[19:41] allomov: yeah, so after you generate the metadata you need to host the images metadata and the tools somewhere (usually in the object store with a publicly accessible bucket)
[19:42] then set image-metadata-url and tools-metadata-url in your environments.yaml to that public location
[19:42] could it be a local file system?
[19:42] it needs to be accessible by the bootstrap node
[19:42] ok, will try to go this way
[19:42] thank you
[19:42] marcoceppi: thank you
[19:43] allomov: sure, sorry you're having such a rough go at it. If you get it running let me know where we can improve our documentation / workflow (or email the mailing list juju@lists.ubuntu.com)
[19:43] you're certainly not the only one running a private openstack cloud
[19:43] I'll also be around most of tomorrow/Sunday too if you need more help
=== CyberJacob is now known as CyberJacob|Away
[21:50] marcoceppi: bad news, still can't make it run.
[21:51] marcoceppi: the bootstrap node is created, but I can't get access to it.
[21:52] allomov: can you re-bootstrap with the --debug flag?
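For reference, an illustrative environments.yaml stanza along the lines being discussed for a Juju 1.x private OpenStack cloud. The environment name, UUID, and URLs are placeholders, not values from this conversation, and provider credentials (auth-url, username, etc.) are omitted:

```yaml
environments:
  my-openstack:                 # placeholder name
    type: openstack
    use-floating-ip: true
    network: <public-network-uuid>                 # placeholder
    image-metadata-url: http://example.com/images  # placeholder
    tools-metadata-url: http://example.com/tools   # placeholder
```

Both metadata URLs must be reachable from the bootstrap node, per marcoceppi's note above.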
[21:52] marcoceppi: sure
[21:52] and paste the output (careful, some credentials might leak at the top of the log)
[21:52] marcoceppi: the point is I can't get access to the created instance
[21:53] sure, running --debug will give a very verbose output client side
[21:58] marcoceppi: here it is https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/24a64a47a6cde78e66db830c57a2dfd57534289e/gistfile1.txt
[21:59] marcoceppi: deployment is stuck on ssh
[22:01] I can create a node in the same network and with the same image (the image I pointed at when creating metadata) there and ssh to it
[22:04] marcoceppi: machine logs from the openstack web console are different by the way for these two cases
[22:04] allomov: I'm guessing 172.16.0.84 is not a public ip
[22:05] it's a public network for me
[22:05] so it looks like the floating ip is getting added?
[22:05] that SSH prompt might go on for a while
[22:05] allomov: it's basically a loop until the instance registers
[22:06] > Attempting to connect to 172.16.0.84:22
[22:06] marcoceppi: it can't ssh for 10 m
[22:06] shows that it's assigned that IP
[22:06] or at least that's what it thinks the IP is
[22:06] can you reach that IP now? is the instance up in horizon?
[22:06] marcoceppi: no, I can't
[22:07] marcoceppi: I tried to create a new node with the openstack web console with the same network and image, and I can ssh to it
[22:07] allomov: bleh, that's lame
[22:07] is the IP actually allocated and added to that instance in horizon?
[22:07] marcoceppi: I have different outputs for the juju machine and for the machine created manually
[22:09] marcoceppi: I mean logs in openstack web console [22:09] marcoceppi: here is log for machine created with juju https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/34140f9c803ff71a466c73e36b536bdc45fa95b9/machine-log-1 [22:09] marcoceppi: here is a log for machine created manually with the same image https://gist.githubusercontent.com/allomov/188131c84fdbcbab5ea8/raw/1efaaa681fd2a31fef4e12d23b8e40fcf1d758f5/machine-log-2 [22:10] so, cloud init isn't running on the juju machine [22:10] allomov: are you using Ubuntu Server or Ubuntu Cloud images? I'm guessing the later? [22:10] latter* [22:11] marcoceppi: I used ubuntu image that was already in openstack [22:11] marcoceppi: could you tell how can I check it ? [22:12] Not entirely sure [22:13] marcoceppi: do I need download and install other image ? [22:14] allomov: I'm nto sure, and I don't want to lead you in the wrong direction [22:14] I'm reaching the end of my knowledge with troubleshooting openstack clouds [22:15] marcoceppi: ok, will stay on this stage for now. thank you for your help. see you. [22:15] You could try to add a new image, then you'd have to update the image-metadata, etc, but http://cloud-images.ubuntu.com/trusty/current/ is where we house all our cloud images which is the basis for all our cloud provided images (ec2, hp, azure, etc) [22:16] You should be able to add the amd64.root.tar.gz to openstack and try that if you exhaust other options [22:19] I really wish I had more to offer or insights === viperZ28_ is now known as viperZ28 [22:40] Trying to install the juju-gui charm: http://paste.ubuntu.com/8264500/ [22:40] :( [22:42] dpb1: wat. What version Ubuntu is this? [22:42] trusty, up to date [22:42] I was just going to try it here [23:01] worked fine on an lxc here.. who knows === scuttlemonkey is now known as scuttle|afk