=== urulama__ is now known as urulama
=== apuimedo is now known as apuimedo|away
=== apuimedo|away is now known as apuimedo
[16:33] beisner: gnuoy: I'm trying to use my private openstack cloud
[16:34] but i'm getting some problems bootstrapping
[16:35] http://paste.ubuntu.com/13216879/
[16:35] do you think it could be due to not being an admin (I have perms for creating instances and nets though)
[16:36] hi apuimedo, could you paste a (sanitized) `juju stat --format tabular` for the private cloud?
[16:37] it'll help me to understand the services and topology
[16:38] ok
[16:39] beisner: I get the same error
[16:39] trying to do the juju stat
[16:39] apuimedo, do you have openstack already deployed?
[16:39] yes, I have several instances running on this cloud
[16:40] now I want juju to create its machines there too
[16:40] apuimedo, ok, so i'm referring to that cloud that is deployed. can you juju stat that cloud?
[16:40] apuimedo, ie. 1 layer down.
[16:40] beisner: that cloud has been deployed with ansible, not juju
[16:41] beisner: does juju need admin level credentials for the bootstrapping?
[16:41] apuimedo, ok. in that case, i'd need to see your environments.yaml file, `keystone catalog` and `keystone endpoint-list` output.
[16:41] apuimedo, no, shouldn't need to be admin @ the undercloud.
[16:42] cool, thanks
[16:42] apuimedo, sanitized for sensitive data on those pastes, of course.
[16:43] also can you show `apt-cache policy juju` ?
[16:43] sure
[16:43] beisner: http://paste.ubuntu.com/13216940/
[16:45] apuimedo, what vers of openstack is the cloud?
[16:46] Kilo
[16:46] I was talking with my operator
[16:46] apuimedo, it may not be directly related to the issue at hand, but you can use ppa:juju/stable to get the current stable release of Juju (1.25).
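[Editor's note: the PPA upgrade and version check beisner mentions can be sketched as the following commands. This assumes an Ubuntu client machine; package names and exact output will vary.]

```shell
# Add the stable Juju PPA and install the current stable release (1.25 at the time)
sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y juju-core

# Confirm which version apt now sees (the check beisner asked for)
apt-cache policy juju
juju version
```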
https://launchpad.net/~juju/+archive/ubuntu/stable
[16:46] he fixed some of my credentials and now I got past that error
[16:46] oh, that sounds like a good idea anyway
[16:47] http://paste.ubuntu.com/13216957/
[16:47] apuimedo, it looks like a previous juju bootstrap didn't get completely destroyed. can you remove (back up if you want to) the /home/ubuntu/.juju/environments/openstack.jenv file, then see if juju stat exits ok (showing no enviro bootstrapped)?
[16:47] beisner: ^^ that will be due to the streams, right?
[16:48] apuimedo, i'd rm that .jenv file to make sure the basic juju stat cmd is good. then do a `juju bootstrap --debug` ... which will show more detail if there's a failure somewhere.
[16:49] cool. I'll try that now
[16:53] beisner: http://paste.ubuntu.com/13216996/
[16:53] yeah... It seems I have to define the images somehow in the environment
[16:53] I have to say the guide I found online was not the clearest
[16:56] apuimedo, i think this is what you're after : https://jujucharms.com/docs/1.25/howto-privatecloud
[16:57] yes. That's the one I'm looking at beisner
[17:02] lazypower: mbruzek: check this out: https://sysdig.com/digging-into-kubernetes-with-sysdig/
[17:02] interesting
[17:03] jcastro: cool
[17:05] apuimedo, images and metadata are the things to sort out for the private cloud juju usage. i've got a test cloud building, and will be able to step through that on my end later today.
[17:05] in the past, i've used that guide.
[17:05] I'm doing that now
[17:06] let's see if I succeed :-)
[17:06] apuimedo, be aware too - it looks like you're hitting a destroy bug (for which a fix is underway). the bug is that the destroy errors out, leaving the .jenv file behind, and you have to manually rm it after destroying. but that won't affect bootstrapping or deploying.
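[Editor's note: the cleanup-and-retry sequence beisner describes, as a sketch. The `.jenv` path follows the discussion above; these commands need a configured juju client, so treat them as illustrative rather than copy-paste.]

```shell
# Back up, then remove, the stale environment file left behind by the failed destroy
cp ~/.juju/environments/openstack.jenv ~/openstack.jenv.bak
rm ~/.juju/environments/openstack.jenv

# With no environment bootstrapped, status should now report cleanly that
# nothing is bootstrapped rather than erroring against stale state
juju status --format tabular

# Re-bootstrap with verbose output to pinpoint where any failure occurs
juju bootstrap --debug
```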
[17:06] https://bugs.launchpad.net/bugs/1512399
[17:06] Bug #1512399: ERROR environment destruction failed: destroying storage: listing volumes: Get https://x.x.x.x:8776/v2//volumes/detail: local error: record overflow <1.25: Triaged>
[17:06] cool
[17:06] thanks beisner
[17:07] yw apuimedo - let us know how you turn out
[17:07] I will, thanks
[17:18] beisner: I set up agent-metadata-url and image-metadata-url
[17:18] but somehow it seems it still goes to
[17:19] 2015-11-10 17:17:53 DEBUG juju.environs.simplestreams simplestreams.go:429 read metadata index at "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson"
[17:19] :(
[17:21] ok, my mistake
[17:21] I had a typo :P
[17:21] sorry about that
[17:25] got it bootstrapping ;-)
[17:25] thanks for the help beisner!
[17:39] apuimedo, woot! you're welcome. happy deploying.
[19:00] beisner: is it possible to specify the ram amount when adding a machine?
[19:03] hi apuimedo - juju can use constraints to choose the nearest specs match presented by your cloud. i believe "equal or greater" logic is used in that. `nova flavor-list` will show the machine sizes you have to choose from.
[19:03] https://jujucharms.com/docs/1.25/charms-constraints
[19:03] juju machine add --constraints mem=8G
[19:03] like so?
[19:05] it worked
[19:05] thanks
[19:11] apuimedo, ok good :)
=== urulama is now known as urulama__
[20:28] is the new lxd provider going to obviate the local provider?
[20:36] blr: well, local uses lxc, which lxd is based on (I believe)
[20:39] jose: right, just curious if the long term intention is to replace the local/lxc provider
[20:41] blr: yes, in time the lxd provider, with better isolation and the ability to do things like HA, will deprecate the local provider.
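[Editor's note: the constraints exchange above, sketched as commands. The flavor matching is "equal or greater" as beisner describes; in Juju 1.25 the `juju machine add` spelling used in the log and the older `juju add-machine` should both work, but this assumes a bootstrapped openstack environment.]

```shell
# See which flavors the cloud offers; juju picks the smallest flavor
# that satisfies the constraint ("equal or greater" matching)
nova flavor-list

# Add a machine with at least 8 GiB of RAM, as in the log
juju machine add --constraints "mem=8G"

# Equivalent older spelling
juju add-machine --constraints "mem=8G"
```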
[20:42] rick_h__: nice
[20:49] lazypower: if you're about, have a simple MP for charmhelpers that could use a look (https://code.launchpad.net/~blr/charm-helpers/pip-constraints/+merge/276062)
[21:11] beisner: is there any way to make juju deploy lxc to the machines on openstack providers with the juju-br0 model that it uses when the machine is provided by maas?
[21:17] frobware: ^ this was done for the machine registration work and isn't part of OS provider work atm? Or is it planned/part of it?
[21:17] apuimedo, i believe the juju openstack provider is designed to use 1 nova instance per juju unit. i don't think there is currently support for juju deploying services to lxc containers within nova instances via the openstack provider.
[21:18] beisner: well, it does deploy the lxc containers
[21:18] apuimedo, rick_h__ ... and i was about to say, let me check though. ;-) thanks rick_h__ ... fwiw - that would be immensely useful for us in openstack engineering too (juju/goose support for lxc containers on nova instances).
[21:18] but obviously they are unreachable
[21:18] beisner: well, you can provision to an lxc on a nova vm with --to=lxc and such.
[21:18] apuimedo, right, i think it will, but then they are marooned in my experience.
[21:18] beisner: at least I wanted to think that was possible with normal --to support
[21:19] beisner: I just had my nova instances remove the anti-spoofing filter
[21:19] so if they would just have the bridge on the eth0 device
[21:19] it would all work
[21:19] or putting a different /24 on each nova vm
[21:19] apuimedo: beisner frobware leads the team working on networking and can best speak to that atm. however he's EU and EOD
[21:19] rick_h__, something legacy is ringing a vague bell with goose -- where it didn't / couldn't know about all of the various neutron bits to wire up in advance.
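[Editor's note: the `--to` container placement rick_h__ and beisner discuss looks like this in practice. The service name and machine number are placeholders; as noted in the log, on the openstack provider the resulting containers may end up unreachable ("marooned").]

```shell
# Deploy a service into a new lxc container on existing machine 1
juju deploy mysql --to lxc:1

# Or create an empty lxc container on machine 1 first, then target it
juju machine add lxc:1
```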
[21:20] EU here too :P
[21:20] beisner: yea, I'll not be surprised that maas has it because it's more knowledgeable
[21:20] well, what I do is every time I put up a nova VM
[21:20] you get a new port
[21:20] so you can immediately do whatever to remove the filter
=== natefinch is now known as natefinch-afk
[22:20] beisner: apuimedo jrwren had an interesting post around the bridge http://jrwren.wrenfam.com/blog/2015/11/10/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan/
[22:20] not sure if it's helpful at all, but fyi
[22:21] rick_h__: I was considering doing it like that
[22:22] the problem for OSt deployments is that the dhcp should not give you an address
[22:22] so what I was thinking of was to make juju call neutron to set routes between the VMs
[22:23] and, of course, use a /24 net per VM
[22:23] rick_h__: do you happen to know who's in charge of networking in the context of the openstack provider?
[22:24] apuimedo: frobware's team
[22:24] ok
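[Editor's note: for context, the eth0-to-br0 conversion in jrwren's linked post boils down to an /etc/network/interfaces stanza along these lines. This is a sketch assuming Ubuntu with bridge-utils installed; as the log points out, on an OpenStack tenant network the DHCP and anti-spoofing behaviour still has to cooperate for containers on the bridge to be reachable.]

```
# /etc/network/interfaces (sketch): replace the eth0 stanza with a bridge
# so lxc/lxd containers attached to br0 share the host's LAN
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```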