=== Guest83203 is now known as Tm_T | ||
=== zz_CyberJacob is now known as CyberJacob | ||
=== jam1 is now known as jam | ||
=== mhilton is now known as mhilton-away | ||
=== CyberJacob is now known as zz_CyberJacob | ||
=== alexlist` is now known as alexlist | ||
gnuoy` | jamespage, I'd like to sneak this small one into 15.01 if you have a sec for a review https://code.launchpad.net/~gnuoy/charms/trusty/ceilometer-agent/add-nrpe-checks/+merge/247562 (tested with mojo spec dev/full_nrpe) | 09:03 |
=== urulama_ is now known as urulama | ||
=== mhilton-away is now known as mhilton | ||
jamespage | gnuoy`, +1 | 10:52 |
gnuoy` | jamespage, thanks | 10:52 |
=== kadams54 is now known as kadams54-away | ||
dcwilliams_VA | good morning! Does anyone have any knowledge of a bio-informatics organization or genetics research facility using Juju to deploy sequencing analytics and alignment tools and workflows? | 14:06 |
=== scuttle|afk is now known as scuttlemonkey | ||
=== Guest99704 is now known as balloons | ||
=== balloons is now known as Guest37568 | ||
=== Guest37568 is now known as balloons_ | ||
jacekn_ | Hello. I have 2 subordinate charms, one of the provides non-container relation. When I relate 2 subordinates nothing happens. What could be the problem or how to troubleshoot it? | 14:58 |
=== jacekn_ is now known as jacekn | ||
Redoubt | I'm trying to run some experiments with Juju, so right now I have two VMs made in VirtualBox: 1 to serve as the Juju boostrap/orchestrator node, and 1 to be some other node. Both VMs have two NICs, one connected to the vbox NAT (which means they cannot communicate with each other) and one connected to a host-only network, where they _can_ communicate. The problem is that I can't convince Juju to only use that host-only network interface | 15:15 |
Redoubt | My environments/manual.jenv shows both IP addresses on the bootstrap node as state servers. If I remove one it just pops back after a minute | 15:16 |
Redoubt | Both the bootstrap and node's agent.conf show the incorrect bootstrap's IP as the apiserver, but if I change it anywhere and restart jujud it just comes back | 15:17 |
Redoubt | I can't seem to find documentation of the apiaddresses param anywhere | 15:18 |
Redoubt | What is persisting this? How do I change it? | 15:18 |
Redoubt | I assume it must be in a database somewhere and the daemons themselves are overwriting my changes, but then is there no way to change this? I can't imagine this is a unique setup | 15:21 |
Redoubt | It's worth noting that, when initially bootstrapped and added, they communicate fine. The problems are introduced when the machines reboot | 15:22 |
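The symptom Redoubt describes can be sketched with a hedged example (paths and IPs are hypothetical; 10.0.2.15 is the usual VirtualBox NAT address and 192.168.56.0/24 a typical host-only subnet, 17070 the Juju API port). A manual-provider `.jenv` can end up listing every address the bootstrap node has, including the unreachable NAT one — this illustrates the symptom, not a fix, since the log notes edits get rewritten:

```shell
# Sample of what a manual.jenv's state-servers list can look like
# after reboot: both NICs' addresses appear, though only the
# host-only one is reachable from the other VM.
cat > /tmp/manual.jenv <<'EOF'
state-servers:
- 10.0.2.15:17070
- 192.168.56.10:17070
EOF

# Show only the entries on the host-only network (192.168.56.0/24).
grep '192\.168\.56\.' /tmp/manual.jenv
```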
=== roadmr is now known as roadmr_afk | ||
lazyPower | wwitzel3: ^ how does juju determine which interface(s) to use when doing a manual provider environment? is it the first interface in the list? or is there something specific we can do to override that behavior. | 15:32 |
lazyPower | Redoubt: I don't know the answer here, but I'll ping some of the core devs to see if we can't find an answer for you. | 15:32 |
lazyPower | dimitern: see question targeted at wwitzel3 please ^ | 15:33 |
dimitern | lazyPower, looking | 15:33 |
lazyPower | Ta | 15:35 |
Redoubt | lazyPower: Thank you! | 15:37 |
dimitern | lazyPower, Redoubt, AFAIKS manual provider uses the "bootstrap-host" setting to determine which IP to use to connect to the host | 15:46 |
dimitern | so this means whichever NIC has that IP will be used | 15:46 |
Redoubt | dimitern: That's what I was led to believe from the docs as well: "All machines added with juju add-machine ssh:... must be able to address and communicate directly with the bootstrap-host, and vice-versa." However, that seems to not be the case, at least after the machines were rebooted | 15:48 |
Redoubt | Now I can't seem to get them to talk | 15:48 |
Redoubt | dimitern: I did set the bootstrap-host to the bootstrap's eth1 static IP before bootstrapping | 15:49 |
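What Redoubt describes — pinning `bootstrap-host` to the host-only static IP before bootstrapping — looks roughly like this minimal manual-provider sketch (environment name and IP are hypothetical). As the log goes on to show, this pins the bootstrap connection but does not guarantee agents keep using that NIC after a reboot:

```shell
# Minimal manual-provider stanza with bootstrap-host pinned to the
# host-only static IP (hypothetical 192.168.56.10).
cat > /tmp/environments.yaml <<'EOF'
environments:
  vbox-manual:
    type: manual
    bootstrap-host: 192.168.56.10
EOF

grep 'bootstrap-host' /tmp/environments.yaml
```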
dimitern | Redoubt, wait a sec - it seems you're talking about manual provisioning, not manual bootstrap | 15:49 |
Redoubt | Oh, yes indeed I am | 15:49 |
dimitern | Redoubt, ok, so this is different :) you're specifying the IP in the add-machine command - ssh:<user>@<IP/host> | 15:49 |
Redoubt | For the non-bootstrap node, yes that's correct | 15:50 |
dimitern | Redoubt, you can use this in any environment (type: manual is just one of the possible cases) | 15:50 |
dimitern | Redoubt, in a non-manual environment any machine needs to be able to talk to the API server for that environment | 15:51 |
Redoubt | dimitern: Alright-- the "apiaddresses" line seems to be what's off in all the config files. How do I set the API server address? | 15:52 |
dimitern | Redoubt, so if you're doing add-machine ssh:user@IP you need to make sure that machine can access the same IP for the api server as the other machines in the environment | 15:52 |
dimitern | Redoubt, can I have some more details about your deployment please? | 15:52 |
dimitern | Redoubt, what's the output of juju api-endpoints --all --refresh for example? use paste.ubuntu.com | 15:53 |
wwitzel3 | lazyPower: sorry, in my standup, thanks dimitern | 15:54 |
Redoubt | dimitern: http://paste.ubuntu.com/9883603/ | 15:55 |
whit | last week for monitorama submissions: http://monitorama.com/#cfp | 15:55 |
Redoubt | dimitern: That address is the one that I'd _like_ to be used | 15:56 |
dimitern | Redoubt, and what happens instead? | 15:56 |
dimitern | Redoubt, there's a way to hack it manually - just edit /var/lib/juju/agents/<your manually provisioned machine subdir>/agent.conf | 15:58 |
Redoubt | When the first VM was initially bootstrapped and the second add-machine'd, it worked fine. When the machines rebooted, they started using the other interface instead (IP 10.0.2.15), which is on a network they cannot communicate over | 15:58 |
Redoubt | I tried that, but then when I restarted jujud, it overwrote the agent.conf | 15:58 |
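The hack dimitern suggests can be sketched as below (sample file under /tmp; the real path is under /var/lib/juju/agents/<machine>/agent.conf, and the IPs are the hypothetical ones from above). As Redoubt's experience shows, jujud can rewrite this file from its own state on restart, so this alone may not stick:

```shell
# Sample agent.conf fragment pointing at the unreachable NAT address.
cat > /tmp/agent.conf <<'EOF'
apiaddresses:
- 10.0.2.15:17070
EOF

# Point the agent at the reachable host-only address instead.
sed -i 's/10\.0\.2\.15/192.168.56.10/' /tmp/agent.conf
grep -A1 'apiaddresses' /tmp/agent.conf
```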
dimitern | Redoubt, hmm.. | 15:59 |
Redoubt | dimitern: Indeed! | 15:59 |
dimitern | Redoubt, well for a really ugly hack you need to change the list of addresses of the api server directly in mongo | 16:00 |
Redoubt | Haha, I was afraid of that | 16:00 |
Redoubt | That would be on the bootstrap node, I assume? | 16:00 |
dimitern | Redoubt, yeah, I'm afraid your case is not very well supported, but please file a bug about it so we can keep track of it | 16:01 |
Redoubt | dimitern: Alright, good to know | 16:01 |
dimitern | Redoubt, in most of the cases like that we had so far the manually provisioned machines and the others were on the same network (or can see each other at least) | 16:02 |
Redoubt | dimitern: Yeah, I've run into similar problems with this network topology with MAAS and juju both. Virtualbox just isn't the best testing ground for those, eh? | 16:07 |
dimitern | Redoubt, have you tried kvm? :) | 16:08 |
Redoubt | dimitern: No. Perhaps it's time to start! | 16:09 |
dimitern | Redoubt, sorry I couldn't help you more with your issue :/ I'd appreciate it a lot if you find the time to file a bug against juju-core though | 16:10 |
Redoubt | dimitern: No problem! I appreciate your time! I'll do that now. I'm not really sure how I would sum up my problem though, other than "There's something wrong with the API server and multiple NICs" :P . Any suggestions? | 16:12 |
dimitern | Redoubt, how about "manually provisioned machines with multiple networks cannot connect to the API server after reboot" ? | 16:14 |
=== roadmr_afk is now known as roadmr | ||
Redoubt | dimitern: Good deal, thanks :) | 16:14 |
dimitern | Redoubt, :) np | 16:14 |
arosales | nicopace: marcoceppi, whit, lazyPower, and mbruzek are also good folks to ping on charm testing questions :-) | 16:33 |
lazyPower | o/ | 16:34 |
arosales | nicopace: thanks for your work on those | 16:34 |
marcoceppi | \o | 16:34 |
whit | heyo | 16:35 |
whit | hey marcoceppi moving the convo here | 16:37 |
whit | marcoceppi, I have no idea what a basket is. was that some other charm aggregation scheme? | 16:37 |
marcoceppi | whit: good, you shouldn't know what a basket is | 16:38 |
marcoceppi | whit: it was just an internal name, which referred to the bundles.yaml file, since bundles was an overloaded term | 16:38 |
marcoceppi | where you had a bundle (file) which could in turn contain multiple bundles | 16:38 |
marcoceppi | anyways, tldr, bundles are just a single deployment going forward | 16:39 |
marcoceppi | fwereade should have more information on the bundle format going forward | 16:39 |
whit | marcoceppi, currently we are using inheritance in the kubes bundle to support dev vs. released deploys | 16:40 |
whit | marcoceppi, or multiple bundle files which make bundletester vomit | 16:40 |
=== sarnold_ is now known as sarnold | ||
marcoceppi | whit: openstack-charmers has the same issue | 16:40 |
marcoceppi | wrt to inheritance | 16:40 |
whit | well... if we keep using deployer it's not an issue | 16:40 |
marcoceppi | right, but I imagine after core gets support for this, deployer won't be used. We'd at least make the switch to use core instead of deployer in amulet | 16:41 |
marcoceppi | to avoid too much skew | 16:41 |
marcoceppi | s/won't be used/won't be maintained/ | 16:41 |
marcoceppi | You can model inheritance still, it'd just have to be done externally in another tool | 16:41 |
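The deployer-style inheritance the channel is discussing can be sketched as a single bundles.yaml with a released target and a dev target that inherits from it (service and charm names here are hypothetical, loosely modeled on the kubes bundle whit mentions; juju-deployer's `inherits` key is what expresses the relationship):

```shell
# One bundles.yaml, two deployment targets: "kubernetes" (released
# charms) and "kubernetes-dev" (local charms), with the dev target
# inheriting everything else from the base.
cat > /tmp/bundles.yaml <<'EOF'
kubernetes:
  services:
    kubernetes-master:
      charm: cs:~kubernetes/trusty/kubernetes-master
      num_units: 1
kubernetes-dev:
  inherits: kubernetes
  services:
    kubernetes-master:
      charm: local:trusty/kubernetes-master
EOF

grep 'inherits' /tmp/bundles.yaml
```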
whit | like deployer | 16:42 |
lazyPower | marcoceppi: is that more along the lines of bundle generation vs implied inheritance? | 16:42 |
whit | deployer is what works now | 16:42 |
lazyPower | ergo: you define some core-suite, and then add on services. | 16:42 |
marcoceppi | lazyPower: sure, that's one way | 16:42 |
whit | so the testing tools should support the old and new no? | 16:42 |
marcoceppi | whit: well, we'll support what core recommends. Right now it's deployer as the underpinnings. If deployer were to change so it maintained inheritance and used core underneath it, sure | 16:43 |
marcoceppi | but I'm not sure the current plans for deployer | 16:44 |
marcoceppi | or when this feature will land in core | 16:44 |
marcoceppi | I merely have approximate knowledge of everything ;) | 16:44 |
nicopace | arosales: great! if something comes up, i'll be talking to you guys marcoceppi whit lazyPower mbruzek | 16:44 |
whit | well... current plans for deployer are we are using it to get work done | 16:44 |
whit | will hack around current issues | 16:44 |
marcoceppi | well, amulet doesn't have a concept of multiple deployments per bundle.yaml in the load command since load was meant for more simplistic deployments | 16:45 |
marcoceppi | I'd be open to merge req to fix it, I don't have the time currently but it shouldn't be too hard | 16:45 |
arosales | nicopace: sounds good | 16:46 |
marcoceppi | whit: I'm hesitant to add a feature as it means I have to maintain it or break compat in a new major release. Trying to avoid compat breaks when possible in amulet | 16:46 |
whit | marcoceppi, cool | 16:46 |
Redoubt | dimitern: https://bugs.launchpad.net/juju-core/+bug/1414710 . Thanks again for your help :) | 16:47 |
mup | Bug #1414710: Manually provisioned machines with multiple networks cannot connect to API server after reboot <juju-core:New> <https://launchpad.net/bugs/1414710> | 16:47 |
marcoceppi | but if it was a real blocker for you guys, sure, I'd add it whit | 16:47 |
whit | marcoceppi, not a blocker, just a workflow annoyance | 16:48 |
dimitern | Redoubt, thank you for the bug report! :) | 16:48 |
hazmat | whit, marcoceppi which issues? i've got some coding left. | 17:26 |
hazmat | er. time | 17:26 |
marcoceppi | hazmat: the whole core not doing inheritance for bundles going forward when bundles get native support | 17:27 |
hazmat | marcoceppi, ah.. a core issue | 17:27 |
hazmat | marcoceppi, core/cstore could just actualize inheritance trees when storing and serving | 17:28 |
hazmat | albeit thats not a live ref to the parent | 17:28 |
whit | hazmat, yeah, we were trying to make a nice switch from local dev to personal namespace charms for development purposes and discovered that the future is unfriendly to such things | 17:29 |
=== kadams54 is now known as kadams54-away | ||
=== nottrobin_ is now known as nottrobin | ||
=== balloons_ is now known as balloons | ||
=== kadams54-away is now known as kadams54 | ||
=== zz_CyberJacob is now known as CyberJacob | ||
=== CyberJacob is now known as zz_CyberJacob | ||
=== roadmr is now known as roadmr_afk | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== roadmr_afk is now known as roadmr | ||
=== kadams54 is now known as kadams54-away | ||
blr | is there a tool for templating mojo specs? | 22:23 |
=== kadams54-away is now known as kadams54 | ||
marcoceppi | blr: no, not that I'm aware of | 22:46 |
=== jcw4 is now known as jw4 | ||
=== kadams54 is now known as kadams54-away | ||
blr | marcoceppi: thanks | 22:56 |
beuno | sinzui, utlemming, ping | 23:49 |
beuno | we're trying to deploy to AWS Beijing region | 23:49 |
beuno | but juju seems to not know about that region, we think | 23:50 |
beuno | any tips? | 23:50 |
beuno | *cough* thumper *cough* | 23:50 |
sinzui | beuno, I cannot speak about os images. If that region cannot see streams.canonical.com, then you will need to publish your own streams to that region, or just use --upload-tools | 23:51 |
beuno | noodles775, ^ | 23:51 |
beuno | sinzui, but I thought you guys ran tests against that region? | 23:51 |
sinzui | beuno, no, I have said this repeatedly. Juju QA does not have access to that region | 23:52 |
beuno | sinzui, oh, you said that, but then utlemming tells me he does, and does QA against it | 23:52 |
beuno | so I guess I'm confused | 23:52 |
beuno | maybe you guys are doing separate things | 23:52 |
beuno | sorry to be dense here | 23:53 |
sinzui | beuno, I think that means os-images can be found, but not agent streams (which --upload-tools will solve) | 23:53 |
* beuno defers to noodles775 | 23:54 | |
beuno | thanks sinzui! | 23:54 |
thumper | beuno: o/ | 23:55 |
beuno | hey thumper! | 23:56 |
beuno | I was betting on sinzui not being awake | 23:56 |
noodles775 | I'll try with --upload-tools. Here's the --debug bootstrap output: https://pastebin.canonical.com/124267/ | 23:56 |
thumper | was making lunch | 23:56 |
beuno | the famous Penhey's lunches, I remember | 23:56 |
thumper | if it is tools related, try wallyworld_ | 23:57 |
thumper | he knows more | 23:57 |
thumper | beuno: :-) | 23:57 |
sinzui | beuno, noodles775 that is an os-image error | 23:57 |
* sinzui thinks | 23:57 | |
wallyworld_ | do we have image data for cn-north-1? | 23:58 |
noodles775 | sinzui: OK - which explains why it still fails with --upload-tools? https://pastebin.canonical.com/124268/ | 23:59 |
sinzui | beuno, noodles775 I think you need to set an alternate url in environments.yaml "image-metadata-url" to point to a stream in that region. You may need to use juju metadata generate-image from images you have downloaded from cloud-images.ubuntu.com | 23:59 |
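sinzui's suggestion can be sketched as follows (environment name and metadata URL are hypothetical; cn-north-1 is the Beijing region mentioned above): point `image-metadata-url` at a stream reachable from the region, then bootstrap with `--upload-tools` so agent binaries don't need streams.canonical.com either:

```shell
# Hypothetical ec2 stanza for the Beijing region with a custom image
# metadata stream (e.g. one generated with `juju metadata
# generate-image` from cloud-images.ubuntu.com downloads).
cat > /tmp/environments.yaml <<'EOF'
environments:
  aws-beijing:
    type: ec2
    region: cn-north-1
    image-metadata-url: https://example.com/images/streams/v1
EOF

grep 'image-metadata-url' /tmp/environments.yaml
# Then, illustratively (not run here):
#   juju bootstrap -e aws-beijing --upload-tools --debug
```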
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!