[03:26] thumper: what is the namenode charm?
[03:26] thumper: you're using big data terminology here...
[03:26] lazyPower: oh hai
[03:26] I was watching a deployment and noticed that it failed to install every time
[03:27] thumper: the kubernetes-master charm?
[03:27] i think it was a custom bundle
[03:27] used for scale testing
[03:27] on aws
[03:27] hmm, ok
[03:27] i'm not sure what this namenode charm would be. it might be something kwmonroe has cooking up
[03:31] that's fine
[03:32] I have enough issues to chase just now :)
[07:22] Mattyw_, GM, thank you for your earlier reply. It works: I just added a layer.yaml containing (includes layer:metrics) and a metrics.yaml containing (metrics: juju-units:), upgraded the charms with "juju upgrade-charm --path ", and after 5 minutes "juju metrics --all" gives the number of units for all the charms. Works like a CHARM :)
=== frankban|afk is now known as frankban
[07:31] Zico, glad it worked out. if you think there are improvements to be made to the docs I'd love to hear them.
[07:32] Zico, any more questions, feel free to ask - I'm here all the time. if I can't answer a question I'll be able to point you to someone who can
[07:32] Mattyw, Great, thank you my friend, much appreciated :)
[07:35] Mattyw, Concerning the docs improvement: yes, I think there is room for improvement. For example, compiling the steps I mentioned into a "HELLO_WORLD" metric (the unit count), enumerating the process as 1. layer.yaml, 2. metrics.yaml, 3. upgrade-charm, 4. after 5 minutes (the default), invoking the juju metrics command.
[07:35] The call to upgrade-charm wasn't obvious until you pointed it out :)
[08:07] Zico, which pages did you read from the docs? was it just https://jujucharms.com/docs/stable/developer-metrics ?
[08:19] Mattyw, yes, https://jujucharms.com/docs/stable/developer-metrics was the starting page.
But as you notice, there is no mention in it of "juju-units:" and no mention of triggering "juju upgrade-charm"
[08:20] Mattyw, Also, in the layer.yaml section, the word "includes" is not mentioned...
[08:21] Personally, I added "includes: ['layer:metrics']" to layer.yaml, copied from a manually created TEST charm
[08:21] Zico, yeah, that page kind of assumes you know lots of stuff already, we probably need a better getting started page. how did you find out about charm metrics? (just trying to understand where would be best for us to make a change in the docs)
[08:23] Mattyw, Yes, correct. in fact, I got frustrated at first :) because I screened all the pages and couldn't grasp the whole procedure. Starting point was: https://jujucharms.com/docs/stable/developer-metrics
[08:24] then went to https://jujucharms.com/docs/2.1/charms-metrics
[08:25] Zico, we have https://jujucharms.com/docs/stable/developer-getting-started. I guess that wasn't helpful?
[08:27] Yes, helpful of course, but it doesn't say anything about metrics. it would be nice to add a section linking to metrics
[08:28] Zico, understood, thanks very much for your feedback
[08:28] Zico, hopefully we'll make it easier for the next person :)
[08:29] Mattyw, you are welcome, my friend, I hope so :) I am doing some research on orchestration through Juju and I need feedback (metrics) to trigger optimization
[08:31] BTW, I have a problem: my laptop is able to spin up multiple machines and provision multiple services, but when I run a large bundle, it reaches a point where it dies (sluggish responses), although free -h, top and iotop are all reading low load! Any hint?
[08:32] Zico, sounds interesting, any questions or suggestions, we're here to help/listen
[08:32] Zic, you're running lxd I guess?
[08:33] Zico, ^^
[08:33] yes
[08:33] Mattyw, 7 applications running on 4 machines (4 applications on 1 machine and the rest is 1 app per machine).
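The four-step metrics "hello world" that Zico and Mattyw work out above can be sketched as follows. The file contents are as stated in the conversation; the application name in the commented-out juju invocations is a placeholder, since it depends on your deployment:

```shell
# Steps 1 & 2: add the two files to the charm root (run from the charm directory).
cat > layer.yaml <<'EOF'
includes: ['layer:metrics']
EOF

cat > metrics.yaml <<'EOF'
metrics:
  juju-units:
EOF

# Step 3: push the modified charm into the model (application name is illustrative):
#   juju upgrade-charm myapp --path .
# Step 4: wait ~5 minutes (the default collection interval), then:
#   juju metrics --all
```

The non-obvious part, as Zico notes, is step 3: without `juju upgrade-charm` the new metrics.yaml never reaches the deployed units.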
[08:34] Mattyw, the 4 apps on the 1 machine run as 4 LXD containers
[08:50] Good morning Juju world!
[08:51] kjackal, morning
[08:51] Zico, that means you'll probably have 5 lxd machines total, including the controller
[08:52] Zico, doesn't surprise me that it might be a bit slow on your laptop
[08:53] Mattyw, Correct.
[08:53] I have a new question please:
[08:54] I am struggling to make squid-deb-proxy HIT, but all is miss :(
[08:54] I am following
[08:54] https://askubuntu.com/questions/3503/best-way-to-cache-apt-downloads-on-a-lan
[08:54] written by Jorge Castro
[08:56] All I get is TCP_MISS, not TCP_HIT :(
[08:56] Although I remove the machine and recreate it, so the packages are supposed to be cached
[09:37] lazyPower: thanks for your reply (I just read the backlog)
[09:37] (hi Juju world)
[11:22] Hi, how do I purge (flush) historical logs? juju debug-log --replay keeps several days of history which are useless to me. Is it safe to ssh into the controller and empty logsink.log (juju ssh -m controller 0, then echo "" > /var/log/juju/logsink.log)?
[11:25] Zico: since the logs are stored in the db, that won't really work out.
[11:25] Zico: there are tools in debug-log to help limit the time range returned
[11:26] Zico: check out https://jujucharms.com/docs/stable/troubleshooting-logs#the-debug-log-command
[11:27] Rick_h, Yes, right, thank you, I checked this. very helpful :)
=== salmankhan1 is now known as salmankhan
[12:13] Hi, is there a way to forcibly remove a unit or an application without removing the underlying machine? (The "--force" is only applicable to remove-machine.) This is the case when I get stuck with an app/unit with status ERROR and message: *hook failed: "install"*
[12:15] BTW, removing it from the GUI (Destroy) says this application is marked for destroy on next deploy. What's the catch?
[12:16] Zico: no, there's not.
If you force, then there's no way for Juju to know what to remove, as it doesn't track the files/installs/etc. the charm does
[12:16] Zico: the catch is that you have to go down to the bottom-right and hit commit
[12:16] Zico: where it'll try the same thing the cli does to remove an application, trigger the hooks, etc
[12:17] Rick_h, yup, I committed of course :) and as a result of the commit, it says marked for destroy on next deploy
[12:17] Zic: hmm, sounds like the GUI got confused?
[12:18] Yup. the exact wording is: (This application has been marked to be destroyed on next deployment.)
[12:19] and the status is (Status: error - hook failed: "install" Agent Status: executing Workload Status: error)
[12:22] So basically, I am stuck in an infinite loop: neither the gui nor the cli is able to remove the app/unit that has status error (hook failed: install). The only way out I've found is forcibly removing the underlying machine. My question is: is there any other way to remove the app/unit without losing the machine?
[12:26] Zico: I think I have sometimes managed to resolve that by making a dummy configuration change through the cli, then marking the failed unit resolved, and after that remove-application
[12:28] Anrah, Good idea :), this is what I am thinking of. so, I go to $ juju debug-hooks and then what?
[12:39] I just said juju config =foobar
[12:39] then juju resolve
[12:39] juju remove-application application
[12:40] but hmm, you have only one app per machine?
[12:42] Anrah, Yes, only one app per machine currently.
[12:44] Anrah, I have done the trick (Y): changed the passwd config and then *resolved*, but again hook failed: install, and I'm stuck at the same position again and cannot remove the app.
=== rvba` is now known as rvba
[13:11] lazyPower: can I deploy a specific bundle version through the Juju GUI?
because as I use manual provisioning, I will need to redispatch charms to the "good" machines, and it's easier with drag'n'drop if I need to demonstrate it to coworkers :)
[13:12] Zic: yep, just download the bundle to your machine and drag/drop it onto the gui
[13:12] https://localhost:8080/gui/u/admin/default/canonical-kubernetes/bundle/38 <= hmm, I thought to just edit that silly '38'
[13:12] will it work?
[13:13] drag'n'dropping from local is also good to know :)
[13:15] Zic: you can view the specific revision and choose deploy as well, sure
[13:16] thanks, I'm impatient to upgrade this cluster to 1.7.2 to have the snap architecture
[13:16] (the other one is already at the latest revision)
[13:16] my next move will be to test whether Juju and our Puppet orchestrator conflict on some files
[13:16] with snap, normally, all will be OK
[13:27] lazyPower: before juju upgrade-charm, do you recommend upgrading the Juju client? we're not using the juju snap on the production cluster for now, I'm planning to switch to the snap version before/after the upgrade
[13:27] don't know if I should choose before or after :>
[13:27] Zic: as far as i know the snap package has very little to do with what you wind up with on your controller, as that's all packaged and maintained on the controller itself
[13:27] it'll only change how you receive your client package and client updates
[13:28] Zic: but i would recommend the least-change method. introduce as few changes as possible, and slowly, so you can validate they haven't caused you any heartburn
[13:28] yup
[13:36] lazyPower: the kubernetes-master is stuck at "installing charm software" with no output in juju debug-log or in /var/log/juju on the kubernetes-master. where can I look for more info? :(
[13:37] Zic: that's the pre-dependency bootstrapping hook for reactive.
the only thing i can think of would be to juju debug-hooks that unit, and kill the process executing the existing upgrade-charm hook
[13:37] that way you can intercept it and run it manually
[13:38] oh, I did not know debug-hooks
[13:38] python3 /var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/install /var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/install
[13:38] killing this one inside debug-hooks?
[13:38] and relaunching it manually?
[13:39] Zic: yep, if you kill it, just wait a few seconds
[13:39] juju will auto-retry the hook and trap it in that tmux session you have open
[13:39] ok
[13:41] killed it 1min ago, it does not seem to respawn :(
[13:41] Zic: juju resolved kubernetes-master/0
[13:41] it's likely on the backoff timer
[13:41] so resolving it will cause it to go ahead and retry
[13:41] ERROR unit "kubernetes-master/0" is not in an error state
[13:42] it's stuck at maintenance/executing in fact
[13:42] Zic: systemctl restart jujud-unit-kubernetes-master-0
[13:42] on the juju controller machine, right?
[13:42] cycle the agent, that should kick it in the head
[13:42] no
[13:42] on the unit you're attached to for debug-hooks
[13:42] ah, on the kubernetes-master
[13:42] yep
[13:42] oki
[13:43] just to make things easier: if i don't explicitly identify another unit, i'm referring to the unit you're attached to via debug-hooks.
[13:43] it switched to error then back to maintenance/executing
[13:43] ok :)
[13:43] ok, in your tmux session
[13:43] there should be a new buffer open with the context listed
[13:43] "upgrade-charm" for example
[13:43] yep, I got the "This is a Juju debug-hooks tmux session."
[13:43] do you see the hook listed in the tmux buffers?
[13:44] i forget if that message prints for every buffer or not
[13:44] i tend to just ignore that spam now :)
[13:44] ok yeah, you're in the right context. i just trapped a hook to verify
[13:45] Zic: from here, you can execute the hook manually and attempt to gather more information.
This buffer is loaded with all the juju env bits we set to make the agent operate
[13:45] ok, so I'm trying to launch the python3 script that I killed before, right?
[13:45] Zic: so hooks/upgrade-charm if you're in the upgrade-charm context. it may not give you an indicator as to what's actually happening; we scrape stdout from here to pipe to the logs.
[13:46] so if it's just blank and hangs, time to start poking about to see if it's network connectivity, or a locked apt daemon, or something similar
[13:46] root@mth-k8stestmaster-01:/var/lib/juju/agents/unit-kubernetes-master-0/charm# hooks/upgrade-charm <= I think I'm in the right context :)
[13:47] http://paste.ubuntu.com/24633743/
[13:47] it does not output anything after that
[13:48] (but does not return the prompt either)
[13:50] Zic: so it appears it's held up attempting to install the wheelhouse
[13:51] :S i'm not sure what to recommend here, i haven't encountered reactive failing to bootstrap before
[13:51] and our resident reactive expert is out for PyCon
[13:51] same here :D it's the first time I've encountered this issue
[13:52] Zic: not an ideal solution, but can you file a bug with the steps to reproduce, and i can pass this along to cory when he's back from pycon?
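The intercept procedure lazyPower walks Zic through above can be sketched end to end. This is a hedged outline, not a definitive recipe: the unit name and hook names are the ones from this conversation, and the commands only make sense against a live Juju 2.x model:

```shell
# From the workstation: attach to the stuck unit (opens a tmux session on it).
juju debug-hooks kubernetes-master/0

# On the unit, inside that session: find and kill the wedged hook process.
pgrep -af 'hooks/install'        # identify the running hook
pkill -f 'hooks/install'         # kill it; juju should retry and trap the hook

# If the unit is on its retry backoff timer and is in an error state:
juju resolved kubernetes-master/0

# If it is NOT in an error state (stuck at maintenance/executing), cycle
# the unit agent on the unit itself to re-queue the hook into the tmux session:
sudo systemctl restart jujud-unit-kubernetes-master-0

# Once trapped in the hook's tmux buffer, run it by hand to watch its output:
hooks/upgrade-charm
```

The key subtlety from the conversation: `systemctl restart` targets the unit agent on the unit you are attached to, not the controller.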
[13:52] if you weren't here, I would reboot the master in anger and hope it resumes the installation :D
[13:52] well
[13:52] that's an option
[13:52] huh :)
[13:52] trying it \o/
[13:52] but if it's halting here, i don't necessarily think rebooting will help
[13:52] but it's worth a shot
[13:52] give it a go
[13:53] back when I first tried Juju, I sometimes rebooted the bad unit *and* the juju controller
[13:53] sometimes it recovers well
[13:53] that was a long time ago, before joining here o/
[13:53] oh, the upgrade-charm just gives me a traceback
[13:54] maybe because it received the ACPI signal to reboot
[13:54] http://paste.ubuntu.com/24633780/
[13:54] don't know if it helps
[13:57] unit-kubernetes-master-0: 13:56:02 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:88:install <= saw that in juju debug-log after the reboot, so it restarted correctly but... no signs of life after that entry
[13:57] aha
[13:57] i have a course of action for you now
[13:57] File "/usr/local/lib/python3.5/dist-packages/charmhelpers/core/hookenv.py", line 956, in resource_get <--
[13:58] it was locked up waiting on juju to resource_get a resource
[13:58] network issue? :o
[13:58] so, let's start with the least invasive action
[13:58] can you re-attach to that unit and enter the debug-hooks context again?
[13:58] yup
[13:58] same process: attach to the unit; if it's locked up, restart the agent daemon
[14:00] hmm
[14:00] debug-hooks does not put me in the right context this time
[14:00] root@mth-k8stestmaster-01:~# pwd
[14:00] /home/ubuntu
[14:01] the tmux buffer is just "bash" instead of "install" like earlier
[14:01] right, when you first attach you only get a bash shell
[14:01] so, recycle the agent now
[14:01] systemctl restart jujud-unit-kubernetes-master-0
[14:01] ah yup
[14:01] forgot this step, sorry
[14:01] ok, it's the right context now
[14:01] now, let's try this manually and see if we get any further detail
[14:02] resource-get kubernetes
[14:02] i suspect it's going to just hang like it was when invoked from python, but you never know
[14:02] for now it's stuck, yeah :)
[14:02] ok, in another terminal, let's attach to the controller and tail the controller logs
[14:03] if there's an issue we should see some serious spam in there while this resource-get is constantly polling attempting to grab the resource
[14:04] 2017-05-23 14:01:28 ERROR juju.rpc server.go:510 error writing response: write tcp 10.52.128.99:17070->10.52.128.24:59540: write: broken pipe
[14:04] like this?
[14:05] (I have it repeatedly)
[14:07] that may be it
[14:07] i would have expected something more descriptive
[14:07] Zic: what version of the controller is this?
[14:08] 2.1.2
[14:09] Zic: looks like you're getting bit by this https://bugs.launchpad.net/juju/+bug/1627127
[14:09] Bug #1627127: resource-get gets hung on charm store
[14:10] I feared that. as this test cluster is fully AWS (the production one is hybrid), I recreated the AWS SecurityGroup from scratch and may have made a mistake somewhere, but normally it's wide open on the private network
[14:10] Zic: well, there's also a defect in resource-get hanging like this
[14:10] it should have returned null or an error by now
[14:10] instead of hanging indefinitely.
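The resource diagnosis above boils down to two terminals. A rough sketch, assuming a Juju 2.x controller named "controller" on machine 0 (the log filename is the conventional one on a controller machine, not something stated in this conversation):

```shell
# Terminal 1: inside the trapped debug-hooks context on the unit, fetch the
# resource by hand. A healthy controller prints a local file path; a hit of
# bug #1627127 just hangs here without returning the prompt.
resource-get kubernetes

# Terminal 2: on the controller, watch for errors while resource-get polls
# the charm store.
juju ssh -m controller 0
sudo tail -f /var/log/juju/machine-0.log
# e.g. repeated "juju.rpc ... write: broken pipe" lines, as Zic saw
```

If `resource-get` hangs rather than returning an error, that matches the defect lazyPower points out: it should fail fast instead of polling indefinitely.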
[14:10] always stuck, not returning the prompt :(
[14:11] click the "this bug affects me" link at the top, and add the detail that you're able to reproduce it deploying the bundle revision -23 (? i think?)
[14:11] that'll help anastasiamac reproduce when it comes time to verify this bug again; right now it's incomplete and there's a good amount of back and forth about how to trigger it
[14:12] need to recover my Launchpad credentials, wait :)
[14:12] Zic: and you'll get an update when it's fixed too :) because ya know, you interacted with the bug <3
[14:12] lazyPower: a repro would be awesome \o/ midnight here now so i'm clocking off, but will read the backscroll :D
[14:13] anastasiamac: aww, i didn't mean to ping you to bring you in here, my bad
[14:13] cheers and will catch up with you tomorrow
[14:13] lazyPower: u did not ping ;) but i heard it all the way from there :D
[14:23] lazyPower: added
[14:24] Zic: fantastic, thanks for that. Now we're in a situation where we have to wait :S
[14:24] Zic: correct me if i'm wrong, but this was just on the initial install of that older bundle rev, yeah?
[14:24] yup
[14:24] ok
[14:24] i know how you can work around this
[14:25] so we don't have to wait, but it won't be as clean as drag and drop
[14:25] the only special case is that I'm using manual provisioning
[14:25] in that bundle, it specifies individual releases of charms... eg: kubernetes-master-12
[14:25] (even if it's fully on AWS, because our internal system manages the AWS instances on its own...)
[14:25] if you go to http://jujucharms.com/u/containers/$charm (including revno)
[14:25] you can grab the resources for that charm in the right-hand sidebar of that page
[14:26] then once you juju deploy that bundle, you can juju attach $charm kubernetes=kubernetes.tar.gz as an example
[14:26] so you'll need to manually attach the resources, but it will get you unblocked
[14:26] :( sorry that this isn't as smooth as it could be
[14:27] https://jujucharms.com/u/containers/kubernetes-master-12/ is 404 :)
[14:29] gah
[14:29] s/-12/\/12/
[14:30] also i don't know that that's the release you need
[14:30] i was using 12 as an example
[14:30] i think you're on rev 21 of master in that bundle...
[14:34] lazyPower: I just got the 12 from my juju status
[14:35] it's the charm revision of kubernetes-master, right?
[14:35] or the revision of the charm bundle?
[14:55] Zic: the charm revision
[14:57] lazyPower: do I need to restart the deployment of canonical-kubernetes from scratch?
[14:57] Zic: you should be able to attach those resources and recycle the agent, and it should unstick the deployment
[14:57] cool
[14:58] for attaching from the CLI, what do I need? I think I'm mixing up "add-relation" and "attach" in my mind
[15:00] Zic: juju attach --help (on your workstation)
[15:01] Zic: and https://jujucharms.com/docs/2.0/developer-resources#adding-resources for reference
[15:10] lazyPower: oh, it's what I thought, I was mixing up attach vs.
relation
[15:10] thought it was a relation to do manually :)
[15:10] I'm OK with attach then :D
[15:26] lazyPower | you can grab the resources for that charm on the right hand sidebar of that page
[15:26] kubernetes.gz is not clickable :D
[15:27] Zic: from the revision specified in the bundle
[15:27] yeah
[15:27] make sure that's 1:1, i wouldn't recommend trying to use a different revision of the resource than what is published for the charm you're deploying
[15:27] eg if you have kubernetes-master-12, then grab the resources off the right sidebar from https://jujucharms.com/u/containers/kubernetes-master/12/
[15:28] yup, but how can I *grab* it? you mean download it locally, right?
[15:36] lazyPower: on https://jujucharms.com/u/containers/kubernetes-master/ (19th revision, the latest), resources are clickable and downloadable, but for my revision 12, resources are neither clickable nor downloadable
[15:36] :/ i'm not sure why that would be the case, resources are supposed to persist for the lifetime of the charm
[15:38] lazyPower: https://jujucharms.com/u/containers/kubernetes-master/12/ can you get the kubernetes.gz resource on this page?
[15:38] (I'm using Firefox)
[15:39] rick_h: is there an API hack i can use to view this charm resource and determine why there's no downloadable resource?
[15:39] Zic: the only thing i can figure is that somehow the charm got disassociated from a resource at this revision, and that's not going to be fun
[15:39] :'(
[15:39] as i have no clue what resource would go with that revision, it's quite old
[15:40] lazyPower: looking at what we're up to
[15:41] lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/meta/resources ?
[15:41] lazyPower: e.g.
https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-12/meta/resources
[15:41] rick_h: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/12/meta/resources doesn't seem to work with a revision in the url
[15:41] oh
[15:41] of course :) the format changes
[15:41] lazyPower: yea, our bad for old vs new url format mixups
[15:41] lazyPower: lurking a bit and found that https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-12/resource/kubernetes/1
[15:42] is this good?
[15:42] yeah, there's nothing there...
[15:42] it's got no fingerprint in the listing
[15:42] whoa
[15:42] what kind of wizardry is this
[15:42] :>
[15:42] rick_h: your api skills surpass mine in every way, i have no clue how you found that
[15:43] oh snap, and zic found it to boot
[15:43] i need new glasses
[15:43] lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/meta lists all the bits available
[15:43] i think that's a sign i need to take my lunch before i head into a sig meeting
[15:43] lazyPower: so that led to /meta/resources (and then the revision trick)
[15:43] lazyPower: :)
[15:45] lazyPower: is this "1" file, which seems to be a tar.gz archive containing the kube-* binaries, the kubernetes.gz you expect?
[15:46] if yes, mv 1 kubernetes.tar.gz and I will continue your steps :)
[15:46] hmm, tried a ./kubectl --version on the binary inside
[15:47] it's 1.4.0-beta.10
[15:47] not my version :(
[15:48] Zic: time to walk through the revisions until you find the resource required. i have no clue what's going on in those older charm revs :| we were kind of a guinea pig with resources
[15:49] Zic: you need 1.5.1 right?
[15:50] 1.5.3
[15:51] http://paste.ubuntu.com/24634572/
[15:51] Zic: try master rev 19
[15:51] that should have 1.5.3
[15:51] might be 18... but right around that range.
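The workaround lazyPower describes, fetching the published resource straight from the charm store API and attaching it by hand, might look like this. The revision numbers are the ones found later in this conversation, and the URLs are the 2017-era jujucharms.com API (no longer live), so this is a historical sketch rather than something to run today:

```shell
# Old-style charm store URL: name and charm revision are fused ("kubernetes-master-19"),
# then /resource/<resource-name>/<resource-revision>.
curl -o kubernetes.tar.gz \
  "https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-19/resource/kubernetes/9"

# Sanity-check what's inside before attaching (Zic ran ./kubectl --version
# on the extracted binary to confirm it was 1.5.3).
tar -tzf kubernetes.tar.gz

# Attach the resource to the deployed application so the stuck
# resource-get on the unit can complete.
juju attach kubernetes-master kubernetes=kubernetes.tar.gz
```

Note rick_h's gotcha about the two URL formats: `.../kubernetes-master-12/meta/resources` (fused) works, `.../kubernetes-master/12/meta/resources` (path-style revision) does not on the v5 meta endpoints.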
[15:52] what I don't understand is why I have rev 12 in the production cluster with kubernetes-master 1.5.3
[15:52] is the revision number incorrect in this juju status?
[15:52] Zic: did you attach a resource package post-deployment?
[15:53] nope :x
[15:53] i have no idea how this happened
[15:53] i'm just as baffled as you are :S
[15:53] this cluster on 1.5.3 is "one version" before 1.7.1
[15:53] to my knowledge nobody on the k8s team has gone back and refreshed resource revisions attached to a charm rev
[15:53] we waited so long to upgrade it because 1.7.1 needs an outage maintenance (+ some tests on our side)
[15:54] so what we published with is what should be attached to the charms
[15:54] waiiit
[15:54] Zic:
[15:54] i know why we're seeing the version mismatch now
[15:54] promulgated version vs namespace version. the promulgated charm is just a pointer to a charm rev in the namespace
[15:55] tvansteenburgh, hi, i'm trying to get that zetcd snap to work in a charm, https://github.com/cmars/charm-zetcd
[15:55] let me see if that's the case
[15:55] tvansteenburgh, but, i can't seem to connect to zetcd with zkctl
[15:55] cmars: he's afk for a bit, headed to pick up his fam
[15:56] lazyPower, ack, thanks
[15:56] just got back :)
[15:56] Zic: that's not it - https://jujucharms.com/kubernetes-master/ is not a promulgated charm. so we only point at the namespace
[15:56] cmars: does this help https://github.com/tvansteenburgh/zetcd-snaps
[15:56] Zic: yeah, i have no idea why that's the case :( i'm sorry i don't have better details.
[15:56] cmars: i only tested with the example from the upstream zetcd readme
[15:57] tvansteenburgh, ok.
might be that i'm trying to connect it to cs:~containers/etcd
[15:58] Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-beta.11", GitCommit:"4b28af1232cc52da453eb4ebe3dc001314a1f99b", GitTreeState:"clean", BuildDate:"2016-09-23T22:53:01Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
[15:58] oops
[15:58] bad pasting, let me try again...
[15:58] lazyPower: Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-beta.11", GitCommit:"4b28af1232cc52da453eb4ebe3dc001314a1f99b", GitTreeState:"clean", BuildDate:"2016-09-23T22:53:01Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
[15:58] RAH
[15:58] :'(
[15:59] * Zic needs coffee
[15:59] third try, I can do it
[15:59] lazyPower: https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master-19/resource/kubernetes/9
[15:59] lazyPower: found the 1.5.3 here
[15:59] don't ask me "why 9"
[15:59] but the /9 contains the 1.5.3 binary
[15:59] our api confuses me.
[16:00] i feel you
[16:00] :}
[16:00] lazyPower: can this displayed-version mismatch cause a significant problem when I upgrade to the latest charms?
[16:01] or is it just a display bug?
[16:01] Zic: you'll be moving to snap packaging
[16:01] we can make more guarantees there
[16:01] Zic: you'll be on a track and always at the tip of that track.
[16:02] ok
[16:02] I think I will give up the goal of building a "clone cluster" on 1.5.3 though :(
[16:03] will directly test the upgrade in prod and be *afraid* :|
[16:09] Zic: what's your scheduled time to do this upgrade?
[16:09] Zic: i can spend some time either this evening or tomorrow morning running down the resources/revisions you have deployed and get something set up so we can test this without running a science experiment in prod
[16:10] Zic: but i'm neck deep in trying to vet some code for a release today and i'm close to finishing. so i need the remainder of the day to finish this work up.
[16:11] lazyPower: we can schedule that together, I don't want to impose a date on you, as it's for my work and you are just here as community help (my company declined the Canonical offers, sadly :/)
[16:11] Zic: you're a valuable contributing member, you file bugs. i'm willing to help
[16:11] but i appreciate you being respectful of my time as well
[16:12] so let me run down the resources you need, and we'll go from there.
[16:12] Zic: i'll work off of http://paste.ubuntu.com/24634572/ and we can unwind from there
[16:14] ask me anything you need. we are in different timezones (UTC+2 for info) and I'm in the office from 10:00 to 19:00, but I can lurk on IRC from home if needed
[16:20] lazyPower: for http://paste.ubuntu.com/24634572/ for example, as the displayed charm rev may be false, if there is a way of manually retrieving the right rev, tell me :)
[16:23] Zic: /var/lib/juju/agents/unit-kubernetes-master/charm/revision
[16:23] i suspect it says "12" though
[16:24] it said... "0"
[16:24] xD
[16:24] # cat /var/lib/juju/agents/unit-kubernetes-master-0/charm/revision
[16:24] 0
[16:25] Hey guys, anyone here to field a quick question?
[16:25] can I throw something violently at the wall? :)
[16:25] If you answer my question then yes, for sure
[16:25] vlad_: huhu, sorry, that was not for you :)
[16:25] Zico: ahh ok, no worries
[16:26] So I'm deploying a PoC juju/openstack cloud and want to deploy my juju controller to a specific node... is there a way to do this? (I looked but couldn't find anything obvious)
[16:27] Also, I should clarify that I'm using maas as my machine provider
[16:28] vlad_: you can specify a machine tag and tag that machine in maas
[16:28] vlad_: I'm on manual provisioning so it's maybe incompatible with your case, but for me it's like this: juju bootstrap manual/host.name.of.the.machine.which.hosts.the.controller cdk
[16:28] vlad_: juju bootstrap --help - there's a notion of bootstrap constraints, and in that constraint list you can specify the maas tag.
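Both ways of pinning the controller to a specific machine mentioned above can be sketched briefly. The cloud name "mymaas" and the tag "juju-controller" are placeholders; Zic's manual-provider hostname is the illustrative one from the conversation:

```shell
# MAAS route (lazyPower's suggestion): tag the target node in the MAAS UI/CLI,
# e.g. with "juju-controller", then constrain bootstrap to that tag:
juju bootstrap mymaas --bootstrap-constraints "tags=juju-controller"

# Manual-provider route (Zic's setup): name the controller host directly,
# and give the controller a name ("cdk" here):
juju bootstrap manual/host.name.of.the.machine.which.hosts.the.controller cdk
```

With MAAS, `tags=` is a constraint only the MAAS provider understands, which is why it only surfaces under `juju bootstrap --help` as a bootstrap constraint rather than a general deploy flag.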
[16:29] lazyPower: thanks, that's awesome, didn't realize how deep constraints went.
[16:29] Zic: Thank you as well!
[16:34] o/ Juju world
[16:35] o/ Budgie^Smore
[16:44] \o Budgie^Smore
[17:02] Budgie^Smore: are we chatting today?
[17:03] yeah, was just pulling your number up... thought you were calling me for a min ;-)
[17:03] Budgie^Smore: hangout! link in the calendar invite
[17:03] Budgie^Smore: let me know if that doesn't work out
=== frankban is now known as frankban|afk
[19:05] Hi all. I'm new to juju, and just starting up a new cluster to test. This is an openstack "enterprise" local configuration.
[19:05] Am I able to provision the infrastructure via the online tool? Or am I not doing it right?
[19:06] I can't quite tell if this can be used as a "private" tool. I can register juju-cli as a cloud for my local maas configuration. So what are my limitations? :-/
[19:07] xModeMunx: so you can use the Juju CLI to bootstrap to a "private" openstack, or a local maas
[19:08] xModeMunx: from there you get a juju gui embedded into the Juju controller that you bootstrap, so you get a bit of the online experience but self-contained
[19:08] rick_h: I have got to that stage so far. So, I've bootstrapped, and via the cli I can poll some basic things to test. So, onward to "configuring" my openstack deployment: how can I go about this?
[19:08] xModeMunx: oic, so you've used Juju to deploy openstack on top of MAAS?
[19:09] xModeMunx: to configure the openstack you can use Juju to provide some application-level configuration for each OpenStack service, or go to OpenStack (the horizon dashboard for instance) to manage the OpenStack from there.
[19:09] rick_h: I haven't /yet/ deployed anything via juju. I have literally installed maas, added 2x "to-be" compute nodes, and 1x "to-be" controller node.
[19:09] beisner: and others from the team that managed the openstack work would be able to help drop hints as to how to configure the different bits you're interested in.
[19:10] xModeMunx: oic, so it might be useful to try conjure-up to do a sample/test install. It kind of guides/walks through the process a bit more than a manual deploy
[19:10] I have previously built OpenStack from scratch, but having seen this tool, it seems I can alleviate much of the effort.
[19:10] rick_h: ah, awesome, I'll give that a go now :-)
[19:11] https://www.ubuntu.com/download/cloud/conjure-up - check out https://docs.ubuntu.com/conjure-up/en/#getting-started
[19:11] and I'm here as well to help answer questions
[19:11] xModeMunx: ^
[19:12] stokachu: Wow, you guys are probably the friendliest irc people ever :-D
[19:12] xModeMunx: cool, yeah, hit up stokachu and others if you hit anything
[19:13] Much appreciated, guys. Let me go have a read and see where it leads. Is this considered production-ready, btw? Because if it proves useful, it may make its way into my next "real" work lab deployment.
[19:17] xModeMunx: yes, this is the same stack of tools we use to support our paying OpenStack customers.
[19:17] Thanks, Rick. Are you part of the Canonical support guys?
[19:17] Just they get a nice phone number to call and some PDF files to go with it :)
[19:18] xModeMunx: no, we're more the dev engineering side.
[19:18] * rick_h has to run, biab
[19:18] ah, I see. The "real" men ;-)
[19:23] ```Set automatic aliases for snap "conjure-up" (cannot enable alias "juju" for "conjure-up", it conflicts with the command namespace of installed snap "juju")```
[19:23] Seems the first hurdle has presented itself.
[19:23] xModeMunx: sudo snap remove juju
[19:24] xModeMunx: conjure-up provides its own
[19:24] stokachu: Ahh. Even this snap stuff is new to me. I must be getting old :-(
[19:24] xModeMunx: I'm right there with you
[19:58] Sandbox2016!
[19:59] well
[20:10] Am I correct to view conjure-up as a terminal-based alternative to the online juju architect design tool?
[20:11] xModeMunx: to some extent; we go further and provide you with helpful guidance to configure your deployment
[20:11] we can also make adjustments for deploying openstack and kubernetes on a single local machine
[20:11] stokachu: I see. Thanks for the clarity.
[20:12] bdx: are you handing out passwords again?
[20:14] the problem with being overzealous to log in to your pc before the screen turns on, and you realize it wasn't sleeping and your irc window had the context
[20:14] * bdx weeping
[20:16] lol
[20:16] I've done that before too
[20:17] * lazyPower starts poke-checking known accounts for bdx
[20:17] * bdx wipes all remnants of known string
[20:18] good plan :) I tease anyway ;)
[20:23] lazyPower: it's cool ... keep it up ... as long as you enlighten me as to what this "CAAS" thing is while you are at it :)
[20:26] crickets
[20:26] :)
[20:29] bdx: you causing trouble :P
[20:35] bdx: it's a WIP, that's what it is :)
[20:35] or an experiment? i forget which
[20:35] maybe both
[20:41] if/when this conjure install finishes, and it works, I will consider it quite magical.
[21:09] I think it stalled. I ctrl+c'd. Maybe I shouldn't have :-|
[21:17] depending on your hardware it could take up to an hour
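The snap-alias conflict xModeMunx hit and stokachu's fix can be summarized as a short sequence. This is a sketch of the 2017-era flow, assuming snapd is installed and that conjure-up is still published as a classic snap:

```shell
# conjure-up bundles its own juju, so the standalone juju snap's "juju"
# alias conflicts with it and must be removed first:
sudo snap remove juju

# Install and launch conjure-up (classic confinement was required at the time):
sudo snap install conjure-up --classic
conjure-up   # then pick the OpenStack spell from the menu
```

As the final exchange notes, a full OpenStack spell can take up to an hour depending on hardware, so an apparently stalled install may just be slow.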