/srv/irclogs.ubuntu.com/2017/05/17/#juju.txt

=== xavpaice_ is now known as xavpaice
=== rogpeppe1 is now known as rogpeppe
=== frankban|afk is now known as frankban
[07:30] <dakj> icey: hi icey, are you here?
[07:32] <icey> Indeed dakj
[07:36] <dakj> icey: do you have time to help me to resolve the issue with Ceph-Mon?
[07:36] <icey> dakj: it looked like the mons were clustered happily, except one was showing stale status
[07:36] <icey> dakj: did updating your ceph-osd configuration get disks working?
[07:45] <dakj> icey: on ceph-osd in old-devices there is /dev/vdb, while before the commit it was /dev/sdb. Units ceph-osd/12, ceph-osd/13, and ceph-osd/14 are blocked. On each node fdisk reports this: https://paste.ubuntu.com/24591531/. The juju status is here: https://paste.ubuntu.com/24591542/
[07:46] <icey> dakj: can you log into one of the ceph-mon units (`juju ssh ceph-mon/12`) and run `sudo ceph -s` for me?
[07:55] <Alex_____> team, i am getting this while running a yum command http://pastebin.ubuntu.com/24588230/ in centos
[07:55] <Alex_____> any idea how to resolve this
[08:01] <anrah> Alex_____: are you bootstrapping a controller?
[08:01] <Alex_____> @anrah
[08:02] <Alex_____> i was trying to run a yum command and it's giving some error
[08:02] <Alex_____> anrah:
[08:03] <Alex_____> anrah: i am getting this inside the vm
[08:05] <anrah> Alex_____: I mean, how is this related to juju? It seems like you are using Vagrant? You must configure your Vagrant box to have access to the Internet
[08:07] <Alex_____> anrah: i just did the vagrant config for testing purposes.. and by default if i go inside the centos vm box i am not able to run the yum command itself.. can you help me on this
[08:07] <dakj> icey: here it is https://paste.ubuntu.com/24591604/
[08:08] <icey> dakj: so that answers part of it, the /12 machine is definitely not clustered :) could you also run that from one of the other 2 machines (ceph-mon/13 for example)?
[08:11] <dakj> Icey: here it is https://paste.ubuntu.com/24591622/
[08:12] <icey> dakj: can your ceph* nodes talk to each other on the network? it looks like it has failed to bring one of the mon nodes up, and like the OSD nodes have never managed to register themselves with the mons
[08:27] <jamespage> icey, dakj: has the path between the units been verified for network MTU configuration etc... using iperf?
[08:27] <jamespage> feels like the mon is trying to bootstrap, but failing due to some external reason
[08:27] <jamespage> suspicion would be packet frag but I may be wrong
[08:27] <icey> jamespage: that's what it looks like to me, it seems like 2 of the mons were fine (and successful); one of the mons and the OSDs are left out in the cold
[08:27] <dakj> Icey: I'm pinging between the lxd nodes. All nodes respond
[08:28] <jamespage> dakj: ping won't tell you the right things
[08:28] <jamespage> use iperf with the mtu flag set
[08:28] <jamespage> it will give you perf data and validate the mtu settings
[08:29] <dakj> Icey: here is the paste of the ping between the lxc machines https://paste.ubuntu.com/24591663/
[08:32] <kjackal> Good morning juju world!
[08:33] <dakj> Jamespage: I can try that, but if the issue is MTU I think the other lxc machines would have the same issue too.
[08:33] <jamespage> maybe
[08:33] <jamespage> lets see
[08:33] <dakj> James-age: where must I install that?
[08:35] <dakj> Ice, James-age: now the juju status situation is changing https://paste.ubuntu.com/24591673/
[08:35] <dakj> Now all ceph-mon and ceph-osd units are blocked
[08:40] <jamespage> dakj: you'll need to install iperf on the machines you want to test connectivity between
[08:41] <jamespage> dakj: and then on one run `iperf -s -m` and from another do `iperf -c <IP of first machine> -m`
[08:41] <jamespage> and then vice versa
[08:41] <jamespage> network problems can be uni-directional
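A minimal sketch of the test jamespage describes, between two units (the IP is a placeholder; run it in both directions since problems can be one-way):

    # on machine A: start an iperf server, with -m to report TCP MSS/MTU info
    sudo apt install -y iperf
    iperf -s -m

    # on machine B: connect to A and report the same
    sudo apt install -y iperf
    iperf -c <IP-of-A> -m

    # then swap roles: server on B, client on A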
[08:42] <dakj> I have to install that on 2 or more LXC machines and test the MTU, right?
[08:44] <dakj> James-age & icey: the juju status has returned to the original status (https://paste.ubuntu.com/24591699/)
[08:50] <dakj> James-age, icey: the result between the VM with MAAS and the LXC vm of openstack-dashboard/4 is here https://paste.ubuntu.com/24591716/
[08:55] <dakj> Here is the one between openstack-dashboard/4 and ceph-mon/13 https://paste.ubuntu.com/24591721/
[09:03] <dakj> Ice, Jamespage: from the results I don't think the issue is about that..... I hope I read them correctly.
[09:14] <kklimonda> is there a way to override how juju installs lxd and lxcfs?
[09:17] <dakj> Icey, Jamespage: any idea?
=== salmankhan1 is now known as salmankhan
[09:30] <jamespage> dakj: not sure - can you do the tests between the ceph-mon units please
[09:48] <dakj> Jamespage: no response between ceph-mon/12 and ceph-mon/13 or /14; between /13 and /14 it's fine https://paste.ubuntu.com/24591867/
=== vds_ is now known as vds
[09:49] <jamespage> dakj: all three machines need to be able to communicate with each other
[09:49] <dakj> I think the issue is with /12 because it is in maintenance
[09:50] <dakj> Then osd/12, osd/13 and osd/14 are blocked
[09:51] <dakj> Juju status https://paste.ubuntu.com/24591878/
[10:06] <anrah> Is there a way for amulet to load a charm configuration from a file?
[10:06] <kklimonda> how does jujucharms.com versioning work? is it set in stone, and are there guarantees that a) a charm will never be deleted and b) no two releases will have the same version? If so, does this hold for charms not in the "main" namespace, for example cs:~sdn-charmers/keystone-0? If not, what's the correct way to guarantee a charm version over the life of the deployment?
[10:11] <dakj> Jamespage: I was thinking: what if the issue is on ceph-osd/12, ceph-osd/13, and ceph-osd/14? Why is their status blocked in juju while the gui shows no error?
[10:11] <dakj> that's my hypothesis
[11:08] <stub> kklimonda: It is guaranteed that no two releases will have the same version. It is possible for someone to revoke access to a charm, effectively deleting it (and maybe you can really delete it)
[11:08] <stub> kklimonda: This goes for both namespaced charms and the top level namespace
[11:09] <stub> kklimonda: If you want to pin a particular version, deploy cs:~sdn-charmers/keystone-0 and never run upgrade-charm
[11:10] <stub> kklimonda: If you want to upgrade to a particular version, use the --switch argument to juju upgrade-charm
[11:10] <kklimonda> stub: does keystone-0 always point to the same version of the charm, assuming the maintainer doesn't do anything stupid like deleting it and uploading it again with different code?
[11:11] <stub> kklimonda: But most deployments want latest stable, which would be cs:~sdn-charmers/keystone (ie. no revision, default channel)
[11:11] <stub> kklimonda: It will always point to the same version of the charm. If the maintainer deleted it and uploaded it again, it would get a new revision
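A short sketch of the pinning flow stub describes (the revision numbers are illustrative):

    # deploy a pinned revision; it never changes out from under you
    juju deploy cs:~sdn-charmers/keystone-0

    # later, move deliberately to another tested revision
    juju upgrade-charm keystone --switch cs:~sdn-charmers/keystone-5

    # omitting the revision tracks the default channel (latest stable) instead
    juju deploy cs:~sdn-charmers/keystone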
[11:12] <kklimonda> stub: I assume most larger deployments want a tested version of the charm, to avoid any drift over a longer period
[11:13] <kklimonda> especially if they have more than one deployment (for example testing and prod)
[11:14] <stub> If they are doing their own testing, yes. They will deploy the last known good revision.
[11:14] <kklimonda> my only concern is that someone can revoke access to the charm; that sounds like an npm left-pad nightmare.
[11:15] <kklimonda> I'll have to reconsider maintaining a local copy of all charms
[11:15] <stub> I don't think it would happen in the curated top level namespace, and it may not be possible.
[11:15] <stub> The ecosystem team or charm store team would know more
[11:15] <kklimonda> mhm
[11:16] <stub> I think mortals can only control access to their own namespace. The charms promulgated to the top level namespace are maintained by the ~charmers team.
[11:17] <stub> People in the US timezone will know more
[11:18] <stub> Nothing to stop you maintaining your own forks though, if that is what you prefer.
[11:19] <stub> You can even use the charm store to do it ;)
[11:20] <stub> We actually deploy most of our stuff from a local copy (which we pull from the charmstore), because we need to inject site-specific hooks before we deploy.
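A sketch of that local-copy workflow, assuming the charm-tools `charm pull` command (paths and revision are illustrative):

    # pull a known-good revision from the charm store into a local directory
    charm pull cs:~sdn-charmers/keystone-0 ./keystone

    # ... inject site-specific hooks into ./keystone here ...

    # deploy from the local copy instead of the store
    juju deploy ./keystone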
[11:23] <dakj> Jamespage: is there anything else I can do to resolve that? thanks
[11:24] <jamespage> dakj: tbh without understanding why you're hitting the problem you have, I don't have a step forward for you atm
[11:24] <jamespage> stub: actually "The charms promulgated to the top level namespace are maintained by the ~charmers team" is not 100% true any longer
[11:25] <jamespage> it's possible for any team to own promulgated charms (see ~openstack-charmers or ~ganglia-charmers for examples)
[11:26] <stub> I've only worked with my local namespaces, and had them magically promulgated to the top level.
[11:26] <jamespage> dakj: you could strace the ceph-mon on the unit that's failing to join - might give a sniff of what's up
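A sketch of that strace, assuming the mon daemon is running on the unit (the unit name and pid lookup are illustrative):

    juju ssh ceph-mon/12
    # attach to the running mon and follow forks; watching only the network
    # syscalls can hint at fragmentation/connectivity trouble
    sudo strace -f -e trace=network -p $(pidof ceph-mon)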
[11:28] <jamespage> dakj: hmm just noticed this
[11:28] <jamespage> [  4]  0.0-19.1 sec   525 MBytes   231 Mbits/sec
[11:28] <jamespage> that appears out of sync with the other two metrics you recorded
[11:32] <dakj> James-age: do you think it's a network issue?
[11:34] <dakj> And why do the other lxc vms work well??? the issue is only on the lxc machines that have ceph-mon........
[12:32] <dakj> James-age: do you know the credentials for logging in to the Openstack dashboard?
[12:46] <jamespage> dakj: you need to prefix messages to me with 'jamespage' otherwise I don't get notifications
[12:46] <jamespage> dakj: if you're working from the openstack-base bundle then the username and password are
[12:47] <jamespage> admin/openstack
[12:48] <dakj> jamespage: ok, sorry, it's the autocorrection that changes your nick.
[12:51] <jamespage> dakj: that's entirely likely
[12:53] <dakj> jamespage: I sent you a private message
[12:53] <jamespage> dakj: your deployment is still not complete; I'm pretty sure that you have some sort of network issue in your virtual lab setup, but it's hard to be specific as to what exactly that is
[12:53] <jamespage> the fact that hooks are still trying to run to complete the deployment sniffs like packet fragmentation type issues
[12:55] <jamespage> dakj: you might want to try to validate your virtual lab independently of trying to deploy openstack itself
[12:56] <jamespage> dakj: https://jujucharms.com/u/admcleod/magpie is useful here - you can deploy that charm to both physical machines and to lxd containers on the machines, and it will check and report on performance and mtu configuration/mismatches
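A sketch of such a magpie check, based on the charm page linked above (unit counts and placement are illustrative):

    # deploy magpie onto a couple of physical machines, plus lxd containers on them
    juju deploy cs:~admcleod/magpie -n 2
    juju add-unit magpie --to lxd:0
    juju add-unit magpie --to lxd:1

    # each unit's status line reports iperf bandwidth and any MTU mismatch
    juju status magpie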
[12:58] <jamespage> dakj: fwiw this is what I do on hardware prior to even trying to deploy openstack
[12:59] <jamespage> as it flushes out issues that are hard to diagnose later when you see things behaving oddly
[12:59] <jamespage> dakj: ultimately I think MAAS will grow this type of feature, but for now it's easy to do with a charm
[13:00] <jamespage> dakj: in that last status output you pasted - some units had completely disappeared
[13:02] <cnf> \o jamespage
[13:02] <cnf> i am in london atm
[13:02] <jamespage> hey cnf
[13:02] <jamespage> enjoy the city ;)
[13:03] <cnf> eh, meetings / demos all day
[13:12] <dakj> jamespage: my lab is based on an IBM server with ESX as hypervisor, connected directly to the firewall via a switch. On ESX I've then created the whole environment - MAAS, Landscape, Juju and Openstack - all of them are vms.
[13:17] <dakj> MAAS has been configured with 2 vnets (https://paste.ubuntu.com/24592671/): the first one is for the DNS/DHCP service, the other one is public, as suggested here https://jujucharms.com/openstack-base/
[13:18] <jamespage> dakj: all one physical server?
[13:19] <dakj> yes
[13:19] <jamespage> dakj: what type of vmware switch are you using?
[13:20] <jwd> hello
[13:20] <dakj> I've an IBM System x3650 M4 with 64GB of RAM and 4TB of storage
[13:21] <jwd> just doing my first experiments with juju here
[13:23] <rick_h> jwd: welcome
[13:23] <randomhack> having a problem bootstrapping to openstack if the keystone endpoint doesn't fit this pattern https://host:port/version - my keystone is on https://host/keystone/version
[13:23] <jwd> rhx
[13:23] <jwd> thx
[13:23] <randomhack> ERROR cannot set config: cannot create a client: invalid major version number /keystone/v3: strconv.Atoi: parsing "/keystone/v3": invalid syntax
[13:24] <dakj> jamespage: yes, it's only one physical server (an IBM System x3650 M4 with 64GB of RAM and 4TB of storage)
[13:24] <jwd> from watching the chat here most ppl seem to use it to deploy an openstack environment?
[13:25] <jamespage> dakj: yeah - I was really after the vmware virtual switch type and configuration being used
[13:26] <jamespage> jwd: agreed, a lot of the conversation would lead you towards that conclusion, but it does a lot of other things as well
[13:27] <dakj> jamespage: I've 1 vSwitch with 2 vnets (10.20.81.0 'n 10.20.82.0), both with Promiscuous mode activated
[13:28] <jamespage> dakj: that last nugget was what I was looking for
[13:28] <jamespage> thanks for confirming
[13:28] * jamespage remembered something from way back about having to have that enabled, otherwise the LXD containers never get network access
[13:46] <SimonKLB> is it possible to register a localhost/lxd controller on a remote client?
[13:47] <SimonKLB> im able to access the gui, but trying to register gives me:
[13:47] <SimonKLB> ERROR unable to connect to API: x509: certificate is valid for anything, juju-apiserver, juju-mongodb, localhost, not [dns]
[13:51] <rick_h> SimonKLB: no, not at this time. There's work going on to build on lxd to make it more cloud-like but it's in flight
[13:51] <SimonKLB> rick_h: got it! thanks
[13:59] <lazyPower> #TIL juju status now reports the progress of a lxd image import
[13:59] <lazyPower> \o/
[14:31] <dakj> jamespage: but the MAAS subnets have to be configured this way: eth0 (https://pasteboard.co/7oi5M27zS.png) and eth1 (https://pasteboard.co/7oitlHdJm.png)
[14:31] <freyes> hi marcoceppi, could you take a look at this PR when you have some time? https://github.com/marcoceppi/charm-ubuntu/pull/5
[14:32] <magicaltrout> lazyPower: should i be able to update from CDK deployed a few months ago to current?
[14:32] <magicaltrout> its not a trick question, just want to know if i can run juju update-charm or whatever
[14:36] <jwd> can a controller be deleted somehow?
[14:38] <lazyPower> magicaltrout: yes, there are upgrade instructions in the k8s docs on how to do an in-place upgrade
[14:38] <jwd> ah found it
[14:39] <magicaltrout> oh yeah lazyPower
[14:40] <magicaltrout> because the next level up said "ubuntu" i figured it was ubuntu and not juju
[14:40] <magicaltrout> iykwim
[14:40] <lazyPower> magicaltrout: https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
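Those docs boil down to running juju upgrade-charm per application; a sketch, assuming the standard CDK application names:

    # upgrade each CDK application in place, one at a time
    juju upgrade-charm etcd
    juju upgrade-charm kubernetes-master
    juju upgrade-charm kubernetes-worker

    # watch the hooks run until everything settles back to active/idle
    juju status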
[14:40] <magicaltrout> yeah i see it
[14:41] <dokerya> Hi
[14:45] <jwd> did i already mention that i like what i see so far ;-)
[14:54] <SimonKLB> jwd: it sure is awesome :)
[14:55] <jwd> yeah this will help me a lot to speed up my building and testing processes
[14:56] <SimonKLB> jwd: are you deploying something from the charmstore or are you planning on building charms for your own applications?
[14:56] <jwd> i will build my own soon
[14:57] <SimonKLB> jwd: cool! good luck :)
[14:57] <jwd> we run a multi-tier stack built of a lot of components, and i've been needing to remodel all of that.
[14:58] <SimonKLB> jwd: then juju will be perfect for you :)
[14:58] <jwd> i will start with some of the charms i've seen and model around those
[14:58] <SimonKLB> sounds like a great approach
[14:58] <jwd> so many components to rethink hehe
[15:00] <SimonKLB> it might be quite a bit of work to get it all charmed, but in the end im sure that it's going to be extremely rewarding :)
[15:00] <jwd> i did all we have so far by hand and with ansible roles. we run around 80 vms in an OpenNebula cloud atm :-)
[15:01] <jwd> so anything helping me to model that faster is a benefit
[15:01] <jwd> i just need to think about how to get that into a production-ready setup asap
[15:02] <SimonKLB> jwd: i actually think it's possible to re-use a lot of the work you've done with ansible already
[15:02] <SimonKLB> someone else can step in and correct me if im wrong
[15:02] <jwd> i am sure i can reuse my work.
[15:03] <SimonKLB> jwd: https://jujucharms.com/docs/2.0/about-juju :)
[15:03] <jwd> i started with this yesterday, so i will need to learn a bit about the basics
[15:05] <jwd> as usual, a lot of new stuff to learn :-)
[15:19] <jwd> hehe got my development notebook to its limits :-)
[15:21] <jwd> guess it's time to claim some funds for a bigger lab environment
[15:35] <Zic> lazyPower: hi, hey, a simple question today: what precisely is "GA"? I saw this many times in Ubuntu Insights concerning CDK
[15:35] <Zic> it seems to be a development acronym that I don't recognize in English :>
[15:36] <lazyPower> Zic: General availability (GA) is the marketing stage at which all necessary commercialization activities have been completed and a software product is available for purchase, depending, however, on language, region, electronic vs. media availability.
[15:37] <Zic> oh, ok :)
[15:37] <Zic> thanks
[15:37] <lazyPower> np
[16:57] <SaMnCo> magicaltrout: have a look at my last post: https://goo.gl/22invt and jump to the conclusion ;)
[16:58] <lazyPower> SaMnCo: your post is already out of date :) that edge etcd charm is now in stable <3
[16:58] <SaMnCo> ouch :D
[16:58] <SaMnCo> fixing...
[16:59] <SaMnCo> fixed
[17:00] <lazyPower> <3 like a boss sir. Thanks for keeping the world in the loop that we're one of, if not the, best solution to get moving with GPUs
[17:00] <SaMnCo> oh yeah :D
[17:01] <SaMnCo> I think the best actually. Really not easy with other stuff, as you need to prep the drivers from cloud-init, which means adding logic to identify if a node has GPUs or not...
[17:01] <lazyPower> :) i was allowing room for other opinions, no matter how iffy they might be :D
[17:01] <lazyPower> i'm clearly biased
[17:02] <SaMnCo> me too, but I've been testing a few things to understand how our UX differs from others, and I really really really like how the GPU stuff comes in
[17:02] <SaMnCo> this is from my most objective self, so be proud
[17:03] <SaMnCo> I wouldn't blog about it otherwise; opinions are my own on medium
[17:04] <lazyPower> <3 it's taken a village but the effort has certainly been worth it
[17:04] <Budgie^Smore> o/ juju world
[17:05] <lazyPower> \o Budgie^Smore
[17:09] <Budgie^Smore> are we having fun yet lazyPower?
[17:12] <lazyPower> Budgie^Smore: yeah, i'm gutting TLS key authentication and replacing it with basic-auth/token-auth w/ pass/token rotation.
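For context, a sketch of what Kubernetes static token auth looked like at the time (the file path and token are illustrative):

    # known_tokens.csv -- one line per credential: token,user,uid,"group1,group2"
    echo 'abc123exampletoken,admin,admin,"system:masters"' | \
        sudo tee /srv/kubernetes/known_tokens.csv

    # the apiserver is then started with:
    #   --token-auth-file=/srv/kubernetes/known_tokens.csv
    # rotating a token means rewriting this file and restarting the apiserver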
[17:12] <lazyPower> how about yourself?
[17:14] <Budgie^Smore> lazyPower oh nice! so SSO should be easier to implement then? ... waiting for feedback from the interview yesterday, prepping for another tomorrow and a phone screen today
[17:14] <lazyPower> Budgie^Smore: well it's more like, i skipped all the authentication/authorization steps and tried to do sso without having any of those primitives in place, and then got mad when it didn't work
[17:15] <Budgie^Smore> lazyPower ain't that always the way ;-)
[17:15] <lazyPower> i had some colleagues course-correct me and check facts, and we're now rebuilding that vector from the ground up, because we made some pretty obtuse assumptions when we first landed our auth model
[17:15] <Budgie^Smore> lazyPower "oh this should be easy... damn it! why isn't this working?!?!"
[17:17] <Budgie^Smore> lazyPower well you know what they say about assuming anything ;-)
[17:17] <lazyPower> it makes us all grow in the end?
Budgie^SmorelazyPower "assume = to make an 'ass' out of 'u' and 'me'." - http://www.urbandictionary.com/define.php?term=Assume17:22
lazyPowerBudgie^Smore: language! :P17:25
Budgie^SmorelazyPower sorry is 'arse' more acceptable ;-)17:26
=== frankban is now known as frankban|afk
[17:40] <magicaltrout> lazyPower: Waiting for kube-system pods to start
[17:40] <magicaltrout> actually might just be slow, standby
[17:40] <magicaltrout> sweet
[17:40] <magicaltrout> forget it
[17:41] <magicaltrout> upgrade works amazingly
[17:42] <lazyPower> magicaltrout: tweet that :)
[17:44] <magicaltrout> shipped
[17:57] <lazyPower> magicaltrout: <3
[18:27] <jwd> is there a way to disable the automatic updates when a charm creates a node?
[18:30] <jrwren> jwd: a charm could have the side effect of disabling the underlying ubuntu unattended-upgrades, but there is no built-in juju way; a charm would need that, preferably as a non-default config option.
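A sketch of how a charm (or an operator, by hand) could do that, assuming the stock Ubuntu unattended-upgrades configuration:

    # on the unit (e.g. via `juju ssh <unit>`), turn the periodic apt jobs off
    printf 'APT::Periodic::Update-Package-Lists "0";\nAPT::Periodic::Unattended-Upgrade "0";\n' \
        | sudo tee /etc/apt/apt.conf.d/20auto-upgrades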
[18:30] <jwd> oki
[18:33] <jwd> so much to learn :-)
[18:56] <Budgie^Smore> wow my kernel upgrade knowledge is way out of date! so how does juju handle livepatch?
[18:57] <rick_h> Budgie^Smore: it doesn't, since you have to enable it and it has to be associated to your account?
[18:59] <Budgie^Smore> oh I get that part, but then there are system-level things that need to happen. I suppose that could be a custom os-level image... still pretty interesting tech, wonder how it performs against a 4.x kernel no-reboot upgrade
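For reference, enabling livepatch on a machine at the time was roughly this (the token comes from your Ubuntu One account; placeholder here):

    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable <your-livepatch-token>
    canonical-livepatch status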
[19:01] <Budgie^Smore> just goes to show how long it has been since I researched that problem though lol ;-)
[19:37] <umbSublime> Is there a node limit for a self-hosted MaaS with juju to deploy openstack?
=== dpb1_ is now known as dpb1
[19:47] <rick_h> umbSublime: not really, it's up to the scaling of the maas/juju controller to handle the load.
[19:47] <rick_h> umbSublime: what are you looking for?
[19:47] <umbSublime> I think I got confused by the 10-node free limit for Autopilot
=== tvansteenburgh1 is now known as tvansteenburgh
