/srv/irclogs.ubuntu.com/2014/06/23/#maas.txt

=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== CyberJacob is now known as CyberJacob|Away
[07:36] <bigjools> rvba: fancy a quick review? :)
[07:36] <bigjools> https://code.launchpad.net/~julian-edwards/maas/cname-job-for-staticip/+merge/224063
[07:36] <bigjools> not bug
[07:36] <bigjools> big*
[07:36] <rvba> bigjools: sure
[07:37] <bigjools> rvba: FTR, the imports at the top were changed to avoid the dreaded circular import
[09:49] <mfa298> I've been experimenting with maas and juju and seem to have an issue where maas is remembering old IP addresses of nodes which are no longer valid.
[09:49] <mfa298> has anyone seen that before and/or know of an easy fix? (I've tried deleting the nodes and recommissioning them but it still has the old data.)
=== Solution-X is now known as Solution-X|AFK
=== vladk|offline is now known as vladk
[10:28] <jtv> mfa298: that should only be possible if the dhcpd still has the old leases.
[10:37] <mfa298> I did initially have maas managing DHCP and DNS but then changed it so that DHCP+DNS were done elsewhere.
[10:38] <mfa298> I did spot the maas dhcpd.leases file which does list some early nodes with fixed addresses.
[10:39] <mfa298> it seems slightly odd that it's still telling things those addresses when it's no longer providing DHCP + DNS.
[10:40] <jtv> Providing DHCP is the only way in which maas finds out about those leases.
[10:40] <jtv> So when you disable that, it stops hearing updates, but it doesn't know how you've changed things.
[10:41] <mfa298> surely it should know that I've changed things as I've changed the dropdown in maas from "Manage DHCP and DNS" to "Unmanaged"
[10:41] <jtv> Yes, but it doesn't know what happens to the nodes' addresses after that.
=== roadmr is now known as roadmr_afk
[10:42] <jtv> (We emphatically recommend letting maas manage DHCP on the nodes' network)
[10:43] <mfa298> it would seem better that it then forgets the addresses it thinks it knows about rather than giving out wrong info (I had DHCP + DNS set up with dynamic DNS so resolution by name did work, but I had a juju deploy process hang for the whole weekend as it was trying to use outdated IP information.)
[10:45] <mfa298> I've now spotted the setting I had missed so I've gone back to having maas manage DHCP + DNS for this test cluster to see if that works better (I'd missed the upstream DNS setting so things weren't deploying originally)
[10:45] <jtv> That does sound sensible.  You can probably work around it for now by replacing that leases file with a blank one, and waiting for the regular update cycle to parse the new file.
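jtv's blank-leases workaround can be sketched as below. The path given in the comment is the MAAS 1.x default (an assumption; verify on your install), and the MAAS-managed dhcpd should be stopped while the file is blanked. The snippet demonstrates the blanking step itself on a scratch copy:

```shell
# On a real region controller the file is (assumption, MAAS 1.x default):
#   /var/lib/maas/dhcp/dhcpd.leases
# and the MAAS-managed dhcpd should be stopped before editing it.
# Demonstrated here on a scratch file:
leases=$(mktemp)
printf 'lease 192.168.1.10 { binding state active; }\n' > "$leases"
: > "$leases"        # replace the contents with a blank, as jtv suggests
wc -c < "$leases"    # prints 0 once blanked
```

After blanking the real file and restarting dhcpd, MAAS's next parse of the leases file sees no stale entries.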
=== roadmr_afk is now known as roadmr
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[14:14] <loki27_> Hey, I have set up MAAS and added 4 nodes; now I have installed juju-core on the MAAS master node and configured it to connect to my MAAS instance
[14:14] <loki27_> now when I run bootstrap, it takes ages and I get "Attempting to connect to ..."
[14:18] <blake_r> loki27_: are you using the fast-path installer?
[14:18] <loki27_> No
[14:18] <loki27_> All set to the default installer
[14:19] <blake_r> loki27_: fast-path is the default method, and the preferred method
[14:19] <blake_r> loki27_: the default installer takes a very long time, as it has to go through the whole d-i process and download each package from the archive
[14:19] <loki27_> well I have "Mark node as using default installer" OR "Mark node as using fast-path installer"
[14:19] <blake_r> loki27_: juju can even time out waiting for a default node to finish installing
[14:20] <blake_r> loki27_: mark all nodes with the fast-path installer
[14:20] <loki27_> that's what I noticed, and I'm deploying on slow nodes (Core 2 Duo) test lab..
[14:22] <loki27_> nice ..!
[14:22] <loki27_> thanks blake_r
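The UI action blake_r refers to could also be driven from the MAAS 1.x CLI, which in that era selected the fast-path (curtin) installer via a node tag. This is only a sketch under assumptions: the profile name "mymaas", the tag name `use-fastpath-installer` (as MAAS releases of that period recognised it), and the system_id are all placeholders to adapt:

```shell
# Create the tag MAAS 1.x treated as "use the fast-path installer"
# (tag name is an assumption; profile "mymaas" is a placeholder)
maas mymaas tags new name=use-fastpath-installer

# Attach the tag to a node (system_id "node-abc123" is hypothetical)
maas mymaas tag update-nodes use-fastpath-installer add=node-abc123
```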
[14:48] <loki27_> I'm following https://help.ubuntu.com/community/UbuntuCloudInfrastructure ; now I have deployed all charms and added relations, exposed openstack-dashboard and nova-cloud-controller
[14:48] <loki27_> when I run juju status openstack-dashboard I get "server: 409 CONFLICT (No matching node is available.)"
[14:52] <jtv> loki27_: that means there's probably a constraint in there that excludes all your remaining nodes.
[14:52] <loki27_> juju get-constraints returns nothing; the dashboard shows the status of all four nodes as "Allocated to david"
[14:53] <jtv> Might happen when units go wrong, e.g. if you specify "I want 1,000 memory" thinking it's kilobytes but the number gets interpreted as megabytes.
[14:53] <jtv> If all four nodes are allocated to david, then there are simply no available nodes to give to you.
[14:53] <loki27_> Right now I did not specify anything other than the default values from the tutorial (just trying to learn)
[14:54] <loki27_> jtv: I don't understand how I should determine how many nodes are required to deploy a standard OpenStack infrastructure with juju
[14:55] <loki27_> They were all in the "Ready" state before; I just bootstrapped and deployed all the charms required for OpenStack
[14:55] <jtv> Then I guess that charm eats up more than 3 machines.  kirkland, maybe you can help with that?
[14:56] <blake_r> loki27_: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[14:56] <blake_r> loki27_: that might help
[14:56] <loki27_> Well I have 1 box dedicated to MAAS + Juju, and I added 4 physical nodes for the deployment
[14:56] <blake_r> loki27_: you need more than 4 to deploy openstack unless you are using lxc containers as well
[14:56] <kirkland> loki27_: I have a bundle that I deploy all day, every day, a really nice OpenStack Icehouse, which uses 5 nodes
[14:57] * marcoceppi laments using an IRC nick and domain name that match
[14:57] <loki27_> kirkland: ok so I can kick in one more node and that should be enough? Or is that bundle you are using available somewhere? ;)
[14:57] <jtv> So glad I didn't go with "the" as my IRC nick
[14:58] <roadmr> loki27_: if you're following that tutorial to the letter, keep in mind that unless told otherwise, juju will deploy one service per node. So 1) maas controller, 2) juju bootstrap, 3) mysql, 4) rabbitmq, 5) keystone, 6) nova-cloud-controller, 7) nova-volume, 8) nova-compute, 9) glance, 10) openstack-dashboard
[14:58] <roadmr> loki27_: you'd need 10 nodes if you want to blindly "juju deploy" all the needed charms
[14:58] <loki27_> ouch, I don't have anywhere close to that budget for an openstack deployment right now :P
[14:59] <loki27_> I was guessing 4 nodes was the absolute minimum for an HA environment with quorum and fail-over controllers
[14:59] <roadmr> loki27_: well you *can* deploy on fewer nodes, but as I said, you have to explicitly deploy to specific machines; if you didn't, juju would expect you to have at least 10 nodes
[14:59] <loki27_> And that is why I am trying to deploy on 4 nodes right now ;)
[15:00] <roadmr> loki27_: ok, it can probably be done; I'm just looking at the likely reason why your deployment was not completing
[15:00] <roadmr> loki27_: (based on the way you said you were doing things)
[15:00] <loki27_> roadmr: yes fine, and that helps me understand for sure..
[15:01] <roadmr> loki27_: the same document suggests a way to 'co-locate services'; that'd only need 7 nodes I think (glance, rabbit and dashboard are co-located with other services)
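The explicit placement roadmr describes was done in Juju 1.x with the `--to` flag, which puts a service on an existing machine instead of allocating a fresh node. A sketch; service names are from the tutorial, and the machine number is hypothetical, so read the real one off `juju status` first:

```shell
# mysql gets its own machine as usual
juju deploy mysql

# co-locate rabbitmq on the machine mysql landed on
# (machine number 1 is an example, not a known value)
juju deploy rabbitmq-server --to 1
```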
[15:02] <loki27_> Well from my understanding the suggested co-location is a 5 node deployment
[15:03] <roadmr> loki27_: yes but I think you're not counting the maas controller and the juju bootstrap node
[15:03] <loki27_> haaa well I have deployed both on the same phys node
[15:04] <roadmr> loki27_: ok, good thinking :)
[15:04] <loki27_> and I guess it makes sense to have the MAAS / Juju on 2 VPS in my datacenter that are standalone (not openstack based)
[15:04] <loki27_> Ok it makes sense now :)
[15:05] <loki27_> So how would I roll back all the deployed charms now? :/
[15:06] <loki27_> juju destroy-environment will be fine, thanks
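For a clean slate, `juju destroy-environment` tears down the bootstrap node and releases every machine the environment allocated back to MAAS. Flag spelling varied across Juju 1.x releases, so check `juju help destroy-environment` on your version; a sketch assuming an environment named "maas" in ~/.juju/environments.yaml:

```shell
# Releases all MAAS nodes held by the environment; -y skips the prompt
juju destroy-environment -y -e maas
```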
[15:23] <blake_r> jtv: you will like this one
[15:23] <blake_r> jtv: https://code.launchpad.net/~blake-rouse/maas/maas-boot-poweroff/+merge/224156
[15:45] <jtv> blake_r: great!  Is the syslinux 4/6 transition an obstacle?
[15:46] <jtv> (Because it changes that list of boot loaders)
=== vladk is now known as vladk|offline
[15:47] <blake_r> jtv: poweroff.com is on trusty and utopic
[15:47] <blake_r> jtv: same path
[15:47] <jtv> I must admit to being a little mystified, by the way...  this is netbooting a node into a power-off state?
=== vladk|offline is now known as vladk
[15:47] <blake_r> jtv: yeah it just powers it off
[15:47] <jtv> (If it's on both trusty and utopic then that covers the transition.  Nice.)
=== vladk is now known as vladk|offline
[15:48] <jtv> blake_r: when you say a node gets turned on "out of sync from MAAS," you mean it gets turned on when maas doesn't expect it to turn on?
[15:48] <blake_r> jtv: yes
[15:48] <jtv> Ahhhh cool
[15:55] <jtv> blake_r: reviewed.  Comment copiously.  :)
[16:01] <alexpilotti> blake_r: morning
[16:02] <alexpilotti> trying to join the meeting
[16:02] <alexpilotti> hangout says "This party is over..." :-)
[16:02] <blake_r> alexpilotti: sending invite
[16:02] <alexpilotti> tx
[16:02] <alexpilotti> btw Gabi is joining in as well, but in a few minutes
[16:03] <blake_r> he is in now
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[17:08] <schegi> hi, playing around with maas/juju for a couple of days now. want to deploy an openstack/ceph HA cluster on a couple of machines. is there any way to customize the ceph deployment with the juju charm any further? have to get it running on different networks than the one default public network the charm uses.
[17:09] <jtv1> schegi: if #maas doesn't have a good answer for that, have a look in #juju.
[17:09] <schegi> kk thx
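Charm configuration in Juju 1.x was inspected and changed with `juju get` and `juju set`. The option key below is a made-up placeholder (the real network-related options must be read from `juju get ceph`), so treat this as the shape of the workflow only:

```shell
# List every option the deployed ceph charm exposes, with current values
juju get ceph

# Override one option ("example-network" is a hypothetical key,
# not a documented ceph charm option)
juju set ceph example-network=10.20.0.0/24
```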
=== vladk|offline is now known as vladk
[17:56] <MilesDenver> anyone know how to add a group with a specific gid to preseed?
[17:56] <MilesDenver> for groups I found "d-i passwd/user-uid string 113"
[17:56] <MilesDenver> I mean for users I found ^^^
[17:58] <jtv1> MilesDenver: if nobody here knows, try #ubuntu-server
[17:58] <MilesDenver> jtv1: thanks
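There is no dedicated d-i key for creating extra groups the way passwd/user-uid works for the user, so the usual route is a preseeded late_command that runs inside the installed system. A sketch; the group name and gid are made-up examples:

```text
# Runs in the target system at the end of the install;
# "deploy" and gid 5000 are placeholder values
d-i preseed/late_command string in-target addgroup --gid 5000 deploy
```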
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== jfarschman is now known as MilesDenver
=== d_` is now known as systemsoverlord
=== systemsoverlord is now known as d_`
=== Guest38543 is now known as wallyworld
=== CyberJacob is now known as CyberJacob|Away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!