[07:36] <bigjools> rvba: fancy a quick review? :)
[07:36] <bigjools> https://code.launchpad.net/~julian-edwards/maas/cname-job-for-staticip/+merge/224063
[07:36] <bigjools> not bug
[07:36] <bigjools> big*
[07:36] <rvba> bigjools: sure
[07:37] <bigjools> rvba: FTR, the imports at the top were changed to avoid the dreaded circular import
[09:49] <mfa298> I've been experimenting with maas and juju and seem to have an issue where maas is remembering old ip addresses of nodes which are no longer valid.
[09:49] <mfa298> has anyone seen that before and/or know of any easy fix (I've tried deleting the nodes and recommissioning them but it still has the old data.)
[10:28] <jtv> mfa298: that should only be possible if the dhcpd still has the old leases.
[10:37] <mfa298> I did initially have maas managing dhcp and dns but then changed it so that dhcp+dns were done elsewhere.
[10:38] <mfa298> I did spot the maas dhcpd.leases file which does list some early nodes with fixed addresses.
[10:39] <mfa298> it seems slightly odd that it's still reporting those addresses when it's no longer providing dhcp + dns.
[10:40] <jtv> Providing DHCP is the only way in which maas finds out about those leases.
[10:40] <jtv> So when you disable that, it stops hearing updates, but it doesn't know how you've changed things.
[10:41] <mfa298> surely it should know that I've changed things as I've changed the dropdown in maas from "manage DHCP and DNS" to "Unmanaged"
[10:41] <jtv> Yes, but it doesn't know what happens to the nodes' addresses after that.
[10:42] <jtv> (We emphatically recommend letting maas manage dhcp on the nodes' network)
[10:43] <mfa298> it would seem better if it then forgot the addresses it thinks it knows about rather than giving out wrong info (I had DHCP + DNS set up with dynamic DNS so resolution by name did work, but I had a juju deploy process hang for the whole weekend as it was trying to use outdated IP information.)
[10:45] <mfa298> I've now spotted the setting I had missed so I've gone back to having maas manage dhcp + dns for this test cluster to see if that works better (I'd missed the upstream dns setting so things weren't deploying originally)
[10:45] <jtv> That does sound sensible.  You can probably work around it for now by replacing that leases file with a blank, and wait for the regular update cycle to parse the new file.
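The workaround jtv describes, blanking the leases file so the next parse cycle sees no stale leases, might look like the sketch below. The path is the usual MAAS 1.x location on Ubuntu, but verify it on your own install before touching anything:

```shell
# Sketch of jtv's suggested workaround: truncate the dhcpd leases file so
# MAAS's periodic lease parser picks up an empty lease table on its next pass.

blank_leases() {
    # Truncate the given leases file to zero bytes, preserving
    # ownership and permissions (unlike deleting and recreating it).
    : > "$1"
}

# Assumed MAAS 1.x leases path; adjust for your installation.
LEASES=/var/lib/maas/dhcp/dhcpd.leases
if [ -f "$LEASES" ]; then
    blank_leases "$LEASES"
fi
```

MAAS re-parses the file on its regular update cycle, so the stale address mappings should disappear a short while later rather than immediately.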
[14:14] <loki27_> Hey, i have set up MAAS and added 4 nodes, now i have installed juju-core on the MAAS master node and configured it to connect to my maas instance
[14:14] <loki27_> now when i run bootstrap, it takes ages and i get "Attempting to connect to ..."
[14:18] <blake_r> loki27_: are you using the fast-path installer?
[14:18] <loki27_> No
[14:18] <loki27_> I set them all to the default installer
[14:19] <blake_r> loki27_: fast-path is the default method, and the preferred method
[14:19] <blake_r> loki27_: default installer takes a very long time, as it has to go through the whole d-i process, and download each package from the archive
[14:19] <loki27_> well i have "Mark node as using default installer" OR "Mark node as using fast-path installer"
[14:19] <blake_r> loki27_: juju can even timeout waiting for a default node to finish installing
[14:20] <blake_r> loki27_: mark all nodes with fast-path installer
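In MAAS of this era, marking every node at once could be done with a tag whose XPath definition matches all nodes, rather than clicking through the UI per node. A sketch, assuming a CLI profile named `maas` that you have already logged in with (profile name is illustrative):

```shell
# The use-fastpath-installer tag switches nodes to the curtin (fast-path)
# installer; a definition of true() makes it apply to every node automatically.
maas maas tags new name=use-fastpath-installer definition='true()'
```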
[14:20] <loki27_> that's what i noticed, and im deploying on slow nodes (core 2 duo) test lab..
[14:22] <loki27_> nice ..!
[14:22] <loki27_> thanks blake_r
[14:48] <loki27_> I'm following https://help.ubuntu.com/community/UbuntuCloudInfrastructure , now i have deployed all charms and added relations, exposed openstack-dashboard and nova-cloud-controller
[14:48] <loki27_> when i run "juju status openstack-dashboard" i get: server: 409 CONFLICT (No matching node is available.)
[14:52] <jtv> loki27_: that means there's probably a constraint in there that excludes all your remaining nodes.
[14:52] <loki27_> juju get-constraints returns nothing; the dashboard shows status for all four nodes as "Allocated to david"
[14:53] <jtv> Might happen when units go wrong, e.g. if you specify "I want 1,000 memory" thinking it's kilobytes but the number gets interpreted as megabytes.
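On the units point: juju constraints take explicit size suffixes, and a bare number is treated as megabytes, so spelling the unit out avoids exactly this confusion. A sketch in juju 1.x syntax (values are illustrative):

```shell
# A bare number is interpreted as megabytes of RAM:
juju set-constraints mem=1000
# A suffix makes the intent unambiguous:
juju set-constraints mem=4G cpu-cores=2
```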
[14:53] <jtv> If all four nodes are allocated to david, then there are simply no available nodes to give to you.
[14:53] <loki27_> Right now i did not specify anything other than the default values from the tutorial (just trying to learn)
[14:54] <loki27_> jtv: I don't understand how i should determine how many nodes are required to deploy a standard OpenStack infrastructure with juju
[14:55] <loki27_> They were all in the "Ready" state before; i just bootstrapped and deployed all the charms required for openstack
[14:55] <jtv> Then I guess that charm eats up more than 3 machines.  kirkland, maybe you can help with that?
[14:56] <blake_r> loki27_: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[14:56] <blake_r> loki27_: that might help
[14:56] <loki27_> Well i have 1 box dedicated to MAAS + Juju , and i added 4 physical nodes for the deployment
[14:56] <blake_r> loki27_: you need more than 4 to deploy openstack unless you are using lxc containers as well
[14:56] <kirkland> loki27_: I have a bundle that I deploy all day, every day, really nice OpenStack Icehouse, which uses 5 nodes
[14:57]  * marcoceppi laments using an IRC nick and domain name that match
[14:57] <loki27_> kirkland ok so i can kick in one more node and that should be enough ? Or is that bundle you are using available somewhere ? ;)
[14:57] <jtv> So glad I didn't go with "the" as my IRC nick
[14:58] <roadmr> loki27_: if you're following that tutorial to the letter, keep in mind that unless told otherwise, juju will deploy one service per node. So 1) maas controller, 2) juju bootstrap, 3) mysql, 4) rabbitmq, 5) keystone, 6) nova-cloud-controller, 7) nova-volume, 8) nova-compute, 9) glance, 10) openstack-dashboard
[14:58] <roadmr> loki27_: you'd need 10 nodes if you want to blindly "juju deploy" all the needed charms
[14:58] <loki27_> ouch, i don't have anywhere close to that budget for openstack deployment right now :P
[14:59] <loki27_> i was guessing 4 nodes was the absolute minimum for an HA environment with quorum and fail-over controllers
[14:59] <roadmr> loki27_: well you *can* deploy on fewer nodes, but as I said, you have to explicitly deploy to specific machines, and if you didn't, juju would expect you to have at least 10 nodes
[14:59] <loki27_> And that is why i am trying to deploy on 4 nodes right now ;)
[15:00] <roadmr> loki27_: ok, it can probably be done, I'm just looking at the likely reason why your deployment was not completing
[15:00] <roadmr> loki27_: (based on the way you said you were doing things)
[15:00] <loki27_> roadmr yes fine, and that helps me understand for sure..
[15:01] <roadmr> loki27_: the same document suggests a way to 'co-locate services', that'd only need 7 nodes I think (glance, rabbit and dashboard are co-located with other services)
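For reference, the explicit placement and co-location roadmr mentions uses juju's `--to` flag. A sketch in juju 1.x syntax (the machine number is illustrative and depends on what juju allocated):

```shell
# Deploy the first service normally; juju allocates a machine (say, machine 1).
juju deploy mysql
# Co-locate another service directly on the same machine...
juju deploy rabbitmq-server --to 1
# ...or isolate it in an LXC container on that machine:
juju deploy openstack-dashboard --to lxc:1
```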
[15:02] <loki27_> Well from my understanding, the suggested co-location is a 5-node deployment
[15:03] <roadmr> loki27_: yes but I think you're not counting the maas controller and the juju bootstrap node
[15:03] <loki27_> haaa well i have deployed both on the same phys node
[15:04] <roadmr> loki27_: ok, good thinking :)
[15:04] <loki27_> and i guess it makes sense to have the MAAS / Juju on 2 VPS in my datacenter that are standalone (not openstack based)
[15:04] <loki27_> Ok it makes sense now :)
[15:05] <loki27_> So how would i roll back all the deployed charms now ? :/
[15:06] <loki27_> juju destroy-environment will be fine , thanks
[15:23] <blake_r> jtv: you will like this one
[15:23] <blake_r> jtv: https://code.launchpad.net/~blake-rouse/maas/maas-boot-poweroff/+merge/224156
[15:45] <jtv> blake_r: great!  Is the syslinux 4/6 transition an obstacle?
[15:46] <jtv> (Because it changes that list of boot loaders)
[15:47] <blake_r> jtv: poweroff.com is on trusty and utopic
[15:47] <blake_r> jtv: same path
[15:47] <jtv> I must admit to being a little mystified, by the way...  this is netbooting a node into a power-off state?
[15:47] <blake_r> jtv: yeah it just powers it off
[15:47] <jtv> (If it's on both trusty and utopic then that covers the transition.  Nice.)
[15:48] <jtv> blake_r: when you say a node gets turned on "out of sync from MAAS," you mean it gets turned on when maas doesn't expect it to turn on?
[15:48] <blake_r> jtv: yes
[15:48] <jtv> Ahhhh cool
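For the curious, the mechanism discussed here boils down to handing the netbooting node a pxelinux config that chainloads syslinux's poweroff module. A minimal sketch, assuming poweroff.com sits next to pxelinux.0 in the TFTP root (layout is illustrative, not the actual branch's implementation):

```
# pxelinux.cfg fragment: boot the node straight into a power-off.
DEFAULT poweroff
LABEL poweroff
    KERNEL poweroff.com
```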
[15:55] <jtv> blake_r: reviewed.  Comment copiously.  :)
[16:01] <alexpilotti> blake_r: morning
[16:02] <alexpilotti> trying to join the meeting
[16:02] <alexpilotti> hangout says “This party is over...” :-)
[16:02] <blake_r> alexpilotti: sending invite
[16:02] <alexpilotti> tx
[16:02] <alexpilotti> btw Gabi is joining in as well, but in a few minutes
[16:03] <blake_r> he is in now
[17:08] <schegi> hi, playing around with maas/juju for a couple of days now. want to deploy an openstack/ceph HA cluster on a couple of machines. is there any way to customize the ceph deployment with the juju charm any further? have to get it running on different networks than the one default public network the charm uses.
[17:09] <jtv1> schegi: if #maas doesn't have a good answer for that, have a look in #juju.
[17:09] <schegi> kk thx
[17:56] <MilesDenver> anyone know how to add a group with a specific gid to preseed?
[17:56] <MilesDenver> for users I found "d-i passwd/user-uid string 113", but nothing equivalent for groups
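As far as I know, d-i has no dedicated preseed key for creating an extra group with a chosen GID, so a common approach is to do it in a late_command. A hedged sketch (the group name and GID are placeholders):

```
# Preseed fragment: create a group with a fixed GID inside the installed system.
d-i preseed/late_command string in-target groupadd --gid 1050 mygroup
```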
[17:58] <jtv1> MilesDenver: if nobody here knows, try #ubuntu-server
[17:58] <MilesDenver> jtv1: thanks